| Field | Type | Min length | Max length |
| --- | --- | --- | --- |
| id | string | 10 | 10 |
| title | string | 7 | 231 |
| abstract | string | 3 | 2.43k |
| authors | string | 5 | 21.5k |
| published_date | string | 20 | 20 |
| link | string | 33 | 34 |
| markdown | string | 133 | 1.92M |
2310.18615
Temporally Disentangled Representation Learning under Unknown Nonstationarity
In unsupervised causal representation learning for sequential data with time-delayed latent causal influences, strong identifiability results for the disentanglement of causally-related latent variables have been established in stationary settings by leveraging temporal structure. However, in the nonstationary setting, existing work has only partially addressed the problem, either by utilizing observed auxiliary variables (e.g., class labels and/or domain indexes) as side information or by assuming simplified latent causal dynamics. Both constrain the method to a limited range of scenarios. In this study, we further explore the Markov assumption under time-delayed causally related processes in the nonstationary setting and show that under mild conditions, the independent latent components can be recovered from their nonlinear mixture up to a permutation and a component-wise transformation, without the observation of auxiliary variables. We then introduce NCTRL, a principled estimation framework, to reconstruct time-delayed latent causal variables and identify their relations from measured sequential data only. Empirical evaluations demonstrated the reliable identification of time-delayed latent causal influences, with our methodology substantially outperforming existing baselines that fail to exploit the nonstationarity adequately and, consequently, cannot distinguish distribution shifts.
Xiangchen Song, Weiran Yao, Yewen Fan, Xinshuai Dong, Guangyi Chen, Juan Carlos Niebles, Eric Xing, Kun Zhang
2023-10-28T06:46:03Z
http://arxiv.org/abs/2310.18615v2
# Temporally Disentangled Representation Learning under Unknown Nonstationarity

###### Abstract

In unsupervised causal representation learning for sequential data with time-delayed latent causal influences, strong identifiability results for the disentanglement of causally-related latent variables have been established in stationary settings by leveraging temporal structure. However, in the _nonstationary_ setting, existing work has only partially addressed the problem, either by utilizing observed auxiliary variables (e.g., class labels and/or domain indexes) as side information or by assuming simplified latent causal dynamics. Both constrain the method to a limited range of scenarios. In this study, we further explore the Markov assumption under time-delayed causally related processes in the _nonstationary_ setting and show that under mild conditions, the independent latent components can be recovered from their nonlinear mixture up to a permutation and a component-wise transformation, _without_ the observation of auxiliary variables. We then introduce NCTRL, a principled estimation framework, to reconstruct time-delayed latent causal variables and identify their relations from measured sequential data only. Empirical evaluations demonstrated the reliable identification of time-delayed latent causal influences, with our methodology substantially outperforming existing baselines that fail to exploit the nonstationarity adequately and, consequently, cannot distinguish distribution shifts.

## 1 Introduction

Causal reasoning for time-series data is a long-standing yet fundamental task [1; 2; 3]. The majority of studies focus on temporal causal discovery among observed variables [4; 5; 6]. However, in many real-world scenarios, the observed data (e.g., image pixels in videos), instead of having direct causal edges, are generated by causally related latent temporal processes or confounders. Learning these latent causal relations has practical value and benefits many downstream tasks. However, estimating latent causal structures among unobserved variables purely from observations, without an appropriate class of assumptions, is extremely challenging (i.e., the latent variables are generally not identifiable) [7; 8]. In unsupervised representation learning via nonlinear Independent Component Analysis (ICA), strong identifiability results for the latent variables have been established [9; 10; 11; 12; 13; 14] by introducing side information such as class labels and domain indices. For time-series data specifically, history information is also widely used as side information for the identifiability of latent processes [15; 16; 17; 18]. However, existing studies mainly derived identifiability results in stationary settings [10; 16] (Fig 1 (a)) or in nonstationary settings with explicitly observed domain indices [17; 18; 12] (Fig 1 (b)). Neither scenario fits general time-series data well: such data are usually nonstationary, and the side information (class labels and domain indices) is usually unobserved. This is particularly true for real-world data such as video or signal sequences, where it is unrealistic to assume that a single stationary transition function applies to an entire video clip.
Take a simple video clip of a mouse2 [19] as an example: it is fairly clear that even such a simple motion example can be divided into at least two phases, (1) an active phase in which the mouse is moving, and (2) an inactive phase in which the mouse is lying down. Instead of using a single complex transition function to describe the whole video clip, a more reasonable assumption is that the same transition function is shared within a phase while differing across phases; in other words, the transition function can be expressed as a function of the domain index. It is also worth mentioning that if such domain or phase indices are latent or unobserved, we cannot directly utilize the existing frameworks to learn the latent causal dynamics. That is again the more realistic case: in general, the domain indices within a video are not accessible without expensive human annotation.

Footnote 2: [https://dattalab.github.io/moseq2-website/images/sample-extraction.gif](https://dattalab.github.io/moseq2-website/images/sample-extraction.gif)

Recently, HMNLICA [14] attempted to resolve this problem by introducing a Markov assumption on the nonstationary discrete domain variable: the domain indices are assumed to follow a first-order Markov chain, and the domain information is estimated purely from observed data. However, HMNLICA assumes temporally mutually independent sources in the data-generating process (conditioning on domain indices), i.e., it does not allow latent variables to have time-delayed causal relations among them (Fig 1 (c)). This assumption severely limits the usability of such methods. Considering the mouse video example, the \(\mathbf{x}_{t}\)s are the observed video frames, the \(\mathbf{z}_{t}\)s can be the independent motion dynamics or causal processes such as position, velocity, and (angular) momentum, and the \(c_{t}\)s are the phases or actions such as standing up (active) and lying down (inactive). To accommodate general sequential data, time-delayed temporal dependence should be considered in the latent \(\mathbf{z}_{t}\) space; otherwise, it is impossible to model the temporal relations of complex video data purely from discrete domain indices. To ensure that the latent independent components can be recovered, temporally conditional independence should also be enforced, i.e., each dimension of \(\mathbf{z}_{t}\) is conditionally independent given the history \(\mathbf{z}_{\text{history}}\). To this end, a natural question is: _How can we establish identifiability of nonlinear ICA for general sequential data with a nonstationary causally-related process without observing auxiliary variables?_ To answer this question, we first formulate the latent nonstationary states as a discrete Markov process and further explore the Markov assumption [20] that was introduced for the identifiability of nonlinear ICA in HMNLICA [14], providing a stronger identifiability result for the conditional emission distributions (i.e., the transition functions of different domains) and the transition matrix of the Markov process.

Figure 1: Graphical models for four different settings of causally related time-delayed time-series data, with a visual illustration. (a) is a _stationary_ setting in which the transition function \(\mathbf{z}_{t+1}=f_{z}(\mathbf{z}_{t})\) stays universally the same.
(b) is the setting widely explored in existing work, in which the transition function \(f_{z}\) changes according to different domains (denoted as \(c_{t}\)), and all those domain indices are observed. (c) captures the unobserved domain indices by introducing a Markov chain on \(c_{t}\). (d) is the more general form used to model time-series data in this work: it allows nonstationary settings and does not require the domain indices to be observed.

Specifically, we generalize the identifiability of Hidden Markov Models in [20] to accommodate time-delayed causally-related non-parametric transitions in the latent space (Thm. 1). We then utilize a linear independence condition (Thm. 2) to further establish the identifiability of \(\mathbf{z}_{t}\). The main contributions of this work can be summarized as follows:

* To the best of our knowledge, this is the first identifiability result that can handle nonstationary time-delayed causally-related latent temporal processes without auxiliary variables. We formulate the problem by modeling the nonstationary states as a Markov process, establish their identifiability purely from observed data, and then show strong identifiability of the latent independent components.
* We present NCTRL, Nonstationary Causal Temporal Representation Learning, a principled framework to recover time-delayed latent causal variables and identify their relations from measured sequential data under unobserved distribution shifts.
* Experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed method in recovering the latent variables.

## 2 Problem Formulation

### Time Series Generative Model

Assume we observe \(n\)-dimensional time-series data at discrete time steps, \(\mathbf{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T}\}\), where each \(\mathbf{x}_{t}\in\mathcal{X}\) is generated from time-delayed causally related hidden components \(\mathbf{z}_{t}\in\mathbb{R}^{n}\) by an invertible mixing function:

\[\mathbf{x}_{t}=\mathbf{g}(\mathbf{z}_{t}). \tag{1}\]

In addition to the latent components \(\mathbf{z}_{t}\), there is an extra hidden variable \(c_{t}\), which is discrete with cardinality \(C\). It follows a first-order Markov process controlled by a \(C\times C\) transition matrix \(\mathbf{A}\), in which the \((i,j)\)-th entry \(A_{i,j}\) is the probability of transiting from state \(i\) to state \(j\):

\[c_{1},c_{2},\ldots,c_{t}\sim\text{Markov Chain}(\mathbf{A}) \tag{2}\]

For \(i\in\{1,\ldots,n\}\), \(z_{it}\), the \(i\)-th component of \(\mathbf{z}_{t}\), is generated by (some) components of the history information, the discrete nonstationary indicator \(c_{t}\), and noise \(\epsilon_{it}\):

\[z_{it}=f_{i}(\{z_{j,t-\tau}\,|\,z_{j,t-\tau}\in\mathbf{Pa}(z_{it})\},c_{t},\epsilon_{it})\quad\text{with}\quad\epsilon_{it}\sim p_{\epsilon_{i}} \tag{3}\]

where \(\mathbf{Pa}(z_{it})\) is the set of latent factors that directly cause \(z_{it}\), which can be any subset of \(\mathbf{z}_{\text{Hx}}=\{\mathbf{z}_{t-1},\mathbf{z}_{t-2},\ldots,\mathbf{z}_{t-L}\}\) up to the maximum time lag \(L\). The components of \(\mathbf{z}_{t}\) are mutually independent conditional on \(\mathbf{z}_{\text{Hx}}\) and \(c_{t}\).
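To make the data-generating process of Eqs. (1)-(3) concrete, here is a minimal simulation sketch of our own. The specific transition maps, the mixing function, the dimensionalities, and the lag \(L=1\) are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, C, T = 3, 2, 500  # latent dimension n, number of domains C, length T

# Eq. (2): first-order Markov chain over domain indices, row-stochastic A.
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])
c = np.zeros(T, dtype=int)
for t in range(1, T):
    c[t] = rng.choice(C, p=A[c[t - 1]])

# Eq. (3): domain-dependent latent transition with lag L = 1.
W = rng.normal(size=(C, n, n)) / np.sqrt(n)  # one transition map per domain
z = np.zeros((T, n))
z[0] = rng.normal(size=n)
for t in range(1, T):
    eps = 0.1 * rng.normal(size=n)            # noise eps_it ~ p_eps
    z[t] = np.tanh(W[c[t]] @ z[t - 1]) + eps  # z_t = f(Pa(z_t), c_t, eps_t)

# Eq. (1): invertible nonlinear mixing g (leaky-ReLU of an invertible map).
M = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned, invertible
h = z @ M.T
x = np.where(h > 0, h, 0.1 * h)

print(x.shape, np.bincount(c))  # (500, 3) observations; domain occupancy
```

Each dimension of \(\mathbf{z}_{t}\) here depends only on \(\mathbf{z}_{t-1}\), \(c_{t}\), and its own noise term, so the components are mutually independent conditional on the history and the domain, as the formulation requires.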
### Identifiability of Latent Causal Processes and Time-Delayed Latent Causal Relations

We define the identifiability of time-delayed latent causal processes in the representation function space in **Definition 1**. Furthermore, if the estimated latent processes can be identified at least up to permutation and component-wise invertible nonlinearities, the latent causal relations are also immediately identifiable, because conditional independence relations fully characterize time-delayed causal relations in a time-delayed causally sufficient system, in which there are no latent causal confounders in the (latent) causal processes. Note that invertible component-wise transformations on latent causal processes do not change their conditional independence relations.

**Definition 1** (Identifiable Latent Causal Processes).: _Formally, let \(\mathbf{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T}\}\) be a sequence of observed variables generated by the true temporally causal latent processes specified by \((f_{i},p_{\epsilon_{i}},\mathbf{A},\mathbf{g})\) given in Eqs. (1), (2), and (3). A learned generative model \((\hat{f}_{i},\hat{p}_{\epsilon_{i}},\hat{\mathbf{A}},\hat{\mathbf{g}})\) is observationally equivalent to \((f_{i},p_{\epsilon_{i}},\mathbf{A},\mathbf{g})\) if the model distribution \(p_{\hat{f}_{i},\hat{p}_{\epsilon_{i}},\hat{\mathbf{A}},\hat{\mathbf{g}}}(\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T}\})\) matches the data distribution \(p_{f_{i},p_{\epsilon_{i}},\mathbf{A},\mathbf{g}}(\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T}\})\) everywhere. We say the latent causal processes are identifiable if observational equivalence leads to identifiability of the latent variables up to a permutation \(\pi\) and a component-wise invertible transformation \(T\):_

\[\begin{split} p_{\hat{f}_{i},\hat{p}_{\epsilon_{i}},\hat{\mathbf{A}},\hat{\mathbf{g}}}(\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T}\})&=p_{f_{i},p_{\epsilon_{i}},\mathbf{A},\mathbf{g}}(\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T}\})\\ \Rightarrow\hat{\mathbf{g}}^{-1}(\mathbf{x}_{t})&=T\circ\pi\circ\mathbf{g}^{-1}(\mathbf{x}_{t}),\quad\forall\mathbf{x}_{t}\in\mathcal{X},\end{split} \tag{4}\]

_where \(\mathcal{X}\) is the observation space._

## 3 Identifiability Theory

In this section, we show that under mild conditions, the latent variable \(\mathbf{z}_{t}\) is identifiable up to a permutation and a component-wise transformation. The theoretical results can be divided into two parts: (1) identifiability of the nonstationarity and (2) identifiability of the independent components. As introduced above, the major challenge comes from the unobserved domain indices or nonstationary indicators (\(c_{t}\) in our graphical models). We first establish the identifiability of the different conditional distributions from the observed data and then show that the latent variables \(\mathbf{z}\) are identifiable. The complete proofs can be found in Appendix A.

### Identifiability of Nonstationary Hidden States

Gassiat et al. [20] showed that the conditional emission distributions and the transition matrix in Hidden Markov Models are identifiable up to label swapping. We first generalize this to the autoregressive setting to accommodate the time-delayed causal relations, i.e., we show the identifiability of the conditional emission distributions \(p(\mathbf{x}_{t}|\mathbf{x}_{t-1},c)\).

**Theorem 1**.: _(identifiability of the nonstationarity with Markov Assumptions) Suppose the observed data is generated following the nonlinear ICA framework as defined in Eqs. (1), (2) and (3)._
_Suppose the following assumptions (Markov Assumptions) hold:_

* _For the Markov process, the number of latent states,_ \(C\)_, is known._
* _The transition matrix_ \(\mathbf{A}\) _is full rank._

_Use \(\mu_{1},\ldots,\mu_{C}\) to denote the \(C\) nonparametric emission distributions on \(\mathbb{R}^{n}\), \(\mu_{c}=p(\mathbf{x}_{t}\,|\,\mathbf{x}_{t-1},c)\). Then the parameters \(\mathbf{A}\) and \(M=(\mu_{1},\ldots,\mu_{C})\) are identifiable given the distribution, \(\mathbb{P}^{(4)}_{\mathbf{A},M}\), of at least 4 consecutive observations \(\mathbf{x}_{t},\mathbf{x}_{t+1},\mathbf{x}_{t+2},\mathbf{x}_{t+3}\), up to label swapping of the hidden states. That is: if \(\widetilde{\mathbf{A}}\) is a \(C\times C\) transition matrix, if \(\widetilde{\pi}(c)\) is a stationary distribution of \(\widetilde{\mathbf{A}}\) with \(\widetilde{\pi}(c)>0\) \(\forall c\in\{1,\ldots,C\}\), and if \(\widetilde{M}=(\tilde{\mu}_{1},\ldots,\tilde{\mu}_{C})\) are \(C\) probability distributions on \(\mathbb{R}^{n}\) that satisfy \(\mathbb{P}^{(4)}_{\widetilde{\mathbf{A}},\widetilde{M}}=\mathbb{P}^{(4)}_{\mathbf{A},M}\), then there exists a permutation \(\sigma\) of the set \(\{1,\ldots,C\}\) such that for all \(k,l=1,\ldots,C\) we have \(\tilde{A}_{k,l}=A_{\sigma(k),\sigma(l)}\) and \(\tilde{\mu}_{k}=\mu_{\sigma(k)}\)._

For notational simplicity, and without loss of generality, we can assume the components are ordered such that \(c=\sigma(c)\). This leads us to the identifiability of the nonstationarity in the system, i.e., up to label swapping of the hidden states, the conditional emission distributions \(p(\mathbf{x}_{t}|\mathbf{x}_{t-1},c_{t})\) and the transition matrix \(\mathbf{A}\) are identifiable, providing us a bridge to further leverage the temporal independence condition in the latent space to establish the identifiability result for the demixing function, or in other words, for the latent variables \(\mathbf{z}_{t}\).

### Identifiability of Latent Causal Processes

To incorporate nonlinear ICA into the Markov assumption, we define the emission distribution \(p(\mathbf{x}_{t}\,|\,\mathbf{x}_{t-1},c)\) as a deep latent variable model. First, the latent independent component variables \(\mathbf{z}_{t}\in\mathbb{R}^{n}\) are generated from a factorial prior, given the hidden state \(c_{t}\) and the previous \(\mathbf{z}_{t-1}\):

\[p(\mathbf{z}_{t}\,|\,\mathbf{z}_{t-1},c_{t})=\prod_{k=1}^{n}p(z_{kt}\,|\,\mathbf{z}_{t-1},c_{t}). \tag{5}\]

Second, the observed data \(\mathbf{x}_{t}\) is generated by the nonlinear mixing function in Eq. (1), which is assumed to be bijective with inverse \(\mathbf{z}_{t}=\mathbf{g}^{-1}(\mathbf{x}_{t})\). Let \(\eta_{kt}(c_{t})\triangleq\log p(z_{kt}|\mathbf{z}_{t-1},c_{t})\), and assume that \(\eta_{kt}(c_{t})\) is twice differentiable in \(z_{kt}\) and differentiable in \(z_{l,t-1}\), \(l=1,2,...,n\). Note that the parents of \(z_{kt}\) may be only \(c_{t}\) and a subset of \(\mathbf{z}_{t-1}\); if \(z_{l,t-1}\) is not a parent of \(z_{kt}\), then \(\frac{\partial\eta_{kt}}{\partial z_{l,t-1}}=0\).
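To build intuition for the derivatives of \(\eta_{kt}\) that the next theorem manipulates, consider a worked example of our own (not from the paper): a linear transition with additive Gaussian noise shared across domains, \(z_{kt}=\mathbf{a}_{k}(c)^{\intercal}\mathbf{z}_{t-1}+\epsilon_{kt}\) with \(\epsilon_{kt}\sim\mathcal{N}(0,\sigma^{2})\). Then

\[\eta_{kt}(c)=-\frac{\bigl(z_{kt}-\mathbf{a}_{k}(c)^{\intercal}\mathbf{z}_{t-1}\bigr)^{2}}{2\sigma^{2}}-\log\bigl(\sigma\sqrt{2\pi}\bigr),\qquad\frac{\partial^{2}\eta_{kt}(c)}{\partial z_{kt}\,\partial z_{l,t-1}}=\frac{a_{kl}(c)}{\sigma^{2}},\qquad\frac{\partial^{3}\eta_{kt}(c)}{\partial z_{kt}^{2}\,\partial z_{l,t-1}}=0.\]

All third-order cross-derivatives vanish, and \(\frac{\partial^{2}\eta_{kt}(c)}{\partial z_{kt}^{2}}=-\frac{1}{\sigma^{2}}\) is the same constant for every domain. In the stationary case \(C=1\), the vectors \(\hat{\mathbf{s}}_{kt}\) defined in Theorem 2 below then vanish identically, so the required linear independence fails: linear-Gaussian dynamics alone are exactly the kind of overly simple transitions that the condition rules out, and it is the cross-domain difference terms, nonzero when \(\mathbf{a}_{k}(c)\) genuinely varies with \(c\), that can restore linear independence.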
**Theorem 2**.: _(identifiability of the independent components) Suppose there exists an invertible function \(\hat{\mathbf{g}}^{-1}\), the estimated demixing function that maps \(\mathbf{x}_{t}\) to \(\hat{\mathbf{z}}_{t}\), i.e.,_

\[\hat{\mathbf{z}}_{t}=\hat{\mathbf{g}}^{-1}(\mathbf{x}_{t}) \tag{6}\]

_such that the components of \(\hat{\mathbf{z}}_{t}\) are mutually independent conditional on \(\hat{\mathbf{z}}_{t-1}\). Let_

\[\begin{split}\mathbf{v}_{k,t}(c)&\triangleq\Big{(}\frac{\partial^{2}\eta_{kt}(c)}{\partial z_{k,t}\partial z_{1,t-1}},\frac{\partial^{2}\eta_{kt}(c)}{\partial z_{k,t}\partial z_{2,t-1}},...,\frac{\partial^{2}\eta_{kt}(c)}{\partial z_{k,t}\partial z_{n,t-1}}\Big{)}^{\intercal},\\ \hat{\mathbf{v}}_{k,t}(c)&\triangleq\Big{(}\frac{\partial^{3}\eta_{kt}(c)}{\partial z_{k,t}^{2}\partial z_{1,t-1}},\frac{\partial^{3}\eta_{kt}(c)}{\partial z_{k,t}^{2}\partial z_{2,t-1}},...,\frac{\partial^{3}\eta_{kt}(c)}{\partial z_{k,t}^{2}\partial z_{n,t-1}}\Big{)}^{\intercal}.\end{split} \tag{7}\]

_And_

\[\begin{split}\mathbf{s}_{kt}&\triangleq\Big{(}\mathbf{v}_{kt}(1)^{\intercal},...,\mathbf{v}_{kt}(C)^{\intercal},\frac{\partial^{2}\eta_{kt}(2)}{\partial z_{kt}^{2}}-\frac{\partial^{2}\eta_{kt}(1)}{\partial z_{kt}^{2}},...,\frac{\partial^{2}\eta_{kt}(C)}{\partial z_{kt}^{2}}-\frac{\partial^{2}\eta_{kt}(C-1)}{\partial z_{kt}^{2}}\Big{)}^{\intercal},\\ \hat{\mathbf{s}}_{kt}&\triangleq\Big{(}\hat{\mathbf{v}}_{kt}(1)^{\intercal},...,\hat{\mathbf{v}}_{kt}(C)^{\intercal},\frac{\partial\eta_{kt}(2)}{\partial z_{kt}}-\frac{\partial\eta_{kt}(1)}{\partial z_{kt}},...,\frac{\partial\eta_{kt}(C)}{\partial z_{kt}}-\frac{\partial\eta_{kt}(C-1)}{\partial z_{kt}}\Big{)}^{\intercal}.\end{split} \tag{8}\]

_If, for each value of \(\mathbf{z}_{t}\), \(\mathbf{s}_{1t},\hat{\mathbf{s}}_{1t},\mathbf{s}_{2t},\hat{\mathbf{s}}_{2t},...,\mathbf{s}_{nt},\hat{\mathbf{s}}_{nt}\), as \(2n\) function vectors \(\mathbf{s}_{k,t}\) and \(\hat{\mathbf{s}}_{k,t}\), \(k=1,2,...,n\), are linearly independent, then \(\hat{\mathbf{z}}_{t}\) must be an invertible, component-wise transformation of a permuted version of \(\mathbf{z}_{t}\)._

So far, the identifiability result has been established without observing nonstationarity indicators such as domain indices. In the next section, a novel Variational Auto-Encoder based method is introduced to estimate the demixing function \(\hat{\mathbf{g}}^{-1}\).

## 4 NCTRL: Nonstationary Causal Temporal Representation Learning

In this section, we present the details of NCTRL, which estimates the latent causal processes under unobserved nonstationary distribution shifts, following the identifiability results in Sec 3. Our framework consists of three modules: an Autoregressive Hidden Markov Module, a Prior Network, and an Encoder-Decoder Module. We then provide the optimization objective of our model training, which includes an HMM free energy lower bound, a reconstruction likelihood loss, and a KL divergence.

### Model Architecture

Our framework extends Sequential Variational Auto-Encoders [21] with tailored modules to model nonstationarity, and enforces the conditions in Sec. 3 as constraints. We give the estimation procedure of the latent causal dynamics model in Eq. (3). The model architecture is showcased in Fig. 2. The framework has three major components, detailed in turn below: (1) the Autoregressive Hidden Markov Module (ARHMM), (2) the Prior Network Module, and (3) the Encoder-Decoder Module.
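Before the module descriptions, here is a minimal PyTorch-style skeleton of how the three components could be organized; the class names, layer choices, and dimensions are our own illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class ARHMM(nn.Module):
    """Models p(x_t | x_{t-1}, c_t) for each of C domains plus the C x C
    Markov transition matrix A; domain indices are decoded with Viterbi."""
    def __init__(self, obs_dim: int, num_states: int, hidden: int = 64):
        super().__init__()
        self.transition_logits = nn.Parameter(torch.zeros(num_states, num_states))
        self.emissions = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))  # log-density head per domain
            for _ in range(num_states))

class PriorNetwork(nn.Module):
    """Learns the inverse dynamics f^{-1}(z_t, z_history, theta_c) used to
    evaluate log p(z_t | z_history, c_t) as in Eq. (9) below."""
    def __init__(self, latent_dim: int, lag: int, num_states: int, hidden: int = 64):
        super().__init__()
        self.inverse_dynamics = nn.Sequential(
            nn.Linear((lag + 1) * latent_dim + num_states, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim))

class EncoderDecoder(nn.Module):
    """VAE pair: the encoder fits the demixing g^{-1}, the decoder fits g."""
    def __init__(self, obs_dim: int, latent_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * latent_dim))  # mean, logvar
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, obs_dim))
```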
**Autoregressive Hidden Markov Module (ARHMM).** The first component of our framework is the ARHMM, which handles the nonstationarity arising from unobserved domains. As discussed in Thm 1, the transition functions, i.e., the conditional emission distributions across different domains, together with the Markov transition matrix \(\mathbf{A}\), are identifiable. This module estimates the transition function of the different domains \(p(\mathbf{x}_{t}|\mathbf{x}_{t-1},c_{t})\) and the transition matrix \(\mathbf{A}\) of the Markov process, and ultimately decodes the optimal domain indices \(\{\hat{c}_{1},\hat{c}_{2},\ldots,\hat{c}_{T}\}\) via the Viterbi algorithm.

**Prior Network Module.** This module estimates the prior distribution \(p(\hat{\mathbf{z}}_{t}|\hat{\mathbf{z}}_{\text{Hx}},c_{t})\), where \(\hat{\mathbf{z}}_{\text{Hx}}\) denotes the lagged latent variables up to the maximum time lag \(L\). We evaluate \(p(\hat{\mathbf{z}}_{t}|\hat{\mathbf{z}}_{\text{Hx}},c_{t})=p_{\epsilon}\left(\hat{f}_{z}^{-1}(\hat{\mathbf{z}}_{t},\hat{\mathbf{z}}_{\text{Hx}},\hat{\boldsymbol{\theta}}_{c_{t}})\right)\left|\frac{\partial\hat{f}_{z}^{-1}}{\partial\hat{\mathbf{z}}_{t}}\right|\) by learning a holistic inverse dynamics \(\hat{f}_{z}^{-1}\) that takes the estimated change factors for the dynamics, \(\hat{\boldsymbol{\theta}}_{c_{t}}\), as inputs (see the sketch at the end of this subsection). Conditional independence of the estimated latent variables is enforced by summing all estimated component densities when obtaining the joint \(p(\hat{\mathbf{z}}_{t}|\hat{\mathbf{z}}_{\text{Hx}},c_{t})\) in Eq. (9). Given that the Jacobian is lower-triangular, we can compute its determinant as the product of its diagonal terms. The detailed derivations are given in Appendix B.2.

\[\log p\left(\hat{\mathbf{z}}_{t}|\hat{\mathbf{z}}_{\text{Hx}},c_{t}\right)=\underbrace{\sum_{i=1}^{n}\log p(\hat{\epsilon}_{i}|c_{t})}_{\text{Conditional independence}}+\underbrace{\sum_{i=1}^{n}\log\left|\frac{\partial\hat{f}_{i}^{-1}}{\partial\hat{z}_{it}}\right|}_{\text{Lower-triangular Jacobian}} \tag{9}\]

**Encoder-Decoder Module.** The third component is a Variational Auto-Encoder based module which uses a reconstruction loss to enforce the invertibility of the learned mixing function \(\hat{\mathbf{g}}\). Specifically, the encoder fits the demixing function \(\hat{\mathbf{g}}^{-1}\) and the decoder fits the mixing function \(\hat{\mathbf{g}}\). The implementation details are in Appendix B.

Figure 2: Illustration of NCTRL with (1) the Autoregressive Hidden Markov Module, (2) the Prior Network, and (3) the Encoder-Decoder Module.
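To make Eq. (9) concrete, here is a minimal sketch of our own for evaluating the log prior through a learned inverse dynamics, assuming single-sample, one-dimensional tensors; `inv_dynamics` and `noise_log_prob` are hypothetical stand-ins for the trained networks.

```python
import torch

def log_prior(z_hat, z_hist, theta_c, inv_dynamics, noise_log_prob):
    """Eq. (9): log p(z_t | z_history, c_t) via component-wise f_i^{-1}.

    inv_dynamics maps (z_t, z_history, theta_c) -> estimated noises eps_hat,
    one per latent dimension; because each eps_hat_i depends on z_it but not
    on z_jt for j > i, the Jacobian w.r.t. z_t is lower-triangular and
    log|det J| reduces to the sum of log |d eps_hat_i / d z_it|.
    """
    z_hat = z_hat.detach().requires_grad_(True)
    eps_hat = inv_dynamics(z_hat, z_hist, theta_c)  # shape (n,)
    # Diagonal of the Jacobian: one partial derivative per component.
    diag = torch.stack([
        torch.autograd.grad(eps_hat[i], z_hat,
                            retain_graph=True, create_graph=True)[0][i]
        for i in range(eps_hat.shape[0])
    ])
    return noise_log_prob(eps_hat).sum() + torch.log(diag.abs()).sum()
```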
### Optimization

The first training objective of NCTRL is to maximize the log-likelihood of the observed data:

\[\log p_{\theta_{\text{HMM}}}(\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T}\}) \tag{10}\]

where \(\theta_{\text{HMM}}\) represents the HMM training parameters. The free energy lower bound can then be defined as:

\[-\mathcal{L}_{\text{HMM}}=\mathcal{L}(q(\mathbf{c}),\boldsymbol{\theta}_{\text{HMM}})\triangleq\mathbb{E}_{q(\mathbf{c})}\left[\log p_{\theta_{\text{HMM}}}(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T},\mathbf{c})\right]-\mathbf{H}(q(\mathbf{c})) \tag{11}\]

Consistent with the theory, we maximize the data log-likelihood in the ARHMM module to obtain the optimal \(q(\mathbf{c}^{\star})\):

\[q(\mathbf{c}^{\star})\triangleq\operatorname*{arg\,max}_{q(\mathbf{c})}\mathcal{L}(q(\mathbf{c}),\boldsymbol{\theta}_{\text{HMM}}) \tag{12}\]

which can easily be computed by the forward-backward algorithm; conveniently, all of it is differentiable with respect to the HMM training parameters \(\boldsymbol{\theta}_{\text{HMM}}\) (the transition matrix \(\mathbf{A}\) and the transition function parameters \(\boldsymbol{\theta}_{f}\)). The second part is to maximize the Evidence Lower BOund (ELBO) of the VAE framework, which can be written as (complete derivation steps are in Appendix B.3):

\[\begin{split}\text{ELBO}&\triangleq\log p_{\text{data}}(\mathbf{X})-D_{KL}(q_{\phi}(\mathbf{Z}|\mathbf{X})\,\|\,p_{\text{data}}(\mathbf{Z}|\mathbf{X}))\\ &=\underbrace{\mathbb{E}_{q_{\phi}(\mathbf{z}_{t}|\mathbf{x}_{t})}\sum_{t=1}^{T}\log p_{\text{data}}(\mathbf{x}_{t}|\mathbf{z}_{t})}_{\mathcal{L}_{\text{Recon}}}+\underbrace{\mathbb{E}_{\mathbf{c}}\left[\sum_{t=1}^{T}\log p_{\text{data}}(\mathbf{z}_{t}|\mathbf{z}_{\text{Hx}},c_{t})-\sum_{t=1}^{T}\log q_{\phi}(\mathbf{z}_{t}|\mathbf{x}_{t})\right]}_{-\mathcal{L}_{\text{KLD}}}\end{split} \tag{13}\]

We use the mean-squared error (MSE) for the reconstruction likelihood loss \(\mathcal{L}_{\text{Recon}}\). The KL divergence \(\mathcal{L}_{\text{KLD}}\) is estimated via a sampling approach, since with a learned nonparametric transition prior the distribution does not have an explicit form. Specifically, we obtain the log-likelihood of the posterior, evaluate the prior \(\log p\left(\hat{\mathbf{z}}_{t}|\hat{\mathbf{z}}_{\text{Hx}},c_{t}\right)\) in Eq. (9), and compute their mean difference over the dataset as the KL loss: \(\mathcal{L}_{\text{KLD}}=\mathbb{E}_{\hat{\mathbf{z}}_{t}\sim q(\hat{\mathbf{z}}_{t}|\mathbf{x}_{t})}\log q(\hat{\mathbf{z}}_{t}|\mathbf{x}_{t})-\log p\left(\hat{\mathbf{z}}_{t}|\hat{\mathbf{z}}_{\text{Hx}},c_{t}\right)\).

## 5 Experiments

We evaluate the identifiability results of NCTRL on a number of simulated and real-world temporal datasets. We first introduce the evaluation metrics and baselines, then discuss the datasets used in our experiments, and lastly present the experimental results, discuss the performance, and make comparisons.

### Evaluation Metrics

**Mean Correlation Coefficient (MCC)** To evaluate the identifiability of the latent variables, we compute the Mean Correlation Coefficient (MCC) on the test dataset. MCC is a standard metric in the ICA literature for continuous variables that measures the identifiability of the learned latent causal processes. MCC is close to 1.0 when the latent variables are identifiable up to permutation and component-wise invertible transformation in the noiseless case.

**Mean Square Error (MSE) for estimating \(\mathbf{A}\)** As established in the theory, \(\mathbf{A}\) is identifiable in our setting, which means our proposed method should provide an accurate estimate of the transition matrix \(\mathbf{A}\). To validate this claim, we use the mean square error (MSE) to capture the distance between the estimated \(\hat{\mathbf{A}}\) and the ground truth \(\mathbf{A}\).
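As a reference for how such a permutation-matched metric can be computed, here is a minimal sketch of our own using scipy's Hungarian assignment; it mirrors the standard ICA evaluation rather than the authors' exact script.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mcc(z_true, z_est):
    """Mean Correlation Coefficient between true and estimated latents.

    z_true, z_est: arrays of shape (T, n). We compute all pairwise absolute
    Pearson correlations, then solve the assignment problem so that each true
    component is matched with its best estimated component, reflecting the
    permutation indeterminacy allowed by the identifiability result.
    """
    n = z_true.shape[1]
    corr = np.corrcoef(z_true.T, z_est.T)[:n, n:]    # (n, n) cross-block
    row, col = linear_sum_assignment(-np.abs(corr))  # maximize |correlation|
    return np.abs(corr[row, col]).mean()
```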
**Accuracy for estimating \(c_{t}\)** We also test the accuracy of estimating the discrete domain indices \(c_{t}\), supplementary to the MSE for \(\mathbf{A}\): in theory, \(\mathbf{A}\) is identifiable but \(c_{t}\) generally is not. This is easy to understand by analogy with Hidden Markov Models, where the transition matrix is identifiable but we can only "infer" the best possible discrete states, without establishing identifiability for them. It is also worth mentioning that the MSE and accuracy are influenced by the permutation, as is the case in clustering evaluation problems; here we explored all permutations and selected the best possible assignment for evaluation.

### Baselines

The following identifiable nonlinear ICA methods are used: (1) BetaVAE [22], which ignores both history and nonstationarity information; (2) i-VAE [12] and TCL [9], which leverage nonstationarity to establish identifiability but assume independent factors; (3) SlowVAE [16] and PCL [10], which exploit temporal constraints but assume independent sources and stationary processes; (4) TDRL [18], which assumes nonstationary causal processes but with observed domain indices; and (5) HMNLICA [14], which considers the unobserved nonstationary part of the data-generating process but does not allow any causally related time-delayed relations.

### Simulated Results

We generate two synthetic datasets corresponding to different complexities of the nonlinear mixing function \(\mathbf{g}\). Both synthetic datasets satisfy the identifiability conditions in our theorems, following the procedures in Appendix B.4. As shown in Table 1, NCTRL can recover the latent processes under unknown nonstationary distribution shifts with high MCCs (>0.95). The baselines that do not exploit history (BetaVAE, i-VAE, TCL), that assume independent sources (SlowVAE, PCL), or that consider only limited nonstationary cases (TDRL) distort the identifiability results. The only baseline that considers unknown nonstationarity in the domain indices (HMNLICA) explored the Markov assumption but does not allow a time-delayed causal process, and hence suffers a poor result (MCC 0.58). The difference between dataset A and dataset B lies in the nonlinearity of the mixing function: dataset A has a relatively simple nonlinear mixing function, whereas dataset B has a more complex one. Some variability was observed in the relative performance ranks of the different baselines; for example, i-VAE showed a large discrepancy between the two datasets, which reveals its weakness in capturing complex nonlinearity under unknown nonstationary distribution shifts. We also observed that our proposed method consistently recovers the latent independent components with high MCC, indicating that on both datasets the model is identifiable and the estimation algorithm is highly effective. As Table 2 shows, the transition matrix \(\mathbf{A}\) is likewise estimated with high accuracy. As for the nonstationary domain indices \(c_{t}\), even though no identifiability result governs their estimation accuracy, they can still be inferred well, since this is simply a decoding problem in Hidden Markov Models.

### Real-world Applications

**Video data - Modified CartPole Environment** We evaluate NCTRL on the modified CartPole [23] video dataset and compare its performance with the baselines. Modified CartPole is a nonlinear dynamical system with cart positions \(x_{t}\) and pole angles \(\theta_{t}\) as the true state variables. The dataset descriptions are in Appendix B.5.
Similar to the synthetic datasets, we randomly initialize a Markov chain, roll out a series of \(c_{t}\), and configure the CartPole environment with respect to \(c_{t}\). Specifically, we use five domains with different configurations of cart mass, pole mass, gravity, and noise levels, together with the two discrete actions (i.e., left and right). By doing so, the nonstationarity is enforced, and since we can control and access all intermediate states of the system, all metrics, including MCC and \(c_{t}\) accuracy together with the MSE for \(\mathbf{A}\), can be easily calculated. We fit NCTRL with two-dimensional causal factors, setting the latent size \(n=2\) and the lag number \(L=2\). In Fig. 3, the latent causal processes are recovered, as seen from (a) the high MCC for the latent causal processes; (b) the latent factors being estimated up to component-wise transformation; and (c) the latent traversals confirming that the two recovered latent variables correspond to the cart position and pole angle. As in Tables 1 and 2, we compare our NCTRL with the baseline methods. In addition, we also compare with SKD [24], a state-of-the-art sequential disentangled representation learning method without identifiability guarantees. In Tables 3 and 4 we can see that, compared with TDRL, our NCTRL can recover the latent processes under unknown nonstationary distribution shifts with high MCCs (>0.95), a highly accurate estimated transition matrix \(\mathbf{A}\), and high-quality inferred \(c_{t}\). The MCC for SKD is better than that of a variety of baselines; however, we can see the distinction between well-disentangled models and identifiable models: only models with identifiability can find the ground-truth latent variables with a theoretical guarantee.

Figure 3: Modified CartPole results: (a) MCC for causally-related factors; (b) scatterplots between estimated and true factors; and (c) latent traversal on a fixed video frame.

Table 2: Supplementary experiment results of the two synthetic datasets on estimating domain indices \(c_{t}\) and transition matrix \(\mathbf{A}\) in NCTRL. We ran the experiments with five different random seeds and report the average with standard deviation.

| **Dataset** | **Accuracy estimating \(c_{t}\)** | **MSE estimating \(\mathbf{A}\)** |
| --- | --- | --- |
| A | 89.96 \(\pm\) 0.24 | \(1.01\times 10^{-3}\pm 1.67\times 10^{-4}\) |
| B | 89.84 \(\pm\) 0.29 | \(1.08\times 10^{-3}\pm 1.89\times 10^{-4}\) |

Table 3: Experiment results on the CartPole dataset (MCC). The best result is shown in **bold**.

| **BetaVAE** | **i-VAE** | **TCL** | **SlowVAE** | **SKD** | **TDRL** | **NCTRL** |
| --- | --- | --- | --- | --- | --- | --- |
| 57.54 | 60.14 | 65.07 | 63.16 | 73.24 | 85.26 | **96.06** |

Table 4: Supplementary experiment results of the CartPole dataset on estimating domain indices \(c_{t}\) and transition matrix \(\mathbf{A}\) in NCTRL. We ran the experiments with five different random seeds and report the average with standard deviation.

| **Accuracy estimating \(c_{t}\)** | **MSE estimating \(\mathbf{A}\)** |
| --- | --- |
| 79.23 \(\pm\) 5.33 | \(5.01\times 10^{-2}\pm 1.23\times 10^{-2}\) |

**Video data - MoSeq Dataset** We apply the NCTRL framework to analyze mouse behavior video data from Wiltschko et al. [19], the original application of clustering mouse behavior3; details of this dataset are available in Appendix B.6. Since there are no ground-truth independent components in this real-world dataset, we analyze it through several visualizations to see whether different domains can be properly identified and whether the patterns in the recovered independent components are consistent with the recovered domain indices.
We analyze the first video clip of the mouse behavior data and visualize the two phases we discovered and segmented in Fig 4. We can clearly see from Fig 4 that there are two different phases, the upper one with the mouse actively moving and the lower one with the mouse inactive. The recovered independent components show a pattern consistent with the recovered phase or domain indices, as shown in Fig 4.

Footnote 3: Dataset can be accessed via [https://dattalab.github.io/moseq2-website/index.html](https://dattalab.github.io/moseq2-website/index.html)

Figure 4: Result visualization of the MoSeq dataset. (Active, Inactive) show two representative video frames for the active and inactive phases, and (Independent Components) visualizes the discovered independent components, with the corresponding phases tagged in different colors.

## 6 Related Work

**Causal Discovery from Time Series** Understanding the causal structure in time-series data is pivotal in areas such as machine learning [1], econometrics [2], and neuroscience [3]. The bulk of the research in this realm emphasizes determining the temporal causal links among observed variables. The primary techniques employed are constraint-based methods [25], which use conditional independence tests to ascertain causal structures, and score-based methods [26, 27], where scores are utilized to guide a search procedure. Some researchers have also proposed combinations of these two approaches [28, 29]. Additionally, Granger causality [30] and its nonlinear adaptations [31, 32] have gained widespread acceptance in this context.

**Nonlinear ICA for Time Series** Recently, the significance of temporal structures and nonstationarities has been recognized for achieving identifiability within nonlinear ICA. Time-contrastive learning (TCL [9]) utilizes the independent sources principle, focusing on the variability across data segments. Permutation-based contrastive learning (PCL [10]) offers a learning approach that distinguishes true independent sources from shuffled ones under the uniformly dependent assumption. HMNLICA [14] integrates nonlinear ICA with an HMM to address non-stationarity without segmenting data manually. The i-VAE [12] approach employs VAEs to capture the true joint distribution between observed and auxiliary nonstationary domains, assuming an exponential-family conditional distribution. Recent advancements in nonlinear ICA for time series include LEAP [17], (i-)CITRIS [33; 34], and TDRL [18]. While LEAP introduces a novel condition emphasizing nonstationary noise, TDRL delves deeper into a non-parametric environment within a nonstationary context. In contrast, CITRIS recommends utilizing intervention target data for pinpointing latent causal factors, avoiding certain constraints but necessitating access to active interventions.

**Sequential Disentanglement** The majority of existing work on sequential disentanglement focuses on architectures based on dynamical variational autoencoders (VAEs) [35].
Early works [36; 37] separate dynamic factors from static factors using probabilistic methods. Auxiliary tasks with self-supervisory signals [38] were then introduced. C-DSVAE [39] utilized contrastive penalty terms with data augmentation to introduce additional inductive biases. In R-WAE [40], the Wasserstein distance was introduced to replace the KL divergence. To deal with video disentanglement, [41; 42] explored generative adversarial network (GAN) architectures, and [43] introduced a recurrent model with an adversarial loss. FAVAE [44] proposed a factorizing VAE, and [45] proposed to learn hierarchical features. Finally, SKD [24] introduced a spectral loss term that leads to structured Koopman matrices and disentanglement.

## 7 Conclusion and Discussion

**Conclusion.** In this paper, we first established an identifiability theory for general sequential data with nonstationary causally-related processes under unknown distribution shifts. We then presented NCTRL, a principled framework to recover the time-delayed latent causal variables, identify their causal relations from measured data, and decode high-quality domain indices under the Markov assumption. Experimental results on both synthetic datasets and real-world video datasets showed that our proposed method can recover the latent causal variables and their causal relations purely from measured data, without the observation of auxiliary variables or domain indices.

**Limitation.** The basic limitation of this work is that the nonstationary domain indices are assumed to follow a Markov chain. This work also relies heavily on the assumption that the latent processes have no instantaneous causal relations, only time-delayed influences; if the temporal resolution of the time series is much lower, this assumption is usually violated, and one has to find a way to deal with instantaneous causal relations. Extending our theory and framework to scenarios that allow more flexibility in the domain index transitions (i.e., beyond discrete variables following a Markov chain) and to address instantaneous dependencies or instantaneous causal relations will be part of our future work.

**Broader Impacts.** This work proposes a theoretical analysis and technical methods to learn causal representations from time-series data, which facilitates the construction of more transparent and interpretable models for understanding causal effects in the real world. This could be beneficial in a variety of sectors, including healthcare, finance, and technology. On the other hand, misinterpretations of causal relationships could also have significant negative implications in these fields, so such analyses must be carried out carefully to avoid unfair or biased predictions.

## 8 Acknowledgment

This project has been graciously funded by NGA HM04762010002, NSF IIS1955532, NSF CNS2008248, NIGMS R01GM140467, NSF IIS2123952, NSF BCS2040381, an Amazon Research Award, NSF IIS2311990, and DARPA ECOLE HR00112390063. This project is also partially supported by NSF Grant 2229881, the National Institutes of Health (NIH) under Contract R01HL159805, a grant from Apple Inc., a grant from KDDI Research Inc., and generous gifts from Salesforce Inc., Microsoft Research, and Amazon Research.
2306.14911
"You might think about slightly revising the title": identifying hedges in peer-tutoring interactions
Hedges play an important role in the management of conversational interaction. In peer tutoring, they are notably used by tutors in dyads (pairs of interlocutors) experiencing low rapport to tone down the impact of instructions and negative feedback. Pursuing the objective of building a tutoring agent that manages rapport with students in order to improve learning, we used a multimodal peer-tutoring dataset to construct a computational framework for identifying hedges. We compared approaches relying on pre-trained resources with others that integrate insights from the social science literature. Our best performance involved a hybrid approach that outperforms the existing baseline while being easier to interpret. We employ a model explainability tool to explore the features that characterize hedges in peer-tutoring conversations, and we identify some novel features as well as the benefits of such a hybrid model approach.
Yann Raphalen, Chloé Clavel, Justine Cassell
2023-06-18T12:47:54Z
http://arxiv.org/abs/2306.14911v1
"You might think about slightly revising the title": Identifying Hedges in Peer-tutoring Interactions

###### Abstract

Hedges play an important role in the management of conversational interaction. In peer-tutoring, they are notably used by tutors in dyads (pairs of interlocutors) experiencing low rapport to tone down the impact of instructions and negative feedback. Pursuing the objective of building a tutoring agent that manages rapport with students in order to improve learning, we used a multimodal peer-tutoring dataset to construct a computational framework for identifying hedges. We compared approaches relying on pre-trained resources with others that integrate insights from the social science literature. Our best performance involved a hybrid approach that outperforms the existing baseline while being easier to interpret. We employ a model explainability tool to explore the features that characterize hedges in peer-tutoring conversations, and we identify some novel features as well as the benefits of such a hybrid model approach.

## 1 Introduction

Rapport, most simply defined as the "... relative harmony and smoothness of relations between people..." (Spencer-Oatey, 2005), has been shown to play a role in the success of activities as varied as psychotherapy (Leach, 2005) and survey interviewing (Lune and Berg, 2017). In peer-tutoring, rapport, as measured by the annotation of thin slices of video, has been shown to be beneficial for learning outcomes (Zhao et al., 2014; Sinha and Cassell, 2015). The level of rapport rises and falls with conversational strategies deployed by tutors and tutees at appropriate times, and as a function of the content of prior turns. These strategies include self-disclosure, referring to shared experience, and, on the part of tutors, giving instructions in an indirect manner. Some work has attempted to automatically detect these strategies in the service of intelligent tutors (Zhao et al., 2016a), but only a few strategies have been attempted. Other work has concentrated on a "social reasoning module" (Romero et al., 2017) to decide which strategies should be generated in a given context, but indirectness was not among the strategies targeted. In this paper, we focus on the automatic classification of one specific strategy that is particularly important for the tutoring domain, and therefore for intelligent tutors: hedging, a sub-part of indirectness that "softens" what we say. This work is part of a larger research program with the long-term goal of automatically generating indirectness behaviors for a tutoring agent. According to Brown and Levinson (1987), hedges are among the linguistic tools that interlocutors use to produce politeness, by limiting the face threat to the interlocutor (basically, by limiting the extent to which the interlocutor might experience embarrassment because of some kind of poor performance). An example is "that's _kind of_ a wrong answer". Hedges are also found when speakers wish to avoid losing face themselves, for example when saying "_I think_ I _might_ have to add 6.". Madaio et al. (2017) found that in a peer-tutoring task, when rapport between interlocutors was low, tutees attempted more problems and correctly solved more problems when their tutors hedged instructions, which likewise points towards a "mitigation of face threat" function.

Figure 1: A mock conversation displaying each type of hedged formulation.
Hedges can also be associated with a nonverbal component, for example averted eye gaze during criticism (Burgoon and Koper, 1984). Hedges are not, however, always appropriate, as in "I _kind of think_ it's raining today." when the interlocutors can both see rain (although it might be taken as humorous). These facts about hedges motivate a way to automatically detect them and, ultimately (although not in the current work), also generate them. In both cases we first have to be able to characterize them using interpretable linguistic features, which is what we address in the current paper. Thus, in the work described here, based on linguistic descriptions of hedges (Brown and Levinson, 1987; Fraser, 2010), we built a rule-based classifier. We show that this classifier, in combination with additional multimodal, interpretable, context-dependent features, significantly improves the performance of a machine learning model for hedges, compared to a less interpretable deep learning baseline from Goel et al. (2019) using word embeddings. We also relied on a machine learning model explanation tool (Lundberg and Lee, 2017) to investigate the linguistic features related to hedges in the context of peer-tutoring, primarily to see if we could discover surprising features that the classification model would associate with hedges in this context; we describe those below. The code of the models described in the paper is also provided.1

Footnote 1: [https://github.com/AnonymousHedges/HedgeDetection](https://github.com/AnonymousHedges/HedgeDetection)

## 2 Related work

**Hedges:** According to Fraser (2010), hedging is a rhetorical strategy that attenuates the strength of a statement. One way to produce a hedge is by altering the full semantic value of a particular expression through **Propositional hedges** (also called **Approximators** in Prince et al. (1982)), as in "You are _kind of_ wrong," which reduce prototypicality (i.e., the accuracy of the correspondence between the proposition and the reality that the speaker seeks to describe). Propositional hedges are related to fuzzy language (Lakoff, 1975), and therefore to the production of vagueness (Williamson, 2002) and uncertainty (Vincze, 2014). A second kind are **Relational hedges** (also called **Shields** in Prince et al. (1982)), such as "_I think that_ you are wrong." or "_The doctor wants you_ to stop smoking.", conveying that the proposition is considered by the speaker to be subjective. In a further sub-division, **Attribution Shields**, as in "The doctor _wants you..._", the involvement of the speaker in the truth value of the proposition is not made explicit, which allows speakers not to take a stance. As described above, Madaio et al. (2017) found that tutors who showed lower rapport with their tutees used more hedged instructions (they also employed more positive feedback); however, this was only the case for tutors with a greater belief in their ability to tutor. Tutees in this context solved more problems correctly when their tutors hedged instructions. No effect of hedging was found for dyads (pairs of interlocutors) with greater social closeness. However, the authors did not look at the specific linguistic forms these teenagers used.
Rowland (2007) also describes the role that hedging plays in this age group, showing that students use both relational ("_I think that_ John is smart.") and propositional ("John is _kind of_ smart.") hedges for much the same shielding function of demonstrating uncertainty, to save them from the risk of embarrassment if they are wrong. The author observed that teens used few **Adaptors** (_kind of_, _somewhat_) and preferred to use **Rounders** (_around_, _close to_). However, this study was performed with an adult and two children, possibly biasing the results due to the participation of the adult investigator. Hedges have been included in virtual tutoring agents before now: Howard et al. (2015) integrated hedges into a tutoring agent for undergraduates in CS, as a way to encourage the student to take the initiative. Hedges have also been used as a way of integrating Brown and Levinson's politeness framework (Wang et al., 2008; Schneider et al., 2015) into virtual tutoring agents. Results were not broken out by strategy, but politeness in general was shown to positively influence motivation and learning, under certain conditions.

**Computational methods for hedge detection:** A number of studies have targeted the detection of hedges and uncertainty in text (Medlock and Briscoe, 2007; Ganter and Strube, 2009; Tang et al., 2010; Velldal, 2011; Szarvas et al., 2012), particularly following the CoNLL 2010 dataset release (Farkas et al., 2010). However, this work is less relevant to hedges in conversation, as it focuses on a formal, academic language register (Hyland, 1998; Varttala, 1999). As noted by Prokofieva and Hirschberg (2014), the functions of hedges are domain- and genre-dependent, so this bias towards formality implies that the existing work may not adapt well to the detection of hedges in conversations between teenagers. A consequence is that the existing work does not consider terms like "I think," since opinions rarely appear in academic writing datasets. Instructions are also almost absent ("I think you have to add ten to both sides."), a strong limitation for the study of conversational hedges, since it is in requests (including tutoring instructions) that indirect formulations mostly occur, according to Blum-Kulka (1987). Prokofieva and Hirschberg (2014) also note that it is difficult to detect hedges because the word patterns associated with them have other semantic and pragmatic functions: comparing "I think that you have to add x to both sides." with "I think that you are an idiot.", it is not clear that the second use of "I think that" is a hedge marker. They advocate using machine learning approaches to deal with the ambiguity of these markers. Working on a conversational dataset, Ulinski et al. (2018) built a computational system to assess speaker commitment (i.e., the degree to which the speaker seems convinced of the truth value of a statement), in particular by relying on a rule-based detection system for hedges. Compared to that work, our rule-based classification model directly detects hedge classes, and we employ the predictions of the rule-based model as a feature in stronger machine learning models designed to lessen the impact of the imbalance between classes. We also consider **apologies** when they serve a mitigation function (we then call them **Apologizers**), as was done by the authors of our corpus, and we also use the term **Subjectivizers** as defined below, to be able to compare directly with the previous work carried out on this corpus.
As far as we know, only Goel et al. (2019) have worked with a peer-tutoring dataset (the same one that we also use), and they achieved their best classification result by employing an Attention-CNN model, inspired by Adel and Schütze (2017).

## 3 Problem statement

We consider a set \(D\) of conversations \(D=(c_{1},c_{2},...,c_{|D|})\), where each conversation is composed of a sequence of independent syntactic clauses \(c_{i}=(u_{1},u_{2},...,u_{M})\), where \(M\) is the number of clauses in the conversation. Note that two consecutive clauses can be produced by the same speaker. Each clause is associated with a unique label corresponding to one of the hedge classes described in Table 1: \(y_{i}\in C\) = {**Propositional hedges**, **Apologizers**, **Subjectivizers**, **Not hedged**}. Finally, a clause \(u_{i}\) can be represented as a vector of features \(X=(x_{1},x_{2},...,x_{N})\), where \(N\) is the number of features we used to describe a clause. Our first goal is to design a model that correctly predicts the label \(y_{i}\) associated with \(u_{i}\). It can be understood as the following research question:

**RQ1:** "Which models and features can be used to automatically characterize hedges in a peer-tutoring interaction?"

Our second goal is to identify, for each hedge class, the set of features \(F_{class}=\{f_{k}\}\), \(k\in[1,N]\), sorted by feature importance in the classification of \(class\). It corresponds to the following research question:

**RQ2:** "What are the most important linguistic features that characterize our hedge classes in a peer-tutoring setting?"

## 4 Methodology

### Corpus

**Data collection:** The dialogue corpus used here was collected as part of a larger study on the effects of rapport-building on reciprocal peer tutoring. 24 American teenagers (mean age = 13.5, min = 12, max = 15), half male and half female, came to a lab where half of the participants were paired with a same-age, same-gender friend, and the other half were paired with a stranger. The participants were assigned to a total of 12 dyads, in which they alternated tutoring one another in linear algebra equation solving for 5 weekly hour-long sessions, for a total corpus of nearly 60 hours of face-to-face interactions. Each session was structured such that the students engaged in brief social chitchat in the beginning; then one of the students was randomly assigned to tutor the other for 20 minutes. They then engaged in another social period, and concluded with a second tutoring period in which the other student was assigned the role of tutor. Audio and video data were recorded, transcribed, and segmented for clause-level dialogue annotation, providing nearly 24,000 clauses. Non-speech segments (notably fillers and laughter) were maintained. Because of temporal misalignment in parts of the corpus, many paraverbal phenomena, such as prosody, were unfortunately not available to us. Since our access to the dataset is covered by a Non-Disclosure Agreement, it cannot be released publicly. However, the original experimenters' Institutional Review Board (IRB) approval allows us to view, annotate, and use the data to train models. It also allows us to provide a link to a pixelated video example in the GitHub repository of the project2.

Footnote 2: [https://github.com/AnonymousHedges/HedgeDetection](https://github.com/AnonymousHedges/HedgeDetection)

**Data annotation:** The dataset was previously annotated by Madaio et al.
(2017), following an annotation manual that used hedge classes derived from Rowland (2007) (see Table 1). Only the task periods of the interactions were annotated. Comparing the annotations with the classes mentioned in the related work section, **Subjectivizers** correspond to the **Relational hedges** of Fraser (2010), while **Propositional hedges** and **Extenders** correspond to the **Approximators** of Prince et al. (1982), with the addition of some discourse markers such as _just_. **Apologizers** are mentioned as linguistic tools related to negative politeness in Brown and Levinson (1987). Krippendorff's alpha obtained for this corpus, annotated by four coders, was over 0.7 for all classes (denoting an acceptable inter-coder reliability according to Krippendorff (2004)). The dataset is widely imbalanced, with more than 90% of the utterances belonging to the **Not hedged** class. In reviewing the corpus and the annotation manual, however, we noticed two issues. First, the annotation of the **Extenders** class was inconsistent, leading to the **Extenders** and **Propositional hedges** classes carrying similar semantic functions. We therefore merged the two classes, grouping utterances labeled as **Extenders** and those labeled as **Propositional hedges** under the heading of **Propositional hedges**. Second, the annotation of clauses containing the tokens "just" and "would" (two terms occurring frequently in the dataset that are key components of **Propositional hedges** and **Subjectivizers** but that are not in fact hedges in all cases) was also inconsistent, leading to virtually all clauses with those two tokens being considered hedges. We therefore re-considered all the clauses associated with any of the hedge classes, as well as all the clauses in the **Not hedged** class that contained "just" or "would". The re-annotation was carried out by two annotators who achieved a Krippendorff's alpha inter-rater reliability of .9 or better for **Apologizers**, **Subjectivizers**, and **Propositional hedges** before independently re-annotating the relevant clauses. An example of a re-annotation was removing "I _would_ kill you!" from the hedge classes.

### Features

**Label from rule-based classifier (Label RB):** We use the class label predicted by the rule-based classifier described in Section 4.3 as a feature. Our hypothesis is that the machine learning model can use this information to counterbalance the class imbalance. To take into account the fact that some rules are more reliable than others, we weighted the class label produced by the rule-based model by the precision of the rule that generated it.

**Unigram and bigram:** We count the number of occurrences of corpus unigrams and bigrams in each clause. We used the lemmas of the words for unigrams and bigrams, using the NLTK lemmatizer (Loper, 2002), and selected unigrams and bigrams that occurred at least fifty times in the training dataset. The goal was to investigate, with a bottom-up approach, to what extent the use of certain words characterizes hedge classes in tutoring. In Section 5 we examine the overlap between these words and those identified _a priori_ by the rules.

**Part-of-speech (POS):** Hedge classes seem to be associated with different syntactic patterns: for example, Subjectivizers most often contain a personal pronoun followed by a verb, as in "I guess", "I believe", "I think". We therefore considered the number of occurrences of POS-tag n-grams (n=1, 2, 3) as features.
We used the spaCy POS-tagger and considered POS unigrams, bigrams and trigrams that occur at least 10 times in the training dataset.

**LIWC:** Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015) is standard software for extracting the count of words belonging to specific psycho-social categories (_e.g._, emotions, religion). It has been successfully used in the detection of conversational strategies (Zhao et al., 2016). We therefore count the number of occurrences of all 73 categories from LIWC.

**Tutoring moves (TM):** Intelligent tutoring systems rely on specific tutoring moves to successfully convey content (as do human tutors). We therefore looked at the link between the tutoring moves, as annotated in Madaio et al. (2017), and hedges. For tutors, these moves are (1) instructional directives and suggestions, (2) feedback, and (3) affirmations, mostly explicit reflections on their partners' comprehension, while for tutees, they are (1) questions, (2) feedback, and (3) affirmations, mostly tentative answers.

**Nonverbal and paraverbal behaviors:** As in Goel et al. (2019), we included the nonverbal and paraverbal behaviors that are related to hedges. Specifically, we consider laughter and smiles, which have been shown to be effective methods of mitigation (Warner-Garcia, 2014), cut-offs indicating self-repairs, fillers like "Um", gaze shifts (annotated as 'Gaze at Partner', 'Gaze at the Math Worksheet', and 'Gaze elsewhere'), and head nods. Each feature was present twice in the feature vector, once for each interlocutor. Inter-rater reliability for nonverbal behavior (as measured by Krippendorff's alpha) was 0.89 for eye gaze, 0.75 for smile count, 0.64 for smile duration and 0.99 for head nods. Laughter is also reported in the transcript at the word level. We separate the tutor's behaviors from those of the tutee. The collection process for these behaviors is detailed further in Zhao et al. (2016). The clause-level feature vector was normalized by the length of the clause (except for the rule-based label). This length was also added as a feature. Table 3 presents an overview of the final feature vector.

### Classification models

The classification models used are presented here according to their level of integration of external linguistic knowledge.

**Rule-based model:** On the basis of the annotation manual used to construct the dataset from Madaio et al. (2017), and with descriptions of hedges from Rowland (2007), Fraser (2010) and Brown and Levinson (1987), we constructed a rule-based classifier that matches regular expressions indicative of hedges. The rules are detailed in Table 7 in the Appendix.

**LGBM:** Since hedges are often characterized by explicit lexical markers, we tested the assumption that a machine learning model with a knowledge-driven representation of clauses could compete with a BERT model in performance, while being much more interpretable. We relied on LightGBM, an ensemble of decision trees trained with gradient boosting (Ke et al., 2017). This model was selected because of its performance with small training datasets and its ability to ignore uninformative features, but also for its training speed compared to alternative implementations of gradient boosting.
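To make the hybrid rule-plus-gradient-boosting pipeline concrete, here is a minimal illustrative sketch. It is not the authors' code: the regular expressions are simplified stand-ins for their Table 7, the feature set is truncated, and the precision weighting of the rule label is omitted.

```python
import re
import numpy as np
import lightgbm as lgb

# Hypothetical, heavily simplified rules; the paper's actual rule set
# (its Table 7) is larger and class-specific.
CLASSES = ["Not hedged", "Propositional hedges", "Apologizers", "Subjectivizers"]
RULES = {
    "Subjectivizers": re.compile(r"\bi\s+(think|guess|believe|feel)\b", re.I),
    "Apologizers": re.compile(r"\b(sorry|my bad)\b", re.I),
    "Propositional hedges": re.compile(r"\b(just|actually|kind of|sort of|probably)\b", re.I),
}

def rule_label(clause: str) -> int:
    """Index of the first matching hedge class, defaulting to 'Not hedged'."""
    for name, pattern in RULES.items():
        if pattern.search(clause):
            return CLASSES.index(name)
    return 0

def features(clause: str, vocab: list) -> np.ndarray:
    """Rule label + length-normalized unigram counts + clause length.
    POS, LIWC, tutoring-move and nonverbal counts would be appended here."""
    tokens = clause.lower().split()
    counts = [tokens.count(w) / max(len(tokens), 1) for w in vocab]
    return np.array([rule_label(clause)] + counts + [len(tokens)])

# X = np.stack([features(c, vocab) for c in clauses]); y = class indices
# clf = lgb.LGBMClassifier(objective="multiclass", num_class=4,
#                          class_weight="balanced")  # mitigates class imbalance
# clf.fit(X, y)
```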
**Multi-layer perceptron (MLP):** As a simple baseline, we built a multi-layer perceptron using three sets of features: a pre-trained contextual representation of the clause (SentBERT; Reimers and Gurevych, 2019); the concatenation of this contextual representation and the rule-based label (not relying on the previous clauses); and finally the concatenation of all the features mentioned in Section 4.2, without the contextualized representation.

**LSTM over a sequence of clauses:** Since we are working with conversational data, we also wanted to test whether taking into account the previous clauses helps to detect the type of hedge class of the next clause. Formally, we want to infer \(y_{i}\) using \(y_{i}=\arg\max_{y\in Classes}P(y|X(u_{i}),X(u_{i-1}),...,X(u_{i-K}))\), where \(K\) is the number of previous clauses that the model takes into account. The MLP model presented above infers \(y_{i}\) using \(y_{i}=\arg\max_{y\in Classes}P(y|X(u_{i}))\); therefore, a difference in performance between the two models would be a sign that using information from the previous clauses can help to detect the hedged formulation in the current clause. We tested an LSTM model with the same representations for clauses as for the MLP model (see the sketch after the tables below).

**CNN with attention:** Goel et al. (2019) established their best performance on hedge detection using a CNN model with additive attention over word (not clause) embeddings. Contrary to the MLP and LSTM models mentioned above, this model infers \(y_{i}\) using \(y_{i}=\arg\max_{y\in Classes}P(y|g(w_{0}),g(w_{1}),...,g(w_{L}))\), with \(L\) the maximum clause length we allow and \(g\) a function that turns each word \(w_{j},\;j\in[0,L]\), into a vector representation (for more details, please see Adel and Schutze (2017)).

\begin{table} \begin{tabular}{l l l} \hline \hline Class & Definition & Example \\ \hline Subjectivizers & Words that reduce intensity or certainty & “So then I would divide by two.” \\ Apologizers & Apologies used to soften direct speech acts & “Oh sorry six b.” \\ Propositional hedges & Qualifying words to reduce intensity or certainty of utterances & “It’s actually eight.” \\ Extenders & Words used to indicate uncertainty by referring to vague categories & “It’ll be the number x or whatever variable you have.” \\ \hline \hline \end{tabular} \end{table} Table 1: Definition of the classes.

Table 2: Distribution of the classes.

\begin{table} \begin{tabular}{l c c} \hline \hline Features name & Automatic extraction & Vector size \\ \hline Rule-based label & Yes & 4 \\ Unigram & Yes & \(\sim\)250 \\ Bigram & Yes & \(\sim\)250 \\ POS & Yes & \(\sim\)1200 \\ LIWC & Yes & 73 \\ Nonverbal & No & 24 \\ Tutoring moves & No & 6 \\ Total & & \(\sim\)1800 \\ \hline \hline \end{tabular} \end{table} Table 3: List of automatically extracted and manually annotated features with their size.
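A minimal sketch of the clause-sequence LSTM just described, assuming PyTorch and the clause feature vectors \(X(u)\) of Section 4.2; the class name and dimensions are ours, not the paper's.

```python
import torch
import torch.nn as nn

class ClauseLSTM(nn.Module):
    """Predicts the hedge class of clause u_i from the window (u_{i-K}, ..., u_i)."""
    def __init__(self, feat_dim: int, hidden: int = 128, n_classes: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, K + 1, feat_dim)
        out, _ = self.lstm(x)            # out: (batch, K + 1, hidden)
        return self.head(out[:, -1, :])  # classify from the state at u_i

# window = torch.stack([features of u_{i-K}, ..., u_i]).unsqueeze(0)
# logits = ClauseLSTM(feat_dim=window.shape[-1])(window)
```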
**BERT:** To benefit from deep semantic and contextual representations of the utterances, we also fine-tuned BERT (Devlin et al., 2019) on our classification task. BERT is a pre-trained Transformer encoder (Vaswani et al., 2017) that has significantly improved the state of the art on a number of NLP tasks, including sentiment analysis. It produces a contextual representation of each word in a sentence, making it capable of disambiguating the meaning of words like "think" or "just" that are representative of certain classes of hedges. BERT, however, is notably hard to interpret.

### Analysis tools

Looking at which features improve the performance of our classification models tells us whether these features are informative or not, but does not explain how these features are used by the models to make a given prediction. We therefore produced a complementary analysis using an interpretability tool. As demonstrated by Lundberg and Lee (2017), LightGBM's internal feature importance scores are inconsistent with both the model behavior and human intuition, so we instead used a model-agnostic tool. SHAP (Lundberg and Lee, 2017) assigns to each feature an importance value (called a Shapley value) for a particular prediction, depending on the extent of its contribution (a detailed introduction to Shapley values and SHAP can be found in Molnar (2020)). SHAP is a model-agnostic framework; the values associated with a set of features can therefore be compared across models. It should be noted that SHAP produces explanations on a case-by-case basis, so it can provide both local and global explanations. For the gradient boosting model, we use an adapted version of SHAP (Lundberg et al., 2018), called TreeSHAP.

## 5 Experiments and results

### Experimental setting

To detect the best set of features, we used LightGBM and proceeded incrementally, each time adding the group of features we thought most likely to be associated with hedges. We did not consider the risk of relying on a sub-optimal set of features through this procedure, because of the strong ability of LightGBM to ignore uninformative features. We use this incremental approach as a way to test our intuitions about the predictive contribution of groups of features (_i.e._, whether adding a feature group improves the performance of the model) with regard to the classification task. To compare our models, we trained them on the 4-class task, and looked at the average of the weighted F1-scores for the three hedge classes (_i.e._, how well the models infer the minority classes), reported here as "3-classes", and at the average of the weighted F1-scores for the 4 classes, reported as "4-classes". Details of the hyperparameters and experimental settings are provided in Appendix A.

### Model comparison and feature analysis

**Overall results:** Table 4 presents the results obtained by the 6 models presented in Section 4.3 for the multi-class problem. The best performance (F1-score of 79.0) is obtained with LightGBM leveraging almost all the features. In the appendix (see Table 8 and Table 9) we indicate the confidence intervals to represent the significance of the differences between the models. First, and perhaps surprisingly, we notice that the use of knowledge-driven features based on rules built from linguistic knowledge of hedges in the LightGBM model outperforms the use of pre-trained embeddings within a fine-tuned BERT model (79.0 vs. 70.6) and the neural baseline from Goel et al. (2019) (79.0 vs. 64.5).
The low scores obtained by the LGBM, LSTM and MLP models with pre-trained sentence embeddings, versus knowledge-driven features, might signal that the word patterns characterizing hedges are not salient in these representations (i.e., the distance between "_I think_ you should add 5." and "You should add 5." is short). KD features seem to provide better separability of the classes. The combination of KD features and pre-trained embeddings does not significantly improve the performance of the models compared to the KD features only, which suggests that the information from the pre-trained embeddings is redundant with that from the KD features. This result may also be due to the high dimensionality of the input vector (868 with PCA on the KD features; 2500 otherwise). A second finding is that the use of gradient boosting models on top of rule-based classifiers better models the hedge classes. The other machine learning models did not prove to be as effective, except for BERT.

**Feature analysis using LightGBM:** Using the best performing model, Table 5 shows the role of each feature set in the prediction task. The significance of the differences is shown in Table 10 and Table 11. Compared to the rule-based model, the introduction of n-grams significantly improved the performance of our classifier, suggesting that some lexical and syntactic information describing the hedge classes was not present in the rule-based model. Looking at Table 5, we do not observe significant differences between the LGBM model using only the rule-based label plus 1-grams and 2-grams and the models incorporating more features. To our surprise, neither the tutoring moves nor the nonverbal features significantly improved the performance of the model. These two feature sets were included to index the specific peer-tutoring context of these hedges, so this indicates that in future work we might wish to apply the current model to another context of use, to see if this model of hedges is more generally applicable than we originally thought. Combining this result with the increased performance of the model using knowledge-driven (_i.e._, explicit) features compared to pre-trained embeddings, it would seem that hedges are above all a lexical phenomenon (_i.e._, produced by specific lexical elements).

### In-depth analysis of the informative features

We trained the SHAP explanation models on LightGBM with all features. The most informative features (in absolute value) for each class are shown in Table 6, and the plots by class are presented in the Appendix. The most important features appear to be the rule-based labels, which appear in at least the fourth position for three classes (see Table 6), and in first position for the **Propositional hedges** and **Not hedged** classes. Surprisingly, the rule-based label does not appear in the top 20 features for **Apologizers**. However, given that the class rarely appears in the data, the rules seldom activate, so the feature may simply be informative for a very small number of clauses. Unigrams (_Oh_, _Sorry_, _just_, _Would_, and _I_) are also present in the 5 top-ranked features. This confirms the findings mentioned in the related work for the characterization of the different hedge classes (_just_ with **Propositional hedges**, _sorry_ with **Apologizers**, _I_ with **Subjectivizers**). The presence of _Oh_ also has high importance for the characterization of **Apologizers** (n=2), as illustrated in examples such as "_Oh sorry_, that's nine.".
We note that occurrences of "_Oh sorry_" as a stand-alone clause were excluded by our rule-based model because they do not correspond to an Apologizer (they cannot mitigate the content of a proposition if there is no associated proposition). This example illustrates the interest of a machine learning approach for disambiguating the function of conventional non-propositional phrases like "_Oh sorry_". In addition, SHAP highlights the importance of novel features whose function was not identified in the hedges literature: _(i)_ what LIWC classifies as **informal words**, but which are mostly interjections like _ah_ and _oh_, are strongly associated with **Apologizers**, as are disfluencies (n=12); _(ii)_ the use of **POS tags** seems to be very relevant for characterizing the different classes: 2-grams of POS-tag features occur in the top-ranked features of all the classes (see Figures in the Appendix). It means that there are some recurring syntactic patterns in each class; _(iii)_ regarding **utterance size**, a clause shorter than the mean is weakly associated with **directness** (n=17), while a longer clause suggests that it contains a **Subjectivizer** (n=6). **Apologizers** are characterized by a mean clause length (n=5), with few variations from it; _(iv)_ tutoring moves are not strong predictors of any class: "Affirmation from tutor" is the only such feature, appearing as a predictor of **Propositional hedges** (n=20). This is consistent with the feature analysis in Table 5, suggesting that tutoring moves do not significantly improve the performance of the classifier; _(v)_ **nonverbal behaviors** do not appear as important features for the classification. This is coherent with results from Goel et al. (2019). Note that prosody might play a role in detecting instructions that trail off but, as described above, paraverbal features were not available; _(vi)_ _Would_ plays an important role in the production of hedges, as it is strongly associated with **Propositional hedges** (n=2). It is interesting to note that, when designing the rule-based classifier, we saw its performance decrease when we started to include _would_ in our regular expression patterns, probably because the form is hard to disambiguate for a deterministic system.

\begin{table} \begin{tabular}{l|c c c} \hline \hline Models & KD features & Pre-trained embeddings (PTE) & KD + PTE \\ \hline Rule-based (3-classes) & 67.6 & -- & -- \\ MLP (3-classes) & 68.5 (1.6) & 35.8 (3.1) & 64.8 (1.1) \\ Attention-CNN (3-classes) & -- & 64.5 (3.0) & -- \\ LSTM (3-classes) & 65.1 (5.7) & 39.8 (6.0) & 65.2 (5.1) \\ BERT (3-classes) & -- & 70.6 (2.3) & -- \\ LGBM (3-classes) & **79.0 (1.3)** & 35.0 (2.2) & **70.1 (1.4)** \\ \hline Rule-based (4-classes) & 94.7 & -- & -- \\ MLP (4-classes) & 94.8 (0.3) & 89.7 (0.4) & 93.9 (0.4) \\ Attention-CNN (4-classes) & -- & 94.4 (0.2) & -- \\ LSTM (4-classes) & 93.9 (1.4) & 89.1 (1.1) & 94.1 (1.2) \\ BERT (4-classes) & -- & 94.9 (0.4) & -- \\ LGBM (4-classes) & **96.7 (0.2)** & 91.0 (0.2) & **95.4 (0.2)** \\ \hline \hline \end{tabular} \end{table} Table 4: Averaged weighted F1-scores (and standard deviations) for the three minority classes and for the 4 classes, for all models. "KD" stands for "Knowledge-Driven", meaning that the features are derived from lexicons, n-gram models and annotations; "PTE" stands for pre-trained embeddings.
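For reference, per-class rankings of the kind shown in Table 6 can be produced with a few lines of TreeSHAP. This is a minimal sketch, not the authors' code; the variable names are ours, and the exact shape returned by `shap_values` varies across shap versions (older versions return one array per class for multiclass models, which is what is assumed here).

```python
import numpy as np
import shap

# `clf` is the trained LightGBM classifier and `X` the clause-level feature
# matrix; `feature_names` lists the ~1800 feature labels (rule label, n-grams, ...).
explainer = shap.TreeExplainer(clf)     # TreeSHAP, adapted to tree ensembles
shap_values = explainer.shap_values(X)  # assumed: list of (n_samples, n_features)
                                        # arrays, one per class

for c, cls in enumerate(CLASSES):
    # Global importance per class: mean |Shapley value| over the dataset.
    importance = np.abs(shap_values[c]).mean(axis=0)
    top5 = np.argsort(importance)[::-1][:5]
    print(cls, [feature_names[j] for j in top5])
```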
While exploring the Shapley values associated with each clause, we observed that features like tutoring moves are extremely informative for a very small number of clauses (therefore not significantly influencing the overall performance of the prediction), and more or less uninformative for the rest. Inferring the global importance of a feature as a mean across the Shapley values in the dataset may not be the only way to explore the behavior of gradient boosting methods. It might be more useful to cluster clauses based on the importance that SHAP gives to a feature in their classification, as this could help discover sub-classes of hedges that are differentiated from the rest by their interaction with a specific feature (in the way that some **Apologizers** are characterized by an "oh"). We also note that the explanation model is sensitive to spurious correlations in the dataset, caused by the small representation of some classes: for example, "nine" (n=7) and "four" (n=20) are positive predictors of **Apologizers**.

\begin{table} \begin{tabular}{l|c c c c c c} \hline \hline Models & Label RB & + 1-gram and 2-gram & + POS & + LIWC & + TM & + Nonverbal \\ \hline 3-classes & 68.8 (0.8) & 78.2 (1.6) & 78.1 (1.3) & 79.0 (1.3) & 78.5 (2.4) & 78.7 (1.8) \\ 4-classes & 95.0 (0.2) & 96.5 (0.3) & 96.5 (0.2) & 96.7 (0.2) & 96.6 (0.4) & 96.7 (0.3) \\ \hline \hline \end{tabular} \end{table} Table 5: Averaged weighted F1-scores for the three classes of hedges and for the four classes, with an additive integration of KD features in the LightGBM model. The standard deviation is computed across five folds.

\begin{table} \begin{tabular}{l l l l l} \hline \hline Rank & Apologizers & Subjectivizers & Prop. hedges & Not hedged \\ \hline 1 & Function words (LIWC) & ”I” & Class label & Class label \\ 2 & ”Oh” (LIWC) & ”Yeah” & ”Would” & ”Would” \\ 3 & ”Sorry” & Noun (POS) & ”Just” & ”Yeah” \\ 4 & Affect (LIWC) & Class label & Function word (LIWC) & Noun (POS) \\ 5 & Clause length & Cognitive process (LIWC) & Netspeak (LIWC) & Cognitive process (LIWC) \\ \hline \hline \end{tabular} \end{table} Table 6: Most important clause-level features for LightGBM according to the SHAP analysis.

## 6 Conclusion and future work

Through our classification performance experiments, we showed that it is possible to use machine learning methods to diminish the ambiguity of hedges, and that the hybrid approach of using rule-based label features derived from the social science (including linguistics) literature within a machine learning model helped significantly to increase the model's performance. Nonverbal behaviors and tutoring moves did not provide information at the sentence level; both the performance of the model and the feature contribution analysis suggested that their impact on the model output was not strong. This is consistent with results from Goel et al. (2019). However, in future work we would like to investigate the potential of multimodal patterns when we are able to better model sequentiality (_e.g._, negative feedback followed by a smile). Regarding the SHAP analysis, most of the features that are considered important are coherent with the definition of the classes (_I_ for Subjectivizers, _sorry_ for Apologizers, _just_ for Propositional hedges). However, we discovered that features like utterance size can also serve as indicators of certain classes of hedges.
A limitation of SHAP is that it makes a feature-independence assumption, which prompts the explanatory model to underestimate the importance of redundant features (like pronouns in our work). In the future we will explore explanatory models capable of taking into account the correlations between features in the dataset, like SAGE (Covert et al., 2020), but suited for very imbalanced datasets. In the domain of peer-tutoring, we would like to further test the link between hedges and rapport, and the link between hedges and learning gains in the subject being tutored. As noted above, this kind of study requires fine-grained control of the language produced by one of the interlocutors, which is difficult to achieve in a human-human experiment. We note that the hedge classifier can be used not just to classify, but also to work towards improving the generation of hedges for tutor agents. In future work we will explore using the classifier to re-rank generation outputs, taking advantage of the recurring syntactic patterns (see _(ii)_ in Section 5.3) to improve the generation process of hedges, and re-generating clauses that do not contain one of these syntactic patterns.

## Acknowledgments

Many thanks to members of the ArticuLab at INRIA Paris for their precious assistance. This work was supported in part by the French government under the management of the Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute).
2307.12091
Dynamics of kink train solutions in deformed Multiple sine-Gordon models
This paper examines the effects of a thin layer of inhomogeneity on periodic solutions of the Multiple-sine-Gordon (MsG) model. We investigate the dynamics of the perturbed Double-sine-Gordon (DsG) system as a significant and more practical case of such configurations. The thin barrier acts as a potential well (or potential barrier) and causes critical deformations in kink train solutions and in some basic properties of the periodic solutions, such as the type of sub-kinks, their amplitude, energy and wavelength. The stability of the initial kink chain during the interaction with medium defects is analyzed using its phase diagram. Sudden changes in the profile of kink trains due to the disruption of their amplitude and wavelength are considered. The time evolution of moving kink chain solutions while interacting with medium fractures is also studied.
Marzieh Peyravi, Nematollah Riazi, Kurosh Javidan
2023-07-22T14:45:34Z
http://arxiv.org/abs/2307.12091v2
# Dynamics of kink train solutions in deformed Multiple sine-Gordon models

###### Abstract

This paper examines the effects of a thin layer of inhomogeneity on periodic solutions of the Multiple-sine-Gordon (MsG) model. We investigate the dynamics of the perturbed Double-sine-Gordon (DsG) system as a significant and more practical case of such configurations. The thin barrier acts as a potential well (or potential barrier) and causes critical deformations in kink train solutions and in some basic properties of the periodic solutions, such as the type of sub-kinks, their amplitude, energy and wavelength. The stability of the initial kink chain during the interaction with medium defects is analyzed using its phase diagram. Sudden changes in the profile of kink trains due to the disruption of their amplitude and wavelength are considered. The time evolution of moving kink chain solutions while interacting with medium fractures is also studied.

Double-sine-Gordon, Multiple-sine-Gordon, Soliton collision, Integrability.

## I Introduction

The formation, stabilization, and propagation of localized waves in non-linear media, such as solitary waves, solitons, and breathers, are fundamental problems in many branches of science. The scattering of solitons from point-like [1; 2; 3] or extended [4; 5] defects and impurities, as a fundamental problem of soliton theory, has been widely investigated. Indeed, such effects play a crucial role in determining the essential physical properties of non-linear systems, and this issue has attracted much theoretical and practical interest [6; 7; 8]. The dynamics of soliton trains in quantum optics [9; 10], of chains of localized solutions in Josephson-junction arrays [11; 12], and of soliton ratchets in perturbed background media [13; 14; 15] are important topics, in both theory and applications. It is clear that, from an experimental point of view, we have to consider spatial defects to obtain a realistic picture of the system. Inhomogeneity can arise in a medium due to various defects, such as dislocations, impurities, imperfect grain boundaries, etc. Thus, one can see defects as local deviations from an ordered structure in the system [4; 16; 17]. The family of sine-Gordon (sG) models, including its modifications, has appeared in different branches of physics and engineering. Many investigations focus on multiple-sine-Gordon (MsG) models and especially the double-sine-Gordon (DsG) equation. The DsG model has been used to explain the non-linear nature of many physical phenomena, such as the spin dynamics in the B phase of superfluid \(^{3}\)He [18; 19], charge-density-wave condensate models of the organic linear conductors and the dynamics of liquid crystals [20; 21; 22], quark confinement [23], self-induced transparency phenomena [24], and many more. Non-linear vibrations arising in engineering applications, where noise and uncertain properties must be considered [25], as well as the propagation of ultra-short optical soliton trains [26], are modelled by different MsG equations. In real environments, medium defects in the form of disorders, impurities, or dislocations cause serious changes in the character of propagated waves. The stability of travelling waves, especially in non-linear media, is not a simple and straightforward issue. In such problems, we need a phase-diagram analysis, as well as the frequency-amplitude relationship in the presence of medium inhomogeneities, to understand the actual vibration properties of the medium.
Investigation of the dynamics of these types of phenomena requires periodic solutions of MsG equations (mostly the DsG model) in the presence of medium defects and impurities. This means that we have to deal with two problems at the same time: finding periodic solutions of MsG equations, and adding medium defects to the model by considering suitable potentials and/or modifications. Motivated by this question, we have studied the interaction of periodic solutions with medium defects. In this contribution, we consider solutions of the DsG and MsG equations in \(1+1\) space-time dimensions. Our goal is to study the effect of certain kinds of barriers placed somewhere among the chains of kinks and anti-kinks of the DsG and MsG systems. The amplitude and wavelength of solutions in the presence of different perturbations will be studied. We thus propose to study the modified periodic DsG and MsG solutions. The structure of our presentation is as follows: Section II reviews some general properties of the DsG and MsG systems, including the action, the field equation, and the single-kink solution. In Section III we study the effects of thin barriers on the periodic solutions of the DsG and MsG systems. In Section IV, we focus on the collision of the DsG solutions with the wall. We close in Section V by summarizing our results and pointing out some directions for future work.

## II Basic properties of the double sine-Gordon and the multiple-sine-Gordon systems

In this section, we briefly review the DsG and MsG formalism by introducing the related action, the field equation, the energy-momentum tensor, and the topological current. Following our previous studies on the periodic and step-like solutions of the DsG equation [27] and the static properties of MsG systems [28], we focus on the following potential (see Fig. 1):

\[V(\phi)=1+\epsilon-\cos\phi-\epsilon\cos(N\phi) \tag{1}\]

where \(\epsilon\) is a non-negative constant and \(N\) is an integer, with \(N=0\), \(N=2\), and \(N>2\) corresponding to the sG, DsG, and MsG systems, respectively [27; 28; 29]. The harmonic term in this potential can result from the Fourier expansion of an arbitrary periodic potential \(\widetilde{V}(\phi)=\widetilde{V}(\phi+2n\pi)\). As Fig. 1 illustrates, the above potential finds its minima at \(\phi_{min}=2n\pi\), with \(n=0,1,2,...\). Based on the nature of the potential, more vacua appear in the profile of the potential as \(N\) increases, and thus extra sub-kinks (up to \(N\)) are created in the solution, which leads to a more complicated system [27; 28]. While integrable equations are an essential part of non-linear wave theory, most non-linear wave equations that emerge in physics and engineering are not integrable [30]. Non-integrable equations involve much richer and more complex dynamical solutions. Not only do they lead to unstable solitary waves with complicated and even chaotic interactions via fractal scattering phenomena, but these solitary waves can also collapse in more than one dimension [30]. Several powerful methods have been presented for deriving localized travelling-wave solutions of non-linear equations in recent years, for example the inverse scattering transformation, the Hirota bilinear method, Bäcklund and Darboux transformations, sine-cosine and tanh-function methods, homogeneous balance and Lie group analysis, the F-expansion method, and so on.
Through these methods, solitary, periodic, multiply periodic, quasi- or non-periodic wave solutions, as well as kink (anti-kink) trains, have been constructed as polynomials in trigonometric, hyperbolic, or Jacobi elliptic functions for the DsG model (see Fig. 2) [31; 32; 33].

Figure 1: (a) The DsG potential. The dashed curve is for \(\epsilon=10\), the dash-dotted curve is for \(\epsilon=1\), and the solid curve is for \(\epsilon=0\) (sG). (b) The MsG potential for \(N=5\). The dashed curve is for \(\epsilon=10\), the dotted curve is for \(\epsilon=1\) and the solid curve is for \(\epsilon=0\).

Several kink train solutions for the DsG model have been proposed using Jacobi elliptic functions, such as:

\[\phi=\pi+2\arctan\left[(k_{2}^{\prime})^{-1}JN\bigg{(}\frac{2k_{2}^{\prime}\sqrt{\epsilon}}{k_{1}\sqrt{-k_{2}^{2}}}(x-x_{0}-vt),k\bigg{)}\right], \tag{2}\]

where \(JN(u,k)\) is a suitable generalized Jacobi elliptic function related to the potential parameters, \(k_{1,2}^{2}=\frac{1}{3}\left[(1-4\epsilon)\pm\sqrt{(1+\epsilon)^{2}+8\epsilon}\right]\), \(k^{2}=\frac{k_{1}^{2}-k_{2}^{2}}{1-k_{2}^{2}}\), and \(k_{2}^{\prime}=\sqrt{1-k_{2}^{2}}\) [34]. To the best of our knowledge, there are no such solutions for the general form of the MsG system. Therefore we calculate the periodic solutions of the MsG potential numerically. Indeed, most of the solutions presented for more complicated potentials are given as infinite series expansions. In such situations, we have to use numerical calculations and simulations to study the behavior of the solutions. Nowadays, numerical computations play an important and inevitable role in non-linear science and soliton theory [30]. Since non-integrable systems are not solvable analytically, numerical methods provide powerful tools for non-linear wave studies. The Lagrangian density of the MsG system for a real scalar field \(\phi(x,t)\) in \((1+1)\) dimensions has the following form [27; 28]:

\[\mathcal{L}=\frac{1}{2}\partial^{\mu}\phi\partial_{\mu}\phi-\left[1+\epsilon-\cos\phi-\epsilon\cos(N\phi)\right]. \tag{3}\]

From this Lagrangian density, the field equation of motion follows [27; 28]:

\[\square\phi=-\sin\phi-N\epsilon\sin(N\phi). \tag{4}\]

The energy-momentum tensor of the MsG equation can be obtained using Noether's theorem and the invariance of the action under the space-time translation \(x^{\mu}\rightarrow x^{\mu}+a^{\mu}\) [27; 28; 35; 36]:

\[T^{\mu\nu}=\partial^{\mu}\phi\partial^{\nu}\phi-g^{\mu\nu}\mathcal{L}, \tag{5}\]

in which \(g^{\mu\nu}=diag(1,-1)\) is the metric of Minkowski space-time. Besides, a topological current can also be defined for a generalized form of the sG model [37]:

\[J^{\mu}=\frac{1}{2\pi}\epsilon^{\mu\nu}\partial_{\nu}\phi, \tag{6}\]

where \(\epsilon^{\mu\nu}\) is the antisymmetric tensor with \(\epsilon^{01}=1\). This current is identically conserved (\(\partial_{\mu}J^{\mu}=0\)), and the total topological charge of any localized, finite-energy solution is conserved and quantized. We have used the topological charge and the energy of the system to check the validity of our numerical calculations at every selected step.

Figure 2: Jacobi elliptic functions as a function of \(u\) for \(k=0.99\).

As one can check, the MsG equation possesses kink and anti-kink solutions which correspond to transitions between the spatial boundary conditions \(\phi(\pm\infty)=2n\pi\) [27; 28].
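Because the analysis in the following sections relies on direct numerical integration of Eq. (4), it may help to sketch how such a computation can be organized. The following is a minimal sketch of ours (not the authors' actual code), assuming an explicit leapfrog finite-difference scheme with \(dt<dx\) for stability and periodic boundaries, which suit kink-train data when the box holds an integer number of periods:

```python
import numpy as np

def evolve(phi0, dphi0, N=2, eps=1.0, dx=0.05, dt=0.02, steps=5000):
    """Explicit leapfrog integration of Eq. (4):
    phi_tt - phi_xx = -sin(phi) - N*eps*sin(N*phi),
    with periodic boundaries implemented via np.roll."""
    phi_prev = phi0.copy()
    phi = phi0 + dt * dphi0                      # first-order start from phi_t(x, 0)
    for _ in range(steps):
        lap = (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx**2
        force = -np.sin(phi) - N * eps * np.sin(N * phi)
        phi_prev, phi = phi, 2.0 * phi - phi_prev + dt**2 * (lap + force)
    return phi
```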
The first integral of Eq. (4) for static, localized solutions reads:

\[\frac{1}{2}\left(\frac{d\phi}{dx}\right)^{2}=V(\phi), \tag{7}\]

in which we have used the boundary conditions \(\phi(\pm\infty)=2n\pi\). We therefore have:

\[x-x_{0}=\int\frac{d\phi}{\sqrt{2V(\phi)}}. \tag{8}\]

Multiple-sine-Gordon systems admit soliton-like kink solutions with interesting properties, although this integral cannot be carried out analytically for general \(N\) and \(\epsilon\), especially for \(N>2\). However, for \(N=2\) one can show that the following exact, static, single-kink (anti-kink) solution exists [27; 28; 29]:

\[\phi(x)=2\arccos\left[\pm\frac{\sinh\sqrt{4\epsilon+1}x}{\sqrt{4\epsilon+\cosh^{2}\sqrt{4\epsilon+1}x}}\right]. \tag{9}\]

The \((+)\) and \((-)\) signs correspond to the anti-kink \((\tilde{k})\) and the kink \((k)\) solutions, respectively [27; 29]. Fig. 3 illustrates the DsG kink/anti-kink solution of Eq. (9) for two values of the parameter \(\epsilon\) \((=6,10)\), as well as solutions for \(N=3\) obtained numerically. We can interpret and adjust our numerical solution for the kink train in the DsG model by comparing it with the exact solution (2). It is easy to show that the solutions of the MsG equation are classified and characterized by the \(x\)-independent (\(\frac{dP}{dx}=0\)) constant of integration \(P\),

\[P=\frac{1}{2}\left(\frac{d\phi}{dx}\right)^{2}-V(\phi), \tag{10}\]

into two types, namely step-like and periodic solutions (see Fig. 4). Note that this classification is independent of \(\epsilon\) and therefore also holds for the sG equation. The step-like solutions are sequences of kinks \((kk)\) or anti-kinks \((\tilde{k}\tilde{k})\), characterized by \(P>0\), and the periodic solutions are sequences of kink-anti-kink \((k\tilde{k}k\tilde{k})\) solutions, characterized by \(-2<P<0\) [27; 28]. The general behavior of our numerical solutions agrees with the solutions reported in Ref. [38]. Recently, a set of new kink solutions for special types of MsG models has been presented using the mentioned Jacobi elliptic functions [39]. In such cases, our static solutions successfully reduce to the kink solutions of Ref. [39]. Also, there are localized solutions of the triple sine-Gordon equation \((N=3)\) for specific values of the model parameters [40].

Figure 3: Single soliton solutions (a) for the DsG model with \(\epsilon=10\) (dash-dotted curve) and \(\epsilon=6\) (dashed curve); (b) for the MsG system \((N=3)\) with \(\epsilon=10\) (dash-dotted curve) and \(\epsilon=6\) (dashed curve).

In fact, one can interpret \(P\) as the "pressure" (or tension, in this 1d case) when we consider such systems as many-body interacting systems [41; 42; 43]. Note that \(P\) is different from the energy density

\[\rho(x)=\frac{1}{2}\left(\frac{d\phi}{dx}\right)^{2}+V(\phi), \tag{11}\]

which changes with position \(x\) [27; 28; 42]. Although the MsG potential contains infinitely many topological sectors, the multiplicity of solutions is equal to \(N\); the solutions differ in the number of their sub-kinks [38]. In addition to soliton train solutions, the model contains breather solutions for \(N^{2}<4\pi\) [44].

## III The effect of thin barriers

In this section, we report the behavior of kink train solutions of the DsG equation in the presence of a thin interval of inhomogeneity. For this purpose, \(\epsilon\) is taken to be different within a finite range \(x_{1}<x<x_{2}\), in which \(x_{1}\) and \(x_{2}\) are known locations with \(x_{2}>x_{1}\).
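Before turning to the perturbed system, note that the unperturbed static kink can be generated directly from the quadrature (8) and checked against the exact DsG solution (9), which is how the validity of numerical solutions can be controlled. A minimal sketch of ours (not the authors' code), assuming SciPy for the quadrature and the kink branch of Eq. (9) centered so that \(\phi(0)=\pi\):

```python
import numpy as np
from scipy.integrate import quad

def V(phi, N=2, eps=6.0):
    return 1.0 + eps - np.cos(phi) - eps * np.cos(N * phi)   # Eq. (1)

def x_of_phi(phi, N=2, eps=6.0):
    """x(phi) from the quadrature (8), with x0 chosen so that phi(0) = pi."""
    val, _ = quad(lambda f: 1.0 / np.sqrt(2.0 * V(f, N, eps)), np.pi, phi)
    return val

def phi_exact(x, eps=6.0):
    """The exact DsG kink of Eq. (9) (the '-' sign), running from 0 to 2*pi."""
    s = np.sqrt(4.0 * eps + 1.0)
    return 2.0 * np.arccos(-np.sinh(s * x) / np.sqrt(4.0 * eps + np.cosh(s * x) ** 2))

phis = np.linspace(0.1, 2.0 * np.pi - 0.1, 41)   # stay away from the vacua, where V = 0
xs = np.array([x_of_phi(f) for f in phis])
assert np.allclose(phi_exact(xs), phis, atol=1e-5)
```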
As stated before, the behavior of the solution depends on the value of the parameter \(P\), which is a function of the field tension (stress) \(\frac{d\phi}{dx}\) and of the potential \(V(\phi)\). This means that we can control the field solution by correctly choosing the value of the field \(\phi(x)\) and its spatial derivative \(\frac{d\phi}{dx}\) as initial conditions. It is clear that medium dislocations and impurities change these conditions, and therefore such disorders critically affect the field solution in the medium. The area of the perturbation, where the barrier-field interaction starts (\(x_{1}\)) and terminates (\(x_{2}\)), the barrier width (\(x_{2}-x_{1}\)), and its amplitude (\(\epsilon\)) are the important parameters. These parameters determine the general characteristics of the solution in the perturbed area, as well as the kink train configuration after passing through the barrier. Figure 4(a) shows critical changes related to the starting point of the perturbation (\(x_{1}\)). By slightly changing \(x_{1}\), we can produce step-like kink (\(kk\)) or anti-kink (\(\tilde{k}\tilde{k}\)) trains, or periodic \(k\tilde{k}\) configurations, in the perturbation region. This is due to the change in the initial condition for creating the kink solution in the perturbed region. The most important point relates to the slope of the arriving kink (anti-kink) as it interacts with the perturbation. Fig. 4(a) clearly shows that a kink (anti-kink) train is created in the perturbation region if the initial kink arrives with a positive (negative) slope. If the initial kink arrives with a very small slope, we find a \(k\tilde{k}\) configuration in the perturbed region. Figure 4(b) demonstrates the interaction of the kink solution with perturbations starting from the same initial position (here \(x_{1}=30\), with \(\epsilon=5\)) but with different widths. The configuration emerging from the perturbed region depends on the character of the solution at the end point of the perturbation region (\(x_{2}\)). In general, the final state of the solution after interacting with the barrier can be any of the possible solutions. Some kink configurations may not occur, depending on the possible values of the slope of the field in the barrier. In Figure 4(b), we do not expect the formation of the \(\tilde{k}\tilde{k}\) state, because the \(kk\) configuration which has been established in the perturbation region does not have a negative slope. Thus, the final state of the chain configuration after passing the barrier can be periodic (\(k\tilde{k}\)) or step-like (\(kk\)). We may observe the emergence of a \(k\tilde{k}\) configuration if the kink reaches the end point of the perturbation area with its flat tail (see the final \(k\tilde{k}\) configurations in Fig. 4(b) for \(x_{2}=35,40\) and \(45\)). On the other hand, a kink train configuration may occur after the interaction if the field solution reaches the barrier endpoint with its positive-slope parts. Similarly, if the configuration established in the perturbed area does not have a positive slope, a \(kk\) train will not occur after the field-impurity interaction. Our other simulations are in agreement with this picture. If an anti-kink configuration is created within the barrier region, we can reach \(k\tilde{k}\) or \(\tilde{k}\tilde{k}\) trains. For a \(k\tilde{k}\) configuration in the barrier, we expect to find \(k\tilde{k}\), \(kk\), or \(\tilde{k}\tilde{k}\) trains, according to the final state of the solution where the soliton-barrier interaction is completed.
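This endpoint-slope rule is easy to apply to a computed profile. The following is a minimal illustrative sketch of ours (the function name and tolerance are hypothetical choices, not the authors'), assuming the static profile is sampled on a uniform grid with \(x_{2}\) in the interior:

```python
import numpy as np

def emerging_configuration(phi, x, x2, tol=1e-3):
    """Read off the slope of the profile at the barrier endpoint x2; per the
    discussion above, this slope selects the train created beyond the barrier."""
    i = int(np.argmin(np.abs(x - x2)))           # grid index closest to x2
    slope = (phi[i + 1] - phi[i - 1]) / (x[i + 1] - x[i - 1])
    if slope > tol:
        return "kk train (ascending part reaches x2)"
    if slope < -tol:
        return "anti-kink train (descending part reaches x2)"
    return "kink-anti-kink train (flat tail reaches x2)"
```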
Figure 5 demonstrates the phase-space diagram for the configurations of Fig. 4(b). As one can see, for the shortest barrier (\(30<x<35\), the dotted curve) the phase diagram has only one ring, due to the absence of sub-kinks. For different impurity widths (\(30<x<40\) and \(30<x<45\), the dash-dotted and dashed curves) we find other diagrams containing twofold closed loops, due to the presence of sub-kinks. It should be noted that these diagrams relate to the same type of solution but in different sectors. The notable point in Fig. 5 is the solid open curve, which corresponds to the \(kk\) train with sub-kinks. We learn from Fig. 5 that the footprint of each kink (anti-kink) is a single or double concave (convex)-like curve in the phase-space diagram. \(k\tilde{k}\) configurations create closed loops in a limited area of the phase-space plane if both the kink and the anti-kink are located in the same sector, whereas \(kk\) and \(\tilde{k}\tilde{k}\) configurations are not closed curves. This means that each kink (anti-kink) in the created \(kk\) or \(\tilde{k}\tilde{k}\) trains is not located in the same sector. Although the multiplicity of solutions in this model is \(N=2\), the topological charges of all kinks (anti-kinks) are the same. Thus, all created objects in a train solution show similar behavior [38]. To gain better insight into the similarities and differences between the kinks (anti-kinks) created during the field-impurity interaction, we have plotted Figs. 6 and 7 in more detail. In these figures, similar \(k\tilde{k}\) trains hit barriers located at the same spatial position but with different values of \(\epsilon=5\) (Fig. 6) and \(\epsilon=15\) (Fig. 7). The initial conditions for both situations are the same, but the related barrier potentials differ. For the barrier with \(\epsilon=5\), a \(kk\) chain is created in the perturbed area. The solution for the barrier with \(\epsilon=15\) is a \(k\tilde{k}\) train with different energy but in the same sector (\(0\leq\phi\leq 2\pi\)) as the initial \(k\tilde{k}\) (see Fig. 6). One can see from the phase-space diagrams in Figs. 6 and 7 that both solutions in the perturbation region have positive-, zero-, and negative-slope parts. Thus, all types of kink chain solutions can be created after the interaction with the defect (\(x\geq x_{2}\)), according to \(\frac{d\phi}{dx}\) at the endpoint of the perturbation (\(x_{2}\)). These figures show that the established solutions for the two scenarios are different. Indeed, a \(kk\) train (\(k\tilde{k}\) solution) is created in the perturbation region for \(\epsilon=5\) (\(\epsilon=15\)). According to the phase-space diagram of Fig. 6 (Fig. 7), the \(k\tilde{k}\) solution created after (during) the field-impurity interaction is located in a different (the same) sector compared with the initial \(k\tilde{k}\) train. An interesting issue is the creation of kinks that are located at different spatial positions but in similar topological sectors. According to Fig. 6 (Fig. 7), kinks located in the regions \(x_{1}\leq x\leq x_{2}\) and \(x\geq x_{2}\) (\(x\leq x_{1}\)) belong to the same sector \(8\pi\leq\phi\leq 10\pi\) (\(0\leq\phi\leq 2\pi\)). Moreover, it is vital to emphasize the role of the value of \(\epsilon\), which makes the inhomogeneity act as a potential well or barrier, in addition to the initial position of the perturbation.
As Figs. 4-7 demonstrate, the value of \(\epsilon\) in the perturbed area has a decisive effect on the slope of the \(kk\) or \(\tilde{k}\tilde{k}\) train in the barrier region, which leads to the emergence of different \(k\tilde{k}\) chains after the interaction with the barrier. In other words, the amplitude and wavelength of the \(k\tilde{k}\) chain will change as a consequence of sub-kinks. In our simulations, \(P\) is calculated numerically by solving the relevant differential equation. Indeed, we need to know how the value of \(P\) changes abruptly across the barrier. Here, we derive an analytical expression for the change in \(P\) across the potential (\(\triangle P\)). We obtain our results for time-independent static solutions; however, they can easily be generalized to time-dependent solutions. Consider two regions of different \(\epsilon\) joining each other at \(x=0\). Integrating the static equation from \(x=-\delta\) to \(x=+\delta\),

\[\int_{-\delta}^{+\delta}\frac{d^{2}\phi}{dx^{2}}dx=\int_{-\delta}^{+\delta}\frac{\partial V(\phi)}{\partial\phi}dx, \tag{12}\]

and letting \(\delta\longrightarrow 0\), we obtain

\[\left.\frac{d\phi}{dx}\right|_{0^{-}}=\left.\frac{d\phi}{dx}\right|_{0^{+}}, \tag{13}\]

since there is no singularity in the potential around \(x=0\). Together with the continuity of \(\phi\), we are now able to obtain the relation between the \(P\)-values on either side of the junction:

\[P_{1}=\frac{1}{2}\left(\frac{d\phi}{dx}\right)^{2}-V_{1}\qquad\text{for }x<0, \tag{14}\]

and

\[P_{2}=\frac{1}{2}\left(\frac{d\phi}{dx}\right)^{2}-V_{2}\qquad\text{for }x>0. \tag{15}\]

Using the continuity of \(\phi\) and \(\frac{d\phi}{dx}\) across the junction, we obtain

\[P_{2}-P_{1}=V_{1}-V_{2}=\left(\epsilon_{1}-\epsilon_{2}\right)\left(1-\cos(2\phi(0))\right) \tag{16}\]

for the DsG equation, where \(\phi(0)\) is the value of \(\phi\) at \(x=0\) (the junction point). One observes that even for \(\epsilon_{2}\neq\epsilon_{1}\), we can still have \(P_{2}=P_{1}\), i.e., if \(\phi(x=0)=n\pi\). Otherwise, the value of \(P\) changes abruptly across the junction. The minimum and maximum changes in the value of \(P\) are \(0\) and \(2|\epsilon_{1}-\epsilon_{2}|\), respectively. Similar relations can be derived for the MsG models too. For a single soliton we have \(P=0\). Consider a medium with a different \(\epsilon\) on each side of the boundary (\(\epsilon_{2}\) on the left and \(\epsilon_{1}\) on the right) [28]. Suppose that there is a single soliton on the left side of the boundary (\(P_{2}=0\)); then Eq. (16) reduces to:

\[P_{1}=\left(\epsilon_{2}-\epsilon_{1}\right)\left(1-\cos(2\phi(0))\right). \tag{17}\]

In this case, it is obvious that the necessary condition for having a single soliton on the right side of the boundary is \(\epsilon_{2}=\epsilon_{1}\) or \(\phi(0)=n\pi\) (see Fig. 9).

## IV Soliton train-barrier interaction

The time evolution of soliton train solutions during the interaction with medium disorders and impurities is an important issue due to its practical applications. The stability of the solution against medium defects can be studied by identifying the object that emerges after the interaction of the incident solution with a potential barrier. We have performed several simulations to investigate the time evolution of a moving kink train after its interaction with the perturbation.
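Numerically, the inhomogeneity enters only through a space-dependent \(\epsilon(x)\) in the force term of Eq. (4), and Eq. (16) provides a direct check on the computed profiles. A minimal sketch of both ingredients (the names and the barrier parameters are illustrative choices of ours):

```python
import numpy as np

def eps_profile(x, x1=30.0, x2=40.0, eps_out=10.0, eps_in=5.0):
    """Piecewise-constant inhomogeneity: eps differs only on [x1, x2]."""
    return np.where((x >= x1) & (x <= x2), eps_in, eps_out)

def pressure(phi, eps, dx, N=2):
    """P = (1/2)(dphi/dx)^2 - V(phi), Eq. (10): constant within each homogeneous
    region; across a junction it jumps by (eps_1 - eps_2)(1 - cos(2*phi)) for
    the DsG case, Eq. (16)."""
    dphi = np.gradient(phi, dx)
    V = 1.0 + eps - np.cos(phi) - eps * np.cos(N * phi)
    return 0.5 * dphi**2 - V

# In the time evolution, the only change to the solver sketched in Sec. II is
# force = -np.sin(phi) - N * eps_profile(x) * np.sin(N * phi).
```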
Figure 10(a) demonstrates the time evolution of the field solution while interacting with a specified potential as a medium defect. The initial condition is the same as that chosen for Fig. 7(a). From this figure, we expect to observe a moving \(k\tilde{k}\) train interacting with the barrier. Therefore, the initial conditions at the beginning point of the perturbation (and thus the solution emerging from the barrier) change over time. As explained in Section III, the emerging solution is a \(kk\), \(k\tilde{k}\), or \(\tilde{k}\tilde{k}\) train, according to the initial condition. The positive-slope (ascending) part of the solution creates a \(kk\) train in the interaction, while the negative-slope (descending) part produces a \(\tilde{k}\tilde{k}\) configuration (see Fig. 4). It is clear that the flat parts of the arriving kink create a \(k\tilde{k}\) train. A complete set of initial conditions is created by positive-, zero-, and negative-slope parts. As one can see from Fig. 10, a sequence of \(kk\), \(k\tilde{k}\), and \(\tilde{k}\tilde{k}\) trains is observed repeatedly, while their amplitudes change in time. The higher the kink slope as an initial condition, the faster the changes in the soliton amplitude after the interaction. Faster changes (compact parts) in Fig. 10(a) relate to a higher (positive or negative) slope of the initial condition at the border of the perturbation. The flat part of the initial condition provides a complete set of \(k\tilde{k}\) solutions, as we can see within \(1000<t<1500\), which depends on the velocity of the moving initial kink solution. Figure 10(b) demonstrates the time evolution of the kink solution in phase space. As one can see, the phase-space plot is formed by repeating the closed-loop curve of Fig. 7(b) several times, of course with some rapid changes that appear as jumps due to the field-barrier interaction. This figure contains important information about the stability of moving kink solutions while interacting with spatially limited perturbations. We have examined several initial conditions. All our simulations indicate that barriers of limited amplitude are not able to destroy the stability of soliton trains, which is an important outcome. Figure 11 demonstrates the field-potential interaction for \(N=3\), the triple sG model. The general behavior of the interaction is the same as that found for the DsG model, except for the generation of three sub-kinks in the profile of the field solution. We have not found different behavior or instability conditions for this model. We have also examined the \(N=4\) and \(N=5\) solutions; no new phenomena were observed. The time evolution of the field energy density \(\rho(x,t)\) during the interaction with medium perturbations has been analysed too. It has been shown previously that the total energy of a single soliton changes during the interaction with a potential [45; 46]. The recovery of the soliton energy to its initial value (before the interaction) depends on the nature of the soliton model and on the impact of the barrier. There is not much information about the time evolution of soliton train-barrier interactions. We have set up several simulations to examine the time evolution of the energy density for soliton chain solutions of the \(N=2\) and \(N=3\) models while interacting with simple rectangular potentials of different amplitudes and widths. As expected, the total energy of the soliton train can be recovered to a good approximation if the barrier amplitude is sufficiently low.

Figure 9: A single soliton across a boundary at \(x=0\).
The final profile of the energy density after the interaction is a simultaneous function of the barrier amplitude and the state of the soliton at the initial and final points of the interaction. Thus, it is a complicated (and challenging) study. In general, the energy of the solitons, their sub-kink contents, amplitudes, and widths are seriously affected by the interaction. Figure 12 demonstrates three profiles of the soliton-potential interaction at different times for a moving train with the initial conditions of Fig. 8. The moving soliton collides with the potential, and as a result the initial conditions for creating new kink trains inside the inhomogeneity region change with time. For this reason, the initial conditions also change at the endpoint of the potential region.

## V Concluding remarks

Impurities, dislocations, and other environmental inhomogeneities have decisive effects on the behavior of localized and quasi-periodic solutions of non-linear models. Such structural defects have been added to the model by different methods. We have investigated the effect of thin barriers on the creation and evolution of multiple sine-Gordon kink trains, focusing on the dynamics of the double sine-Gordon and triple sine-Gordon models. An analytical soliton chain solution (as a series expansion) exists only for the double sine-Gordon model. For this reason, we calculated the kink train solutions for multiple sine-Gordon models using numerical calculations. We checked the validity of the numerical calculations by comparing our results with the analytical solutions of the double sine-Gordon model.

Figure 10: (a) The \(\phi\) diagram as a function of \(t\) and (b) the phase diagram, with the initial values of Fig. 7.

Figure 11: (a) The \(\phi\) diagram as a function of \(t\) and (b) the phase diagram, considering the initial values of the solid curve in Fig. 8.

We investigated the final configuration of kink train solutions after the initial kink chain-impurity interaction in detail. The relationship between the final solution (and the kink train created in the impurity region) and the characteristics of the initial kink chain, as well as the parameters of the potential barrier, has been analysed. Our simulations show that, regardless of the absolute location of the impurity and its width, the magnitude of the field and its slope at the start and end points of the interaction (the beginning and end of the perturbation region) are the most important factors in shaping the final kink train solution after passing through the potential. The change in the values of the model parameters (\(\epsilon\) in our study), which is determined by the nature of the impurity, determines the amplitude and wavelength of the final solutions after the interaction. The dynamics and general behavior of the interaction and the conditions for establishing the kink train solutions were the same for all the investigated models (\(N=2,...,5\)). The only difference is the appearance of sub-kinks, which originate from the natural properties of the different models. The dynamics of the soliton-potential interaction cannot be fully understood without considering internal-mode effects. Energy transfer between the soliton (as a ground state) and internal modes (as excited states) clearly changes the soliton's initial condition while it interacts with medium impurities (as a potential barrier). Although the numerical solution automatically includes the effects of internal modes, it does not show the effect of this phenomenon separately.
For this reason, a suitable analytical model is necessary to understand the effect of internal modes on the dynamics of the system evolution. Although we expect this effect to be small, it should be investigated in future work. ###### Acknowledgements. M. P. and K. J. acknowledge the support of Ferdowsi University of Mashhad. N. R. acknowledges the support of Shahid Beheshti University Research Council.
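As a companion to the simulations discussed above, the following is a minimal numerical sketch of this kind of field-barrier experiment. It is our illustration rather than the authors' code: the double sine-Gordon potential \(V(\phi)=(1-\cos\phi)+\frac{\epsilon(x)}{4}(1-\cos 2\phi)\) solved as \(\phi_{tt}-\phi_{xx}+V'(\phi)=0\), the single-kink initial condition (the paper evolves kink trains), the rectangular barrier profile, and all parameter values are assumptions made for brevity.

```python
import numpy as np

# Leapfrog integrator for phi_tt = phi_xx - dV/dphi with a spatially
# varying model parameter eps(x) representing a rectangular barrier.
# Assumed potential: V(phi) = (1 - cos(phi)) + (eps(x)/4)*(1 - cos(2*phi)).
L, N, dt, steps = 100.0, 2048, 0.02, 5000
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]                            # dt < dx keeps the scheme stable
eps = np.where(np.abs(x) < 2.0, 0.3, 0.0)   # barrier of width 4 around x = 0

def dV(phi):
    return np.sin(phi) + 0.5 * eps * np.sin(2.0 * phi)

def laplacian(phi):
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    return lap                              # boundary points held fixed

# Initial condition: a single sine-Gordon kink moving with velocity v.
v, x0 = 0.4, -20.0
gamma = 1.0 / np.sqrt(1.0 - v**2)
phi_old = 4.0 * np.arctan(np.exp(gamma * (x - x0)))
phi = 4.0 * np.arctan(np.exp(gamma * (x - x0 - v * dt)))  # second time slice

for _ in range(steps):
    phi_new = 2.0 * phi - phi_old + dt**2 * (laplacian(phi) - dV(phi))
    phi_old, phi = phi, phi_new

# Energy density rho(x, t) at the final time (finite-difference estimate).
phi_t = (phi - phi_old) / dt
phi_x = np.gradient(phi, dx)
rho = (0.5 * phi_t**2 + 0.5 * phi_x**2 + (1.0 - np.cos(phi))
       + 0.25 * eps * (1.0 - np.cos(2.0 * phi)))
print("total energy:", np.sum(rho) * dx)
```

Tracking the total energy before, during, and after the passage over the barrier reproduces the qualitative statement above: for sufficiently low barrier amplitudes the energy is recovered to a good approximation once the field leaves the perturbed region.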
2310.10249
Murnaghan-Type Representations of the Elliptic Hall Algebra
We construct a new family of graded representations $\widetilde{W}_{\lambda}$ indexed by Young diagrams $\lambda$ for the positive elliptic Hall algebra $\mathcal{E}^{+}$ which generalizes the standard $\mathcal{E}^{+}$ action on symmetric functions. These representations have homogeneous bases of eigenvectors for the action of the Macdonald element $P_{0,1} \in \mathcal{E}^{+}$ generalizing the symmetric Macdonald functions. The analysis of the structure of these representations exhibits interesting combinatorics arising from the stable limits of periodic standard Young tableaux. We find an explicit combinatorial rule for the action of the multiplication operators $e_r[X]^{\bullet}$ generalizing the Pieri rule for symmetric Macdonald functions. Lastly, we obtain a family of interesting $q,t$ product-series identities which come from the analysis of certain combinatorial statistics associated to periodic standard Young tableaux.
Milo Bechtloff Weising
2023-10-16T10:19:22Z
http://arxiv.org/abs/2310.10249v1
# Murnaghan-Type Representations of the Elliptic Hall Algebra ###### Abstract We construct a new family of graded representations \(\widetilde{W}_{\lambda}\) indexed by Young diagrams \(\lambda\) for the positive elliptic Hall algebra \(\mathcal{E}^{+}\) which generalizes the standard \(\mathcal{E}^{+}\) action on symmetric functions. These representations have homogeneous bases of eigenvectors for the action of the Macdonald element \(P_{0,1}\in\mathcal{E}^{+}\) generalizing the symmetric Macdonald functions. The analysis of the structure of these representations exhibits interesting combinatorics arising from the stable limits of periodic standard Young tableaux. We find an explicit combinatorial rule for the action of the multiplication operators \(e_{r}[X]^{\bullet}\) generalizing the Pieri rule for symmetric Macdonald functions. Lastly, we obtain a family of interesting \(q,t\) product-series identities which come from the analysis of certain combinatorial statistics associated to periodic standard Young tableaux. ###### Contents * 1 Introduction * 1.1 Overview * 1.2 Acknowledgements * 2 Definitions and Notations * 2.1 Some Combinatorics * 2.2 Finite Hecke Algebra * 2.3 Positive Affine Hecke Algebra * 2.4 Positive Double Affine Hecke Algebra * 2.5 Positive Elliptic Hall Algebra * 3 DAHA Modules from Young Diagrams * 3.1 The \(\mathcal{D}_{n}\)-module \(V_{\lambda}\) * 3.2 Connecting Maps Between \(V_{\lambda^{(n)}}\) * 4 Positive EHA Representations from Young Diagrams * 4.1 The \(\mathcal{D}_{n}^{\mathrm{gh}}\)-modules \(W_{\lambda^{(n)}}\) * 4.2 Stable Limit of the \(W_{\lambda^{(n)}}\) * 4.3 \(\mathcal{E}^{+}\) Action on \(\widetilde{W}_{\lambda}\) * 5 Pieri Rule * 6 Family of Product-Series Identities ## 1 Introduction This is a version of the author's FPSAC 2024 submission. For the sake of satisfying the page limit for FPSAC most of the proofs are omitted. The complete version of this paper with full proofs will appear in the coming months. The space of symmetric functions, \(\Lambda\), is a central object in algebraic combinatorics deeply connecting the fields of representation theory, geometry, and combinatorics. In his influential paper [10], Macdonald introduced a special basis \(P_{\lambda}[X;q,t]\) for \(\Lambda\) over \(\mathbb{Q}(q,t)\) simultaneously generalizing many other important and well-studied symmetric function bases like the Schur functions \(s_{\lambda}[X]\). These symmetric functions \(P_{\lambda}[X;q,t]\), called the symmetric Macdonald functions, exhibit many striking combinatorial properties and can be defined as the eigenvectors of a certain operator \(\Delta:\Lambda\to\Lambda\) called the Macdonald operator constructed using polynomial difference operators. It was discovered through the works of Bergeron, Garsia, Haiman, Tesler, and many others [14][1][1][1] that variants of the symmetric Macdonald functions called the modified Macdonald functions \(\widetilde{H}_{\lambda}[X;q,t]\) have deep ties to the geometry of the Hilbert schemes \(\mathrm{Hilb}_{n}(\mathbb{C}^{2}).\) On the side of representation theory, it was shown first in full generality by Cherednik [1] that one can recover the symmetric Macdonald functions by considering the representation theory of certain algebras called the spherical double affine Hecke algebras (DAHAs) in type \(GL_{n}\). 
The positive elliptic Hall algebra (EHA), \(\mathcal{E}^{+}\), was introduced by Burban and Schiffmann [1] as the positive subalgebra of the Hall algebra of the category of coherent sheaves on an elliptic curve over a finite field. This algebra has connections to many areas of mathematics including, most importantly for the present paper, to Macdonald theory. In [16], Schiffmann and Vasserot realize \(\mathcal{E}^{+}\) as a stable limit of the positive spherical DAHAs in type \(GL_{n}\). They show further that there is a natural action of \(\mathcal{E}^{+}\) on \(\Lambda\) aligning with the spherical DAHA representations originally considered by Cherednik. In particular, the action of \(P_{0,1}\in\mathcal{E}^{+}\) gives the Macdonald operator \(\Delta\). The action of \(\mathcal{E}^{+}\) on \(\Lambda\) can be realized as the action of certain generalized convolution operators on the torus equivariant \(K\)-theory of the schemes \(\mathrm{Hilb}_{n}(\mathbb{C}^{2})\). Dunkl and Luque in [17] introduced symmetric and non-symmetric vector-valued (vv.) Macdonald polynomials. The term vector-valued here refers to polynomial-like objects of the form \(\sum_{\alpha}c_{\alpha}X^{\alpha}\otimes v_{\alpha}\) for some scalars \(c_{\alpha}\), monomials \(X^{\alpha}\), and vectors \(v_{\alpha}\) lying in some \(\mathbb{Q}(q,t)\)-vector space. The non-symmetric vv. Macdonald polynomials are distinguished bases for certain DAHA representations built from the irreducible representations of the finite Hecke algebras in type A. These DAHA representations are indexed by Young diagrams and exhibit interesting combinatorial properties relating to periodic Young tableaux. The symmetric vv. Macdonald polynomials are distinguished bases for the spherical (i.e. Hecke-invariant) subspaces of these DAHA representations. Naturally, the spherical DAHA acts on this spherical subspace with the special element \(Y_{1}+\ldots+Y_{n}\) of spherical DAHA acting diagonally on the symmetric vv. Macdonald polynomials. Dunkl and Luque in [17] (and in later work of Colmenarejo, Dunkl, and Luque [17] and Dunkl [17]) only consider the finite rank non-symmetric and symmetric vv. Macdonald polynomials. It is natural to ask if there is an infinite-rank stable-limit construction using the symmetric vv. Macdonald polynomials to give generalized symmetric Macdonald functions and an associated representation of the positive elliptic Hall algebra \(\mathcal{E}^{+}\). In this paper, we will describe such a construction (Thm. 4.9). We will obtain a new family of graded \(\mathcal{E}^{+}\)-representations \(\widetilde{W}_{\lambda}\) indexed by Young diagrams \(\lambda\) and a natural generalization of the symmetric Macdonald functions \(\mathfrak{P}_{T}\) indexed by certain labellings of infinite Young diagrams built as limits of the symmetric vv. Macdonald polynomials. For combinatorial reasons there is essentially a unique natural way to obtain this construction. For any \(\lambda\) we will consider the increasing chains of Young diagrams \(\lambda^{(n)}=(n-|\lambda|,\lambda)\) for \(n\geq|\lambda|+\lambda_{1}\) to build the representations \(\widetilde{W}_{\lambda}\). These special sequences of Young diagrams are central to Murnaghan's theorem [11] regarding the reduced Kronecker coefficients. As such we refer to the \(\mathcal{E}^{+}\)-representations \(\widetilde{W}_{\lambda}\) as Murnaghan-type. For \(\lambda=\emptyset\) we recover the \(\mathcal{E}^{+}\) action on \(\Lambda\) and the symmetric Macdonald functions \(P_{\mu}[X;q,t]\). 
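For concreteness, here is a small illustration of these chains (ours, not an example from the paper): for \(\lambda=(2,1)\) we have \(n_{\lambda}=|\lambda|+\lambda_{1}=5\), so the relevant chain of Young diagrams is \[\lambda^{(5)}=(2,2,1),\qquad\lambda^{(6)}=(3,2,1),\qquad\lambda^{(7)}=(4,2,1),\qquad\ldots,\] where each step prepends a longer first row on top of the fixed base shape \(\lambda\).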
We will show that these Murnaghan-type representations \(\widetilde{W}_{\lambda}\) are mutually non-isomorphic. The existence of these representations of the elliptic Hall algebra raises many questions about possible new relations between Macdonald theory and geometry. Other authors have constructed families of \(\mathcal{E}^{+}\)-representations [12]. Although there should exist a relationship between the Murnaghan-type representations \(\widetilde{W}_{\lambda}\) and those of other authors, the construction in this paper appears to be distinct from prior \(\mathcal{E}^{+}\)-module constructions. For technical reasons regarding the misalignment of the spectrum of the Cherednik operators \(Y_{i}\), we will need to restate many of the results of Dunkl and Luque in [17] using a re-oriented version of the Cherednik operators \(\theta_{i}\). This alternative choice of conventions greatly assists during the construction of the generalized Macdonald functions \(\mathfrak{P}_{T}\). The combinatorics underpinning the non-symmetric vv. Macdonald polynomials originally defined by Dunkl and Luque will be reversed in the conventions appearing in this paper. ### Overview Here we will give a brief overview of this paper. First, in Section 2 we will review relevant definitions and notations as well as recall the stable-limit spherical DAHA construction of Schiffmann-Vasserot. In Section 3 we will restate many of the results of Dunkl-Luque but for the re-oriented Cherednik operators, including describing the non-symmetric vv. Macdonald polynomials \(F_{\tau}\) and their associated Knop-Sahi relations (Prop. 3.2). We define (Def. 3.5) the DAHA modules \(V_{\lambda^{(n)}}\) and connecting maps \(\Phi_{\lambda}^{(n)}:V_{\lambda^{(n+1)}}\to V_{\lambda^{(n)}}\) which will be used in the stable-limit process. Next, in Section 4, we describe the spherical subspaces \(W_{\lambda^{(n)}}\) of Hecke invariants of \(V_{\lambda^{(n)}}\) and the symmetric vv. Macdonald polynomials \(P_{T}\), including an explicit expansion of the \(P_{T}\) into the \(F_{\tau}\) (Prop. 4.4). We will use the connecting maps to define the stable-limit spaces \(\widetilde{W}_{\lambda}\) and show in Thm. 4.9 that they possess a graded action of \(\mathcal{E}^{+}\) and have a distinguished basis of generalized symmetric Macdonald functions \(\mathfrak{P}_{T}\). In Section 5 we will obtain a Pieri formula (Cor. 5.4) for the action of \(e_{r}[X]^{\bullet}\) on the generalized Macdonald functions \(\mathfrak{P}_{T}\). Lastly, in Section 6, we will look at an interesting family of \((q,t)\) product-series identities (Thm. 6.3) which follow naturally from the combinatorics in the prior sections of the paper. ### Acknowledgements The author would like to thank their advisor Monica Vazirani for her consistent guidance. The author would also like to thank Erik Carlsson, Daniel Orr, and Eugene Gorsky for helpful conversations about the elliptic Hall algebra and the geometry of Hilbert schemes. The author was supported during this work by the 2023 UC Davis Dean's Summer Research Fellowship. ## 2 Definitions and Notations ### Some Combinatorics We start with a description of many of the combinatorial objects which we will need for the remainder of this paper. **Definition 2.1**.: A _partition_ is a (possibly empty) sequence of weakly decreasing positive integers. Denote by \(\mathbb{Y}\) the set of all partitions.
Given a partition \(\lambda=(\lambda_{1},\ldots,\lambda_{r})\) we set \(\ell(\lambda):=r\) and \(|\lambda|:=\lambda_{1}+\ldots+\lambda_{r}.\) For \(\lambda=(\lambda_{1},\ldots,\lambda_{r})\in\mathbb{Y}\) and \(n\geq n_{\lambda}:=|\lambda|+\lambda_{1}\) we set \(\lambda^{(n)}:=(n-|\lambda|,\lambda_{1},\ldots,\lambda_{r}).\) We will identify partitions as defined above with _Young diagrams_ of the corresponding shape in English notation, i.e. justified up and to the left. Fix a partition \(\lambda\) with \(|\lambda|=n\). We will require each of the following combinatorial constructions for types of labellings of the Young diagram \(\lambda\). If a diagram \(\lambda\) appears as the domain of a labelling function then we are referring to the set of boxes of \(\lambda\) as the domain. * A non-negative _reverse Young tableau_\(\mathrm{RYT}_{\geq 0}(\lambda)\) is a labelling \(T:\lambda\to\mathbb{Z}_{\geq 0}\) which is weakly decreasing along rows and columns. * A non-negative _reverse semi-standard Young tableau_\(\mathrm{RSSYT}_{\geq 0}(\lambda)\) is a labelling \(T:\lambda\to\mathbb{Z}_{\geq 0}\) which is weakly decreasing along rows and strictly decreasing along columns. * A _standard Young tableau_\(\mathrm{SYT}(\lambda)\) is a labelling \(\tau:\lambda\to\{1,\ldots,n\}\) which is strictly increasing along rows and columns. * A non-negative _periodic standard Young tableau_\(\mathrm{PSYT}_{\geq 0}(\lambda)\) is a labelling \(\tau:\lambda\to\{jq^{b}:1\leq j\leq n,b\geq 0\}\) in which each \(1\leq j\leq n\) occurs in exactly one box of \(\lambda\) and where the labelling is strictly increasing along rows and columns. Here we order the formal products \(jq^{m}\) by \(jq^{m}<kq^{\ell}\) if \(m>\ell\) or, in the case that \(m=\ell\), if \(j<k.\) Note that \(\mathrm{SYT}(\lambda)\subset\mathrm{PSYT}_{\geq 0}(\lambda).\) **Example 1**.: \begin{tabular}{|l|l|l|l|l|l|} \hline \(17q^{7}\) & \(15q^{5}\) & \(16q^{5}\) & \(11q^{3}\) & \(7q^{1}\) & \(2q^{0}\) \\ \hline \(14q^{6}\) & \(12q^{4}\) & \(13q^{4}\) & \(9q^{2}\) & \(8q^{0}\) & \\ \hline \(10q^{2}\) & \(4q^{1}\) & \(5q^{1}\) & \(6q^{1}\) & & \\ \hline \(3q^{1}\) & \(1q^{0}\) & & & & \\ \hline \end{tabular} **Definition 2.2**.: Given a box, \(\square\), in a Young diagram \(\lambda\) we define the content of \(\square\) as \(c(\square):=a-b\) where \(\square=(a,b)\) as drawn in the \(\mathbb{N}\times\mathbb{N}\) grid. Let \(\tau\in\mathrm{PSYT}_{\geq 0}(\lambda)\) and \(1\leq i\leq n\). Whenever \(\tau(\square)=iq^{b}\) for some box \(\square\in\lambda\) we will write \(c_{\tau}(i):=c(\square)\) and \(w_{\tau}(i):=b\). Let \(1\leq j\leq n-1\) and suppose that for some boxes \(\square_{1},\square_{2}\in\lambda\) we have \(\tau(\square_{1})=jq^{m}\) and \(\tau(\square_{2})=(j+1)q^{\ell}\). Let \(\tau^{\prime}\) be the labelling defined by \(\tau^{\prime}(\square_{1})=(j+1)q^{m}\), \(\tau^{\prime}(\square_{2})=jq^{\ell}\), and \(\tau^{\prime}(\square)=\tau(\square)\) for \(\square\in\lambda\setminus\{\square_{1},\square_{2}\}\). If \(\tau^{\prime}\in\mathrm{PSYT}_{\geq 0}(\lambda)\) then we write \(s_{j}(\tau):=\tau^{\prime}\). Let \(\Psi(\tau)\in\mathrm{PSYT}_{\geq 0}(\lambda)\) be the labelling defined by: whenever \(\tau(\square)=kq^{a}\), then either \(\Psi(\tau)(\square)=(k-1)q^{a}\) when \(k\geq 2\) or \(\Psi(\tau)(\square)=nq^{a+1}\) when \(k=1\). We give the set \(\mathrm{PSYT}_{\geq 0}(\lambda)\) a partial order defined by the following cover relations. * For all \(\tau\in\mathrm{PSYT}_{\geq 0}(\lambda)\), \(\Psi(\tau)>\tau\).
* If \(w_{\tau}(i)<w_{\tau}(i+1)\) then \(s_{i}(\tau)>\tau\). * If \(w_{\tau}(i)=w_{\tau}(i+1)\) and \(c_{\tau}(i)-c_{\tau}(i+1)>1\) then \(s_{i}(\tau)>\tau\). Define the map \(\mathfrak{p}_{\lambda}:\mathrm{PSYT}_{\geq 0}(\lambda)\to\mathrm{RYT}_{\geq 0}(\lambda)\) by \(\mathfrak{p}_{\lambda}(\tau)(\square)=b\) whenever \(\tau(\square)=iq^{b}\). We will write \(\mathrm{PSYT}_{\geq 0}(\lambda;T)\) for the set of all \(\tau\in\mathrm{PSYT}_{\geq 0}(\lambda)\) with \(\mathfrak{p}_{\lambda}(\tau)=T\in\mathrm{RYT}_{\geq 0}(\lambda)\). **Example 2**.: The map \(\Psi\) applied to the tableau of Example 1 (figure omitted). **Lemma 2.3**.: Let \(\lambda\in\mathbb{Y}\) and \(T\in\mathrm{RYT}_{\geq 0}(\lambda)\). There are unique \(\min(T),\mathrm{top}(T)\in\mathrm{PSYT}_{\geq 0}(\lambda;T)\) such that for all \(\tau\in\mathrm{PSYT}_{\geq 0}(\lambda)\) with \(\mathfrak{p}_{\lambda}(\tau)=T\), \(\min(T)\leq\tau\leq\mathrm{top}(T)\). **Example 3**.: Given \(T=\) \begin{tabular}{|l|l|l|l|l|l|} \hline \(7\) & \(5\) & \(5\) & \(2\) & \(1\) & \(0\) \\ \hline \(6\) & \(5\) & \(5\) & \(0\) & \(0\) & \\ \hline \(2\) & \(1\) & \(1\) & \(0\) & & \\ \hline \(1\) & \(0\) & & & & \\ \hline \end{tabular} we have \(\min(T)=\) \begin{tabular}{|l|l|l|l|l|l|} \hline \(17q^{7}\) & \(12q^{5}\) & \(13q^{5}\) & \(10q^{2}\) & \(6q^{1}\) & \(1q^{0}\) \\ \hline \(16q^{6}\) & \(14q^{5}\) & \(15q^{5}\) & \(2q^{0}\) & \(3q^{0}\) & \\ \hline \(11q^{2}\) & \(7q^{1}\) & \(8q^{1}\) & \(4q^{0}\) & & \\ \hline \(9q^{1}\) & \(5q^{0}\) & & & & \\ \hline \end{tabular} and \(\mathrm{top}(T)=\) \begin{tabular}{|l|l|l|l|l|l|} \hline \(1q^{7}\) & \(3q^{5}\) & \(5q^{5}\) & \(8q^{2}\) & \(12q^{1}\) & \(17q^{0}\) \\ \hline \(2q^{6}\) & \(4q^{5}\) & \(6q^{5}\) & \(14q^{0}\) & \(16q^{0}\) & \\ \hline \(7q^{2}\) & \(10q^{1}\) & \(11q^{1}\) & \(15q^{0}\) & & \\ \hline \(9q^{1}\) & \(13q^{0}\) & & & & \\ \hline \end{tabular} **Definition 2.4**.: Let \(\lambda\in\mathbb{Y}\) with \(|\lambda|=n\) and \(T\in\mathrm{RYT}_{\geq 0}(\lambda)\). Define \(\nu(T)\in\mathbb{Z}_{\geq 0}^{n}\) to be the vector formed by listing the values of \(T\) in decreasing order. Define \(S(T)\in\mathrm{SYT}(\lambda)\) by ordering the boxes of \(\lambda\) according to \(\square_{1}\leq\square_{2}\) if and only if * \(T(\square_{1})>T(\square_{2})\) or * \(T(\square_{1})=T(\square_{2})\) and \(\square_{1}\) comes before \(\square_{2}\) in the column-standard labelling of \(\lambda\). Define the statistic \(b_{T}\in\mathbb{Z}_{\geq 0}\) by \[b_{T}:=\sum_{i=1}^{n}\nu(T)_{i}(c_{S(T)}(i)+i-1).\] Lastly, define the composition \(\mu(T)\) of \(n\) so that the Young subgroup \(\mathfrak{S}_{\mu(T)}\) of \(\mathfrak{S}_{n}\) is the stabilizer subgroup of \(\min(T)\) i.e. the group generated by the \(s_{i}\in\mathfrak{S}_{n}\) such that the entries \(iq^{a}\) and \((i+1)q^{b}\) occur in the same row of \(\min(T)\) for some \(a,b\geq 0\). **Remark 1**.: For every \(T\in\mathrm{RYT}_{\geq 0}(\lambda)\) we can recover \(T\) from the pair \((S(T),\nu(T))\) by labelling \(\lambda\) with the entries of \(\nu(T)\) following the order of \(S(T)\). Further, the standard Young tableau \(S(T)\) is the largest such tableau following the partial order defined in Definition 2.2. **Example 4**.: For \(T\in\mathrm{RYT}_{\geq 0}(6,5,4,2)\) as in Example 3 we have that \[S(T)=\begin{array}{|c|c|c|c|c|c|}\hline 1&3&5&8&12&17\\ \hline 2&4&6&14&16\\ \hline 7&10&11&15&\\ \hline 9&13&\\ \hline\end{array}\qquad\in\mathrm{SYT}(6,5,4,2),\] \(\nu(T)=(7,6,5,5,5,5,2,2,1,1,1,1,0,0,0,0,0)\in\mathbb{Z}_{\geq 0}^{17},\) \(b_{T}=0+0+15+15+30+30+8+20+5+8+10+15+0+0+0+0+0=156,\) and \(\mu(T)=(1,2,1,1,1,2,1,1,1,2,2,1,1)\). **Definition 2.5**.: Let \(\lambda\in\mathbb{Y}\), with \(|\lambda|=n\) and \(\tau\in\mathrm{PSYT}_{\geq 0}(\lambda)\) with \(T=\mathfrak{p}_{\lambda}(\tau)\).
An ordered pair of boxes \((\Box_{1},\Box_{2})\in\lambda\times\lambda\) is called an _inversion pair_ of \(\tau\) if \(S(T)(\Box_{1})<S(T)(\Box_{2})\) and \(i>j\) where \(\tau(\Box_{1})=iq^{a}\), \(\tau(\Box_{2})=jq^{b}\) for some \(a,b\geq 0\). The set of all inversion pairs of \(\tau\) will be denoted by \(\mathrm{Inv}(\tau)\). We will use the shorthand \(\mathrm{I}(T)\) for the set \(\mathrm{Inv}(\min(T))\). **Example 5**.: In the labelling \[\begin{array}{|c|c|c|c|c|c|}\hline 17q^{7}&12q^{5}&13q^{5}&10q^{2}&6q^{1}&1q^{0}\\ \hline 16q^{6}&14q^{5}&15q^{5}&2q^{0}&3q^{0}\\ \hline 11q^{2}&7q^{1}&8q^{1}&4q^{0}&\\ \hline 9q^{1}&5q^{0}&\\ \hline\end{array}\] the pairs \((17q^{7},12q^{5})\), \((14q^{5},13q^{5})\), and \((5q^{0},4q^{0})\) are all inversions. Here we have referred to boxes according to their labels. ### Finite Hecke Algebra Here we give a review of the finite Hecke algebras in type A and give a description of their irreducible representations. **Definition 2.6**.: Define the _finite Hecke algebra_\(\mathcal{H}_{n}\) to be the \(\mathbb{Q}(q,t)\)-algebra generated by \(T_{1},\ldots,T_{n-1}\) subject to the relations * \((T_{i}-1)(T_{i}+t)=0\) for \(1\leq i\leq n-1\) * \(T_{i}T_{i+1}T_{i}=T_{i+1}T_{i}T_{i+1}\) for \(1\leq i\leq n-2\) * \(T_{i}T_{j}=T_{j}T_{i}\) for \(|i-j|>1\). We define the special elements \(\overline{\theta}_{1},\ldots,\overline{\theta}_{n}\in\mathcal{H}_{n}\) by \(\overline{\theta}_{1}:=1\) and \(\overline{\theta}_{i+1}:=tT_{i}^{-1}\overline{\theta}_{i}T_{i}^{-1}\) for \(1\leq i\leq n-1\). Further, define \(\overline{\varphi}_{1},\ldots,\overline{\varphi}_{n-1}\) by \(\overline{\varphi}_{i}:=(tT_{i}^{-1})\overline{\theta}_{i}-\overline{\theta}_{i}(tT_{i}^{-1})\). For a permutation \(\sigma\in\mathfrak{S}_{n}\) and a reduced expression \(\sigma=s_{i_{1}}\cdots s_{i_{r}}\) we write \(T_{\sigma}:=T_{i_{1}}\cdots T_{i_{r}}\). **Definition 2.7**.: Let \(\lambda\in\mathbb{Y}\) with \(|\lambda|=n\). By an abuse of notation we will label by \(\lambda\) the \(\mathcal{H}_{n}\)-module spanned by \(\tau\in\mathrm{SYT}(\lambda)\) defined by the following relations: * \(\overline{\theta}_{i}(\tau)=t^{c_{\tau}(i)}\tau\) * If \(s_{i}(\tau)>\tau\) then \(\overline{\varphi}_{i}(\tau)=(t^{c_{\tau}(i)}-t^{c_{\tau}(i+1)})s_{i}(\tau)\). * If the labels \(i,i+1\) are in the same row in \(\tau\) then \(T_{i}(\tau)=\tau\). * If the labels \(i,i+1\) are in the same column in \(\tau\) then \(T_{i}(\tau)=-t\tau\). **Lemma 2.8**.: Let \(\lambda\in\mathbb{Y}\), \(n\geq n_{\lambda}\), and \(\Box_{0}\in\lambda^{(n+1)}/\lambda^{(n)}\). There is a \(\mathcal{H}_{n}\)-module map \(\mathfrak{q}_{\lambda}^{(n)}:\lambda^{(n+1)}\to\lambda^{(n)}\) given by \[\mathfrak{q}_{\lambda}^{(n)}(\tau):=\begin{cases}\tau|_{\lambda^{(n)}}&\tau(\Box_{0})=n+1\\ 0&\tau(\Box_{0})\neq n+1.\end{cases}\] ### Positive Affine Hecke Algebra We will need the following basic notions about affine Hecke algebras in type A.
**Definition 2.9**.: Define the _positive affine Hecke algebra_\(\mathcal{A}_{n}\) to be the \(\mathbb{Q}(q,t)\)-algebra generated by \(T_{1},\dots,T_{n-1}\) and \(\theta_{1},\dots,\theta_{n}\) subject to the relations * \(T_{1},\dots,T_{n-1}\) generate \(\mathcal{H}_{n}\) * \(\theta_{i}\theta_{j}=\theta_{j}\theta_{i}\) for all \(1\leq i,j\leq n\) * \(\theta_{i+1}=tT_{i}^{-1}\theta_{i}T_{i}^{-1}\) for \(1\leq i\leq n-1\) * \(T_{i}\theta_{j}=\theta_{j}T_{i}\) for \(j\notin\{i,i+1\}\) Define the special elements \(\pi_{n}\) and \(\varphi_{1},\dots,\varphi_{n-1}\) of \(\mathcal{A}_{n}\) by * \(\pi_{n}:=t^{n-1}\theta_{1}T_{1}^{-1}\cdots T_{n-1}^{-1}\) * \(\varphi_{i}:=(tT_{i}^{-1})\theta_{i}-\theta_{i}(tT_{i}^{-1})\). **Remark 2**.: Note that the \(\theta_{i}\) elements are distinct from the Cherednik elements \(\xi_{i}\) defined in [11] which, after aligning the different finite Hecke algebra \(T_{i}\) conventions, satisfy \(\pi_{n}=\xi_{1}T_{1}\dots T_{n-1}\). **Definition 2.10**.: Define the \(\mathbb{Q}(q,t)\)-algebra homomorphism \(\rho_{n}:\mathcal{A}_{n}\to\mathcal{H}_{n}\) by * \(\rho_{n}(T_{i})=T_{i}\) for \(1\leq i\leq n-1\) * \(\rho_{n}(\theta_{i})=\overline{\theta}_{i}\) for \(1\leq i\leq n\). For a \(\mathcal{H}_{n}\)-module \(V\) we will denote by \(\rho_{n}^{*}(V)\) the \(\mathcal{A}_{n}\)-module with action defined for \(v\in V\) and \(w\in\mathcal{A}_{n}\) by \(w(v):=\rho_{n}(w)(v)\). **Remark 3**.: Note for \(\lambda\in\mathbb{Y}\) with \(|\lambda|=n\) that \(\rho_{n}^{*}(\lambda)\) is an irreducible \(\mathcal{A}_{n}\)-module with a basis of \(\theta\)-weight vectors with distinct weights given by \(\tau\in\mathrm{SYT}(\lambda)\). ### Positive Double Affine Hecke Algebra Here we describe the positive double affine Hecke algebras in type \(GL_{n}\). **Definition 2.11**.: Define the _positive double affine Hecke algebra_\(\mathcal{D}_{n}\) to be the \(\mathbb{Q}(q,t)\)-algebra generated by \(T_{1},\dots,T_{n-1}\), \(\theta_{1},\dots,\theta_{n}\), and \(X_{1},\dots,X_{n}\) subject to the relations * \(T_{1},\dots,T_{n-1}\) and \(\theta_{1},\dots,\theta_{n}\) generate \(\mathcal{A}_{n}\) * \(X_{i}X_{j}=X_{j}X_{i}\) for \(1\leq i,j\leq n\) * \(X_{i+1}=tT_{i}^{-1}X_{i}T_{i}^{-1}\) for \(1\leq i\leq n-1\) * \(T_{i}X_{j}=X_{j}T_{i}\) for \(1\leq i\leq n-1\) and \(1\leq j\leq n\) with \(j\notin\{i,i+1\}\) * \(\pi_{n}X_{i}=X_{i+1}\pi_{n}\) for \(1\leq i\leq n-1\) * \(\pi_{n}X_{n}=qX_{1}\pi_{n}\). **Remark 4**.: Note that \(\mathcal{D}_{n}\) has a \(\mathbb{Z}_{\geq 0}\)-grading determined by \(\deg(X_{i})=1\) and \(\deg(\theta_{i})=\deg(T_{i})=0\). **Definition 2.12**.: Let \(\epsilon^{(n)}\in\mathcal{H}_{n}\) denote the (normalized) trivial idempotent given by \[\epsilon^{(n)}:=\frac{1}{[n]_{t}!}\sum_{\sigma\in\mathfrak{S}_{n}}t^{\binom{n}{2}-\ell(\sigma)}T_{\sigma}\] where \([n]_{t}!:=\prod_{i=1}^{n}\left(\frac{1-t^{i}}{1-t}\right).\) The _positive spherical double affine Hecke algebra_\(\mathcal{D}_{n}^{\mathrm{sph}}\) is the (non-unital) subalgebra of \(\mathcal{D}_{n}\) given by \(\mathcal{D}_{n}^{\mathrm{sph}}:=\epsilon^{(n)}\,\mathcal{D}_{n}\,\epsilon^{(n)}\). **Remark 5**.: Given any \(\mathcal{D}_{n}\)-module \(V\) the space \(\epsilon^{(n)}(V)\) is naturally a \(\mathcal{D}_{n}^{\mathrm{sph}}\)-module. Note that \(\mathcal{D}_{n}^{\mathrm{sph}}\) is unital with unit \(\epsilon^{(n)}\) and that \(\mathcal{D}_{n}^{\mathrm{sph}}\) has a grading inherited from \(\mathcal{D}_{n}\). ### Positive Elliptic Hall Algebra Here we give a very brief description of the positive elliptic Hall algebra.
**Definition 2.13**.: For \(\ell>0\) define the special elements \(P_{0,\ell}^{(n)},P_{\ell,0}^{(n)}\in\mathcal{D}_{n}^{\mathrm{sph}}\) by * \(P_{0,\ell}^{(n)}=\epsilon^{(n)}\left(\sum_{i=1}^{n}\theta_{i}^{\ell}\right)\epsilon^{(n)}\) * \(P_{\ell,0}^{(n)}=q^{\ell}\epsilon^{(n)}\left(\sum_{i=1}^{n}X_{i}^{\ell}\right)\epsilon^{(n)}.\) **Remark 6**.: Following [15], we may also define elements \(P_{a,b}^{(n)}\in\mathcal{D}_{n}^{\mathrm{sph}}\) for \((a,b)\in\mathbb{Z}^{2}\setminus\{(0,0)\}\) similarly using an algebra automorphism action of \(\mathrm{SL}_{2}(\mathbb{Z})\) on \(\mathcal{D}_{n}^{\mathrm{sph}}\). However, we will not need to work with these general elements directly as \(P_{0,\ell}^{(n)},P_{\ell,0}^{(n)}\) for \(\ell>0\) generate \(\mathcal{D}_{n}^{\mathrm{sph}}\). **Theorem 2.14**.: [15] There is a unique graded algebra surjection \(\mathcal{D}_{n+1}^{\mathrm{sph}}\to\mathcal{D}_{n}^{\mathrm{sph}}\) determined for \(\ell>0\) by \(P_{0,\ell}^{(n+1)}\mapsto P_{0,\ell}^{(n)}\) and \(P_{\ell,0}^{(n+1)}\mapsto P_{\ell,0}^{(n)}.\) **Definition 2.15**.: [15] The _positive elliptic Hall algebra_\(\mathcal{E}^{+}\) is the inverse limit of the graded algebras \(\mathcal{D}_{n}^{\mathrm{sph}}\) with respect to the maps \(\mathcal{D}_{n+1}^{\mathrm{sph}}\to\mathcal{D}_{n}^{\mathrm{sph}}.\) For \(\ell>0\) define the special elements \(P_{0,\ell}:=\lim_{n}P_{0,\ell}^{(n)}\) and \(P_{\ell,0}:=\lim_{n}P_{\ell,0}^{(n)}\). The positive elliptic Hall algebra \(\mathcal{E}^{+}\) is generated by \(P_{0,\ell},P_{\ell,0}\) for \(\ell>0\) and has a \(\mathbb{Z}_{\geq 0}\)-grading determined by \(\deg(P_{0,\ell})=0\) and \(\deg(P_{\ell,0})=\ell.\) Remarkably, there is a description of \(\mathcal{E}^{+}\) (and of its Drinfeld double \(\mathcal{E}\), called the elliptic Hall algebra) by straightforward generators and relations [15] which we will not detail here. ## 3 DAHA Modules from Young Diagrams ### The \(\mathcal{D}_{n}\)-module \(V_{\lambda}\) We begin by defining a collection of DAHA modules indexed by Young diagrams \(\lambda\in\mathbb{Y}\). These modules are the same as those appearing in [14] but we take the approach of using induction from \(\mathcal{A}_{n}\) to \(\mathcal{D}_{n}\) for their definition. **Definition 3.1**.: Let \(\lambda\in\mathbb{Y}\) with \(|\lambda|=n\). Define the \(\mathcal{D}_{n}\)-module \(V_{\lambda}\) to be the induced module \[V_{\lambda}:=\mathrm{Ind}_{\mathcal{A}_{n}}^{\mathcal{D}_{n}}\,\rho_{n}^{*}(\lambda).\] These modules naturally have the basis given by \(X^{\alpha}\otimes\tau\) where \(X^{\alpha}\) is a monomial and \(\tau\in\mathrm{SYT}(\lambda)\). Note that the action of \(\pi_{n}\) on \(V_{\lambda}\) is invertible, so we may consider the action of \(\pi_{n}^{-1}\) although we have not formally included \(\pi_{n}^{-1}\) in the algebra \(\mathcal{D}_{n}\). Using the theory of intertwiners for DAHA and some combinatorics we are able to show the following structural results. The \(F_{\tau}\) appearing below are the version of the non-symmetric vv. Macdonald polynomials following our conventions.
**Proposition 3.2**.: There exists a basis of \(V_{\lambda}\) consisting of \(\theta^{(n)}\)-weight vectors \(\{F_{\tau}:\tau\in\mathrm{PSYT}_{\geq 0}(\lambda)\}\) with distinct \(\theta^{(n)}\)-weights such that the following hold: * \(\theta_{i}^{(n)}(F_{\tau})=q^{w_{\tau}(i)}t^{c_{\tau}(i)}F_{\tau}\) * If \(\tau\in\mathrm{SYT}(\lambda)\) then \(F_{\tau}=1\otimes\tau.\) * If \(s_{i}(\tau)>\tau\) then \[\left(tT_{i}^{-1}+\frac{(t-1)q^{w_{\tau}(i+1)}t^{c_{\tau}(i+1)}}{q^{w_{\tau}(i)}t^{c_{\tau}(i)}-q^{w_{\tau}(i+1)}t^{c_{\tau}(i+1)}}\right)F_{\tau}=F_{s_{i}(\tau)}.\] * \(F_{\Psi(\tau)}=q^{w_{\tau}(1)}X_{n}\pi_{n}^{-1}F_{\tau}.\) **Proposition 3.3**.: The \(\mathcal{D}_{n}\)-module \(V_{\lambda}\) has the following decomposition into \(\mathcal{A}_{n}\)-submodules: \[\mathrm{Res}_{\mathcal{A}_{n}}^{\mathcal{D}_{n}}\,V_{\lambda}=\bigoplus_{T\in\mathrm{RYT}_{\geq 0}(\lambda)}U_{T}\] where \(U_{T}:=\mathrm{span}_{\mathbb{Q}(q,t)}\{F_{\tau}:\mathfrak{p}_{\lambda}(\tau)=T\}.\) Further, each \(\mathcal{A}_{n}\)-module \(U_{T}\) is irreducible. Using induction on the partial order defined over \(\mathrm{PSYT}_{\geq 0}(\lambda)\) we obtain the following result. Recall Section 2 for notation. **Proposition 3.4**.: For \(T\in\mathrm{RYT}_{\geq 0}(\lambda)\), \(F_{\mathrm{top}(T)}\) has a triangular expansion of the form \[F_{\mathrm{top}(T)}=t^{-b_{T}}X^{\nu(T)}\otimes S(T)+\sum_{\beta\prec\nu(T)}X^{\beta}\otimes v_{\beta}\] for some \(v_{\beta}\in\lambda.\) Here \(\prec\) denotes the Bruhat order on \(\mathbb{Z}_{\geq 0}^{n}\). ### Connecting Maps Between \(V_{\lambda^{(n)}}\) In order to build the inverse systems which we will use to define Murnaghan-type modules for \(\mathcal{E}^{+}\), we need to consider the following maps. **Definition 3.5**.: Let \(\lambda\in\mathbb{Y}\). For \(n\geq n_{\lambda}\) define \(\Phi_{\lambda}^{(n)}:V_{\lambda^{(n+1)}}\to V_{\lambda^{(n)}}\) as the \(\mathbb{Q}(q,t)\)-linear map determined by \[\Phi_{\lambda}^{(n)}(X^{\alpha}\otimes v)=\mathbb{1}(\alpha_{n+1}=0)X_{1}^{\alpha_{1}}\cdots X_{n}^{\alpha_{n}}\otimes\mathfrak{q}_{\lambda}^{(n)}(v).\] The next proposition is the most crucial step in proving the main theorem of this paper. Its proof relies heavily on the use of the re-oriented Cherednik operators \(\theta_{i}\) and their spectral analysis. **Proposition 3.6**.: Let \(T\in\mathrm{RYT}_{\geq 0}(\lambda^{(n)})\) and \(T^{\prime}\in\mathrm{RYT}_{\geq 0}(\lambda^{(n+1)})\) be such that \(T(\Box)=T^{\prime}(\Box)\) for \(\Box\in\lambda^{(n)}\) and \(T^{\prime}(\Box_{0})=0\) for \(\Box_{0}\in\lambda^{(n+1)}/\lambda^{(n)}.\) Then \[\Phi_{\lambda}^{(n)}(F_{\mathrm{top}(T^{\prime})})=F_{\mathrm{top}(T)}.\] The maps \(\Phi_{\lambda}^{(n)}\) possess another remarkable property regarding the action of the Macdonald elements which will be required later in this paper. **Corollary 3.7**.: For all \(\ell>0\) and \(n\geq n_{\lambda}\), \[\Phi_{\lambda}^{(n)}\left(P_{0,\ell}^{(n+1)}-\sum_{\Box\in\lambda^{(n+1)}}t^{\ell c(\Box)}\right)=\left(P_{0,\ell}^{(n)}-\sum_{\Box\in\lambda^{(n)}}t^{\ell c(\Box)}\right)\Phi_{\lambda}^{(n)}.\] ## 4 Positive EHA Representations from Young Diagrams In this section we build \(\mathcal{E}^{+}\)-modules using the maps \(\Phi_{\lambda}^{(n)}\) and the stability of the \(F_{\tau}\) basis already described. ### The \(\mathcal{D}_{n}^{\text{sph}}\)-modules \(W_{\lambda^{(n)}}\) Here we consider the spherical subspaces of the \(V_{\lambda}\) modules.
**Definition 4.1**.: For \(\lambda\in\mathbb{Y}\) with \(|\lambda|=n\) define the \(\mathcal{D}_{n}^{\text{sph}}\)-module \(W_{\lambda}:=\epsilon^{(n)}(V_{\lambda})\). We will need the following combinatorial description of the AHA submodules of \(V_{\lambda}\) which contain a nonzero \(T_{i}\)-invariant vector. **Proposition 4.2**.: For \(\lambda\in\mathbb{Y}\) with \(|\lambda|=n\) and \(T\in\mathrm{RYT}_{\geq 0}(\lambda)\), \[\dim_{\mathbb{Q}(q,t)}\epsilon^{(n)}(U_{T})=\begin{cases}1&T\in\mathrm{RSSYT}_{\geq 0}(\lambda)\\ 0&T\notin\mathrm{RSSYT}_{\geq 0}(\lambda).\end{cases}\] We define the symmetric vv. Macdonald polynomials in the following way. These will align, up to a scalar, with those in [17]. **Definition 4.3**.: Let \(T\in\mathrm{RSSYT}_{\geq 0}(\lambda).\) Define \(P_{T}\in\epsilon^{(n)}(U_{T})\) to be the unique element of the form \[P_{T}=F_{\mathrm{top}(T)}+\sum_{y\in\mathrm{PSYT}_{\geq 0}(\lambda;T)\setminus\{\mathrm{top}(T)\}}\kappa_{y}F_{y}\] for some scalars \(\kappa_{y}\in\mathbb{Q}(q,t)\). We can now use Prop. 3.2 and Prop. 3.6 to prove the following results for the \(P_{T}\). **Proposition 4.4**.: For all \(T\in\operatorname{RSSYT}_{\geq 0}(\lambda)\), \[P_{T}=\sum_{\tau\in\operatorname{PSYT}_{\geq 0}(\lambda;T)}\prod_{(\Box_{1},\Box_{2})\in\operatorname{Inv}(\tau)}\left(\frac{q^{T(\Box_{1})}t^{c(\Box_{1})+1}-q^{T(\Box_{2})}t^{c(\Box_{2})}}{q^{T(\Box_{1})}t^{c(\Box_{1})}-q^{T(\Box_{2})}t^{c(\Box_{2})}}\right)F_{\tau}.\] **Proposition 4.5**.: The set \(\{P_{T}:T\in\operatorname{RSSYT}_{\geq 0}(\lambda)\}\) is a \(\mathbb{Q}(q,t)[\theta_{1},\dots,\theta_{n}]^{\mathfrak{S}_{n}}\)-weight basis for \(W_{\lambda}\). Further, \[P_{0,\ell}^{(n)}(P_{T})=\left(\sum_{\Box\in\lambda}q^{\ell T(\Box)}t^{\ell c(\Box)}\right)P_{T}.\] **Corollary 4.6**.: Let \(T\in\operatorname{RSSYT}_{\geq 0}(\lambda^{(n)})\) and \(T^{\prime}\in\operatorname{RSSYT}_{\geq 0}(\lambda^{(n+1)})\) be such that \(T(\Box)=T^{\prime}(\Box)\) for \(\Box\in\lambda^{(n)}\) and \(T^{\prime}(\Box_{0})=0\) for \(\Box_{0}\in\lambda^{(n+1)}/\lambda^{(n)}\). Then \(\Phi_{\lambda}^{(n)}(P_{T^{\prime}})=P_{T}\). ### Stable Limit of the \(W_{\lambda^{(n)}}\) We can now define the stable-limit spaces \(\widetilde{W}_{\lambda}\) and the generalized symmetric Macdonald functions. **Definition 4.7**.: Let \(\lambda\in\mathbb{Y}\). Define the infinite diagram \(\lambda^{(\infty)}:=\bigcup_{n\geq n_{\lambda}}\lambda^{(n)}\). Define \(\Omega(\lambda)\) to be the set of all labellings \(T:\lambda^{(\infty)}\to\mathbb{Z}_{\geq 0}\) such that * \(|\{\square\in\lambda^{(\infty)}:T(\Box)\neq 0\}|<\infty\) * \(T\) decreases weakly along rows * \(T\) decreases strictly along columns. Define the space \(W_{\lambda}^{(\infty)}\) to be the inverse limit \(\varprojlim_{n}W_{\lambda^{(n)}}\) with respect to the maps \(\Phi_{\lambda}^{(n)}\). Let \(\widetilde{W}_{\lambda}\) be the subspace of all bounded \(X\)-degree elements of \(W_{\lambda}^{(\infty)}\). For any symmetric function \(F\in\Lambda\) define \(F[X]^{\bullet}\) to be the corresponding multiplication operator on \(\widetilde{W}_{\lambda}\). Lastly, for \(T\in\Omega(\lambda)\) define the generalized symmetric Macdonald function \(\mathfrak{P}_{T}:=\lim_{n}P_{T|_{\lambda^{(n)}}}\in\widetilde{W}_{\lambda}\). **Remark 7**.: Each \(\mathfrak{P}_{T}\) is homogeneous of degree \(\deg(\mathfrak{P}_{T})=\sum_{\square\in\lambda^{(\infty)}}T(\Box)<\infty\). It is clear from the definition that the set of all \(\mathfrak{P}_{T}\) for \(T\in\Omega(\lambda)\) gives a \(\mathbb{Q}(q,t)\)-basis of \(\widetilde{W}_{\lambda}\).
Lastly, the multiplication operators \(F[X]^{\bullet}\) are well-defined since \(\Phi_{\lambda}^{(n)}X_{n+1}=0\). **Definition 4.8**.: For \(\ell>0\) define the operator \(\Delta_{\ell}:\widetilde{W}_{\lambda}\to\widetilde{W}_{\lambda}\) to be the stable limit \(\Delta_{\ell}:=\lim_{n}\left(P_{0,\ell}^{(n)}-\sum_{\square\in\lambda^{(n)}}t^{\ell c(\Box)}\right).\) ### \(\mathcal{E}^{+}\) Action on \(\widetilde{W}_{\lambda}\) Finally, we are ready to state and prove the main result of this paper. **Theorem 4.9** (Main Theorem).: For \(\lambda\in\mathbb{Y}\), \(\widetilde{W}_{\lambda}\) is a graded \(\mathcal{E}^{+}\)-module with action determined for \(\ell>0\) by * \(P_{\ell,0}\to q^{\ell}p_{\ell}[X]^{\bullet}\) * \(P_{0,\ell}\to\Delta_{\ell}\). Further, \(\widetilde{W}_{\lambda}\) has a basis of eigenvectors \(\{\mathfrak{P}_{T}\}_{T\in\Omega(\lambda)}\) with distinct eigenvalues for the Macdonald operator \(\Delta=\Delta_{1}\). Proof.: It suffices to establish that the map \(\mathcal{E}^{+}\to\operatorname{End}_{\mathbb{Q}(q,t)}(\widetilde{W}_{\lambda})\) satisfies the generating relations of \(\mathcal{E}^{+}\). Any such relation is a non-commutative polynomial expression in \(\mathcal{E}^{+}\) of the form \[F(P_{0,1},\dots,P_{0,r},P_{1,0},\dots,P_{s,0})=0\] for some \(r,s>0\). By an argument of Schiffmann-Vasserot (Lemma 1.3 in [13]), there are automorphisms \(\Gamma^{(n)}\) of \(\mathcal{D}_{n}^{\text{sph}}\) such that \(\Gamma^{(n)}(P_{0,\ell}^{(n)})=P_{0,\ell}^{(n)}-\sum_{\square\in\lambda^{(n)}}t^{\ell c(\Box)}\) and \(\Gamma^{(n)}(P_{\ell,0}^{(n)})=P_{\ell,0}^{(n)}\). By applying the canonical quotient maps \(\Pi_{n}:\widetilde{W}_{\lambda}\to W_{\lambda^{(n)}}\) we see using Cor. 3.7 that as maps \[\Pi_{n}F(P_{0,1},\ldots,P_{0,r},P_{1,0},\ldots,P_{s,0}) =F(\Gamma^{(n)}(P_{0,1}^{(n)}),\ldots,\Gamma^{(n)}(P_{0,r}^{(n)}),\Gamma^{(n)}(P_{1,0}^{(n)}),\ldots,\Gamma^{(n)}(P_{s,0}^{(n)}))\Pi_{n}\] \[=\Gamma^{(n)}(F(P_{0,1}^{(n)},\ldots,P_{0,r}^{(n)},P_{1,0}^{(n)},\ldots,P_{s,0}^{(n)}))\Pi_{n}=0.\] As this holds for all \(n\geq n_{\lambda}\), it follows that \(F(P_{0,1},\ldots,P_{0,r},P_{1,0},\ldots,P_{s,0})=0\) in \(\operatorname{End}_{\mathbb{Q}(q,t)}(\widetilde{W}_{\lambda})\) as desired. The last statement regarding the spectrum of \(\Delta\) follows directly from Prop. 4.5 and Cor. 4.6. **Remark 8**.: For \(\lambda=\emptyset\), \(\widetilde{W}_{\emptyset}=\Lambda\) recovers the standard representation of \(\mathcal{E}^{+}\). In this case, \(\Omega(\emptyset)=\mathbb{Y}\) and \(\mathfrak{P}_{\mu}=P_{\mu}[X;q^{-1},t]\) (up to a nonzero scalar). By considering the grading of each module \(\widetilde{W}_{\lambda}\) and the spectral theory of the Macdonald operator \(\Delta\) we can prove the following. **Proposition 4.10**.: For \(\lambda,\mu\in\mathbb{Y}\) distinct, \(\widetilde{W}_{\lambda}\ncong\widetilde{W}_{\mu}\) as graded \(\mathcal{E}^{+}\)-modules. **Remark 9**.: Although we will not detail the construction here, there is a natural way to extend the \(\mathcal{E}^{+}\) action on each \(\widetilde{W}_{\lambda}\) to an action of the full elliptic Hall algebra \(\mathcal{E}\) using a non-degenerate \(q,t\)-sesquilinear form. ## 5 Pieri Rule In this section we give the description of a Pieri rule for the generalized symmetric Macdonald functions \(\mathfrak{P}_{T}\). We need to consider the following \(q,t\)-rational function.
**Definition 5.1**.: For \(T\in\operatorname{RSSYT}_{\geq 0}(\lambda)\) define \[K_{T}(q,t):=\frac{[\mu(T)]_{t}!}{[n]_{t}!}\prod_{(\Box_{1},\Box_{2})\in\mathrm{I}(T)}\left(\frac{q^{T(\Box_{1})}t^{c(\Box_{1})}-q^{T(\Box_{2})}t^{c(\Box_{2})+1}}{q^{T(\Box_{1})}t^{c(\Box_{1})}-q^{T(\Box_{2})}t^{c(\Box_{2})}}\right).\] Using Prop. 4.4 and some book-keeping we obtain the following finite-rank Pieri formula. **Theorem 5.2**.: For \(T\in\operatorname{RSSYT}_{\geq 0}(\lambda)\) and \(1\leq r\leq n\) we have the expansion \[e_{r}[X_{1}+\ldots+X_{n}]P_{T}=\sum_{S}d_{S,T}^{(r)}P_{S}\] where \[\frac{d_{S,T}^{(r)}}{t^{(\mathcal{D})}e_{r}(1,\ldots,t^{n-1})K_{S}(q,t)}=\sum_{\begin{subarray}{c}\tau\in\operatorname{PSYT}_{\geq 0}(\lambda;T)\\ \Psi^{r}(\tau)\in\operatorname{PSYT}_{\geq 0}(\lambda;S)\end{subarray}}t^{c_{\tau}(1)+\ldots+c_{\tau}(r)}\prod_{(\Box_{1},\Box_{2})\in\operatorname{Inv}(\tau)}\left(\frac{q^{T(\Box_{1})}t^{c(\Box_{1})+1}-q^{T(\Box_{2})}t^{c(\Box_{2})}}{q^{T(\Box_{1})}t^{c(\Box_{1})}-q^{T(\Box_{2})}t^{c(\Box_{2})}}\right)\ \times\\ \prod_{(\Box_{1},\Box_{2})\in\operatorname{Inv}(\Psi^{r}(\tau))}\left(\frac{q^{S(\Box_{1})}t^{c(\Box_{1})}-q^{S(\Box_{2})}t^{c(\Box_{2})}}{q^{S(\Box_{1})}t^{c(\Box_{1})}-q^{S(\Box_{2})}t^{c(\Box_{2})+1}}\right)\] and \(S\) ranges over all elements of \(\operatorname{RSSYT}_{\geq 0}(\lambda)\) that one can obtain from \(T\) by adding \(r\) 1's to the boxes of \(T\), with at most one 1 being added to each box. **Definition 5.3**.: For \(S,T\in\Omega(\lambda)\) and \(r\geq 1\) define \(\mathfrak{d}_{S,T}^{(r)}\in\mathbb{Q}(q,t)\) by \[e_{r}[X]^{\bullet}(\mathfrak{P}_{T})=\sum_{S\in\Omega(\lambda)}\mathfrak{d}_{S,T}^{(r)}\,\mathfrak{P}_{S}.\] Define the _rank_\(\operatorname{rk}(T)\) to be the minimal \(n\geq n_{\lambda}\) such that \(T|_{\lambda^{(\infty)}\setminus\lambda^{(n)}}=0\). **Remark 10**.: Note that from Theorem 5.2 it is clear for \(T\in\Omega(\lambda)\) and \(r\geq 1\) that each \(S\in\Omega(\lambda)\) with \(\mathfrak{d}_{S,T}^{(r)}\neq 0\) will necessarily be obtained from \(T\) by adding \(r\) 1's to the boxes of \(T\), with at most one 1 being added to each box. As such, the set of \(S\) with \(\mathfrak{d}_{S,T}^{(r)}\neq 0\) is finite. We can use the stability from Cor. 4.6 to obtain a Pieri rule. **Corollary 5.4** (Pieri Rule).: Let \(S,T\in\Omega(\lambda)\) and \(r\geq 1\). For all \(n\geq\operatorname{rk}(T)+r\) \[\mathfrak{d}_{S,T}^{(r)}=d_{S|_{\lambda^{(n)}},T|_{\lambda^{(n)}}}^{(r)}.\] ## 6 Family of Product-Series Identities In order to state the final result of this paper we need the following. **Definition 6.1**.: A non-negative _asymptotic periodic standard Young tableau_ with base shape \(\lambda\in\mathbb{Y}\) is a labelling \(\tau:\lambda^{(\infty)}\to\{iq^{a}:i\geq 1,a\geq 0\}\) such that * \(\tau\) is strictly increasing along rows and columns * the set of boxes \(\square\in\lambda^{(\infty)}\) such that \(\tau(\square)=iq^{a}\) for some \(i\geq 1\) and \(a>0\) is finite * for all \(i\geq 1\) there exists a unique \(\square\in\lambda^{(\infty)}\) such that \(\tau(\square)=iq^{a}\) for some \(a\geq 0\). We will write \(\operatorname{APSYT}_{\geq 0}(\lambda)\) for the set of all non-negative asymptotic periodic standard Young tableaux with base shape \(\lambda\in\mathbb{Y}\). If \(\tau\in\operatorname{APSYT}_{\geq 0}(\lambda)\) is such that for every \(\square\in\lambda^{(\infty)}\), \(\tau(\square)=iq^{0}\) for some \(i\geq 1\), then we will call \(\tau\) an _asymptotic standard Young tableau_ with base shape \(\lambda\).
We will write \(\operatorname{ASYT}(\lambda)\) for the set of asymptotic standard Young tableaux with base shape \(\lambda\). As an abuse of notation we will write \(\mathfrak{p}_{\lambda}:\operatorname{APSYT}_{\geq 0}(\lambda)\to\Omega(\lambda)\) for the map given on \(\tau\in\operatorname{APSYT}_{\geq 0}(\lambda)\) by \(\mathfrak{p}_{\lambda}(\tau)(\square)=a\) whenever \(\tau(\square)=iq^{a}\) for some \(i\geq 1\). We will let \(\operatorname{APSYT}_{\geq 0}(\lambda;T)\) denote the set of all \(\tau\in\operatorname{APSYT}_{\geq 0}(\lambda)\) with \(\mathfrak{p}_{\lambda}(\tau)=T\). **Example 6**.: An element of \(\operatorname{APSYT}_{\geq 0}(3,2,1)\) (tableau omitted). **Definition 6.2**.: For \(T\in\Omega(\lambda)\) define \(S(T)\in\operatorname{ASYT}(\lambda)\) by ordering the boxes of \(\lambda^{(\infty)}\) according to \(\square_{1}\leq\square_{2}\) if and only if * \(T(\square_{1})>T(\square_{2})\) or * \(T(\square_{1})=T(\square_{2})\) and \(\square_{1}\) comes before \(\square_{2}\) in the column-standard labelling of \(\lambda^{(\infty)}\). Let \(\tau\in\operatorname{APSYT}_{\geq 0}(\lambda;T)\). An ordered pair of boxes \((\square_{1},\square_{2})\in\lambda^{(\infty)}\times\lambda^{(\infty)}\) is called an _inversion pair_ of \(\tau\) if \(S(T)(\square_{1})<S(T)(\square_{2})\) and \(i>j\) where \(\tau(\square_{1})=iq^{a}\), \(\tau(\square_{2})=jq^{b}\) for some \(a,b\geq 0\). The set of all inversion pairs of \(\tau\) will be denoted by \(\operatorname{Inv}(\tau)\) and we will write \(\operatorname{inv}(\tau)=|\operatorname{Inv}(\tau)|\). Define the _rank_\(\operatorname{rk}(\tau)\) to be the minimal \(n\geq n_{\lambda}\) such that \(\tau|_{\lambda^{(\infty)}\setminus\lambda^{(n)}}\) has consecutive labels. We will write \(\mu_{T}:=\mu(T|_{\lambda^{(\operatorname{rk}(T))}})\) (see Def. 2.4). Using a \(t\)-adic convergence argument and a limiting version of Cor. 4.6 we can show the following. **Theorem 6.3**.: For \(T\in\Omega(\lambda)\) we have the following equality in \(\mathbb{Q}(q)((t))\): \[\frac{\prod_{\square\in\lambda^{(\operatorname{rk}(T))}}\left(1-q^{-T(\square)}t^{\operatorname{rk}(T)-|\lambda|-c(\square)}\right)}{(1-t)^{\operatorname{rk}(T)}[\mu_{T}]_{t}!}\prod_{(\square_{1},\square_{2})\in\mathrm{I}(T|_{\lambda^{(\operatorname{rk}(T))}})}\left(\frac{1-q^{T(\square_{2})-T(\square_{1})}t^{c(\square_{2})-c(\square_{1})}}{1-q^{T(\square_{2})-T(\square_{1})}t^{c(\square_{2})-c(\square_{1})+1}}\right)\\ =\sum_{\tau\in\operatorname{APSYT}_{\geq 0}(\lambda;T)}t^{\operatorname{inv}(\tau)}\prod_{(\square_{1},\square_{2})\in\operatorname{Inv}(\tau)}\left(\frac{1-q^{T(\square_{2})-T(\square_{1})}t^{c(\square_{2})-c(\square_{1})-1}}{1-q^{T(\square_{2})-T(\square_{1})}t^{c(\square_{2})-c(\square_{1})+1}}\right).\] **Example 7**.: If \(\lambda=\emptyset\) and \(T=\begin{array}{|c|c|c|c|}\hline 1&0&0&\cdots\\ \hline\end{array}\in\Omega(\emptyset)\) then from Thm. 6.3 we get \[\frac{1-q^{-1}t}{1-t}=\sum_{k=0}^{\infty}t^{k}\prod_{j=1}^{k}\left(\frac{1-q^{-1}t^{j-1}}{1-q^{-1}t^{j+1}}\right).\]
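The identity in Example 7 can be verified directly; the following telescoping computation is our own check and not part of the paper. Writing \(u=q^{-1}\), the \(k\)-th summand collapses to \[t^{k}\prod_{j=1}^{k}\frac{1-ut^{j-1}}{1-ut^{j+1}}=\frac{t^{k}(1-u)(1-ut)}{(1-ut^{k})(1-ut^{k+1})}=\frac{(1-u)(1-ut)}{u(1-t)}\left(\frac{1}{1-ut^{k}}-\frac{1}{1-ut^{k+1}}\right),\] so summing over \(k\geq 0\) telescopes in \(\mathbb{Q}(q)((t))\) to \[\frac{(1-u)(1-ut)}{u(1-t)}\left(\frac{1}{1-u}-1\right)=\frac{1-ut}{1-t}=\frac{1-q^{-1}t}{1-t},\] which agrees with the left-hand side.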
2302.07524
Revisiting Initializing Then Refining: An Incomplete and Missing Graph Imputation Network
With the development of various applications, such as social networks and knowledge graphs, graph data has been ubiquitous in the real world. Unfortunately, graphs usually suffer from being absent due to privacy-protecting policies or copyright restrictions during data collection. The absence of graph data can be roughly categorized into attribute-incomplete and attribute-missing circumstances. Specifically, attribute-incomplete indicates that a part of the attribute vectors of all nodes are incomplete, while attribute-missing indicates that the whole attribute vectors of partial nodes are missing. Although many efforts have been devoted, none of them is custom-designed for a common situation where both types of graph data absence exist simultaneously. To fill this gap, we develop a novel network termed Revisiting Initializing Then Refining (RITR), where we complete both attribute-incomplete and attribute-missing samples under the guidance of a novel initializing-then-refining imputation criterion. Specifically, to complete attribute-incomplete samples, we first initialize the incomplete attributes using Gaussian noise before network learning, and then introduce a structure-attribute consistency constraint to refine incomplete values by approximating a structure-attribute correlation matrix to a high-order structural matrix. To complete attribute-missing samples, we first adopt structure embeddings of attribute-missing samples as the embedding initialization, and then refine these initial values by adaptively aggregating the reliable information of attribute-incomplete samples according to a dynamic affinity structure. To the best of our knowledge, this newly designed method is the first unsupervised framework dedicated to handling hybrid-absent graphs. Extensive experiments on four datasets have verified that our methods consistently outperform existing state-of-the-art competitors.
Wenxuan Tu, Bin Xiao, Xinwang Liu, Sihang Zhou, Zhiping Cai, Jieren Cheng
2023-02-15T08:38:06Z
http://arxiv.org/abs/2302.07524v1
# Revisiting Initializing Then Refining: An Incomplete and Missing Graph Imputation Network ###### Abstract With the development of various applications, such as social networks and knowledge graphs, graph data has been ubiquitous in the real world. Unfortunately, graphs usually suffer from being absent due to privacy-protecting policies or copyright restrictions during data collection. The absence of graph data can be roughly categorized into attribute-incomplete and attribute-missing circumstances. Specifically, attribute-incomplete indicates that a part of the attribute vectors of all nodes are incomplete, while attribute-missing indicates that the whole attribute vectors of partial nodes are missing. Although many graph imputation methods have been proposed, none of them is custom-designed for a common situation where both types of graph data absence exist simultaneously. To fill this gap, we develop a novel graph imputation network termed Revisiting Initializing Then Refining (RITR), where we complete both attribute-incomplete and attribute-missing samples under the guidance of a novel initializing-then-refining imputation criterion. Specifically, to complete attribute-incomplete samples, we first initialize the incomplete attributes using Gaussian noise before network learning, and then introduce a structure-attribute consistency constraint to refine incomplete values by approximating a structure-attribute correlation matrix to a high-order structural matrix. To complete attribute-missing samples, we first adopt structure embeddings of attribute-missing samples as the embedding initialization, and then refine these initial values by adaptively aggregating the reliable information of attribute-incomplete samples according to a dynamic affinity structure. To the best of our knowledge, this newly designed method is the first end-to-end unsupervised framework dedicated to handling hybrid-absent graphs. Extensive experiments on four datasets have verified that our methods consistently outperform existing state-of-the-art competitors. incomplete multi-view learning, graph neural network, hybrid-absent data, feature completion. ## I Introduction Graphs, which model and represent the complicated relationships among real-world objects, are ubiquitous in practical scenarios, including citation graphs, social graphs, protein graphs, and molecule graphs. To analyze the graph data, graph machine learning attempts to transform an original graph into low-dimensional sample representations by preserving node attributes and graph structure simultaneously [1, 2, 3]. In recent years, with the help of graph neural networks (GNNs), graph machine learning has become an increasingly powerful artificial intelligence technique. It has achieved significant success in diverse real-world applications, such as anomalous citation detection [4], few-shot learning [5], feature selection [6], and knowledge graphs [7]. The key prerequisite for the impressive performance of existing graph machine learning methods lies in the assumption that all samples within a graph are available and complete. However, this assumption may not always hold in practice since it is hard to collect all information from graph data. The reasons behind this include, but are not limited to, privacy-protecting policies, copyright restrictions, and a simple lack of information. For example, in a co-purchase graph of Amazon, consumers tend to selectively (or entirely not) provide their feedback for specific items due to privacy concerns.
In a citation network, some papers are inaccessible due to copyright protection. All these circumstances could easily trigger sparsity and data-absent problems that adversely affect graph representations. According to the type of node attribute absence, absent graphs can be roughly divided into two categories: 1) the attribute-incomplete graph, where only a portion of the attributes of all nodes are absent; 2) the attribute-missing graph, where all attributes of specific nodes are absent. Fig. 1 illustrates the situations of attribute absence. Among them, Fig. 1(a) corresponds to an attribute-incomplete graph, Fig. 1(b) corresponds to an attribute-missing graph, and Fig. 1(c) corresponds to a hybrid-absent graph where both attribute-incomplete samples and attribute-missing samples exist in the same graph. The above cases make valuable information invisible and pose significant challenges to existing graph machine learning methods for graph analysis.

Fig. 1: Different types of absent graphs. (a) Attribute-incomplete graph: particular attributes of all samples are absent; (b) Attribute-missing graph: all attributes of specific nodes are absent; (c) Hybrid-absent graph: both circumstances (a) and (b) exist simultaneously within a graph. As one can easily see, the last category is the most challenging; however, it is still under-explored in previous literature. We make the first attempt to solve it by proposing a novel method called RITR.

To solve the attribute-incomplete learning problem, many efforts have been devoted to developing various imputation strategies such as matrix completion [8, 9], generative adversarial networks [10], Gaussian mixture models (GMM) [11], and other advanced ones [12, 13]. With the imputed attributes, these methods then integrate a standard GNN-based framework with data imputation techniques to conduct the sample embedding. Although significant progress has been made in solving the attribute-incomplete learning problem, the performance of these methods degrades drastically when they handle extremely absent data (_e.g.,_ attribute-missing graphs). To solve this problem, a recent advanced method termed SAT [14] first introduces an unsupervised graph imputation framework to handle attribute-missing graphs under the guidance of a shared-latent space assumption. Specifically, SAT utilizes a graph neural network (_e.g.,_ GCN [15] or GAT [16]) to embed the available attributes and graph structure into a latent space in a decoupled manner. Then it performs a distribution matching mechanism to recover the unknown values of attribute-missing samples. Although achieving encouraging success, SAT suffers from the following limitations when conducting the data imputation: 1) two-source information isolation. SAT isolates the learning processes of the embeddings of observed attributes and the complete graph structure. This prevents the trustworthy visible information from being sufficiently utilized, which could cause the learned representations to be biased and also increases the risk of inaccurate data imputation; 2) strict prior assumption. SAT forces the two-source latent variables to align with an in-discriminative noise matrix obeying a normal distribution. In reality, the pre-defined normal distribution would not ideally conform to complex graphs. As a result, the negotiation between attribute and structure information tends to become overly rigid, resulting in less discriminative representations.
This could adversely affect the quality of the rebuilt attribute matrix of all samples, especially those without attributes. To overcome the above issues, we propose an **I**nitializing **T**hen **R**efining (ITR) network [17] to forbid the adverse effect of inaccurate simple initialization and the limitation of the rigid distribution assumption. The core idea of ITR is to fully leverage the trustworthy visible information to implement the sample embeddings for missing attribute imputation. Though it has the potential to better tackle the attribute-missing problem, we observe that when ITR processes hybrid-absent graphs, attribute-incomplete samples largely undermine the quality of the generated attribute-missing features due to the diffusion of inaccurately imputed information. To our knowledge, hybrid-absent graph machine learning has not been studied in the existing graph literature, yet it is a universal and more challenging problem for various practical applications. To fill this gap, we revisit ITR and further improve it by designing a variant, termed **R**evisiting **I**nitializing **T**hen **R**efining (RITR). In this newly proposed RITR, we complete both attribute-incomplete and attribute-missing samples under the guidance of the initializing-then-refining imputation criterion. To impute incomplete attributes, we elaborately design a **S**ample-denoising **T**hen **C**onsistency-preserving (STC) mechanism. As illustrated in Fig. 2, the feature completion process within this component mainly includes step \(1\) to step \(3\). Firstly, we learn the sample embeddings through a denoising learning approach by sufficiently combining the attribute and structure information of attribute-incomplete samples. Secondly, we take all nodes and learn sample embeddings only according to the structure information. Finally, we explicitly develop a structure-attribute consistency constraint to refine these incomplete latent variables by approximating a structure-attribute correlation matrix to a high-order structure matrix. This operation aims to guarantee the representation quality of nodes with incomplete attributes, and meanwhile provides a feasible initialization for those nodes with missing attributes in the next step. To impute missing attributes, we design another data imputation mechanism termed **I**nitializing **T**hen **R**efining (ITR). In this component, we first take the structure embeddings of attribute-missing samples as initial imputed variables and then refine them with an adaptively updated affinity structure for embedding refinement. The above operations correspond to step \(4\) and step \(5\) in Fig. 2. Comprehensive experiments on four benchmark datasets have been conducted to verify the effectiveness and superiority of our proposed methods and components. As demonstrated, ITR consistently outperforms state-of-the-art methods. Moreover, the other proposed variant, _i.e.,_ RITR, further improves the profiling and classification performance over ITR. It is expected that the simplicity and effectiveness of the RITR method will make it a promising option to be considered for practical applications where the hybrid-absent case is encountered. This work is a substantially extended version of our original conference paper [17]. Compared with its previous version, it has the following significant improvements. 1) _Novel research problem_.
To the best of our knowledge, hybrid-absent graph machine learning is a rarely explored research field, yet it answers a real-world demand from various applications. Accordingly, we develop a novel graph machine learning framework called RITR without relying on any pre-defined distribution assumption, which is the first incomplete-and-missing graph imputation network to solve the corresponding learning problem. 2) _Newly proposed strategy_. To complete attribute-incomplete samples, we propose a new feature completion mechanism termed STC by following the initializing-then-refining imputation criterion. This operation enables the model to generate more robust and discriminative features for attribute-incomplete samples, best serving subsequent attribute-missing imputation tasks. 3) _More experimental results and analyses_. Besides a more detailed discussion and extension, we also conduct more comprehensive experiments, and all evaluation results have verified that the two proposed methods achieve the best performance in different absent graph situations. The remainder of this paper is organized as follows. Section II reviews related work in terms of unsupervised graph machine learning and graph machine learning on absent graphs. Section III presents the notations, definitions, network and component design, and learning targets. Section IV conducts experiments and discusses the results. Section V draws a final conclusion.

## II Related Work

### _Unsupervised Graph Machine Learning_

Early solutions to unsupervised graph machine learning mainly focus on random walk-based methods [18, 19, 20], which first generate random walk sequences over the network structure and then utilize a Skip-Gram model to learn graph representations. However, these methods heavily rely on structure information and overlook other available properties (_e.g._, attribute information) in the graph. More recently, owing to the powerful neighborhood aggregation capacity of graph neural networks (GNNs), many efforts have been made to design GNN-based methods. As one of the most representative lines, generative/predictive learning-oriented methods aim to explore the abundant information embedded in data via some well-known approaches, such as auto-encoder learning [21, 22, 23, 24] and adversarial learning [25, 26, 27, 28]. Another line pays attention to graph contrastive learning, which aims to maximize the agreement of two jointly sampled positive pairs [29, 30, 31, 32, 33, 34]. One underlying assumption commonly adopted by these methods is that the attributes of all nodes are trustworthy and complete. In real-world scenarios, however, they may suffer from significant performance degradation when handling absent graphs.

### _Graph Machine Learning on Absent Graphs_

According to the type of absent graphs, we can roughly group existing absent graph machine learning methods into the following three categories.

#### II-B1 Attribute-incomplete Graph Machine Learning

In the attribute-incomplete circumstance, some methods propose to leverage data imputation-oriented techniques to restore the incomplete information, such as matrix completion [8], generative adversarial networks (GAN) [35], and Gaussian mixture models (GMM) [36]. For instance, NMTR [37] and GRAPE [38], two typical matrix completion methods, first take the user-item rating matrix, users (or items), and the observed ratings as a bipartite graph, sample attributes, and connected relationships, respectively.
These methods then adopt a graph neural network to predict the probabilities (regarded as imputed values) of absent connected relationships. In particular, GRAPE [38] converts the data imputation task into a link prediction learning process over the created bipartite graph, and then utilizes a graph neural network to solve it. Together with recent efforts such as IGMC [9], these methods follow the matrix completion paradigm to conduct data imputation and sample embedding in a transductive or inductive learning manner. In addition, GINN [10] first initializes the incomplete values by a binary mask matrix before network training, and then learns a graph neural network with an adversarial learning mechanism to complete the absent information. GCNMF [11] utilizes a Gaussian mixture model to estimate the incomplete features according to the available information and, in the meanwhile, jointly optimizes the Gaussian mixture model and graph neural network in a unified framework. More recently, T2-GNN [39] designs a general teacher-student graph learning framework to restore both incomplete node features and graph structure through distillation.

#### II-B2 Attribute-missing Graph Machine Learning

Compared to the attribute-incomplete circumstance, handling graph data in which a majority of samples have no attributes poses more challenges for learning high-quality node representations. This topic has recently attracted great attention from graph machine learning researchers. For example, HGNN-AC [40] first adopts current heterogeneous information networks (HINs) to learn node topological representations, and then utilizes the topological relationship between nodes as guidance to implement feature completion for attribute-missing samples via an attention mechanism. HGCA [41] is an unsupervised contrastive learning approach for heterogeneous graphs with missing attributes. It employs the contrastive learning technique to unify the processes of feature completion and representation learning and, thereafter, conducts a fine-grained attribute completion by extracting the semantic relations among different types of samples. Besides attribute-missing heterogeneous graph machine learning, an advanced method called SAT [14] makes the first attempt to solve the attribute-missing learning problem over homogeneous graphs. By unifying the data imputation and network learning processes into a single optimization procedure, SAT learns two-source information embedding matrices in a decoupled manner and then aligns them with a noise matrix sampled from a normal distribution for attribute restoration. Another recent work, ITR [17], introduces an initializing-then-refining mechanism, enabling the network to fully use the trustworthy visible information to adaptively conduct the sample embedding for missing attribute imputation. More recently, SVGA [42] and Amer [43] develop auto-encoder-style frameworks to estimate missing node features via structured variational inference and adversarial learning techniques, respectively.

#### II-B3 Hybrid-absent Graph Machine Learning

As aforementioned, attribute-incomplete and attribute-missing graph machine learning problems have been intensively studied in recent years. Despite their significant progress, these methods are by nature not capable of effectively handling hybrid-absent graphs.
In this circumstance, especially in the unsupervised scenario, the performance of existing attribute-incomplete and attribute-missing methods could drop drastically since they suffer from at least one of the following limitations: 1) heavily relying on annotated graph data; 2) lacking a specialized feature completion mechanism for handling attribute-missing (or attribute-incomplete) samples; 3) disconnecting the processes of data imputation and network optimization; 4) isolating the learning processes of structure and attribute embeddings; 5) imposing too strict a distribution assumption on the latent variables. Although our recently proposed ITR addresses most of the above problems and exhibits a powerful learning capacity in the attribute-missing situation, it remains a great challenge to simultaneously recover incomplete and missing values with limited available information. To achieve this goal, we study a new important research problem termed hybrid-absent graph machine learning, and further improve ITR by designing another variant called RITR. It proposes to first leverage the intimate structure-attribute relationship to guide the imputation of incomplete attributes, and then employ the most trustworthy visible information to implement the missing attribute completion. To the best of our knowledge, none of the above literature considers the hybrid-absent graph machine learning problem. RITR is the first work dedicated to this field.

## III Approach

Fig. 2 shows an overview of our proposed RITR. In what follows, we provide details on notations, definitions, crucial components, and learning targets, respectively.

### _Notations and Definitions_

\(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) denotes a given undirected graph that contains \(N\) samples with \(C\) categories, where \(\mathcal{V}\) and \(\mathcal{E}\) indicate the node set and edge set, respectively. Generally, the topology of a graph \(\mathcal{G}\) can be characterized by its adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\), and the content of graph \(\mathcal{G}\) can be represented by its attribute matrix \(\mathbf{X}\in\mathbb{R}^{N\times D}\), where \(D\) refers to the attribute dimension. The main notations and their explanations are summarized in Table I. **Definition 1 (Hybrid-absent Graph)**. We denote a hybrid-absent graph \(\widetilde{\mathcal{G}}=\{\mathcal{V}^{I},\mathcal{V}^{M},\mathcal{E}\}\), where partial attributes of some samples are unavailable (_i.e.,_ the attribute-incomplete sample set \(\mathcal{V}^{I}\)) and all attributes of other samples are entirely missing (_i.e.,_ the attribute-missing sample set \(\mathcal{V}^{M}\)). \(N^{I}=|\mathcal{V}^{I}|\) and \(N^{M}=|\mathcal{V}^{M}|\) refer to the number of attribute-incomplete samples and attribute-missing samples, respectively. Accordingly, \(\mathcal{V}=\mathcal{V}^{I}\cup\mathcal{V}^{M}\), \(\mathcal{V}^{I}\cap\mathcal{V}^{M}\) = \(\varnothing\), and \(N\) = \(N^{I}\) + \(N^{M}\). Note that the structure information (_i.e.,_\(\mathcal{E}\)) of \(\widetilde{\mathcal{G}}\) is complete. **Definition 2 (Learning Task)**. In this work, we mainly focus on addressing the hybrid-absent graph machine learning problem on graphs without label annotation. Our auto-encoder-style framework learns two graph encoding functions \(E_{A}(\cdot)\) and \(E_{S}(\cdot)\) to impute the invisible latent variables.
Then a graph decoding function \(D(\cdot)\) recovers the attribute-missing and attribute-incomplete samples based on the imputed hidden features. The recovered attributes can be saved and used for profiling and node classification tasks.

### _Overview_

To tackle the issue that existing studies fail to perform well on hybrid-absent graphs, we introduce an end-to-end unsupervised graph imputation network termed RITR. Our goal is to design personalized incomplete and missing feature completion mechanisms on hybrid-absent graphs, and to achieve accurate data imputation and effective information propagation with the initializing-then-refining imputation criterion. As illustrated in Fig. 2, RITR contains two core components, _i.e.,_ the Sample-denoising Then Consistency-preserving (STC) mechanism (Section III-C) and the Initializing Then Refining (ITR) mechanism (Section III-D), which are intended to solve the attribute-incomplete and attribute-missing problems, respectively. Specifically, before network learning, we first generate a corrupted sub-graph by randomly adding Gaussian noise to the incomplete attributes as initial values. Then the corrupted sub-graph and the structural graph are transferred into two architecture-identical yet decoupled graph encoders to learn low-dimensional representations. In the information extraction phase, a structure-attribute consistency constraint allows the intermediate structure-attribute representations of attribute-incomplete samples to negotiate with each other to refine the incomplete attributes. After that, the ITR mechanism utilizes the structure embeddings of the attribute-missing samples as the embedding initialization, and then adaptively refines these initial values by aggregating the reliable information of attribute-incomplete samples according to an affinity structure. Finally, RITR conducts the structure and attribute reconstructions over the latent embeddings by jointly minimizing three objectives.

### _Sample-denoising Then Consistency-preserving_

This work attempts to solve an under-explored yet more challenging hybrid-absent graph machine learning task, where attribute-incomplete and attribute-missing samples exist simultaneously within a graph. The critical technical extension of RITR over its conference version is the STC component, which aims to make the learned representations robust to the attribute-incomplete input pattern and discriminative enough to serve the subsequent attribute-missing imputation task. Since the network cannot be optimized over unknown values, we employ a sample-denoising learning approach to ease the network training and improve the robustness of the learned attribute-incomplete features. Specifically, we first randomly generate a Gaussian noise matrix \(\mathbf{N}\in\mathbb{R}^{N^{I}\times D}\) at each iteration, and then assign it to the unobserved entries of the incomplete attribute matrix \(\mathbf{X}^{I}\in\mathbb{R}^{N^{I}\times D}\) as initial values. The resultant matrix is denoted as the corrupted incomplete attribute matrix \(\widetilde{\mathbf{X}}^{I}\in\mathbb{R}^{N^{I}\times D}\). In the attribute-incomplete circumstance, we merely minimize the reconstruction errors of the visible attributes between the original attribute matrix and the rebuilt attribute matrix for sample denoising. Although the sample-denoising scheme has been previously explored and proved to be powerful [44], directly applying it to the restoration of incomplete attributes is less comprehensive since partial attributes are invisible.
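For concreteness, the denoising initialization described above can be sketched as follows in PyTorch. This is a minimal illustration under our own naming (not taken from the released implementation), assuming the unobserved entries of \(\mathbf{X}^{I}\) are indicated by a binary mask:

```python
import torch

def corrupt_incomplete_attributes(x_incomplete, mask, noise_std=1.0):
    """Fill the unobserved entries of the incomplete attribute matrix with
    freshly drawn Gaussian noise while keeping the observed entries intact.

    x_incomplete : (N_I, D) incomplete attribute matrix X^I.
    mask         : (N_I, D) binary matrix; 1 marks an observed entry.
    """
    noise = noise_std * torch.randn_like(x_incomplete)
    return mask * x_incomplete + (1.0 - mask) * noise

# Toy usage: 5 attribute-incomplete samples, 8-dim attributes,
# roughly 60% of the entries unobserved.
x_i = torch.rand(5, 8)
m = (torch.rand(5, 8) > 0.6).float()
x_i_tilde = corrupt_incomplete_attributes(x_i, m)
```

Because the noise is redrawn at every iteration, minimizing the reconstruction error of only the visible entries forces the encoder to ignore the corrupted positions rather than memorize them.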
To guarantee the high quality of the representations of attribute-incomplete samples, it is intuitive that exploring the complete structure information could benefit the reconstruction of incomplete attributes, since both types of latent variables share consistent and complementary properties of a typical graph [45]. Motivated by this, instead of directly addressing the incompleteness issue in the input space, we leverage the intimate relationship between structure-attribute embeddings to further refine the initial imputation of incomplete attributes. To be specific, we first utilize two graph convolutional network-based encoders, denoted as \(E_{A}(\cdot)\) and \(E_{S}(\cdot)\), to extract the latent features of attribute-incomplete samples and the graph structure, respectively. Formally, given a sub-graph \(\widetilde{\mathcal{G}}^{Sub}\) with a corrupted incomplete attribute matrix \(\widetilde{\mathbf{X}}^{I}\) and a corresponding normalized adjacency matrix \(\widetilde{\mathbf{A}}^{I}\in\mathbb{R}^{N^{I}\times N^{I}}\), \(E_{A}(\cdot)\) accepts them as input and the _l_-th latent representations of attribute-incomplete samples can be formulated as below: \[\mathbf{H}_{A}^{I(l)}=\sigma(\widetilde{\mathbf{A}}^{I}\mathbf{H}_{A}^{I(l-1)} \mathbf{\Theta}^{(l)}), \tag{1}\] where \(\mathbf{\Theta}^{(l)}\) is the parameter matrix of \(E_{A}(\cdot)\) in the _l_-th layer and \(\sigma(\cdot)\) indicates a non-linear activation function. Similarly, \(E_{S}(\cdot)\) receives a structure graph \(\mathcal{G}^{S}\) with an identity matrix \(\mathbf{I}\in\mathbb{R}^{N\times N}\) and a normalized adjacency matrix \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N\times N}\), and the _l_-th latent representations of the graph structure can be obtained: \[\mathbf{H}_{S}^{(l)}=\sigma(\widetilde{\mathbf{A}}\mathbf{H}_{S}^{(l-1)} \mathbf{\Psi}^{(l)}), \tag{2}\] where \(\mathbf{\Psi}^{(l)}\) is the parameter matrix of \(E_{S}(\cdot)\) in the _l_-th layer. After that, we refine the intermediate information of attribute-incomplete samples via a structure-attribute consistency constraint, so as to make the learned representations much better suited for subsequent attribute-missing imputation tasks. Concretely, we encourage each attribute-incomplete sample to be closer to its counterpart as well as its \(r\)-order neighbors across structure-attribute modalities, which can be formulated as: \[\mathcal{L}_{C}=\underbrace{\frac{1}{N^{I}}\sum_{i}^{N^{I}}(\mathbf{C}_{ii}-1)^{2}}_{\text{self-loop consistency}}+\underbrace{\frac{1}{N^{I}(N^{I}-1)}\sum_{i}^{N^{I}} \sum_{j\neq i}^{N^{I}}(\mathbf{C}_{ij}-\widetilde{\mathbf{A}}_{ij}^{Ir})^{2}} _{\text{high-order structural consistency}}, \tag{3}\] \[\mathbf{C}=\frac{\mathbf{H}_{A}^{I(1)}p(\mathbf{H}_{S}^{(1)})^{\text{T}}}{ \|\mathbf{H}_{A}^{I(1)}\|\|p(\mathbf{H}_{S}^{(1)})\|}, \tag{4}\] where \(\mathbf{C}\in\mathbb{R}^{N^{I}\times N^{I}}\) and \(\widetilde{\mathbf{A}}^{Ir}\in\mathbb{R}^{N^{I}\times N^{I}}\) denote a structure-attribute correlation matrix and a \(r\)-order normalized adjacency matrix, respectively. \(N^{I}\) is the number of attribute-incomplete samples. In addition, \(\mathbf{H}_{A}^{I(1)}\) and \(\mathbf{H}_{S}^{(1)}\) indicate the embeddings of attribute-incomplete samples and graph structure in the first layer, respectively. \(p(\cdot)\) is an embedding pick-out function.
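A minimal sketch of the consistency constraint in Eqs.(3)-(4) is given below (our own illustrative implementation; we assume `h_a` holds the first-layer embeddings \(\mathbf{H}_{A}^{I(1)}\), `h_s_picked` the picked-out structure embeddings \(p(\mathbf{H}_{S}^{(1)})\), and `adj_r` the \(r\)-order normalized adjacency \(\widetilde{\mathbf{A}}^{Ir}\)):

```python
import torch
import torch.nn.functional as F

def consistency_loss(h_a, h_s_picked, adj_r):
    """Structure-attribute consistency constraint of Eqs.(3)-(4).

    h_a        : (N_I, d) first-layer embeddings of attribute-incomplete samples.
    h_s_picked : (N_I, d) structure embeddings picked out for the same samples.
    adj_r      : (N_I, N_I) r-order normalized adjacency among these samples.
    """
    n = h_a.size(0)
    # Row-wise cosine correlation matrix C (Eq. 4).
    c = F.normalize(h_a, dim=1) @ F.normalize(h_s_picked, dim=1).t()
    eye = torch.eye(n, dtype=torch.bool, device=c.device)
    # Self-loop consistency: push diagonal entries of C towards 1.
    self_term = ((c.diagonal() - 1.0) ** 2).mean()
    # High-order structural consistency on the off-diagonal entries.
    struct_term = ((c - adj_r)[~eye] ** 2).mean()
    return self_term + struct_term
```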
As seen in Eq.(3), the first term pushes the diagonal elements of the structure-attribute correlation matrix close to one, causing the structure-attribute embeddings of each attribute-incomplete sample to be consistent. Moreover, the neighbors of each sample contain rich complementary information that should be considered for incomplete attribute completion. Hence, the second term encourages each sample to be closer to its \(r\)-order neighbors than to non-neighbors across structure-attribute modalities, aiming to exploit diverse complete structure properties to assist the feature completion of attribute-incomplete samples. By doing this, both sample denoising and the structure-attribute consistency constraint are seamlessly integrated to 1) make the learned representations of attribute-incomplete samples robust and invariant to data perturbations (_e.g.,_ the noise and incompleteness of the graph); 2) preserve more trustworthy features from the available information to achieve better data imputation. The overall pipeline of training the proposed STC is summarized in Algorithm 1.

Fig. 2: The architecture of the Revising Initializing Then Refining (RITR) framework. To impute the incomplete values, we first initialize the original incomplete attributes as Gaussian noise for denoising learning (_i.e._, step _1_), and then introduce a structure-attribute consistency constraint to refine the incomplete values by approximating a structure-attribute correlation matrix to a high-order structural matrix (_i.e._, step _3_). To impute the missing values, we first adopt the structure embeddings of the attribute-missing samples as the embedding initialization (_i.e._, step _4_), and then adaptively refine these initial values by aggregating the reliable and informative information of the attribute-incomplete samples according to the affinity structure (_i.e._, step _5_).

### _Initializing Then Refining_

**Imputation Initialization.** Besides completing and fine-tuning the samples with incomplete attributes, assigning initial values to the samples with missing attributes should be further taken into account. A widely adopted measure is to use traditional imputation techniques, such as zero-value filling and mean-value filling. Nevertheless, in the attribute-missing circumstance, these filling methods could incorporate large amounts of irrelevant noise that diffuses through the network, causing semantically biased representations. To alleviate this issue, it is intuitive to leverage the structure embeddings as the embedding initialization for the latent variables of attribute-missing samples. The reason for that is two-fold. Firstly, the attribute embedding and the structure embedding describe different aspects of a node, providing consistent and complementary information in these two modalities [45]. Secondly, this initialization approach is reliable since the structure information of the original graph is complete. To this end, after encoding the incomplete attribute embeddings \(\mathbf{H}_{A}^{I}\in\mathbb{R}^{N^{I}\times d}\) and structure embeddings \(\mathbf{H}_{S}\in\mathbb{R}^{N\times d}\), we first pick out the structure embeddings of attribute-missing samples \(\mathbf{H}_{S}^{M}\in\mathbb{R}^{N^{M}\times d}\) from \(\mathbf{H}_{S}\), and then utilize a Concat function \(C(\cdot)\) to integrate \(\mathbf{H}_{S}^{M}\) with \(\mathbf{H}_{A}^{I}\), where \(d\) refers to the embedding dimension.
It is worth noting that the information concatenation used to construct \(\mathbf{H}_{I}\in\mathbb{R}^{N\times d}\) is not the classic channel-wise or row-wise concatenation. In this operation, the latent variables of attribute-incomplete samples are filled with \(\mathbf{H}_{A}^{I}\) and the latent variables of attribute-missing samples are filled with \(\mathbf{H}_{S}^{M}\): \[\mathbf{H}_{I}=C(\mathbf{H}_{A}^{I},\mathbf{H}_{S}^{M}), \tag{5}\] where \(\mathbf{H}_{I}\) indicates the initially imputed embeddings. In our concatenation settings, the location of each sample remains unchanged within the original graph. **Imputation Refinement.** It is well known that node attributes preserve the semantic graph content while the graph structure implies the connection relationships among samples. Consequently, the trustworthiness degrees of the two-source information differ to some extent. Making full use of the trustworthy visible structure and attribute information to initialize and refine the missing values may benefit the data imputation quality of attribute-missing samples. To verify whether this argument holds, we compare three methods and discuss their performance on Cora and Citeseer. Here we set both the attribute-incomplete and attribute-missing ratios to 60%. Ours-Z, Ours-S, and Ours-S-A are methods where we impute the latent variables of attribute-missing samples with zero values, with the structure embeddings only, and with the structure-attribute embeddings, respectively. As shown in Table II, we can observe that 1) Ours-S performs better than Ours-Z, which empirically verifies that the structure embeddings \(\mathbf{H}_{S}^{M}\) provide an effective embedding initialization; 2) adding trustworthy attributes to the data imputation provides more discriminative information to assist the refinement of the initial imputation. Specifically, we leverage the available attribute properties \(\mathbf{H}_{A}^{I}\) to refine the initially imputed variables \(\mathbf{H}_{S}^{M}\) via an affinity structure \(\mathbf{R}\in\mathbb{R}^{N\times N}\). The intuition behind this is that \(\mathbf{H}_{A}^{I}\) is trustworthy since all attribute-incomplete samples are carefully restored via the STC mechanism. By doing this, the semantic gap between \(\mathbf{H}_{A}^{I}\) and \(\mathbf{H}_{S}^{M}\) is allowed to be narrowed, thus boosting the discriminative capacity of the overall graph embedding. The above procedure can be written as a graph convolution-like formulation: \[\mathbf{H}=\mathbf{R}\mathbf{H}_{I}, \tag{6}\] where \(\mathbf{H}\in\mathbb{R}^{N\times d}\) indicates the imputed embeddings and we initialize the affinity structure \(\mathbf{R}\) as \(\widetilde{\mathbf{A}}\). According to Eq.(6), we refine the attribute-missing imputation from the following two aspects. On the one hand, it is obvious that the noise information in \(\mathbf{H}_{S}^{M}\) can be transferred into the well-learned attribute embeddings of attribute-incomplete samples. This would undermine the representation quality and the reconstruction accuracy of the available information, which in turn negatively affects the subsequent data imputation tasks and may even distort the original graph. To tackle this problem, we implement an information recomposing scheme to decrease the adverse effect of the noise information passing from the embeddings of attribute-missing samples.
Firstly, we pick out the latent variables of attribute-missing samples \(\mathbf{H}^{M}\in\mathbb{R}^{N^{M}\times d}\) from \(\mathbf{H}\), and then recombine them with \(\mathbf{H}_{A}^{I}\) using a Concat function \(C(\cdot)\): \[\widetilde{\mathbf{H}}=C(\mathbf{H}_{A}^{I},\mathbf{H}^{M}), \tag{7}\] where \(\widetilde{\mathbf{H}}\in\mathbb{R}^{N\times d}\) indicates the sample-recomposed embeddings. The information recomposing scheme replaces the adjusted embeddings of attribute-incomplete samples with the more reliable \(\mathbf{H}_{A}^{I}\). Meanwhile, as illustrated in Fig. 2, we fix the embeddings of attribute-incomplete samples as \(\mathbf{H}_{A}^{I}\) in the final step. This provides the most trustworthy information on attribute-incomplete samples for the subsequent missing attribute imputation. On the other hand, we argue that the initial affinity matrix \(\mathbf{R}\) (_i.e.,_\(\widetilde{\mathbf{A}}\)) is not the ground truth. The limitations within this matrix are two-fold: 1) noisy connections. Besides the inner connections within clusters, inappropriate connections could exist between clusters in the matrix; 2) missing connections. In \(\widetilde{\mathbf{A}}\), only the first-order connections are preserved, while high-order relevant connections could be missing. Both would cause inaccurate imputation and reconstruction of missing attributes. To overcome these issues, we seek to refine \(\mathbf{R}\) by emphasizing the dependable connections while weakening the unreliable ones. To this end, we propose an affinity structure updating scheme to optimize \(\mathbf{R}\) over iterations. Specifically, we first calculate a normalized self-correlated matrix \(\mathbf{S}\in\mathbb{R}^{N\times N}\) according to \(\widetilde{\mathbf{H}}\) as below: \[\mathbf{S}_{jk}=\mathcal{N}\left(\frac{\widetilde{\mathbf{h}}_{j}\widetilde{\mathbf{h}}_{k}^{\mathrm{T}}}{\|\widetilde{\mathbf{h}}_{j}\|\|\widetilde{\mathbf{h}}_{k}\|}\right),\;\;\forall\;j,k\in[1,N], \tag{8}\] where \(\mathcal{N}(\mathbf{Y})=\mathbf{D}_{\mathbf{Y}}^{-\frac{1}{2}}\mathbf{Y}\mathbf{D}_{\mathbf{Y}}^{-\frac{1}{2}}\) indicates a structural normalization function and \(\mathbf{D}_{\mathbf{Y}}\in\mathbb{R}^{N\times N}\) is the degree matrix of \(\mathbf{Y}\). \(\widetilde{\mathbf{h}}_{j}\) (\(\widetilde{\mathbf{h}}_{k}\)) indicates the embedding of sample \(\mathbf{v}_{j}\) (\(\mathbf{v}_{k}\)). Then we update the affinity structure \(\mathbf{R}\) every \(t\) iterations via Eq.(9), and leverage it as guidance for the subsequent missing attribute imputation: \[\mathbf{R}=\gamma\widetilde{\mathbf{A}}+(1-\gamma)\mathbf{S}, \tag{9}\] where \(\gamma\) is a balancing hyper-parameter initialized as 0.5. With the affinity structure updating scheme, the network is enabled to construct the embeddings of attribute-missing samples with not only the first-order but also the high-order connections within the graph structure. Since the embeddings of attribute-incomplete samples become more reliable and the embeddings of attribute-missing samples become more informative, the quality of data imputation can be further improved, making the learned representations more discriminative and robust. The overall pipeline of training the proposed ITR is summarized in Algorithm 2.
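The whole initializing-then-refining pass (Eqs.(5)-(9)) can be sketched as below. This is an illustrative re-implementation under our own naming; the non-negativity clamp before the structural normalization is our assumption, added to keep the degree matrix in Eq.(8) well defined:

```python
import torch
import torch.nn.functional as F

def sym_normalize(y):
    """Structural normalization N(Y) = D_Y^{-1/2} Y D_Y^{-1/2} from Eq.(8)."""
    deg = y.sum(dim=1).clamp(min=1e-12)
    d_inv_sqrt = deg.pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * y * d_inv_sqrt.unsqueeze(0)

def itr_step(h_a_incomplete, h_s_missing, idx_incomplete, idx_missing, r_affinity):
    """One initializing-then-refining pass (Eqs. 5-7); returns H_tilde."""
    n, d = r_affinity.size(0), h_a_incomplete.size(1)
    h_init = torch.zeros(n, d, device=h_a_incomplete.device)
    h_init[idx_incomplete] = h_a_incomplete   # trusted attribute embeddings H_A^I
    h_init[idx_missing] = h_s_missing         # structure-based initialization (Eq. 5)
    h = r_affinity @ h_init                   # refinement by propagation (Eq. 6)
    h_tilde = h.clone()
    h_tilde[idx_incomplete] = h_a_incomplete  # information recomposing (Eq. 7)
    return h_tilde

def update_affinity(adj_norm, h_tilde, gamma=0.5):
    """Affinity structure updating (Eqs. 8-9)."""
    s = F.normalize(h_tilde, dim=1) @ F.normalize(h_tilde, dim=1).t()
    # Clamping negative cosine similarities is our assumption, keeping the
    # degrees used by the structural normalization non-negative.
    s = sym_normalize(s.clamp(min=0))
    return gamma * adj_norm + (1.0 - gamma) * s
```

Applying `update_affinity` only every \(t\) iterations, as described above, keeps the guidance structure stable between refreshes while still letting high-order, embedding-induced connections enter \(\mathbf{R}\) over time.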
### _Training Objectives and Complexity Analysis_

#### III-E1 Training Objectives

After obtaining \(\widetilde{\mathbf{H}}\), we feed it together with \(\widetilde{\mathbf{A}}\) into a graph decoder \(D(\cdot)\) to rebuild the attributes of attribute-incomplete and attribute-missing samples: \[\widetilde{\mathbf{H}}^{(l)}=\sigma(\widetilde{\mathbf{A}}\widetilde{\mathbf{H}}^{(l-1)}\mathbf{\Phi}^{(l)}), \tag{10}\] where \(\mathbf{\Phi}^{(l)}\) indicates the parameter matrix of \(D(\cdot)\) in the _l_-th layer. \(\widetilde{\mathbf{H}}^{(0)}\) and \(\widetilde{\mathbf{H}}^{(2)}\) denote the sample-recomposed embeddings \(\widetilde{\mathbf{H}}\) and the rebuilt attribute matrix \(\widehat{\mathbf{X}}\in\mathbb{R}^{N\times D}\), respectively. The joint loss function of RITR includes three parts, which can be written as: \[\mathcal{L}_{A}=\frac{1}{2N^{I}}\|\mathbf{M}\odot(\mathbf{X}^{I}-\widehat{\mathbf{X}}^{I})\|_{F}^{2}, \tag{11}\] \[\mathcal{L}_{S}=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}BCE(\widetilde{\mathbf{A}}_{ij},\widehat{\mathbf{A}}_{ij}), \tag{12}\] \[\mathcal{L}=\alpha\mathcal{L}_{A}+\mathcal{L}_{S}+\beta\mathcal{L}_{C}. \tag{13}\] In Eq.(11), \(\mathcal{L}_{A}\) refers to the mean square error (MSE) over the visible attributes of attribute-incomplete samples between \(\mathbf{X}^{I}\) and \(\widehat{\mathbf{X}}^{I}\). \(\mathbf{M}\in\mathbb{R}^{N^{I}\times D}\) is an element-wise indicator matrix where \(\mathbf{M}_{ij}=1\) if \(\mathbf{X}_{ij}^{I}\) is a real value, and \(\mathbf{M}_{ij}=0\) otherwise (_i.e.,_ when \(\mathbf{X}_{ij}^{I}\) is a null value, an incomplete attribute). In Eq.(12), \(\mathcal{L}_{S}\) refers to the binary cross-entropy (BCE) between the normalized adjacency matrix \(\widetilde{\mathbf{A}}\) and the rebuilt adjacency matrix \(\widehat{\mathbf{A}}\in\mathbb{R}^{N\times N}\), where \(\widehat{\mathbf{A}}=\sigma(\mathbf{H}_{S}\widetilde{\mathbf{H}}^{\mathrm{T}})\) and \(\sigma(\cdot)\) is a Sigmoid activation function. \(\alpha\) and \(\beta\) are two balancing hyper-parameters. The applied optimization objectives are similar to those of existing attribute-missing graph machine learning methods [14, 17]. However, the major differences between current methods and our improved method can be summarized in the following three parts: 1) it more naturally handles hybrid-absent graphs in an unsupervised circumstance; 2) it is more comprehensive, seamlessly unifying the representation learning and data imputation processes of attribute-incomplete and attribute-missing samples into a common framework; 3) it is more discriminative, enabling the structure-attribute information to sufficiently negotiate with each other for feature completion by performing the STC and ITR mechanisms.

#### III-E2 Complexity Analysis

The time complexity of the proposed RITR can be discussed from the following two aspects: the graph auto-encoder framework and the loss error computation. For the two GCN-based graph encoders, the complexities of \(E_{A}(\cdot)\) and \(E_{S}(\cdot)\) are \(\mathcal{O}(Nd^{2}(L-1)+NdD_{a}+|\mathcal{E}|dL)\) and \(\mathcal{O}(Nd^{2}(L-1)+NdD_{s}+|\mathcal{E}|dL)\), where \(N\), \(L\), and \(|\mathcal{E}|\) are the number of nodes, encoder layers, and edges, respectively. \(D_{a}\), \(D_{s}\), and \(d\) are the dimensions of the raw attribute features, the raw structure features, and the hidden representations, respectively. For the graph decoder, the complexity of \(D(\cdot)\) is \(\mathcal{O}(Nd^{2}(L-1)+NdD_{a}+|\mathcal{E}|d(L-1)+|\mathcal{E}|D_{a})\).
For the loss error computation, we follow SAT [14] and use the MSE and BCE loss functions to reconstruct the node attributes and graph structure, respectively. The time complexities of \(\mathcal{L}_{A}\) and \(\mathcal{L}_{S}\) are \(\mathcal{O}(nD_{a})\) and \(\mathcal{O}(N^{2})\). The time complexity of the structure-attribute consistency loss function \(\mathcal{L}_{C}\) is \(\mathcal{O}(n^{2})\), where \(n=(1-\theta)N\) and \(\theta\) is the ratio of attribute-missing samples. Considering that the attribute-missing problem is universal in real-world scenarios (_i.e.,_\(\theta\) can be set to a large value) and \(n^{2}\) can be a relatively small value, the computation overhead here is acceptable. The overall time complexity of RITR for each training iteration is \(\mathcal{O}(Nd(D_{a}+D_{s}+dL)+|\mathcal{E}|(dL+D_{a})+N^{2}+n(n+D_{a}))\approx \mathcal{O}(N^{2})\). For a fair comparison, we conduct a complexity comparison among four attribute-missing graph machine learning methods and report the results in Table III. As seen, RITR consistently outperforms all baselines in both the attribute-missing and hybrid-absent cases (see Sections IV-B and IV-C), requiring no additional computation complexity compared to these competitors.

## IV Experiments

In this section, we evaluate the effectiveness of ITR and RITR against some advanced graph machine learning methods. The experiments aim to answer the following five questions:

* \(\mathbf{Q1}\). How do the proposed methods perform compared to baselines in profiling and node classification tasks?
* \(\mathbf{Q2}\). How do the designed components influence the performance?
* \(\mathbf{Q3}\). How does the proposed method perform with different absent ratios?
* \(\mathbf{Q4}\). How do key hyper-parameters influence the performance of the proposed method?
* \(\mathbf{Q5}\). How about the method convergence and performance variation with iterations?

In the following, we begin with a brief introduction of the experimental setup, including benchmark datasets, implementation procedures, training settings, and compared methods. Then we report the experimental results with corresponding analysis.

### _Experimental Setup_

#### IV-A1 Benchmark Datasets

We conduct experiments to evaluate the two proposed methods, _i.e.,_ ITR and RITR, on four benchmark datasets, including Cora, Citeseer, Amazon Computers, and Amazon Photo. We summarize the detailed dataset information in Table IV.

* Cora1 and Citeseer1 are two popular citation network datasets. Specifically, nodes represent scientific publications and edges represent citation relationships. Each node is associated with a predefined feature vector of the corresponding dimension.

Footnote 1: [https://docs.dgl.ai/api/python/dgl.data.html#citation-network-dataset](https://docs.dgl.ai/api/python/dgl.data.html#citation-network-dataset)

* Amazon Photo2 and Amazon Computers2 (Amap and Amac for abbreviation) are segments of the Amazon co-purchase network, where nodes represent goods, edges indicate that two goods are frequently bought together, node features are bag-of-words encoded product reviews, and class labels are given by the product category.

Footnote 2: [https://docs.dgl.ai/api/python/dgl.data.html#amazon-co-purchase-dataset](https://docs.dgl.ai/api/python/dgl.data.html#amazon-co-purchase-dataset)

#### IV-A2 Implementation Procedures

Both ITR and RITR are implemented with the PyTorch platform.
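As a reference for the objectives in Eqs.(11)-(13), a minimal PyTorch sketch of the joint loss is given below. The names are illustrative rather than taken from the released code; `loss_c` is the consistency term \(\mathcal{L}_{C}\) of Eq.(3), and the defaults \(\alpha=\beta=10\) follow the setting reported later:

```python
import torch
import torch.nn.functional as F

def joint_loss(x_incomplete, x_rebuilt_incomplete, mask,
               adj_norm, h_s, h_tilde, loss_c, alpha=10.0, beta=10.0):
    """Joint objective of Eq.(13): L = alpha * L_A + L_S + beta * L_C."""
    n_i = x_incomplete.size(0)
    # Eq.(11): masked MSE over the visible attributes of incomplete samples.
    loss_a = (mask * (x_incomplete - x_rebuilt_incomplete)).pow(2).sum() / (2 * n_i)
    # Eq.(12): BCE between the normalized and rebuilt adjacency matrices,
    # with A_hat = sigmoid(H_S H_tilde^T); the mean reduction realizes 1/N^2.
    a_hat = torch.sigmoid(h_s @ h_tilde.t())
    loss_s = F.binary_cross_entropy(a_hat, adj_norm)
    return alpha * loss_a + loss_s + beta * loss_c
```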
We evaluate the effectiveness of the two proposed methods through a two-step learning procedure. Firstly, we train an unsupervised framework to learn node representations and complete the absent information for at least 600 iterations. Following SAT [14], we regard profiling learning as a pretext task and adopt Recall@K and NDCG@K as metrics to evaluate the quality of the rebuilt attributes. To alleviate the over-fitting problem, we perform an early-stop strategy when the loss value reaches a plateau. Secondly, for the node classification task, we feed the rebuilt attribute matrix into a graph classifier, optimize it with five-fold validation 10 times, and report the average accuracy (ACC) performance.

#### IV-A3 Training Settings

In the attribute-missing case, we record the performance of all methods directly according to the paper of SAT [14], except for GINN [10], GCNMF [11], and SVGA [42]. In the hybrid-absent case, we run the released source code of all compared methods by following the settings of the corresponding literature, and report their results. For our proposed ITR and RITR, we strictly follow the criterion of data splits as was done in SAT, including the split ratio of attribute-complete/missing samples and the split ratio of train/test sets. Specifically, 1) in the profiling task, we randomly sample 40% of the nodes with complete attributes as the training set, and manually mask all attributes of the remaining 10% and 50% of the nodes (_i.e.,_ attribute-missing samples) as the validation set and the test set, respectively. Besides, when attribute-incomplete and attribute-missing samples exist simultaneously within a graph, we randomly mask 60% of the attributes of each attribute-complete sample (_i.e._, the training set) before network learning. We employ a 4-layer graph auto-encoder framework and optimize it with the Adam optimization algorithm. During the training phase, we transfer all samples into ITR and RITR to complete the absent attributes by merely reconstructing the available ones. After training, we rebuild the attribute matrix over the well-trained model via forward propagation; 2) in the node classification task, we randomly split the rebuilt attributes into 80% and 20% for training and testing, respectively. We train the classifier with five-fold validation for 1000 iterations and repeat the experiments 10 times. According to the results of the parameter sensitivity testing, we fix the two balancing hyper-parameters \(\alpha\) and \(\beta\) to 10. Moreover, the learning rate, the latent dimension, the dropout rate, and the weight decay are set to 1e-3, 64, 0.5, and 5e-4, respectively. Note that, for ease of training, we do not carefully tune these parameters, as was done in SAT.

#### IV-A4 Compared Methods

We compare RITR with 13 existing baseline methods for evaluation on attribute-missing and hybrid-absent graphs. Specifically, **NeighAggre** (NAS' 08) [46] is a classical profiling method. **VAE** (NeurIPS' 16) [47] is a well-known auto-encoder method. **GCN** (ICLR' 17) [48], **GraphSage** (NeurIPS' 17) [49], and **GAT** (ICLR' 18) [16] are three typical graph neural networks. **GraphRNA** (KDD' 19) [50] and **ARWMF** (NeurIPS' 19) [51] are representatives of attributed random walk-based methods. **Hers** (AAAI' 19) [52] is a cold-start recommendation method. **SAT** (TPAMI' 22) [14] is the first attribute-missing graph imputation network. **SVGA** (KDD' 22) [42] and **ITR** (IJCAI' 22) [17] are the two most advanced attribute-missing graph auto-encoders.
**GINN** (NN' 20) [10] and **GCNMF** (FGCS' 21) [11] are two state-of-the-art attribute-incomplete graph machine learning methods.

### _Attribute-missing Case_

#### IV-B1 Performance Comparison (Q1)

As shown in Table V, we report the profiling performance of all of the above methods. The table shows that ITR and RITR outperform all compared baseline methods in terms of six metrics on four datasets. Specifically, 1) we first compare NeighAggre and VAE with our methods. Instead of merely exploiting the structure or attribute information for data imputation, our methods let the two-source information sufficiently negotiate with each other, thus consistently exceeding NeighAggre and VAE by a large margin; 2) ITR and RITR show superior performance against GCN, GraphSage, and GAT, all of which have demonstrated strong representation learning capability in handling attribute-complete graphs. However, the results show that these methods are not suitable for solving the attribute-missing problem; 3) for the two strongest attribute-missing graph machine learning methods (_i.e.,_ SVGA and SAT), RITR outperforms them by 1.81%/3.40%, 1.23%/4.75%, 0.79%/2.20%, and 0.97%/1.13% in terms of the NDCG@50 metric on the four datasets, respectively. This is because these baselines heavily rely on pre-defined assumptions that may not always hold in real-world graphs for data imputation, while RITR does not make any prior distribution assumption, so that it can flexibly and effectively make full use of the visible information for feature completion; 4) RITR further achieves better performance than ITR on all datasets. These superior results of RITR over the state-of-the-art method further verify the effectiveness of our improved framework for handling attribute-missing graphs. Moreover, we report the node classification performance of 8 methods in Table VI. "X" or "X+A" indicates that the classifier receives the attribute matrix, or the attribute and adjacency matrices, as input in the node classification task. Note that here we only take the attribute-missing case (_i.e.,_ marked as "AM") into consideration. From these results, we can see that 1) the classification results of GINN and GCNMF are not comparable to those of our two methods. ITR and RITR achieve at least 15.26%/15.51%, 4.69%/5.61%, 6.38%/7.22%, and 4.08%/4.45% accuracy increments. This indicates that these attribute-incomplete methods fall into inaccurate data imputation with extremely limited observations, and thus cannot learn effective representations; 2) taking the performance of "X" for instance, ITR and RITR gain 4.99%/5.20%, 7.05%/7.37%, 9.78%/11.18%, and 3.13%/3.65% performance enhancements over the state-of-the-art SAT method. Similar observations can be made among SVGA, ITR, and RITR. These benefits can be attributed to the following merits: 1) different from SVGA and SAT, our proposed graph imputation networks avoid the reliance on any prior distribution assumption for missing attribute completion, so that they can facilitate the structure-attribute negotiation more flexibly and comprehensively; 2) the trustworthy visible attribute information and structure information can be used unitedly by ITR and RITR for data imputation instead of being treated separately. The above experimental results well demonstrate the superiority of ITR and RITR in the attribute-missing case.
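For completeness, our reading of the Recall@K/NDCG@K profiling protocol (rank the \(D\) attribute dimensions of each attribute-missing node by the rebuilt scores and compare the top-\(K\) set against the ground-truth binary attributes) can be sketched as follows; this is not taken from any released evaluation code:

```python
import torch

def recall_ndcg_at_k(x_true, x_pred, k=10):
    """Profiling metrics over the rebuilt attributes of attribute-missing
    nodes. x_true is the binary (bag-of-words) ground truth and x_pred the
    rebuilt scores; both are (N_M, D)."""
    topk = x_pred.topk(k, dim=1).indices                  # ranked attribute ids
    hits = torch.gather(x_true, 1, topk)                  # 1 where a true attribute is in the top-k
    recall = (hits.sum(1) / x_true.sum(1).clamp(min=1)).mean()
    # NDCG with binary relevance.
    discounts = 1.0 / torch.log2(torch.arange(2, k + 2).float())
    dcg = (hits * discounts).sum(1)
    ideal = x_true.sum(1).clamp(max=k).long().tolist()
    idcg = torch.stack([discounts[:n].sum() for n in ideal])
    return recall.item(), (dcg / idcg.clamp(min=1e-12)).mean().item()
```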
#### IV-B2 Effect of Two Schemes in the ITR Criterion (Q2)

To verify the benefit of the initializing-then-refining imputation criterion, we conduct ablation studies on four datasets to compare ITR with two ITR variants, each of which has one critical component removed. ITR w/o IR and ITR w/o ASU indicate the method with the information recomposing and the affinity structure updating scheme masked, respectively. From the results in Fig. 3, we observe that the accuracy of ITR on the four datasets degrades without either of the key components. Specifically, for the "X" task, ITR exceeds ITR w/o IR by 2.54%, 1.53%, 2.24%, and 1.64% accuracy increments, and ITR w/o ASU by 0.69%, 0.30%, 0.98%, and 0.55% accuracy increments on Cora, Citeseer, Amac, and Amap, respectively. We find that the information recomposing scheme plays a more important role than the affinity structure updating scheme. To visually illustrate this point, we present the mean square error comparison of ITR and ITR w/o IR at the last training iteration. As seen in Fig. 4, the method with the IR scheme achieves better convergence than ITR w/o IR. This indicates that our information recomposing operation can effectively prohibit inaccurate information from being propagated, so the network can learn reliable representations for high-quality missing attribute restoration. All the above observations demonstrate the effectiveness of our proposed ITR imputation criterion, which enables the structure-attribute information to sufficiently negotiate with each other for more accurate data imputation.

Fig. 3: Effect of the information recomposing (IR) and affinity structure updating (ASU) schemes for node classification.

Fig. 4: MSE comparison of ITR and an ITR variant. ITR w/o IR denotes ITR without the information recomposing scheme.

### _Hybrid-absent Case_

#### IV-C1 Performance Comparison (Q1)

In this section, we further investigate the performance of our proposed methods and study the more challenging hybrid-absent (_i.e.,_ marked as "HA") problem where both attribute-missing and attribute-incomplete samples exist simultaneously within a graph. "X" or "X+A" indicates that the classifier receives the attribute matrix, or the attribute and adjacency matrices, as input in the node classification task. To evaluate the quality of the rebuilt attribute matrix, we take six methods (_i.e.,_ GCN, GAT, GINN, GCNMF, SVGA, and SAT) as baselines and report the classification accuracy on four datasets. From the results in Table VI, we can find that 1) although ITR outperforms all baseline methods and achieves the most competitive results, it suffers from a significant performance degradation of 4.37% on average in the "HA"("X") case compared to the "AM"("X") case. This is because ITR conducts the sample embedding over attribute-incomplete samples directly, so that amounts of erroneous information diffuse through the network. As a result, the resultant representations are inaccurate and can hardly provide the attribute-missing samples with sufficiently discriminative information for feature completion; 2) taking the "HA"("X") case for example, RITR achieves the best performance against all compared baselines. Specifically, RITR improves over SVGA and SAT by 10.20%/5.58%, 9.24%/9.60%, 18.48%/13.98%, and 8.00%/5.96% accuracy increments on all datasets. The observations in the other cases are similar.
These results once again verify that when both attribute-incomplete and attribute-missing samples exist simultaneously, the feature completion mechanisms of these baselines may have an adverse effect on the quality of the recovered attributes due to incorrect information diffusion. RITR effectively alleviates this adverse influence by designing two personalized feature completion components under the initializing-then-refining imputation criterion.

#### IV-C2 Effect of Each Component of RITR (Q2)

Here we conduct an ablation study to validate the effectiveness of our proposed attribute-incomplete and attribute-missing imputation mechanisms. Table VII reports the Recall and NDCG performance of three methods, including RITR w/o STC, RITR w/o ITR, and RITR. Specifically, RITR w/o STC and RITR w/o ITR indicate the method with the sample-denoising then consistency-preserving component or the initializing-then-refining component removed, respectively. Note that here we set both the attribute-missing and attribute-incomplete ratios to 60%. Table VII compares the results of RITR and its two variants, from which we can see that 1) RITR consistently improves over RITR w/o STC on all datasets. Taking the results on Cora for instance, RITR gains 0.73%, 1.02%, 1.52%, 1.06%, 1.22%, and 1.45% increments in terms of Recall and NDCG, demonstrating the effectiveness of leveraging the intimate structure-attribute relationship to guide the imputation of incomplete attributes. Similar observations can be drawn from the results on other datasets; 2) RITR significantly outperforms RITR w/o ITR, with performance enhancements of 4.73%, 3.97%, 2.80%, and 2.54% in terms of Recall@50 on the four datasets, respectively. These results imply the importance of an effective imputation strategy in which we employ the most trustworthy visible information to implement the missing attribute completion. In summary, this ablation study clearly validates that each component contributes to the overall performance of RITR.

#### IV-C3 Analysis of the Attribute-incomplete Ratio (Q3)

To further investigate the superiority of RITR, it is necessary to show whether the proposed RITR can still achieve effective feature completion when less visible attribute information is available. To this end, we make a performance comparison between SAT and RITR by varying the attribute-incomplete ratio from 10% to 70% and fixing the attribute-missing ratio at 60%. From the results in Table VIII, several observations can be summarized: 1) RITR consistently performs better than SAT in all situations on the four datasets. For example, RITR outperforms SAT by 2.42%, 2.25%, 1.85%, 1.84%, 1.83%, 1.94%, and 1.51% in terms of Recall@10 when the attribute-incomplete ratio varies from 10% to 70% on Citeseer. The observations on other metrics and datasets are similar. This is because SAT cannot implement an effective latent distribution matching between incomplete attributes and structures. Naturally, the resultant misleading information poses a negative impact on data imputation and feature completion, resulting in sub-optimal node representations. RITR can effectively model hybrid-absent graphs and alleviate the diffusion of inaccurate information under the guidance of the initializing-then-refining imputation criterion; 2) taking the results of Recall@10/NDCG@10 on Amac and Amap for example, RITR with 70% incomplete attributes can still achieve better performance than SAT with 10%.
These results illustrate that RITR can still achieve high-quality data imputation and feature completion with limited observed signals. Overall, all the above results solidly demonstrate the superiority and robustness of RITR.

#### IV-C4 Hyper-parameter Analysis (Q4)

As seen in Eq.(13), RITR introduces two hyper-parameters to balance the importance of the different objectives. To show their influence in depth, we conduct an experiment to investigate the effect of \(\alpha\) and \(\beta\). Note that we first set one to a certain value and then tune the other carefully. Fig. 5 reports the Recall and NDCG performance variation of RITR on Cora and Citeseer when \(\alpha\) and \(\beta\) vary from 1 to 20 with a step size of 5. From these sub-figures, we can observe that 1) tuning both \(\alpha\) and \(\beta\) causes performance variation, and the model performance is more stable in the range of [5, 15], suggesting that searching for \(\alpha\) and \(\beta\) values in a reasonable hyper-parameter region could benefit the model performance; 2) for a certain \(\alpha\) value, the performance shows a trend of first rising and then dropping slightly with the variation of \(\beta\). This indicates that RITR needs a proper coefficient to guarantee the structure-attribute consistency for improving the quality of feature completion. As shown, the performance of the model with a certain \(\beta\) value has similar trends when we change the \(\alpha\) value; 3) RITR tends to perform well when setting \(\alpha\) and \(\beta\) to 10 according to the results on all datasets.

Fig. 5: The sensitivity analysis of RITR with the variation of two hyper-parameters. Both attribute-incomplete and attribute-missing ratios are set to 60%.

Fig. 6: Illustration of the method convergence and performance variation of RITR. The X-axis, left Y-axis, and right Y-axis refer to the iteration number, the final objective error, and the Recall@10 performance, respectively. Both attribute-incomplete and attribute-missing ratios are set to 60%.

#### IV-C5 Convergence and Performance Variation (Q5)

To illustrate the convergence of the proposed RITR, we record the profiling performance reflected by the Recall@10 metric and plot the objective error of RITR over iterations on four datasets. From the sub-figures illustrated in Fig. 6, we can observe that 1) the Recall@10 metric of RITR first gradually increases to a plateau with an obvious tendency and then remains stable over a wide range of iterations; 2) RITR converges within 1000 epochs on the four datasets. These results clearly verify the good convergence property of our proposed method and reveal the effectiveness of the learning procedure.

## V Conclusion and Future Work

Hybrid-absent graphs are ubiquitous in practical applications. However, the corresponding learning problem, which significantly influences the performance of existing graph machine learning methods, is still left under-explored. We first propose ITR for the attribute-missing circumstance, which enables the attribute and structure information to sufficiently negotiate with each other for accurate missing value restoration. We further improve ITR and design a variant called RITR to handle hybrid-absent graphs, which can effectively leverage the intimate structure-attribute relationship to guide the imputation of incomplete attributes and employ the most trustworthy visible information to implement the missing attribute completion.
Extensive experiments on four benchmark datasets have been conducted to compare the two proposed methods with state-of-the-art competitors. The results solidly demonstrate the superiority and robustness of ITR and RITR on both profiling and node classification tasks. However, there remain some unexplored limitations of existing attribute-missing and hybrid-absent graph machine learning methods, including ours. For instance, the time complexities of most methods are \(\mathcal{O}(N^{2})\), making them hard to deploy in various large-scale graph-oriented applications. Future work may extend the proposed RITR to a scalable version with linear scalability (_i.e.,_\(\mathcal{O}(BNd)\)) via a mini-batch design. Moreover, in the current version, RITR conducts the structure-attribute information interaction and imputation via a simple concatenation. In the future, how to develop a more principled hybrid-absent graph machine learning approach that characterizes the structure-attribute relationship in theory and alleviates the mutual interference between the two modalities is another interesting direction, which may further improve the quality of data imputation and graph representations.
2310.08406
Tightening Bounds on Probabilities of Causation By Merging Datasets
Probabilities of Causation (PoC) play a fundamental role in decision-making in law, health care and public policy. Nevertheless, their point identification is challenging, requiring strong assumptions, in the absence of which only bounds can be derived. Existing work to further tighten these bounds by leveraging extra information either provides numerical bounds, symbolic bounds for fixed dimensionality, or requires access to multiple datasets that contain the same treatment and outcome variables. However, in many clinical, epidemiological and public policy applications, there exist external datasets that examine the effect of different treatments on the same outcome variable, or study the association between covariates and the outcome variable. These external datasets cannot be used in conjunction with the aforementioned bounds, since the former may entail different treatment assignment mechanisms, or even obey different causal structures. Here, we provide symbolic bounds on the PoC for this challenging scenario. We focus on combining either two randomized experiments studying different treatments, or a randomized experiment and an observational study, assuming causal sufficiency. Our symbolic bounds work for arbitrary dimensionality of covariates and treatment, and we discuss the conditions under which these bounds are tighter than existing bounds in literature. Finally, our bounds parameterize the difference in treatment assignment mechanism across datasets, allowing the mechanisms to vary across datasets while still allowing causal information to be transferred from the external dataset to the target dataset.
Numair Sani, Atalanti A. Mastakouri
2023-10-12T15:19:15Z
http://arxiv.org/abs/2310.08406v1
# Tightening Bounds on Probabilities of Causation By Merging Datasets

###### Abstract

Probabilities of Causation (PoC) play a fundamental role in decision-making in law, health care and public policy. Nevertheless, their point identification is challenging, requiring strong assumptions, in the absence of which only bounds can be derived. Existing work to further tighten these bounds by leveraging extra information either provides numerical bounds, symbolic bounds for fixed dimensionality, or requires access to multiple datasets that contain the _same_ treatment and outcome variables. However, in many clinical, epidemiological and public policy applications, there exist _external_ datasets that examine the effect of _different_ treatments on the same outcome variable, or study the association between covariates and the outcome variable. These external datasets cannot be used in conjunction with the aforementioned bounds, since the former may entail different treatment assignment mechanisms, or even obey different causal structures. Here, we provide _symbolic_ bounds on the PoC for this challenging scenario. We focus on combining either two randomized experiments studying different treatments, or a randomized experiment and an observational study, assuming causal sufficiency. Our symbolic bounds work for arbitrary dimensionality of covariates and treatment, and we discuss the conditions under which these bounds are tighter than existing bounds in literature. Finally, our bounds parameterize the difference in treatment assignment mechanism across datasets, allowing the mechanisms to vary across datasets while still allowing causal information to be transferred from the external dataset to the target dataset.

## 1 Introduction

Probabilities of Causation (PoC) play a fundamental role in decision-making in law, health care and public policy [14, 13]. For example, in medical applications, if a medication for a disease has similar side effects to the disease itself, we must calculate, for safety assessments, the probability that the adverse side effect was caused by the medication. In epidemiology, we often need to determine the likelihood that a particular outcome is caused by a specific exposure, or whether a particular subgroup that experienced an adverse outcome would benefit from an intervention. Probabilities of Causation defined in [1] provide a logical framework to reason about such counterfactuals, as well as the assumptions required to identify them from the observed data. While causal parameters such as the Average Treatment Effect are _point identified_ from the observed data distribution under reasonable assumptions such as exogeneity, PoC are not. Specifically, in addition to exogeneity, they require monotonicity to hold for point identification [1, 13]. However, when we do not have sufficient justification for assuming monotonicity, the PoC are no longer _point identified_. Rather, PoC are _partially identified_, i.e. bounded as a function of the observed data. There exists a rich literature on partial identification and its use in bounding causal quantities; some examples include [10, 11, 12, 13, 14]. The specific question of bounding PoC has also been explored in [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]; however, this body of literature assumes the joint probability distribution of the variables of interest is known.
Our contribution differs fundamentally from existing work in that we do not assume access to the joint probability distribution for the variables of interest. Rather, we consider the case where multiple datasets of non-overlapping treatments that study the same outcome are given. Specifically, consider the _target_ dataset containing treatment \(X\) and outcome \(Z\). In addition, we are given an _external_ dataset that studies the same outcome \(Z\), and contains different treatment or covariates \(Y\) which are randomised and independent of \(X\). We demonstrate how to merge the external dataset with the target dataset to tighten the bounds on the PoC of \(X\) on \(Z\), while allowing for the treatment \(X\) to be confounded in the external dataset. We demonstrate the importance of this scenario using the following example. Suppose we have treatment \(X\) (medication) for a disease, and an adverse side effect \(Z\), that could be caused by the disease or the medication itself. Suppose \(X\) is randomized, and for safety reasons, we are interested in calculating the probability that \(X\) caused \(Z\). The Probabilities of Causation provide a logical framework to reason about this probability, so we aim to calculate it using the observed data. Since we don't know whether the medication is protective or harmful, we are not justified in assuming monotonicity. Then, the Probabilities of Causation are no longer point identified, and we must settle for bounds on them. Given the target dataset containing observations on \(X\) and \(Z\), these bounds can be calculated - however, these bounds may be too wide for us to arrive at a conclusion. Re-running the study on the same population while recording a richer set of covariates is not an option due to time and financial constraints. This raises the question "Can other external datasets, studying the same adverse effect on similar populations, but not necessarily studying the same treatment, be leveraged to tighten the bounds on the PoC in the target dataset?" While Zhang, Tian, and Bareinboim 2022; Li and Pearl 2022; Pearl 2022; Cuellar 2018; Dawid, Musio, and Murtas 2017 tighten bounds on the Probabilities of Causation (and more generally, other counterfactuals), they all assume access to a dataset recording all of the variables of interest. Therefore, such bounds cannot be straightforwardly applied to use cases like the one we described above. To address this gap, in this paper we focus on the challenging cases that assume only access to datasets that contain the same outcome, but do not record \(X\) and \(Y\) at the same time. These datasets can differ in their treatment assignment mechanism for \(X\), allowing it to be randomized or confounded (assuming a sufficient set of confounders is observed). While Duarte et al. 2021; Zeitler and Silva 2022 do not need access to the joint distributions over all the variables, the bounds that they provide are only _numerical_. Here, we provide _symbolic_ bounds. Additionally, the linear programming approach in [1] cannot be applied for the bound estimation since the dual of the linear program would still have symbolic constraints. From a non-causal perspective, Charitopoulos, Papageorgiou, and Dua 2018 present a symbolic linear programming approach. However, knowing which constraints to supply to the linear program to derive bounds in the problem we describe requires knowledge of the causal graph and invariances. As we show in this paper, these constraints are not trivial to derive. 
Moreover, the solution in [1] only works for a fixed dimensionality of \(Y\), since different dimensionalities of \(Y\) correspond to different linear programs. In our work, we explicitly target the aforementioned use case allowing arbitrary dimensionality of \(Y\). **Structure and Contributions** We start by reviewing the structural causal model (SCM) framework and its semantics for counterfactual reasoning (§2). Counterfactuals are the basis of the Probabilities of Causation, which we introduce in §3. In this section, we formally state the Probability of Sufficiency and Necessity, and we review existing bounds on it. In §4 we describe the data-generating process for the target and external datasets. We then describe the assumed invariances, and how these can be used to transfer information across datasets. Leveraging this invariance principle, we present theorems that provide symbolic bounds on the PoC in our target dataset, after merging it with an external dataset containing a randomized covariate or treatment of arbitrary dimensionality. In §4.3 we further relax the strict assumption of \(X\) being randomized in the external dataset, and provide symbolic bounds in the presence of observed confounding. We conclude with remarks in §5. We provide all our proofs and derivations in the Technical Appendix. ## 2 Preliminaries While there exist many formulations of causal models in the literature, such as the Finest Fully Randomized Causally Interpretable Structured Tree Graph (FFRCISTG) of [16] and the agnostic causal model of [10], in this work, we utilise the SCM defined in [10]. Formally, an SCM \(\mathcal{M}\) is defined as a tuple \(\langle\mathbf{U},\mathbf{V},\mathcal{F},\mathbb{P}\rangle\) where \(\mathbf{U}\) and \(\mathbf{V}\) represent a set of exogenous and endogenous random variables respectively. \(\mathcal{F}\) represents a set of functions that determine the value of \(V\in\mathbf{V}\) through \(v\gets f_{V}(pa_{V},u_{V})\) where \(pa_{V}\) denotes the parents of \(V\) and \(u_{V}\) denotes the values of the noise variables relevant to \(V\). \(\mathbb{P}\) denotes the joint distribution over the set of noise variables \(\mathbf{U}\), and since the noise variables \(\mathbf{U}\) are assumed to be mutually independent, the joint distribution \(\mathbb{P}(\mathbf{U})\) factorises into the product of the marginals of the individual noise distributions. \(\mathcal{M}\) induces an observational data distribution on \(\mathbf{V}\), and is associated with a Directed Acyclic Graph (DAG) \(\mathcal{G}\). Defining an SCM allows us to define submodels, potential responses and counterfactuals, as defined in [10]. Given a causal model \(\mathcal{M}\) and a realisation \(x\) of random variables \(\mathbf{X}\subset\mathbf{V}\), a submodel \(\mathcal{M}_{x}\) corresponds to deleting from \(\mathcal{F}\) all functions that set values of elements in \(\mathbf{X}\) and replacing them with constant functions \(X=x\). The submodel captures the effect of intervention \(do(X=x)\) on \(\mathcal{M}\). Given a subset \(\mathbf{Y}\subset\mathbf{V}\), the potential response \(\mathbf{Y}_{x}(u)\) denotes the values of \(Y\) that satisfy \(\mathcal{M}_{x}\) given value \(u\) of the exogenous variables \(\mathbf{U}\). Thus, the counterfactual \(\mathbf{Y}_{x}(u)=y\) represents the scenario where the potential response \(\mathbf{Y}_{x}(u)\) is equal to \(y\), if we, possibly contrary to fact, set \(X=x\).
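To make the submodel and potential-response semantics concrete, the following minimal Python sketch (our illustration; the structural equations and numbers are hypothetical, not from the paper) encodes a two-variable SCM with \(X\to Z\). The submodel \(\mathcal{M}_{x}\) is formed by replacing the structural equation for \(X\) with a constant, and both counterfactuals \(Z_{x}(u)\) and \(Z_{x^{\prime}}(u)\) are evaluated on the same exogenous draw \(u\), which is exactly what the joint counterfactual probabilities introduced in the next section require.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SCM M = <U, V, F, P> with V = {X, Z} and independent noises u_X, u_Z.
def f_X(u_X):
    return int(u_X < 0.5)             # X is randomized (a fair coin)

def f_Z(x, u_Z):
    return int(u_Z < 0.3 + 0.4 * x)   # P(Z=1 | do(X=x)) = 0.3 + 0.4*x

# Submodel M_x: delete f_X from F and replace it with the constant x.
# The potential response Z_x(u) then depends only on the noise u_Z.
def Z_do(x, u_Z):
    return f_Z(x, u_Z)

n = 10_000
u_X, u_Z = rng.random(n), rng.random(n)
X = np.array([f_X(u) for u in u_X])                    # observational X
Z_obs = np.array([f_Z(x, u) for x, u in zip(X, u_Z)])  # observational Z

# Joint counterfactuals: evaluate Z_1(u) and Z_0(u) on the SAME draw of u_Z.
Z1 = np.array([Z_do(1, u) for u in u_Z])
Z0 = np.array([Z_do(0, u) for u in u_Z])
print("P(Z=1 | X=1)         ~", Z_obs[X == 1].mean())  # equals do-prob: X randomized
print("P(Z_x=1, Z_x'=0)     ~", np.mean((Z1 == 1) & (Z0 == 0)))  # about 0.4 here
```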
When \(u\) is generated from \(P(\mathbf{U})\), we obtain counterfactual random variables \(\mathbf{Y}_{x}\) that have a corresponding probability distribution. Counterfactual random variables lie on rung three of the Ladder of Causation [10], needing additional assumptions for their identification. In the following section, we explore a special class of counterfactual probabilities, known as the Probabilities of Causation. ## 3 Probabilities of Causation An important class of counterfactual probabilities that have applications in law, medicine, and public policy are known as the Probabilities of Causation [10]. These are a set of five counterfactual probabilities related to the Probability of Necessity and Sufficiency (\(PNS\)), which we define as follows. Consider the causal graph in Fig. 1, where the outcome \(Z\) and treatment \(X\) are binary random variables. Let \(x\) denote the event that the random variable \(X\) has value \(1\) and let \(x^{\prime}\) denote the event that \(X\) has value \(0\). Then, the \(PNS\) is defined as \[PNS\equiv P(Z_{x}=1,Z_{x^{\prime}}=0) \tag{1}\] Figure 1: Causal graph containing treatment \(X\) and outcome \(Z\), where treatment \(X\) is randomized. \(PNS\) represents the joint probability that the counterfactual random variable \(Z_{x}\) takes on value \(1\) and the counterfactual random variable \(Z_{x^{\prime}}\) takes on value \(0\). Under conditions of exogeneity, defined as \(Z_{x}\perp\!\!\!\perp X\), the rest of the Probabilities of Causation, such as the Probability of Necessity (\(PN\)) and the Probability of Sufficiency (\(PS\)), are all defined as functions of \(PNS\) (see Theorem 9.2.11 in Pearl (2009)). Consequently, when \(PNS\) is identified, all the other Probabilities of Causation are straightforwardly identified from the observed data as well. However, to identify \(PNS\), we must make assumptions such as _monotonicity_ Tian and Pearl (2000), which may not be justified in settings involving experimental drugs, legal matters and occupational health. Without this assumption, \(PNS\) is no longer point identified, but it can still be meaningfully bounded using tools from the partial identification literature. Still assuming the graph in Fig. 1, an important bound on \(PNS\) is defined in Tian and Pearl (2000), and is presented below, with \(p_{11}\), \(p_{10}\) and \(p_{00}\) denoting \(P(Z=1\mid X=1)\), \(P(Z=1\mid X=0)\), and \(P(Z=0\mid X=0)\) respectively. \[\max\begin{bmatrix}0\\ p_{11}-p_{10}\end{bmatrix}\leq PNS\leq\min\begin{bmatrix}p_{11}\\ p_{00}\end{bmatrix} \tag{2}\] Given additional data (i.e. \(Y\) in Fig. 2), the bounds on \(PNS\) can be further tightened, as shown in Dawid et al. (2017). We present one of their results here, as we utilize this to tighten the bounds on the Probabilities of Causation by merging target and external datasets. In Fig. 2, \(X\) and \(Z\) represent the treatment and outcome respectively, and \(Y\) represents additional treatments or an additional set of covariates. Then, Dawid et al. (2017) show that given \(P(Z,X,Y)\), the bounds on \(PNS\) can be further tightened as 1 Footnote 1: While Dawid et al. (2017) provide bounds on a different counterfactual probability called \(PC\), its relation to \(PNS\) is described in Theorem 9.2.11 in Pearl (2009) \[\Delta\leq PNS\leq P(Z=1\mid X=1)-\Gamma \tag{3}\] Where \[\Delta=\sum_{y}P(Y=y)\max\{0,P(Z=1\mid X=1,Y=y)-P(Z=1\mid X=0,Y=y)\}\] \[\Gamma=\sum_{y}P(Y=y)\max\{0,P(Z=1\mid X=1,Y=y)-P(Z=0\mid X=0,Y=y)\}\] Dawid et al.
(2017) show that this interval is always contained in the one given in Eq. 2. Note that the bounds in Eq. 3 assume access to the joint distribution \(P(Z,X,Y)\). In the use case we tackle in this paper, we do not have this luxury. On the contrary, we are given a _target_ dataset that studies a treatment \(X\) and an outcome \(Z\), and additional information in the form of an _external_ dataset with the same outcome \(Z\), that studies a different treatment (or covariate) \(Y\) which is randomized and independent of \(X\). We denote 2 the distribution associated with the target dataset as \(P^{T}\), and the distribution associated with the external dataset as \(P^{E}\). We are interested in using this external dataset to tighten the bounds on the PNS of \(X\) on \(Z\) in our target dataset. In other words, we do not have access to the joint distribution \(P^{T}(Z,X,Y)\) for our target dataset; rather, we only have access to \(P^{T}(Z,X)\). Hence, we cannot straightforwardly apply the bounds in Eq. 3. Footnote 2: We abuse notation by denoting all distributions associated with the target dataset by \(P^{T}\). A similar approach is used for \(P^{E}\). To this end, we borrow information from the external dataset to constrain the possible choices for the _target_ distribution \(P^{T}(Z,X,Y)\). Typically, the target distribution is not identified, and as such, we constrain the set of possible distributions compatible with both our target and external datasets. Then, we can utilize Eq. 3 to pick the most conservative bounds implied by the set of joint distributions which are compatible with the target and the external dataset. Formally, let \(i\) index this set of compatible joint distributions. Then the most conservative bound will be the smallest lower bound on \(PNS\), and the greatest upper bound on \(PNS\). Denoting \(P^{T}(Z=1\mid X=1)\) as \(p^{T}_{11}\), the bounds are given as \[\min_{P^{T}_{i}}\Delta(P^{T}_{i})\leq PNS\leq\max_{P^{T}_{i}}p^{T}_{11}-\Gamma(P^{T}_{i}) \tag{4}\] From the properties of the \(\max\) operator, and since \(p^{T}_{11}\) is known, the bounds in Eq. 4 are re-written as \[\min_{P^{T}_{i}}\Delta(P^{T}_{i})\leq PNS\leq p^{T}_{11}-\min_{P^{T}_{i}}\Gamma(P^{T}_{i}) \tag{5}\] This raises the question of how exactly to utilize the external dataset over \(Z\) and \(Y\) to constrain the set of possible joint distributions \(P^{T}(Z,X,Y)\) for our target dataset. To this end, we utilize the principle of independent causal mechanisms (see Definition 4 in Janzing and Scholkopf (2010), Scholkopf et al. (2012) and Principle 2.1 in Peters et al. (2017)) to transfer causal information across datasets. ## 4 Merging Target and External Datasets To transfer causal information from our external dataset to our target dataset, we must first identify causal quantities that remain invariant across these different data sources. Similar approaches of defining invariant quantities and utilizing them to borrow information across datasets have been used in transportability, distribution shift and robustness (Pearl 2011; Christiansen et al. 2022; Buhlmann 2020). Figure 2: Causal graph containing binary treatment \(X\), binary outcome \(Z\) and additional treatment or covariate \(Y\). Throughout this paper, we assume we are given a treatment variable \(X\), a set of covariates \(C\), and either an additional treatment or covariate \(Y\), along with outcome \(Z\), where \(Z\) is a causal descendant of \(X\), \(Y\) and \(C\).
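Since the derivations that follow repeatedly compare against Eqs. 2 and 3, here is a small self-contained Python sketch (ours; the probabilities are illustrative toy numbers, not from the paper) that evaluates the Tian-Pearl bounds from \(P(Z\mid X)\) alone and the tightened Dawid-style bounds from a fully specified \(P(Z,X,Y)\). It assumes \(X\) and \(Y\) are both randomized, so \(P(Z=1\mid X=1)\) is recovered by averaging over \(Y\).

```python
import numpy as np

def tian_pearl_bounds(p11, p10, p00):
    """Eq. 2: bounds on PNS from P(Z | X) alone."""
    return max(0.0, p11 - p10), min(p11, p00)

def dawid_bounds(p_y, p_z1_x1y, p_z1_x0y):
    """Eq. 3: tightened bounds when the joint P(Z, X, Y) is available.
    p_y[i] = P(Y=y_i); p_z1_x1y[i] = P(Z=1|X=1,Y=y_i); similarly for X=0."""
    p_y = np.asarray(p_y)
    a, b = np.asarray(p_z1_x1y), np.asarray(p_z1_x0y)
    delta = np.sum(p_y * np.maximum(0.0, a - b))           # Delta
    gamma = np.sum(p_y * np.maximum(0.0, a - (1.0 - b)))   # Gamma, P(Z=0|X=0,Y) = 1 - b
    p11 = np.sum(p_y * a)                                  # P(Z=1|X=1); Y independent of X
    return delta, p11 - gamma

# Illustrative joint: P(Y=1)=0.5, P(Z=1|X=1,Y) = (0.5, 0.9), P(Z=1|X=0,Y) = (0.1, 0.7).
p_y, a, b = [0.5, 0.5], [0.5, 0.9], [0.1, 0.7]
p11, p10 = 0.7, 0.4                            # implied marginals of P(Z=1 | X)
print(tian_pearl_bounds(p11, p10, 1 - p10))    # (0.3, 0.6)
print(dawid_bounds(p_y, a, b))                 # (0.3, 0.4): contained in the above
```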
Motivated by the principle of independent mechanisms (see Principle 2.1 of Peters, Janzing, and Scholkopf (2017)), we assume the interventional distribution \(P(Z_{x,y,c})\) is invariant across datasets. This assumption is justified given a rich enough set of covariates \(C\), and similar assumptions have been made by other works in the literature (Christiansen et al., 2022; Muandet, Balduzzi, and Scholkopf, 2013; Daume III and Marcu, 2006). This invariance is assumed in all the data-generating processes shown in Fig. 3 and Fig. 4, representing scenarios where we combine datasets with different study designs. Throughout this paper, to avoid cases with undefined quantities, we assume that \(0<P^{T}(X)<1\), \(0<P^{T}(Y)<1\) as well as \(0<P^{E}(X)<1\) and \(0<P^{E}(Y)<1\). ### Merging Datasets With Randomized Treatments First, we consider the case where the target dataset contains an outcome \(Z\) and a randomized treatment \(X\), and the external dataset contains the same outcome \(Z\), and either a different randomized treatment or additional randomized covariate \(Y\), while still having \(X\) assigned randomly. Unobserved covariates \(C\) that are independent of \(X\) and \(Y\) can exist, and these are marginalized out. Under these assumptions, we derive invariances across our target and external datasets. We denote the distribution associated with our target dataset as \(P^{T}(Z,X)\), and the one associated with our external dataset as \(P^{E}(Z,Y)\). In the scenario we consider, \(P^{T}(Z,X,Y,C)\) and \(P^{E}(Z,X,Y,C)\)3 obey the causal structure in Fig. 2(a). Then the interventional distributions are: Footnote 3: Note that we do not have access to these joint distributions. \[P^{T}(Z_{x,y})=P^{T}(Z\mid x,y)=\sum_{c}P(Z\mid x,y,c)P(c)\] \[P^{E}(Z_{x,y})=P^{E}(Z\mid x,y)=\sum_{c}P(Z\mid x,y,c)P(c)\] Consequently, when both the populations considered in our target and external datasets have the same distribution of covariates \(P(C)\), we expect \(P(Z_{x,y})\) to be invariant across \(P^{T}\) and \(P^{E}\). To transfer causal information from the external dataset to the target dataset, we must also consider the data-generating process for \(P^{T}(Z,X)\) and \(P^{E}(Z,Y)\). Specifically, the distribution for the target dataset \(P^{T}\) can be factorized as \[P^{T}(Z=1\mid X=1)=P(Z=1\mid X=1,Y=1)P^{T}(Y=1)+P(Z=1\mid X=1,Y=0)P^{T}(Y=0)\] Here, \(P(Z_{x,y})=P(Z=1\mid X=1,Y=1)\) since \(Z_{x,y}\perp\!\!\!\perp X,Y\). Similarly, the external distribution can be factorized as \[P^{E}(Z=1\mid Y=1)=P(Z=1\mid X=1,Y=1)P^{E}(X=1)+P(Z=1\mid X=0,Y=1)P^{E}(X=0)\] When \(P^{E}(X)=P^{T}(X)\), and \(P^{E}(Y)=P^{T}(Y)\), both \(P^{E}(Z,Y)\) and \(P^{T}(Z,X)\) can be viewed as marginalized distributions obtained from a joint distribution \(P(Z,X,Y)\), where \(X\), \(Y\) and \(Z\) follow the collider-shaped causal structure given in Fig. 2(b). Then, the target dataset and external dataset can be used to constrain \(P^{T}(Z,X,Y)\) as \[P^{T}(Z=1\mid X=1)=\sum_{y}P(Z=1\mid X=1,Y=y)\times P^{E}(Y=y)\] \[\vdots\] \[P^{E}(Z=1\mid Y=1)=\sum_{x}P(Z=1\mid X=x,Y=1)\times P^{T}(X=x).\] Figure 3: Causal graphs representing the different types of interactions between the outcome \(Z\), the treatments \(X\) and \(Y\), and the covariates \(C\). (a) A causal graph where both treatments \(X\) and \(Y\) are assigned randomly, and together with \(C\) determine the outcome \(Z\). (b) A causal graph where the causal mechanisms in the target and external datasets are the same for \(X\) and \(Y\), i.e.
\(P^{T}(X)=P^{E}(X)\) and \(P^{T}(Y)=P^{E}(Y)\), and as such, \(P^{T}(Z,X)\) and \(P^{E}(Z,Y)\) are the result of marginalizing \(Y\) and \(X\) from \(P(Z,X,Y)\) respectively. (c) A causal graph describing the data-generating process for our target dataset. In the latter, the treatment \(X\) is assigned randomly and is observed (cyan node), and the node \(Y\) is not observed (grey node). (d) A causal graph describing the data-generating process for our external dataset. Here, although the treatment \(X\) is still assigned randomly, its mechanism differs from that in the target dataset and is not observed. As we have already stated, we assume \(P^{T}(Y)=P^{E}(Y)\). Since \(X\perp\!\!\!\perp Y\), \(P^{E}(X)=P^{T}(X)\), and \(P^{E}(Y)=P^{T}(Y)\), these constraints form a system of equations with multiple solutions for \(P(Z\mid X,Y)\). When \(Y\) is binary, these constraints form a system of equations with a single free parameter \(P(Z=1\mid X=1,Y=1)\). Then, bounds on \(PNS\) using \(P^{T}(Z,X)\) and \(P^{E}(Z,Y)\) are obtained by solving \[\min_{P(Z=1\mid X=1,Y=1)}\Delta(P(Z=1\mid X=1,Y=1))\leq PNS\leq p_{11}^{T}-\min_{P(Z=1\mid X=1,Y=1)}\Gamma(P(Z=1\mid X=1,Y=1)) \tag{6}\] We present the bounds on \(PNS\) in our target dataset when \(Y\) is binary in Theorem 1. **Theorem 1**.: _Let \(X\), \(Y\) and \(Z\) be binary random variables, obeying the causal structure in Fig. 2(b). Then, given distributions \(P^{T}(Z,X)\) and \(P^{E}(Z,Y)\) where \(P^{E}(X)=P^{T}(X)\) and \(P^{E}(Y)=P^{T}(Y)\), the bounds of \(PNS\) of \(X\) on \(Z\) in the target dataset are_ \[\max\begin{bmatrix}0\\ p_{11}^{T}-p_{10}^{T}\end{bmatrix}\leq PNS\leq\min\begin{bmatrix}p_{11}^{T}-\sum_{i=0}^{1}\Phi_{i}\\ p_{00}^{T}-\sum_{i=0}^{1}\Theta_{i}\end{bmatrix} \tag{7}\] _Where \(p_{11}^{T}=P^{T}(Z=1\mid X=1)\), \(p_{10}^{T}=P^{T}(Z=1\mid X=0)\), \(p_{00}^{T}=P^{T}(Z=0\mid X=0)\), \(p_{x}=P^{T}(X=1)\), \(p_{y_{i}}=P^{E}(Y=y_{i})\) and \(p_{1i}^{E}=P^{E}(Z=1\mid Y=y_{i})\), and \(\Phi_{i}\) and \(\Theta_{i}\) are_ \[\Phi_{i}=\mathbb{I}(p_{1i}^{E}\geq\max\{1-p_{x},p_{x}\})\frac{p_{y_{i}}(p_{1i}^{E}-\max\{1-p_{x},p_{x}\})}{\min\{1-p_{x},p_{x}\}}\] \[\Theta_{i}=\mathbb{I}(p_{1i}^{E}\leq\min\{1-p_{x},p_{x}\})\frac{p_{y_{i}}(\min\{1-p_{x},p_{x}\}-p_{1i}^{E})}{\min\{1-p_{x},p_{x}\}}\] The bounds in Theorem 1 will be tighter than the bounds in Eq. 2 whenever either \(P(Z=1\mid Y=1)\) or \(P(Z=1\mid Y=0)\) is greater than the maximum of \(p_{x}\) and \(1-p_{x}\), or less than the minimum of \(p_{x}\) and \(1-p_{x}\). Note that these bounds recover Proposition 4 in Gresele et al. (2022), showing the lower bound on \(PNS\) cannot be tightened. These bounds can be extended to the case when \(Y\) is discrete, taking values in \(\{0,\ldots,N\}\). We provide bounds for this case in Theorem 2. **Theorem 2**.: _Let \(X\) and \(Z\) be binary random variables, and let \(Y\) be a discrete random variable taking on \(N+1\) discrete values in \(\{0,1,\ldots,N\}\). Assume \(Z\), \(X\) and \(Y\) obey the causal structure in Fig. 2(b).
Then, given distributions \(P^{T}(Z,X)\) and \(P^{E}(Z,Y)\) where \(P^{E}(X)=P^{T}(X)\) and \(P^{E}(Y)=P^{T}(Y)\), the bounds of \(PNS\) of \(X\) on \(Z\) in the target dataset are_ \[\max\begin{bmatrix}0\\ p_{11}^{T}-p_{10}^{T}\end{bmatrix}\leq PNS\leq\min\begin{bmatrix}p_{11}^{T}-\sum_{i=0}^{N}\Phi_{i}\\ p_{00}^{T}-\sum_{i=0}^{N}\Theta_{i}\end{bmatrix} \tag{8}\] _The notation used is identical to that used in Theorem 1._ The assumptions of \(P^{T}(X)=P^{E}(X)\) and \(P^{T}(Y)=P^{E}(Y)\) are very restrictive, so we discuss approaches to either satisfy or relax these assumptions. ### Mismatch Between Treatment Assignment Mechanisms First, the assumption \(P^{T}(Y)=P^{E}(Y)\) can be satisfied by choosing a suitable \(Y\), such that \(P(Y)\) is invariant across our target and external datasets. An example of such a \(Y\) would be the presence of a genetic mutation, which, based on Mendelian Randomization, is assigned randomly and is expected to have similar prevalence across populations that share similar characteristics. More generally, suitable choices of \(Y\) would be variables that do not affect the treatment assignment of \(X\) in the external or target dataset, and are expected to have identical prevalence across the target and external datasets. Next, the restrictive assumption \(P^{T}(X)=P^{E}(X)\) can be relaxed by parameterizing the difference in data-generating processes between the external and target dataset using a parameter \(\delta_{X}\), i.e. \(P^{E}(X)=P^{T}(X)+\delta_{X}\), where \(\delta_{X}\) is adequately restricted to ensure valid probabilities as well as \(0<P^{E}(X)<1\). Using this parameterization, we constrain \(P(Z\mid X,Y)\) in terms of \(\delta_{X}\) and the given target and external datasets as \[P^{E}(Z=1\mid Y=1)=\sum_{x}P(Z=1\mid X=x,Y=1)\times\Big{(}P^{T}(X=x)+\delta_{X}\Big{)}\] \[\vdots\] \[P^{T}(Z=1\mid X=0)=\sum_{y}P(Z=1\mid X=0,Y=y)P^{T}(Y=y)\] Under this parameterization, the bounds in Theorem 1 and Theorem 2 can be re-derived in terms of the parameter \(\delta_{X}\), and we present these in Theorem 3. **Theorem 3**.: _Let \(X\) and \(Z\) be binary random variables, and let \(Y\) be a discrete random variable taking on \(N+1\) discrete values in \(\{0,1,\ldots,N\}\). Assume the target distribution \(P^{T}(Z,X)\) and external distribution \(P^{E}(Z,Y)\) obey the causal structure in Fig. 2(c) and Fig. 2(d) respectively.
Then, assuming \(P^{T}(Y)=P^{E}(Y)\) and \(P^{E}(X)=P^{T}(X)+\delta_{X}\), the bounds of \(PNS\) of \(X\) on \(Z\) in the target dataset are given as_ \[\max\begin{bmatrix}0\\ p_{11}^{T}-p_{10}^{T}\end{bmatrix}\leq PNS\leq\min\begin{bmatrix}p_{11}^{T}-\sum_{i=0}^{N}\Phi_{i,\delta_{X}}\\ p_{00}^{T}-\sum_{i=0}^{N}\Theta_{i,\delta_{X}}\end{bmatrix} \tag{9}\] _Where \(p_{11}^{T}\), \(p_{10}^{T}\) and \(p_{00}^{T}\) are defined in Theorem 1, and \(\Phi_{i,\delta_{X}}\) and \(\Theta_{i,\delta_{X}}\) are defined as_ \[\Phi_{i,\delta_{X}}=\mathbb{I}(p_{1i}^{E}\geq\max\{1-p_{x}-\delta_{X},p_{x}+\delta_{X}\})\frac{p_{y_{i}}(p_{1i}^{E}-\max\{1-p_{x}-\delta_{X},p_{x}+\delta_{X}\})}{\min\{1-p_{x}-\delta_{X},p_{x}+\delta_{X}\}}\] \[\Theta_{i,\delta_{X}}=\mathbb{I}(p_{1i}^{E}\leq\min\{1-p_{x}-\delta_{X},p_{x}+\delta_{X}\})\frac{p_{y_{i}}(\min\{1-p_{x}-\delta_{X},p_{x}+\delta_{X}\}-p_{1i}^{E})}{\min\{1-p_{x}-\delta_{X},p_{x}+\delta_{X}\}}\] So, introducing \(\delta_{X}\) maintains the overall structure of the bounds introduced in Theorems 1 and 2, but it does require the external dataset to have a stronger treatment effect (\(p_{1i}^{E}\)) of \(Y\) on \(Z\) to tighten the bounds on the target dataset. While the above bounds relax the assumption \(P^{T}(X)=P^{E}(X)\) by parameterizing their difference, they still require the external dataset \(P^{E}(Z,Y)\) to have \(X\) randomized. As this does not always hold in practice, in the following section we tackle a more realistic scenario: one where the treatment assignment mechanism for \(X\) in \(P^{E}\) is confounded by a set of covariates \(C\). We assume that both the target and external datasets record this \(C\). ### Merging Experimental And Observational Datasets When the treatment \(X\) in the _external_ dataset is allowed to be confounded by a set of discrete _observed_ confounders \(C\), the causal graph representing the data-generating process for the external dataset is given in Fig. 4b. Throughout this section, we assume that \(P^{T}(C)=P^{E}(C)\). Note that \(X\) is still randomized in the target dataset. The causal graph corresponding to the data-generating process for the target dataset is depicted in Fig. 4a. To employ a similar approach to deriving bounds as before, we must first derive bounds in the ideal case where we have access to the joint distribution \(P^{T}(Z,X,Y,C)\) for our target dataset. Following a similar derivation to the bounds presented in Dawid, Musio, and Murtas (2017), we first derive bounds on \(PNS\) for when the joint \(P^{T}(Z,X,Y,C)\) is observed in Theorem 4. **Theorem 4**.: _Let \(X\) and \(Z\) be binary random variables, and let \(C\) and \(Y\) be discrete random variables. If \(X\), \(Y\), \(Z\) and \(C\) follow the causal graph in Fig. 4a, then given \(P(Z,X,Y,C)\), we can obtain bounds on \(PNS\) as_ \[\sum_{C}P(C)\Delta_{C}\leq PNS\leq p_{11}-\sum_{C}P(C)\Gamma_{C} \tag{10}\] _Where_ \[\Delta_{C}=\sum_{y}P(Y=y)\max\{0,P(Z=1\mid X=1,Y=y,C)-P(Z=1\mid X=0,Y=y,C)\}\] \[\Gamma_{C}=\sum_{y}P(Y=y)\max\{0,P(Z=1\mid X=1,Y=y,C)-P(Z=0\mid X=0,Y=y,C)\}\] The conditions under which these bounds are tighter than the bounds in Eq. 3 are described in Lemma 5. **Lemma 5**.: _The bounds in Eq. 10 are contained in the bounds given in Eq.
3, and will be tighter when for any \(C\), \(P(Z=1\mid X=1,Y,C)\) is sometimes, but not all the time, greater than \(P(Z=1\mid X=0,Y,C)\) or \(P(Z=0\mid X=0,Y,C)\)._ Having established bounds when given access to the joint distribution \(P^{T}(Z,X,Y,C)\), we now derive bounds on our target dataset when the joint distribution \(P^{T}(Z,X,Y,C)\) is unknown, but additional information is available from the external dataset. Specifically, we consider the case where the target distribution \(P^{T}(Z,X,C)\) obeys the causal structure in Fig. 4a, with treatment \(X\) being randomized, and does not measure \(Y\). In addition, we are given access to an external distribution \(P^{E}(Z,Y,C)\), which obeys the causal structure in Fig. 4b. Note that in \(P^{E}\), the treatment \(X\) is confounded by \(C\), while this is not the case in \(P^{T}\). We assume \(P(Z_{x,y,c})\) is invariant across these datasets; however, since \(X\) is confounded by \(C\) in the external dataset, we must also parameterize the difference between the treatment assignment mechanism of \(X\) in \(P^{T}\) and \(P^{E}\). This must be done for every level of covariates \(C\). Hence we index this parameter for every level \(C\) as \(\delta_{X}^{C}\). Then, the following constraints can be utilized to restrict the set of choices of joint distributions \(P^{T}(Z,X,Y,C)\) compatible with the target and external dataset: \[P^{E}(Z=1\mid Y=1,C)=\sum_{x}P(Z=1\mid X=x,Y=1,C)\times\left(P^{T}(X=x)+\delta_{X}^{C}\right)\] \[\vdots\] \[P^{T}(Z=1\mid X=0,C)=\sum_{y}P(Z=1\mid X=0,Y=y,C)\times P(Y=y)\] Note that when \(\delta_{X}^{C}=0\) for all levels of \(C\), this corresponds to the case where the external dataset has \(X\) randomized as well, and additionally, both the target and external datasets are results of marginalizing the distribution \(P(Z,X,Y,C)\) over \(Y\) and \(X\) respectively. Similarly, note that when \(\delta_{X}^{c_{0}}=\delta_{X}^{c_{1}}=\cdots=\delta_{X}^{c_{M}}\) for all levels of \(C\), this corresponds to the case where \(X\) is randomized in the external dataset, but it has a different treatment mechanism than the target dataset. We provide bounds for arbitrary values of \(\delta_{X}^{C}\) (ensuring valid probabilities) in Theorem 6. **Theorem 6**.: _Let \(X\) and \(Z\) be binary random variables, and let \(Y\) be a discrete random variable taking on \(N+1\) discrete values in \(\{0,1,\ldots,N\}\), and \(C\) be a discrete random variable taking on \(M+1\) discrete values in \(\{0,\ldots,M\}\). Assume \(P^{T}(Z,X,C)\) and \(P^{E}(Z,Y,C)\) are generated according to the causal graphs given in Fig. 4a and 4b respectively.
Then, assuming \(P^{T}(Y)=P^{E}(Y)\) and \(P^{E}(X\mid C)=P^{T}(X)+\delta_{X}^{C}\), \(P^{T}\) and \(P^{E}\) can be merged to tighten the bounds on \(PNS\) for \(X\) on \(Z\) in the target dataset as_ \[\Delta\leq PNS\leq\sum_{C}P^{T}(C)\min\begin{bmatrix}p_{11C}^{T}-\sum_{i=0}^{N}\Phi_{i,\delta_{X}^{C}}^{C}\\ p_{00C}^{T}-\sum_{i=0}^{N}\Theta_{i,\delta_{X}^{C}}^{C}\end{bmatrix} \tag{11}\] _Where \(p_{11C}^{T}=P(Z=1\mid X=1,C)\), \(p_{00C}^{T}=P(Z=0\mid X=0,C)\), \(p_{1iC}^{E}=P^{E}(Z=1\mid Y=i,C)\) and \(\Delta\), \(\Phi_{i,\delta_{X}^{C}}^{C}\) and \(\Theta_{i,\delta_{X}^{C}}^{C}\) are defined as_ \[\Delta=\sum_{C}P^{T}(C)\max\{0,P^{T}(Z=1\mid X=1,C)-P^{T}(Z=1\mid X=0,C)\}\] \[\Phi^{C}_{i,\delta^{C}_{X}}=\mathbb{I}(p^{E}_{1iC}\geq\max\{1-p_{x}-\delta^{C}_{X},p_{x}+\delta^{C}_{X}\})\frac{p_{y_{i}}(p^{E}_{1iC}-\max\{1-p_{x}-\delta^{C}_{X},p_{x}+\delta^{C}_{X}\})}{\min\{1-p_{x}-\delta^{C}_{X},p_{x}+\delta^{C}_{X}\}}\] \[\Theta^{C}_{i,\delta^{C}_{X}}=\mathbb{I}(p^{E}_{1iC}\leq\min\{1-p_{x}-\delta^{C}_{X},p_{x}+\delta^{C}_{X}\})\frac{p_{y_{i}}(\min\{1-p_{x}-\delta^{C}_{X},p_{x}+\delta^{C}_{X}\}-p^{E}_{1iC})}{\min\{1-p_{x}-\delta^{C}_{X},p_{x}+\delta^{C}_{X}\}}\] These bounds allow for the target and external datasets to have different treatment assignment mechanisms for \(X\), allowing for a greater variety of external datasets to be used to tighten the bounds on \(PNS\) in the target dataset. Theorem 6 shows that the lower bound on \(PNS\) in our setting is identical to the lower bound in Eq. 3. However, the upper bound will be tighter whenever for any \(C\), at least one \(p^{E}_{1iC}\), but not all, is greater than \(p_{x}+\delta^{C}_{X}\) and \(1-p_{x}-\delta^{C}_{X}\), or less than \(p_{x}+\delta^{C}_{X}\) and \(1-p_{x}-\delta^{C}_{X}\). Theorem 6 enables us to use genomics datasets that study the same outcome to tighten the bounds on PoC in the target dataset, without measuring or making restrictive assumptions on the treatment assignment mechanism for \(X\) in the external genomics dataset. Now, using the Theorems presented in this paper, a variety of external datasets can be leveraged to tighten the bounds on the PoC. ## 5 Discussion and Future Work Having presented various approaches to tightening the bounds on the Probabilities of Causation, we briefly highlight some key discussion points. **Dealing with finite samples** Throughout this paper, we assume that having access to the dataset is equivalent to having access to the joint distribution over the variables contained in the dataset. In finite samples, this will not hold, and there will be additional statistical considerations. In this case, maximum-likelihood-based approaches [1], as well as approximations, may be used to ensure compatibility of target and external datasets. **Assumption on prevalence parameter \(\delta_{X}\)** We provided theorems on bounds on PoC for the target dataset in terms of \(\delta_{X}\). Even though we may not know the true value of \(\delta_{X}\), we think of it as a sensitivity parameter that can be varied to understand how the bounds change. Ranges on \(\delta_{X}\) can be imposed based on domain knowledge, or by utilizing information about the study design of the external dataset. Additionally, the prevalence parameter \(\delta_{X}\) makes the assumptions on the treatment assignment mechanism of \(X\) across datasets explicit, providing greater transparency in inference.
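To see how these symbolic bounds behave numerically, the sketch below (ours; all function names and probability values are illustrative assumptions, not from the paper) implements the correction terms \(\Phi_{i,\delta_{X}}\) and \(\Theta_{i,\delta_{X}}\) of Theorems 2 and 3 and then treats \(\delta_{X}\) as a sensitivity parameter, as suggested above: the bounds are tightest at \(\delta_{X}=0\) and widen as the assumed mismatch in the treatment assignment mechanism grows.

```python
import numpy as np

def merged_bounds(p11T, p10T, p00T, p_x, p_y, p1E, delta_x=0.0):
    """PNS bounds in the target dataset after merging with P^E(Z, Y).
    p11T, p10T, p00T: P^T(Z=1|X=1), P^T(Z=1|X=0), P^T(Z=0|X=0).
    p_x = P^T(X=1); p_y[i] = P(Y=y_i); p1E[i] = P^E(Z=1|Y=y_i).
    delta_x parameterizes P^E(X=1) = P^T(X=1) + delta_x (Theorem 3);
    delta_x = 0 recovers the Theorem 2 bounds."""
    p_y, p1E = np.asarray(p_y), np.asarray(p1E)
    hi = max(1 - p_x - delta_x, p_x + delta_x)
    lo = min(1 - p_x - delta_x, p_x + delta_x)
    phi = np.where(p1E >= hi, p_y * (p1E - hi) / lo, 0.0)    # Phi_{i, delta_X}
    theta = np.where(p1E <= lo, p_y * (lo - p1E) / lo, 0.0)  # Theta_{i, delta_X}
    lower = max(0.0, p11T - p10T)        # the lower bound is not tightened
    upper = min(p11T - phi.sum(), p00T - theta.sum())
    return lower, upper

# Marginals implied by the toy joint used earlier; external: P^E(Z=1|Y=y) = (0.3, 0.8).
for dx in (0.0, 0.1, 0.2):
    lo_b, up_b = merged_bounds(p11T=0.7, p10T=0.4, p00T=0.6,
                               p_x=0.5, p_y=[0.5, 0.5], p1E=[0.3, 0.8],
                               delta_x=dx)
    print(f"delta_X = {dx:.1f}:  {lo_b:.3f} <= PNS <= {up_b:.3f}")
# At delta_X = 0 the plain Tian-Pearl upper bound 0.6 tightens to 0.4;
# larger assumed mismatch progressively weakens the correction terms.
```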
**Choice of \(Y\)** In the bounds we present, the prevalence of \(Y\) is assumed to remain unchanged. As mentioned before, examples of such \(Y\) include genetic mutations; according to Mendelian Randomization genes are randomized by nature, hence their prevalence is expected to be similar across populations with similar characteristics. This illustrates the useful role genetic mutations can play in tightening bounds on the PoC, and provides a way to leverage the growing number of genomics datasets. However, to relax the assumption on the unchanged prevalence of \(Y\), a similar approach to the one used for \(\delta_{X}\) could be employed to parameterize the difference in the treatment assignment mechanism of \(Y\), and subsequently re-derive the bounds in this paper. We leave this to future work. **Invariance of Causal Mechanisms** We assume that \(P(Z_{x,y,c})\) remains unchanged across datasets. This assumption is supported in the transportability, robustness and distribution shift literature [2, 1, 13]. However, following a similar approach to the one used with \(\delta_{X}\), this assumption can be further weakened if desired, albeit at the cost of transferring less information across datasets. **Data access and privacy** In the case of trials, we may not have access to individual-level records due to privacy or intellectual property concerns. The merit of our approach is that it only requires population-level summaries of the data, such as the adverse effect prevalence in the treated and untreated group, or these quantities within strata of the subject population. Figure 4: Causal graphs representing the different types of interactions between the outcome \(Z\), treatments \(X\) and \(Y\), and covariates \(C\). (a) A causal graph describing the data-generating process for the target dataset where both \(X\) and \(Y\) are assigned randomly, and together with \(C\) determine the outcome \(Z\). (b) A causal graph describing the data-generating process for the external dataset, where \(X\) is confounded by \(C\). Here recall that \(P^{T}(C)=P^{E}(C)\). We parameterize the difference in treatment assignment mechanisms for \(X\) across datasets using \(\delta^{C}_{X}\) as \(P^{E}(X\mid C)=P^{T}(X)+\delta^{C}_{X}\). We denote with cyan the observed nodes in each dataset and with grey the unobserved ones. **Conclusion** In this paper, we presented approaches to tighten the bounds on the Probabilities of Causation via merging external datasets studying the same outcome variable, but examining different treatments or covariates. To this end, we tightened existing bounds on the Probabilities of Causation by merging external datasets with the target dataset, allowing the external dataset to have a different treatment assignment mechanism. This is accomplished by parameterizing the difference in treatment mechanisms and providing bounds in terms of this parameter. Our approach could also be extended to derive bounds on counterfactual statements (rung 3 in Pearl's ladder of causation) other than Probabilities of Causation.
2301.12337
Interaction of oxygen with pristine and defective MoS2 monolayers
Atom controlled sub-nanometer MoS$_2$ pores have been recently fabricated. Oxidative environments are of particular interest for MoS$_2$ applications in electronics, sensing and energy storage. In this work we carried out first-principles calculations of oxygen adsorption in plain and sub-nanometer MoS$_2$ nanopores. The chemical stability of the layers and pores towards oxygen was verified using density-functional theory. Dissociation and diffusion barriers have been calculated in order to understand surface and pore oxidation and its electronic properties at the atomic scale, which opens the path for future investigations of MoS$_2$ pores in a realistic environment.
Murilo Kendjy Onita, Flavio Bento de Oliveira, Andreia Luisa da Rosa
2023-01-29T03:20:00Z
http://arxiv.org/abs/2301.12337v1
# Interaction of oxygen with pristine and defective MoS\({}_{2}\) monolayers ###### Abstract Atom-controlled sub-nanometer MoS\({}_{2}\) pores have been recently fabricated. Oxidative environments are of particular interest for MoS\({}_{2}\) applications in electronics, sensing and energy storage. In this work we carried out first-principles calculations of oxygen adsorption in plain and sub-nanometer MoS\({}_{2}\) nanopores. The chemical stability of the layers and pores towards oxygen was verified using density-functional theory. Dissociation and diffusion barriers have been calculated in order to understand surface and pore oxidation and its electronic properties at the atomic scale. ## I Introduction Owing to their fascinating properties, two-dimensional transition metal dichalcogenides (TMDs) have been explored for a variety of applications, including electronics and optoelectronics, photonics, catalysis and energy storage [1; 2; 3; 4; 5]. In particular, molybdenum disulfide (MoS\({}_{2}\)), the most promising TMD, is efficiently exfoliated into monolayers or multilayers [6; 7]. The recently achieved fabrication of MoS\({}_{2}\) sub-nanometer pores makes them promising candidates for several technological applications such as membranes for DNA translocation [1; 8; 9], water filtration and desalination [10; 11; 12; 13], energy harvesting [14] and the hydrogen evolution reaction [15; 16; 17]. Atomic-scale control of nanopores can be achieved by using electron or ion beams to fabricate pores of specific sizes [18; 19; 20; 21; 22]. Sulfur and molybdenum vacancies, the most abundant defect species in MoS\({}_{2}\), may serve as the nucleation sites for nanopore formation [7; 23]. Pore sizes down to 0.5-1.2 nm have been created, corresponding to a few atoms missing [24; 25; 26; 27]. The interaction of small molecules with MoS\({}_{2}\) is of particular importance, since it plays a role in the properties and performance of two-dimensional-based devices. Concurrently, structural defects such as vacancies and edges are particularly susceptible to attacks in reactive environments [28]. The adsorption energy and diffusion of oxygen on MoS\({}_{2}\) are controversial and not completely understood. Moreover, oxygen on these surfaces either forms undesired alloys by entering a sulphur site or possesses a low binding energy. One important, not completely understood question is the role of point defects in the adsorption of oxygen on MoS\({}_{2}\). Previous experimental investigations suggested that the presence of defects significantly alters the surface stability [6; 29]. Oxygen leads to the formation of Mo oxide in the two-dimensional MoS\({}_{2}\) lattice, yielding an overall disordered and fragmented structure. On the other hand, other investigations suggested that oxygen is incorporated in the MoS\({}_{2}\) lattice at substitutional sites [30]. X-ray photoelectron spectroscopy measurements have shown that the basal plane of MoS\({}_{2}\) monolayers, when subjected to long-term ambient exposure, spontaneously undergoes oxygen substitution reactions, giving rise to a highly crystalline two-dimensional molybdenum oxysulfide phase [31; 32]. In this work we have investigated the interaction of oxygen atoms and molecules with MoS\({}_{2}\) monolayers and subnanometer pores. The pore size and termination play a crucial role in the interaction with oxygen molecules. ## II Methodology All calculations have been performed using density functional theory [33; 34] as implemented in VASP [35].
For the exchange-correlation functional we use the GGA approximation [36]. A basis set consisting of an expansion in plane waves with an energy cutoff of 300 eV was used. The Brillouin zone was sampled according to the Monkhorst-Pack scheme [37] with the \(\Gamma\) point only for a \((6\times 6)\) unit cell. To avoid interaction between neighbouring layers, a vacuum space of 15 Å was set in the direction perpendicular to the two-dimensional MoS\({}_{2}\) layers. The structural relaxation was performed until the atomic forces were less than \(10^{-3}\) eV/Å. In order to determine the diffusion and reaction barriers we have performed CI-NEB [38; 39] calculations with at least seven images to search the minimum-energy reaction paths and saddle points between the initial state and final state configurations. ## III Results Among the various arrangements for MoS\({}_{2}\), the energetically most stable one at room temperature is known as 2H, which has a honeycomb structure as shown in Fig. 1 (a) and (b). This arrangement has in its unit cell one molybdenum and two sulfur atoms. The optimized Mo-S (S-S) distances are 2.42 (3.14) Å. The lattice parameter \(a=b\) equals 3.16 Å. The electronic band structure and total density-of-states (DOS) of a bare MoS\({}_{2}\) monolayer are shown in Fig. 1(c). The direct band gap at the K-point is 1.76 eV, in agreement with previous results [40]. The electronic band gap and formation energies of bare and defective monolayered MoS\({}_{2}\) are reported in our previous publication [41]. The formation enthalpy of a bare MoS\({}_{2}\) monolayer is -2.63 eV. Here the adsorption energy of oxygen is calculated at high oxygen chemical potential according to \(\mathrm{E_{b}=E_{MoS_{2}/O}-E_{MoS_{2}}^{bare}-\sum_{i}\mu_{O}}\), where \(\mathrm{E_{MoS_{2}/O}}\) is the total energy of the MoS\({}_{2}\) with oxygen adsorbed, \(\mathrm{E_{MoS_{2}}^{bare}}\) is the total energy of the bare sheet, and \(\mu_{O}\) is the chemical potential of oxygen, which has been chosen as the total energy of the oxygen molecule. As possible configurations for defective MoS\({}_{2}\), we have considered oxygen substituting sulphur atoms, namely O\({}_{\mathrm{S}}\), with concentrations of 11%, 22% and 25%, as shown in Figs. 2 (a), (b) and (c), respectively. We notice that substitution at Mo sites is highly unfavorable. Furthermore, defects have been incorporated considering Mo vacancies, namely O\({}_{\rm S}\) (11%) + V\({}_{\rm Mo}\), Fig. 2 (d), and S vacancies, Figs. 2 (e) and (f), labeled as O\({}_{\rm S}\) (11%) + V\({}_{\rm S}\) (far) and O\({}_{\rm S}\) (11%) + V\({}_{\rm S}\) (close). These concentrations were chosen based on experimental results reported in Ref. [42]. We have included disorder effects by adding Mo vacancies far from and close to the oxygen substitutional atoms, as mentioned above. In Fig. 3 the DOS and PDOS of the oxygen substitutionals in MoS\({}_{2}\) are shown: a) O\({}_{\rm S}\) at (11%), b) O\({}_{\rm S}\) at (22%), c) O\({}_{\rm S}\) (25%), d) O\({}_{\rm S}^{\rm close}\) + V\({}_{\rm Mo}\) (11%), e) O\({}_{\rm S}^{\rm far}\) + V\({}_{\rm Mo}\) (11%), f) O\({}_{\rm S}^{\rm close}\) + V\({}_{\rm S}\) (11%). The electronic structure of a) O\({}_{\rm S}\) at (11%) and b) O\({}_{\rm S}\) at (22%) is very similar. These two systems exhibit semiconducting behavior. At higher concentration, as in O\({}_{\rm S}\) (25%), Fig. 3(c), additional states appear in the gap, but the structure remains semiconducting.
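As a brief aside before the remaining DOS panels: the following sketch (ours) evaluates the adsorption-energy expression defined above. The total energies are placeholders chosen only so that the pristine value of Table 1 below is reproduced, and we take \(\mu_{O}=E({\rm O_{2}})/2\) per adsorbed atom, a common oxygen-rich convention; since the text references \(\mu_{O}\) to the O\({}_{2}\) molecule as a whole, the per-atom factor here is our assumption.

```python
def binding_energy_per_O(E_system, E_bare, E_O2, n_O):
    """E_b = (E_MoS2/O - E_MoS2^bare - n_O * mu_O) / n_O, in eV per O atom,
    with the oxygen chemical potential referenced to the O2 molecule."""
    mu_O = E_O2 / 2.0   # assumption: half the O2 total energy per O atom
    return (E_system - E_bare - n_O * mu_O) / n_O

# Placeholder DFT total energies (illustrative only, not from the paper):
E_bare = -1234.56   # bare MoS2 supercell
E_O2 = -9.86        # isolated O2 molecule in the same box
E_sys = -1245.52    # supercell with two dissociated O atoms adsorbed
print(f"E_b = {binding_energy_per_O(E_sys, E_bare, E_O2, n_O=2):.2f} eV/O atom")
# -> E_b = -0.55 eV/O atom, matching the pristine entry of Table 1 by construction
```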
For d) O\({}_{\rm S}^{\rm close}\) + V\({}_{\rm Mo}\) (11%) and e) O\({}_{\rm S}^{\rm far}\) + V\({}_{\rm Mo}\) (11%), states cross the Fermi level, indicating metallic behavior. Finally, for O\({}_{\rm S}^{\rm close}\) + V\({}_{\rm S}\) (11%), shown in Fig. 3(f), the system is a semiconductor as well. Figure 1: a) Top view, b) side view and c) electronic band structure and DOS of bare MoS\({}_{2}\) monolayers. On the pristine monolayer the oxygen molecule spontaneously dissociates and the oxygen atoms diffuse further apart to finally bind on the sulfur atoms. This means that the barrier for O diffusion should be low, as suggested in Ref. [43]. The binding energy of the final configuration is -1.13 eV/atom. \begin{table} \begin{tabular}{l c} \hline \hline Structure & E\({}_{\rm b}\) (eV) \\ \hline pristine & -0.55 \\ V\({}_{\rm 1Mo-O_{2}}\) (non-diss.) & -0.37 \\ V\({}_{\rm 1S-O_{2}}\) (diss.) & -0.90 \\ V\({}_{\rm 2S-O_{2}}\) (diss.) & -3.44 \\ V\({}_{\rm 1Mo6S-3O_{2}}\) (diss.) & -3.24 \\ V\({}_{\rm 1Mo6S-O_{2}}\) (non-diss.) & -0.45 \\ \hline \hline \end{tabular} \end{table} Table 1: Binding energies of oxygen molecules in \(\rm MoS_{2}\) nanopores. In addition we have investigated the interaction of O\({}_{2}\) with other small defects, which we call subnanometer pores: a Mo single vacancy, V\({}_{\rm 1Mo-O_{2}}\), shown in Fig. 4 (b), a single sulfur vacancy, V\({}_{\rm 1S-O_{2}}\), shown in Fig. 4 (c), a double sulphur vacancy, V\({}_{\rm 2S-O_{2}}\), and a Mo single vacancy plus a S hexavacancy, V\({}_{\rm 1Mo6S-O_{2}}\), in dissociated and non-dissociated configurations. Such structures can serve as prototypes for nanopores in MoS\({}_{2}\). Additionally, we have considered a Mo triple vacancy plus a S divacancy, V\({}_{\rm 3Mo2S-O_{2}}\), a single Mo vacancy plus a S tetravacancy, V\({}_{\rm 1Mo4S-O_{2}}\), and a Mo triple vacancy plus a S hexavacancy, V\({}_{\rm 3Mo6S-O_{2}}\) (to be published). Binding energies of oxygen for the relaxed structures are shown in Table 1. On pristine MoS\({}_{2}\), shown in Fig. 4 (a), adsorption of oxygen leads to a dissociative configuration with a binding energy of -0.55 eV/O atom. Oxygen spontaneously dissociates, with the oxygen atoms diffusing further apart; the smallest O-O distance is 6.3 Å.
The O sits right above the S atom with an S-O distance equal to 1.48 Å. On V\({}_{\rm 1Mo-O_{2}}\) oxygen chemisorbs in a non-dissociative manner with a binding energy of -0.37 eV/O atom. The O-O distance is 2.78 Å with each oxygen bound to a sulphur atom in different layers. The relaxed geometry is shown in Fig. 4 (b). In a single sulfur vacancy defect, V\({}_{\rm 1S-O_{2}}\), shown in Fig. 4 (c), the oxygen molecule dissociates. Upon dissociation, the first atom goes to a substitutional sulphur site, while the second oxygen atom moves on top of a sulphur atom. This is in agreement with results reported previously [44]. The binding energy is -0.90 eV/O atom. On a double sulphur vacancy, V\({}_{\rm 2S-O_{2}}\), both oxygen atoms go to substitutional sulphur positions. The Mo-O bond length is 1.7 Å, forming a slightly distorted hexagon. One notes that typical Mo-O bond lengths range from 1.69 to 1.73 Å [45]. The O-O distance is in our case 2.38 Å. The binding energy amounts to -3.44 eV/atom, meaning that the reaction is exothermic. The V\({}_{\rm 1Mo6S-O_{2}}\) pore interacts with the oxygen molecule via a perpendicular orientation of the molecule with respect to the MoS\({}_{2}\) surface in its final configuration. The oxygen distance to the molybdenum atoms is around 3 Å. For other large nanopores such as V\({}_{\rm 3Mo6S-O_{2}}\), V\({}_{\rm 1Mo4S-O_{2}}\) and V\({}_{\rm 3Mo2S-O_{2}}\) (to be published), the binding energy indicates physisorption, with the O\({}_{2}\) molecule lying in the middle of the pore with an O-O bond distance equal to 1.0 Å. We may argue here that there is an interplay between the number of oxygen atoms and the number of dangling bonds in the pore. The densities of states of the above discussed nanopores are shown in Fig. 5. V\({}_{\rm 1S-O_{2}}\) shows states within the gap, as seen in Fig. 5(a). On the other hand, V\({}_{\rm 2S-O_{2}}\) does not have states in the gap, as shown in Fig. 5(b). Furthermore, V\({}_{\rm 1Mo-O_{2}}\) and V\({}_{\rm 1Mo6S-O_{2}}\) have states within the gap due to sulphur dangling bonds. Recently it has been suggested that oxygen at substitutional sites, with no band gap states, can be formed in MoS\({}_{2}\) layers [42]. A possible explanation for the ordered oxygen incorporation is that, due to the higher strength of the Mo-O bonds compared to Mo-S, the substitutional oxidation of the 2D MoS\({}_{2}\) basal plane could also be thermodynamically favourable [29; 46]. One can explain the dissociation by considering that SO\({}_{2}\) has a cohesive energy of -3.1 eV, which is larger than the formation enthalpy of MoS\({}_{2}\), which is -2.48 eV. Experiments reveal relatively low activation energy values for MoS\({}_{2}\) oxidation, ranging from 0.54 eV to 0.98 eV [43]. Recent theoretical calculations [44] report a barrier of 0.33 eV in the presence of sulphur defects. Here we considered the presence of a molybdenum vacancy. The reaction path is obtained by following the molecule along its minimum-energy path. We have used between five and eight intermediate states to calculate the diffusion. An oxygen atom diffuses through the pore with a barrier of 2.53 eV, whereas an oxygen molecule diffuses without a barrier along the path shown in Fig. 6. Figure 6: Diffusion barrier (top panel) and corresponding paths (lower panel) of an oxygen molecule on a molybdenum vacancy in MoS\({}_{2}\) with perpendicular orientation to the MoS\({}_{2}\) basal plane. In order to have further insight on the interaction between the oxygen molecules and the
nanopores, we have calculated the projected partial charge density of the first level above the Fermi level. These are shown in Fig. 7 (a) for \(\rm V_{1Mo-O_{2}}\), which has the unoccupied orbitals of the pore oriented towards the molecule. However, as the preferred orientation of the molecule in the middle of the pore is perpendicular to the pore, this could explain the small overlap between the HOMO of the molecule and the LUMO of the pore, and therefore the relatively small barrier for \(\rm V_{1Mo-O_{2}}\), where the molecule easily diffuses through the pore. For \(\rm V_{1Mo-O_{2}}\) the molecule diffuses without a barrier. Furthermore, the \(\rm V_{3Mo6S-O_{2}}\) pore is large enough for the molecule to diffuse either parallel or perpendicular to the pore. However, if the molecule is close enough to the edges, it is likely to dissociate. On the other hand, the \(\rm V_{1Mo6S-O_{2}}\) pore has its Mo orbitals hybridizing, forming metallic-like bonds.

## IV Conclusions

In this work we carried out first-principles calculations of oxygen on monolayer \(\rm MoS_{2}\) and sub-nanometer \(\rm MoS_{2}\) nanopores. The dissociation and diffusion of oxygen in \(\rm MoS_{2}\) reveal two main features: the orientation of the molecule with respect to the surface plays a role in the diffusion barrier of the molecule, and the reactivity of the pore plays a role in its dissociation.

## V Acknowledgements

We acknowledge financial support from Conselho Nacional de Pesquisa e Desenvolvimento (CNPq) under grants 313081/2017-4 and 305335/2020-0. The calculations have been performed using the computational facilities of the Santos Dumont supercomputer at LNCC, CENAPAD at Unicamp, and LaMCAD/UFG.
2302.13579
Resolving Entropy Growth from Iterative Methods
We consider entropy conservative and dissipative discretizations of nonlinear conservation laws with implicit time discretizations and investigate the influence of iterative methods used to solve the arising nonlinear equations. We show that Newton's method can turn an entropy dissipative scheme into an anti-dissipative one, even when the iteration error is smaller than the time integration error. We explore several remedies, of which the most performant is a relaxation technique, originally designed to fix entropy errors in time integration methods. Thus, relaxation works well in consort with iterative solvers, provided that the iteration errors are on the order of the time integration method. To corroborate our findings, we consider Burgers' equation and nonlinear dispersive wave equations. We find that entropy conservation results in more accurate numerical solutions than non-conservative schemes, even when the tolerance is an order of magnitude larger.
Viktor Linders, Hendrik Ranocha, Philipp Birken
2023-02-27T08:33:40Z
http://arxiv.org/abs/2302.13579v2
# Resolving Entropy Growth from Iterative Methods

###### Abstract

We consider entropy conservative and dissipative discretizations of nonlinear conservation laws with implicit time discretizations and investigate the influence of iterative methods used to solve the arising nonlinear equations. We show that Newton's method can turn an entropy dissipative scheme into an anti-dissipative one, even when the iteration error is smaller than the time integration error. We explore several remedies, of which the most performant is a relaxation technique, originally designed to fix entropy errors in time integration methods. Thus, relaxation works well in consort with iterative solvers, provided that the iteration errors are on the order of the time integration method. To corroborate our findings, we consider Burgers' equation and nonlinear dispersive wave equations. We find that entropy conservation results in more accurate numerical solutions than non-conservative schemes, even when the tolerance is an order of magnitude larger.

_Keywords: iterative methods, entropy conservation, implicit methods, dispersive wave equations_

## 1 Introduction

For many partial differential equations (PDEs) in computational fluid dynamics (CFD), the notion of mathematical entropy plays an important role. It can be that entropy is preserved or that the solution satisfies an entropy inequality. Alternatively, for nonlinear systems such as the Euler equations, an entropy inequality can be used to select a unique weak solution, at least in 1D. Enforcing such an entropy inequality on the discrete level has, for certain equations, led to schemes that are provably convergent to weak entropy solutions [25]. In recent years it has been observed that the robustness of high-order discontinuous Galerkin (DG) discretizations for compressible turbulent flows greatly benefits from discrete entropy stability; see [18] and the references therein for an overview. Following the groundbreaking paper [16], a space-time version of the DG spectral element method (DG-SEM) with a specific choice of fluxes was proven to be entropy-stable for systems of hyperbolic conservation laws [17], giving the first fully discrete high order scheme with this property. This method is inherently implicit. Proofs of entropy stability for schemes involving implicit time integration assume that the arising systems of nonlinear equations are solved exactly. In practice, iterative solvers are used, which are terminated when a tolerance is reached. These, in turn, have rarely been constructed with the intention of preserving physical invariants. Indeed, the impact of iterative solvers on the conservation of linear invariants was analyzed in [6, 29], and it was found that many iterative methods violate global or local conservation. A recent study of Jackaman and MacLachlan [20] focuses on Krylov subspace methods for linear problems with quadratic invariants, and is related in spirit to this paper. Given the importance of entropy estimates in discrete schemes, there is a need to analyze the role played by iterative solvers in this context. We consider PDEs posed in a single spatial dimension with periodic boundary conditions: \[u_{t}=\mathcal{L}(u),\quad x\in(x_{\min},x_{\max}],\quad t\in[t_{0},t_{e}]. \tag{1}\] Here, \(\mathcal{L}\) is a nonlinear differential operator. The cases considered are such that there is a convex nonlinear functional \(\eta(u)\), here called the entropy, which is conserved.
By splitting the spatial terms in certain ways, entropy conservative semi-discretizations are obtained with the aid of skew-symmetric difference operators. Next, we apply well-known implicit time integration methods that are able to either conserve or dissipate entropy, resulting in nonlinear algebraic systems with entropy bounded solutions. Using convergent iterative solvers within implicit time integration, the iteration error (and thus the entropy error) can be made arbitrarily small by iterating long enough. However, such an approach is ill-advised since the number of iterations dictates the efficiency of the scheme. On the other hand, it is desirable to iterate long enough to prevent the iteration error from affecting the time integration error. The iteration error should thus be kept on the order of the local time integration error, but smaller in magnitude. The interested reader will find an overview of the use of iterative solvers in computational fluid dynamics in [5]. In this setting, we analyze the entropy behavior of Newton's method, which forms the basis of many iterative solvers. As it turns out, Newton's method can generate undesired growth (or decay) of the entropy error. We consider Burgers' equation in Section 2 and demonstrate that Newton's method can turn an entropy dissipative scheme into an anti-dissipative method when the tolerance is comparable to the error in the time step. A detailed analysis reveals that the source of the entropy error is the Jacobian of the spatial discretization. In Section 3, strategies are evaluated that recover the entropy conservation after each Newton iteration. It is seen that these strategies come with a large cost to the efficiency of the solver, since they significantly reduce the convergence rate. To get the best of both worlds -- entropy conservation and fast convergence -- we introduce relaxation methods in Section 4. Under mild assumptions, relaxation can be used to recover any convex entropy. We apply relaxation to nonlinear dispersive wave equations in Sections 5 and 6. Experiments reveal that entropy conservation leads to numerical solutions of considerably higher quality than non-conservative schemes, even when larger tolerances on the iterations are used. We thus conclude that relaxation works well in combination with iterative methods. Finally, we summarize the developments and discuss our conclusions in Section 7. ### Software The numerical experiments discussed in this article are implemented in Julia [4]. We use the Julia packages SummationByPartsOperators.jl [40], ForwardDiff.jl [46], and Krylov.jl [35]. All source code required to reproduce the numerical experiments is available in our repository [32]. ## 2 Burgers' equation As a starting point, we consider Burgers' equation as a classical model for nonlinear conservation laws. It is well known how to design both entropy conservative and entropy dissipative discretizations for this problem. The purpose of this section is to demonstrate that Newton's method can destroy the entropic behavior of the discretization and cause undesired entropy growth (or decay). This example serves as an illustration of a general truth: Iterative methods may destroy the design principles upon which a discretization has been built. ### The continuous case Consider the inviscid Burgers' equation defined on a 1D periodic domain, \[u_{t}+6uu_{x}=0,\quad u(x,t=0)=u_{0}. \tag{2}\] The factor \(6\) has been added for consistency with the Korteweg-de Vries equation discussed in Section 5. 
Multiplying (2) by the solution \(u\) and integrating in space results in the identity \[\frac{\mathrm{d}}{\mathrm{d}t}\frac{1}{2}\|u\|^{2}=0,\] where periodicity has been used to eliminate the boundary terms. Here, \(\|\cdot\|\) denotes the \(L^{2}\) norm. Evidently, the quantity \(\eta(u)=\frac{1}{2}\|u\|^{2}\) is conserved. Throughout, we refer to \(\eta\) as an _entropy_ for Burgers' equation (2).

### The semi-discrete case

It is well known how to design discretizations of (2) that mimic the entropy conservation of the continuous problem. Let \(\mathbf{D}\) be a skew-symmetric matrix that approximates the spatial derivative operator. In this section, we use classical fourth-order accurate central finite differences with periodic boundaries. Consider the semi-discretization \[\mathbf{u}_{t}+2(\mathbf{D}\mathrm{diag}(\mathbf{u})\mathbf{u}+\mathrm{diag}(\mathbf{u})\mathbf{D}\mathbf{u})=\mathbf{0}. \tag{3}\] Here and elsewhere a sans serif font is used to denote quantities evaluated on a discrete, uniform computational grid with grid spacing \(\Delta x\). The formulation (3) constitutes a so-called skew-symmetric split form [47, eq. (6.40)]. It arises from the identity \(6uu_{x}=2\big{(}(u^{2})_{x}+uu_{x}\big{)}\). Multiplying (3) by \(\Delta x\mathbf{u}^{\top}\) and using the skew-symmetry of \(\mathbf{D}\) yields \[\frac{\mathrm{d}}{\mathrm{d}t}\frac{1}{2}\|\mathbf{u}\|^{2} =-2\Delta x(\mathbf{u}^{\top}\mathbf{D}\mathrm{diag}(\mathbf{u})\mathbf{u}+\mathbf{u}^{\top}\mathrm{diag}(\mathbf{u})\mathbf{D}\mathbf{u})\] \[=-2\Delta x(\mathbf{u}^{\top}\mathbf{D}\mathrm{diag}(\mathbf{u})\mathbf{u}+\mathbf{u}^{\top}\mathbf{D}^{\top}\mathrm{diag}(\mathbf{u})\mathbf{u})\] \[=-2\Delta x\,\mathbf{u}^{\top}(\mathbf{D}+\mathbf{D}^{\top})\mathrm{diag}(\mathbf{u})\mathbf{u}=0.\] The final equality follows from the skew-symmetry of \(\mathbf{D}\). Here we have defined the discrete norm \(\|\mathbf{u}\|^{2}\equiv\Delta x\mathbf{u}^{\top}\mathbf{u}\) with a slight notational abuse. Evidently the semi-discrete entropy \(\eta(\mathbf{u})=\frac{1}{2}\|\mathbf{u}\|^{2}\) is conserved.

### The fully discrete case

Since the entropy \(\eta\) is a quadratic functional of the solution, a fully discrete scheme that conserves entropy is obtained by discretizing (3) in time using the implicit midpoint rule. Further, any \(B\)-stable Runge-Kutta method will be entropy dissipative [8, Section 357]. Here, we consider both the (conservative) implicit midpoint rule and the (dissipative) fourth-order, three-stage Lobatto IIIC method. The entropy analyses in this section and the next are for simplicity reserved for the midpoint rule. The analogous results for Lobatto IIIC are found in Appendix A. They heavily utilize the fact that Lobatto IIIC is equivalent to a Summation-By-Parts (SBP) method in time [39]; see [50, 36, 33, 7, 31] for further developments of this topic. For a generic system of ordinary differential equations \(\mathbf{u}_{t}=\mathbf{f}(\mathbf{u})\), the midpoint rule can be expressed as a Runge-Kutta method as follows: \[\mathbf{U} =\mathbf{u}^{n}+\frac{\Delta t_{n}}{2}\mathbf{f}(\mathbf{U})\] \[\mathbf{u}^{n+1} =\mathbf{u}^{n}+\Delta t_{n}\mathbf{f}(\mathbf{U})\equiv 2\mathbf{U}-\mathbf{u}^{n}.\] Here, \(\mathbf{U}\) denotes an intermediate stage used to define the numerical solution \(\mathbf{u}^{n+1}\approx\mathbf{u}(t_{n+1})\) in terms of the solution at the previous time step; \(\mathbf{u}^{n}\approx\mathbf{u}(t_{n})\). It is understood that \(\Delta t_{n}=t_{n+1}-t_{n}\).
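For concreteness, the semi-discretization (3) and its entropy identity can be reproduced in a few lines of Julia. The sketch below is ours, not the paper's implementation (the paper's code uses SummationByPartsOperators.jl); it assembles a dense periodic fourth-order central-difference matrix by hand and checks the skew-symmetry of \(\mathbf{D}\) and the vanishing semi-discrete entropy rate numerically:

```julia
using LinearAlgebra

# Periodic fourth-order central-difference matrix on N points with spacing dx.
function periodic_d1(N, dx)
    D = zeros(N, N)
    w = [1/12, -2/3, 0.0, 2/3, -1/12]   # stencil weights for offsets -2:2
    for i in 1:N, (k, off) in enumerate(-2:2)
        D[i, mod1(i + off, N)] += w[k] / dx
    end
    return D
end

# Split-form right-hand side of (3): f(u) = -2 (D diag(u) u + diag(u) D u).
f(D, u) = -2 .* (D * (u .* u) .+ u .* (D * u))

N, dx = 200, 0.1
D = periodic_d1(N, dx)
u = randn(N)
@show norm(D + D')           # skew-symmetry of D: ≈ 0
@show dx * dot(u, f(D, u))   # semi-discrete entropy rate dη/dt: ≈ 0
```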
Applied to the semi-discretization (3), the fully discrete scheme becomes, after slight rearrangement, \[\begin{split}\mathbf{F}(\mathbf{U})&:=\mathbf{U}-\mathbf{u}^{n}+\Delta t_{n}(\mathbf{D}\mathrm{diag}(\mathbf{U})\mathbf{U}+\mathrm{diag}(\mathbf{U})\mathbf{D}\mathbf{U})=\mathbf{0},\\ \mathbf{u}^{n+1}&=2\mathbf{U}-\mathbf{u}^{n}.\end{split} \tag{4}\] From the second line in (4), the entropy \(\eta(\mathbf{u}^{n+1})\) is given by \[\eta(\mathbf{u}^{n+1})=\frac{\Delta x}{2}(2\mathbf{U}-\mathbf{u}^{n})^{\top}(2\mathbf{U}-\mathbf{u}^{n})=\eta(\mathbf{u}^{n})+2\Delta x(\mathbf{U}^{\top}\mathbf{U}-\mathbf{U}^{\top}\mathbf{u}^{n}). \tag{5}\] Left multiplication of the first line in (4) by \(\mathbf{U}^{\top}\) reveals that \[\begin{split}\mathbf{U}^{\top}\mathbf{U}-\mathbf{U}^{\top}\mathbf{u}^{n}&=-\Delta t_{n}\mathbf{U}^{\top}\mathbf{f}(\mathbf{U})\\ &=-\Delta t_{n}\mathbf{U}^{\top}(\mathbf{D}\mathrm{diag}(\mathbf{U})+\mathrm{diag}(\mathbf{U})\mathbf{D})\mathbf{U}\\ &=-\Delta t_{n}\mathbf{U}^{\top}((\mathbf{D}+\mathbf{D}^{\top})\mathrm{diag}(\mathbf{U}))\mathbf{U}=0,\end{split} \tag{6}\] where the skew-symmetry of \(\mathbf{D}\) has been used in the same way as in the semi-discrete analysis. Consequently, \(\eta(\mathbf{u}^{n+1})=\eta(\mathbf{u}^{n})\), hence entropy is conserved.

### Newton's method

The analysis above assumes that the stage equations in the first line of (4) are solved exactly. In practice this is infeasible. Instead, the stage vector \(\mathbf{U}\) is approximated using iterative methods. Here we consider Newton's method as an illustrative example. If Newton iterates are computed until the residual is sufficiently small, then entropy conservation is effectively retained. However, for efficiency reasons it is desirable to terminate the iterates when the residual is smaller than some tolerance, chosen to reflect the overall expected accuracy of the scheme. We will show that Newton's method is not entropy conservative from one iteration to the next. Consequently, schemes utilizing Newton's method (and other iterative methods) tend to lose the design principle upon which the discrete scheme is built, namely entropy conservation. The iterates produced by Newton's method applied to the stage equation in (4) are obtained as \[\begin{split}\mathbf{F}^{\prime}(\mathbf{U}^{(k)})\Delta\mathbf{U}+\mathbf{F}(\mathbf{U}^{(k)})&=\mathbf{0},\\ \mathbf{U}^{(k+1)}&=\mathbf{U}^{(k)}+\Delta\mathbf{U},\quad k=0,1,\ldots\end{split} \tag{7}\] Here, \(\mathbf{F}^{\prime}\) denotes the Jacobian of \(\mathbf{F}\) and can be explicitly evaluated; see [11]: \[\mathbf{F}^{\prime}(\mathbf{U}^{(k)})=\mathbf{I}+\Delta t_{n}\left(\mathrm{diag}(\mathbf{U}^{(k)})\mathbf{D}+\mathrm{diag}(\mathbf{D}\mathbf{U}^{(k)})+2\mathbf{D}\mathrm{diag}(\mathbf{U}^{(k)})\right). \tag{8}\] Here, \(\mathbf{I}\) is the identity matrix.
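Before continuing the entropy analysis, a minimal Julia sketch of one midpoint step, solving the stage equation in (4) with the Newton iteration (7) and the exact Jacobian (8). It reuses `periodic_d1` from the sketch above; the function name, the tolerance, and the iteration cap are our illustrative choices, not the paper's settings:

```julia
using LinearAlgebra

# One implicit midpoint step: solve F(U) = 0 from (4) with Newton's method,
# starting from U⁽⁰⁾ = uⁿ, then set uⁿ⁺¹ = 2U - uⁿ.
function midpoint_step(D, un, dt; tol = 1e-10, maxiter = 20)
    U = copy(un)
    F(V) = V .- un .+ dt .* (D * (V .* V) .+ V .* (D * V))
    for _ in 1:maxiter
        FU = F(U)
        norm(FU) < tol && break
        # Exact Jacobian (8): I + Δtₙ (diag(U) D + diag(D U) + 2 D diag(U)).
        J = I + dt .* (Diagonal(U) * D .+ Diagonal(D * U) .+ 2 .* D * Diagonal(U))
        U -= J \ FU                     # Newton update ΔU = -F'(U)⁻¹ F(U)
    end
    return 2 .* U .- un
end

# Example: one step from the sech² profile used later in (11), with c = 2.
x  = range(-10, 10; length = 201)[2:end]   # periodic grid for (-10, 10]
u0 = @. sech(sqrt(2)/2 * x)^2              # amplitude c/2 = 1 for c = 2
u1 = midpoint_step(periodic_d1(length(x), 0.1), u0, 0.5)
```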
Inserting this expression into (7), explicitly writing out \(\mathbf{F}(\mathbf{U}^{(k)})\) from (4), and utilizing the fact that \(\Delta\mathbf{U}=\mathbf{U}^{(k+1)}-\mathbf{U}^{(k)}\) leads to the following equation for \(\mathbf{U}^{(k+1)}\): \[\begin{split}\mathbf{U}^{(k+1)}-\mathbf{u}^{n}&+\Delta t_{n}\left(\mathrm{diag}(\mathbf{U}^{(k)})\mathbf{D}\mathbf{U}^{(k+1)}+\mathbf{D}\mathrm{diag}(\mathbf{U}^{(k)})\mathbf{U}^{(k+1)}\right)\\ &+\Delta t_{n}\underbrace{\left[\mathrm{diag}(\mathbf{D}\mathbf{U}^{(k)})+\mathbf{D}\mathrm{diag}(\mathbf{U}^{(k)})\right]}_{\mathbf{M}}\Delta\mathbf{U}=\mathbf{0}.\end{split} \tag{9}\] This equation replaces the stage equations in (4) when \(k+1\) Newton iterations are performed. Suppose that Newton's method is terminated after \(k+1\) iterations and that the solution is updated as \(\mathbf{u}^{n+1}=2\mathbf{U}^{(k+1)}-\mathbf{u}^{n}\). As in (5), the entropy is given by \[\eta(\mathbf{u}^{n+1})=\eta(\mathbf{u}^{n})+2\Delta x\left((\mathbf{U}^{(k+1)})^{\top}\mathbf{U}^{(k+1)}-(\mathbf{U}^{(k+1)})^{\top}\mathbf{u}^{n}\right).\] Upon computing the parenthesized term, note that the first line of (9) is precisely \(\mathbf{U}^{(k+1)}-\mathbf{u}^{n}+\Delta t_{n}\mathbf{f}(\mathbf{U}^{(k)})\). By the same derivation as in (6) it follows that \((\mathbf{U}^{(k)})^{\top}\mathbf{f}(\mathbf{U}^{(k)})=0\). Consequently, \[(\mathbf{U}^{(k+1)})^{\top}\mathbf{U}^{(k+1)}-(\mathbf{U}^{(k+1)})^{\top}\mathbf{u}^{n}=-\Delta t_{n}(\mathbf{U}^{(k+1)})^{\top}\mathbf{M}\Delta\mathbf{U},\] and hence \[\eta(\mathbf{u}^{n+1})=\eta(\mathbf{u}^{n})-2\Delta x\Delta t_{n}(\mathbf{U}^{(k+1)})^{\top}\mathbf{M}\Delta\mathbf{U}.\] A slightly more elegant expression is obtained by replacing \((\mathbf{U}^{(k+1)})^{\top}\) with \(\Delta\mathbf{U}^{\top}\). This can be done since the vector \(\mathbf{U}^{(k)}\) lies in the kernel of \(\mathbf{M}^{\top}\). To see this, note that \[\mathbf{M}^{\top}\mathbf{U}^{(k)} =\mathrm{diag}(\mathbf{D}\mathbf{U}^{(k)})\mathbf{U}^{(k)}+\mathrm{diag}(\mathbf{U}^{(k)})\mathbf{D}^{\top}\mathbf{U}^{(k)}\] \[=\mathrm{diag}(\mathbf{D}\mathbf{U}^{(k)})\mathrm{diag}(\mathbf{U}^{(k)})\mathbf{1}-\mathrm{diag}(\mathbf{U}^{(k)})\mathrm{diag}(\mathbf{D}\mathbf{U}^{(k)})\mathbf{1}=\mathbf{0}.\] Here, the skew-symmetry of \(\mathbf{D}\) has been used together with the fact that any vector \(\mathbf{v}\) satisfies \(\mathbf{v}=\mathrm{diag}(\mathbf{v})\mathbf{1}\), where \(\mathbf{1}\) is the vector of ones. The final equality follows by the commutativity of diagonal matrices. We summarize these observations in the following: **Proposition 1**.: _Consider the discretization (4) of Burgers' equation (2) and apply \((k+1)\) iterations with Newton's method to the stage equations. The entropy of the resulting numerical solution satisfies_ \[\eta(\mathbf{u}^{n+1})=\eta(\mathbf{u}^{n})-2\Delta x\Delta t_{n}\Delta\mathbf{U}^{\top}\mathbf{M}\Delta\mathbf{U}. \tag{10}\] _Thus, the entropy error \(\eta(\mathbf{u}^{n+1})-\eta(\mathbf{u}^{n})\) depends on the indefinite quadratic form_ \[\Delta\mathbf{U}^{\top}\mathbf{M}\Delta\mathbf{U}=\Delta\mathbf{U}^{\top}\left[\mathrm{diag}(\mathbf{D}\mathbf{U}^{(k)})+\mathbf{D}\mathrm{diag}(\mathbf{U}^{(k)})\right]\Delta\mathbf{U}.\] _Consequently, Newton's method may cause both entropy growth and decay._ With Lobatto IIIC in place of the midpoint rule, the entropy error has an identical form, although \(\Delta\mathbf{U}\) and \(\mathbf{M}\) incorporate all intermediate stages in this case.
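Proposition 1 is easy to probe numerically. The following sketch (ours, reusing `periodic_d1` from above) runs two Newton iterations "by hand" with direct linear solves and compares the entropy error of \(\mathbf{u}^{n+1}=2\mathbf{U}^{(k+1)}-\mathbf{u}^{n}\) against the prediction (10); the two printed values should agree to roundoff:

```julia
using LinearAlgebra

dx, dt = 0.1, 0.5
x  = range(-10, 10; length = 201)[2:end]
un = @. sech(sqrt(2)/2 * x)^2
D  = periodic_d1(length(x), dx)
F(V) = V .- un .+ dt .* (D * (V .* V) .+ V .* (D * V))
J(V) = I + dt .* (Diagonal(V) * D .+ Diagonal(D * V) .+ 2 .* D * Diagonal(V))

Uk   = copy(un)              # U⁽⁰⁾ = uⁿ
Uk   = Uk - J(Uk) \ F(Uk)    # U⁽¹⁾, playing the role of U⁽ᵏ⁾ for k = 1
Ukp1 = Uk - J(Uk) \ F(Uk)    # U⁽²⁾ = U⁽ᵏ⁺¹⁾
ΔU   = Ukp1 - Uk
M    = Diagonal(D * Uk) + D * Diagonal(Uk)
η(v) = dx * dot(v, v) / 2

@show η(2 .* Ukp1 .- un) - η(un)        # actual entropy error
@show -2 * dx * dt * dot(ΔU, M * ΔU)    # prediction from (10)
```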
Assuming that the iterates converge, \(\Delta\mathbf{U}\) will decrease until the entropy error is vanishingly small. However, in practical applications, we terminate the iterations at some tolerance matching the accuracy of the scheme. In such circumstances, the entropy error must be corrected by other means. As an illustrative example we compute a single time step for Burgers' equation (2). The spatial domain is set to \((-10,10]\) and the initial condition is given by \[u_{0}=\frac{c}{2}\operatorname{sech}^{2}\left(\frac{\sqrt{c}}{2}x\right), \tag{11}\] where \(c=2\). In the spatial discretization we set \(\Delta x=0.1\). Both Lobatto IIIC and the midpoint rule are used in time with \(\Delta t=0.5\). The entropy \(\eta(\mathbf{u}^{n})\) (with \(n=1\)) and residual \(\|\mathbf{F}(\mathbf{U}^{(k)})\|\) are shown in Fig. 1 when \(k\) Newton iterations have been used to approximate the solution to the stage equation in (4). Throughout, \(\mathbf{U}^{(0)}=\mathbf{u}^{n}\) is used as initial guess. This choice is known to conserve linear invariants [6, 29]. The entropy visibly grows when too few Newton iterations are used. As expected, it drops back to its correct value with further iterations, testifying to the indefiniteness of the quadratic form in (10). The residual converges quadratically as expected for Newton's method. These examples show that if Newton's method is applied with a tolerance around \(10^{-3}\), then entropy growth is to be expected in these simulations. Note that entropy growth may occur even for the provably entropy dissipative discretization.

Figure 1: Entropy \(\eta(\mathbf{u}^{n})=\|\mathbf{u}^{n}\|^{2}/2\) and residual \(\|\mathbf{F}(\mathbf{U}^{(k)})\|\) after \(n=1\) time steps with \(k\) Newton iterations applied to Burgers’ equation with \(\Delta x=0.1\) and \(\Delta t=0.1\).

## 3 Strategies for recovering entropy conservation

There are several ways in which we can recover entropy conservation within Newton's method. The most obvious is to simply keep iterating until the entropy error is negligible. However, this process might be costly since we may be forced to iterate to residuals that are smaller than what is motivated by the accuracy of the discretization. The entropy error in (10) opens up several avenues for modifying Newton's method to recover entropy conservation in each iteration. In the following subsections we describe two strategies that are independent of the choice of tolerance. However, as we will see, they both converge slower than Newton's method. The two methods are, respectively:

* Method of Newton-type: By modifying the Jacobian, the entropy error is removed in its entirety.
* Inexact Newton: Entropy conservation is recovered using a line search.

Throughout this section, we confine our attention to the entropy conservative discretization that utilizes the implicit midpoint rule.

### Method of Newton-type

The entropy error (10) caused by Newton's method originates from the Jacobian of the spatial discretization. A simple remedy is thus to modify the Jacobian by removing the problematic terms, thereby obtaining a method of Newton-type. In other words, replacing the exact Jacobian \(\mathsf{F}^{\prime}\) in (8) with the approximation \[\tilde{\mathsf{F}}^{\prime}(\mathsf{U}^{(k)}):=\mathsf{I}+\Delta t_{n}\left(\operatorname{diag}(\mathsf{U}^{(k)})\mathsf{D}+\mathsf{D}\operatorname{diag}(\mathsf{U}^{(k)})\right)\] should result in a vanishing entropy error.
However, this will simultaneously reduce the convergence speed from quadratic to linear [22, Chapter 5]. Repeating the experiment in Section 2.4 reveals that this is indeed the case. Fig. 2 shows the entropy and residual. Clearly the entropy is conserved as desired. However, the convergence is no longer quadratic but linear. After 14 iterations, the residual is comparable to four iterations with Newton's method. It is thus questionable if anything has been gained.

### Inexact Newton

As a second alternative, entropy conservation can be recovered through a line search. We replace Newton's method as stated in (7) by \[\begin{split}\mathsf{F}^{\prime}(\mathsf{U}^{(k)})(\tilde{\mathsf{U}}-\mathsf{U}^{(k)})+\mathsf{F}(\mathsf{U}^{(k)})&=\mathbf{0},\\ \mathsf{U}^{(k+1)}&=\alpha_{k}\tilde{\mathsf{U}}+(1-\alpha_{k})\mathsf{U}^{(k)},\end{split} \tag{12}\] where \(\alpha_{k}\in[0,1]\) is a sequence of scalar parameters. This formulation is equivalent to the inexact Newton's method \[\begin{split}\mathsf{F}^{\prime}(\mathsf{U}^{(k)})\Delta\mathsf{U}+\mathsf{F}(\mathsf{U}^{(k)})&=(1-\alpha_{k})\mathsf{F}(\mathsf{U}^{(k)}),\\ \mathsf{U}^{(k+1)}&=\mathsf{U}^{(k)}+\Delta\mathsf{U}.\end{split}\] If the sequence \(\{\alpha_{k}\}\) is such that \((1-\alpha_{k})=\mathcal{O}(\|\mathsf{F}(\mathsf{U}^{(k)})\|)\), then quadratic convergence is retained under standard assumptions. Under the much less severe condition \(0\leq 1-\alpha_{k}<1\), convergence is generally linear [14]. The essential property used in the proof that the fully discrete scheme (4) is entropy conservative is that the stage vector \(\mathsf{U}\) satisfies \(\mathsf{U}^{\top}\mathsf{U}-\mathsf{U}^{\top}\mathsf{u}^{n}=0\). Proposition 1 shows that the same relation does not hold for the Newton iterations. To ensure entropy conservation for the inexact Newton method, \(\alpha_{k}\) must therefore be chosen such that \[(\mathsf{U}^{(k+1)})^{\top}\mathsf{U}^{(k+1)}-(\mathsf{U}^{(k+1)})^{\top}\mathsf{u}^{n}=0,\] where \(\mathsf{U}^{(k+1)}\) is given as in (12). This is a quadratic equation in \(\alpha_{k}\). Assuming that \(\mathsf{U}^{(k)}\) has correct entropy, the two roots are \(\alpha_{k}=0\) and \[\alpha_{k}=-\frac{\langle\Delta\mathsf{U},\mathsf{U}^{(k)}\rangle+\langle\tilde{\mathsf{U}},\mathsf{U}^{(k)}-\mathsf{u}^{n}\rangle}{\|\Delta\mathsf{U}\|^{2}}.\] Here, \(\langle\cdot,\cdot\rangle\) denotes the Euclidean inner product scaled by \(\Delta x\). The trivial root \(\alpha_{k}=0\) results in \(\mathsf{U}^{(k+1)}=\mathsf{U}^{(k)}\) and is of no use, hence only the second root needs to be considered. Repeating the previous experiment with inexact Newton leads to the results displayed in Fig. 3. Entropy is conserved as expected. The convergence is faster than it was with the method of Newton-type; however, it is still linear. This happens since \(\alpha_{k}\in[0,1]\) for each \(k\) but does not approach unity, as is necessary for quadratic convergence. Instead, its value appears to plateau around \(0.95\) in this particular experiment.

## 4 Theory of relaxation methods

The idea of a line search discussed in the previous section can be applied, not necessarily after each Newton iteration, but alternatively just once after a full time step computed using multiple Newton iterations. The resulting schemes are known as relaxation methods [23, 45, 43]. As we will see, relaxation solves the problem with entropy growth without impacting the convergence of Newton's method, or that of any other iterative method.
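For a quadratic entropy, both scalar line searches reduce to one-line formulas: the per-iteration search \(\alpha_{k}\) from (12) above, and the per-step relaxation parameter \(\gamma\) just introduced. A minimal Julia sketch of the two (our function names; the relaxation variant assumes the quadratic entropy \(\eta(\mathbf{u})=\frac{1}{2}\|\mathbf{u}\|^{2}\) and entropy conservation, i.e. \(\eta^{\mathrm{new}}=\eta(\mathbf{u}^{n})\); the \(\Delta x\) scaling of the inner products cancels in both ratios):

```julia
using LinearAlgebra

# Line search (12): pick the nonzero root αₖ so that
# U⁽ᵏ⁺¹⁾ = αₖ Ũ + (1-αₖ) U⁽ᵏ⁾ satisfies U'U - U'uⁿ = 0.
# Assumes U⁽ᵏ⁾ already has the correct entropy (Ut plays the role of Ũ).
function entropy_linesearch(Ut, Uk, un)
    ΔU = Ut .- Uk
    αk = -(dot(ΔU, Uk) + dot(Ut, Uk .- un)) / dot(ΔU, ΔU)
    return αk .* Ut .+ (1 - αk) .* Uk
end

# Relaxation for η(u) = ½‖u‖²: choose γ with η(uⁿ + γ(uⁿ⁺¹ - uⁿ)) = η(uⁿ),
# i.e. the nonzero root of a quadratic that is linear in γ after division.
function relax(un, unp1)
    d = unp1 .- un
    γ = -2 * dot(d, un) / dot(d, d)
    return un .+ γ .* d
end
```

For the implicit midpoint rule, where \(\mathbf{u}^{n+1}=2\mathbf{U}-\mathbf{u}^{n}\), the \(\gamma\) computed by `relax` reduces to the closed-form expression quoted later in Section 5.1.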
While a general theory of relaxation methods that includes multistep methods is available [43], we restrict the following discussion to one-step methods for simplicity. Thus, we consider a system of ODEs \(\mathbf{u}^{\prime}(t)=f\big{(}\mathbf{u}(t)\big{)}\) and a one-step method. We again assume that there is a (nonlinear and sufficiently smooth) functional \(\eta\) of interest, which we call an entropy. In this setting, the basic idea of relaxation methods is to perform a line search along the secant line connecting the new approximation \(\mathbf{u}^{n+1}\) produced by a given time integration method and the previous value \(\mathbf{u}^{n}\). This results in the relaxed value \[\mathbf{u}_{\gamma}^{n+1}=\mathbf{u}^{n}+\gamma(\mathbf{u}^{n+1}-\mathbf{u}^{n}),\] where \(\gamma\approx 1\) is the relaxation parameter chosen to enforce the desired conservation or dissipation of \(\eta\). This idea goes back to Sanz-Serna [48, 49] and Dekker & Verwer [13, pp. 265-266], who considered entropies \(\eta\) given by inner product norms. However, the first approaches resulted in an order reduction [9], which has been fixed in [23] for inner product norms and extended to general entropies in [45, 43]. Some further developments can be found in [42, 41, 2, 21, 26, 27]. To motivate the approach, we augment the ODE by the equations \(t^{\prime}=1\) and the entropy evolution given by the chain rule, leading to \[\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix}t\\ \mathbf{u}(t)\\ \eta\big{(}\mathbf{u}(t)\big{)}\end{pmatrix}=\begin{pmatrix}1\\ f\big{(}\mathbf{u}(t)\big{)}\\ \eta^{\prime}\big{(}\mathbf{u}(t)\big{)}f\big{(}\mathbf{u}(t)\big{)}\end{pmatrix}.\] Given a suitable estimate of the entropy \(\eta^{\mathrm{new}}=\eta\big{(}\mathbf{u}(t^{n+1})\big{)}+\mathcal{O}(\Delta t^{p+1})\), where \(p\) is the order of the time integration method, relaxation methods enforce \[\begin{pmatrix}t_{\gamma}^{n+1}\\ \mathbf{u}_{\gamma}^{n+1}\\ \eta\big{(}\mathbf{u}_{\gamma}^{n+1}\big{)}\end{pmatrix}=\begin{pmatrix}t^{n}\\ \mathbf{u}^{n}\\ \eta\big{(}\mathbf{u}^{n}\big{)}\end{pmatrix}+\gamma\Delta t\begin{pmatrix}1\\ \mathbf{u}^{n+1}-\mathbf{u}^{n}\\ \eta^{\mathrm{new}}-\eta\big{(}\mathbf{u}^{n}\big{)}\end{pmatrix} \tag{13}\] by inserting the second equation into the third one, resulting in a scalar equation for the relaxation parameter \(\gamma\). Next, the numerical solution is updated according to the second equation and the current simulation time is set to \(t_{\gamma}^{n+1}\). The last step is required to avoid order reduction. Finally, the time integration is continued using \((t_{\gamma}^{n+1},\mathbf{u}_{\gamma}^{n+1})\) instead of \((t^{n+1},\mathbf{u}^{n+1})\). If entropy conservation is to be enforced, the canonical choice of the entropy estimate is \(\eta^{\mathrm{new}}=\eta\big{(}\mathbf{u}^{n}\big{)}\). In case of entropy dissipation, \(\eta^{\mathrm{new}}\) can be estimated [23, 45, 43]: For an RK method \[\mathbf{y}^{i} =\mathbf{u}^{n}+\Delta t\sum_{j=1}^{s}a_{ij}\,f\big{(}\mathbf{y}^{j}\big{)},\qquad i\in\{1,\ldots,s\},\] \[\mathbf{u}^{n+1} =\mathbf{u}^{n}+\Delta t\sum_{i=1}^{s}b_{i}\,f\big{(}\mathbf{y}^{i}\big{)},\] with non-negative weights \(b_{i}\geq 0\), choose \[\eta^{\mathrm{new}}=\eta(\mathbf{u}^{n})+\Delta t\sum_{i=1}^{s}b_{i}(\eta^{\prime}f)(\mathbf{y}^{i}).\] This leads to an entropy dissipative scheme. The general theory of relaxation methods yields the following result [45, 43]: **Theorem 1**.: _Consider the system of ODEs \(\textbf{u}^{\prime}(t)=f\big{(}\textbf{u}(t)\big{)}\). 
Assume \(\textbf{u}^{n}=\textbf{u}(t^{n})\) and \(\textbf{u}^{n+1}=\textbf{u}(t^{n+1})+\mathcal{O}(\Delta t^{p+1})\) with \(p\geq 2\) and \(t^{n+1}=t^{n}+\Delta t\). If \(\eta^{\rm new}=\eta\big{(}\textbf{u}(t^{n+1})\big{)}+\mathcal{O}(\Delta t^{p+1})\), \(\Delta t>0\) is small enough, and the non-degeneracy condition_ \[\eta^{\prime}(\textbf{u}^{n+1})\frac{\textbf{u}^{n+1}-\textbf{u}^{n}}{\| \textbf{u}^{n+1}-\textbf{u}^{n}\|}=c\Delta t+\mathcal{O}(\Delta t^{2}),\] _is satisfied with \(c\neq 0\), then there is a unique \(\gamma=1+\mathcal{O}(\Delta t^{p-1})\) that satisfies the relaxation condition (13) and the resulting relaxation method is of order \(p\) such that_ \[\textbf{u}_{\gamma}^{n+1}=\textbf{u}(t_{\gamma}^{n+1})+\mathcal{O}(\Delta t^ {p+1}).\] The crucial observation for our application in this article is that there is no further assumption on the preliminary time step update \(\textbf{u}^{n+1}\) other than the accuracy constraint \(\textbf{u}^{n+1}=\textbf{u}(t^{n+1})+\mathcal{O}(\Delta t^{p+1})\) in Theorem 1. In particular, the results can be applied to implicit Runge-Kutta methods where the stage equations are solved inexactly with Newton's method (or another iterative method) up to some tolerance as long as the tolerance of the nonlinear solver does not affect the accuracy of the time integration method. A perturbation analysis shows that an approximation \(\textbf{u}^{n+1}+\boldsymbol{\varepsilon}\) results in a perturbed relaxation parameter that can be estimated. In general, we need that the size of the perturbation \(\boldsymbol{\varepsilon}\) is at most of the same order as the local error of the baseline time integration method, i.e. \(\mathcal{O}(\Delta t^{p+1})\). This is precisely the situation desired for iterative methods in practice. ## 5 Korteweg-de Vries equation We have motivated our study by looking at the influence of inexact solutions obtained by Newton's method applied to provably entropy conservative and entropy dissipative time discretization methods for Burgers' equation. Next, we will look at a more complicated equation where i) implicit methods are required due to stiffness constraints and ii) entropy conservative methods have a significant advantage. Indeed, the classical Korteweg-de Vries (KdV) equation \[u_{t}+6uu_{x}+u_{xxx}=0,\quad u(x,t=0)=u_{0}. \tag{14}\] is stiff due to the linear dispersive term \(u_{xxx}\). We consider solutions of the form \[u(x,t)=\frac{c}{2}\operatorname{sech}^{2}\left(\frac{\sqrt{c}}{2}(x-ct)\right). \tag{15}\] In the context of the KdV equation, (15) describes a soliton propagating with speed \(c\). The behavior of numerical time integration methods applied to soliton solutions has been analyzed in [12]: If the time integration method conserves the linear invariant \(\int u\) and the quadratic invariant \(\int u^{2}\), the error of the numerical solution has a leading-order term that grows linearly with time. Otherwise, the error grows quadratically in time at leading order. De Frutos and Sanz-Serna [12] verified this numerically using the implicit midpoint rule and an entropy non-conservative third order SDIRK method. They used essentially exact solutions of the stage equations. Under such conditions, relaxation has been demonstrated to yield the same reduced error growth when applied to the SDIRK method [42]. Here, we extend the investigations and focus on the influence of non-negligible tolerances of the nonlinear solver. 
In other words, we approximate the solutions to the stage equations with a tolerance that matches that of the time discretization.

### Numerical methods

We consider the soliton solution (15) of the KdV equation (14) in the periodic domain \(x\in(-10,10]\). The spatial discretization is the same as for Burgers' equation, i.e., five-point centered (i.e. skew-symmetric) differences with a spatial increment \(\Delta x=0.1\). In time, the implicit midpoint rule is used with time step \(\Delta t=0.05\). The final time is set to \(t=1000\). The solution to the nonlinear system arising within each time step is approximated with Newton-GMRES. Here, the absolute tolerance is set to zero and the relative tolerance is chosen as \(tol\in\{10^{-3},10^{-4},10^{-5}\}\). Applying relaxation to preserve a quadratic invariant yields a linear equation for the relaxation parameter that we solve analytically. For the implicit midpoint rule, it is given by \[\gamma=\frac{\|\mathbf{u}^{n}\|^{2}-\langle\mathbf{U},\mathbf{u}^{n}\rangle}{\|\mathbf{U}-\mathbf{u}^{n}\|^{2}}.\] The tolerances for GMRES are chosen using the procedure suggested by Eisenstat and Walker [14], which is designed to recover quadratic convergence without over-iterating the linear solver in the early Newton iterations. This procedure requires the choice of two scalar parameters, \(\eta_{\max}\) and \(\gamma\) (not to be confused with the relaxation parameter). Here, we follow the implementation described in [22, Chapter 6.3] with the parameter choices \((\gamma,\eta_{\max})=(0.9,0.9)\).

### Numerical results

Fig. 4 shows the discrete \(L^{2}\) error and the error in the quadratic invariant \(\eta(\mathbf{u})=\frac{1}{2}\|\mathbf{u}\|^{2}\) for four cases: The unrelaxed solver with relative tolerance chosen from the set \(\{10^{-3},10^{-4},10^{-5}\}\) and the relaxed solver with relative tolerance \(10^{-3}\). The absolute tolerance is set to zero. The discrete \(L^{2}\) error after the first time step is roughly \(10^{-3}\). Newton's method with a relative tolerance of \(10^{-3}\) results in a superlinear error growth in time. The error eventually plateaus due to the fact that the numerical solution drifts completely out of phase with the true solution. It subsequently drifts in and out of phase, which gives rise to the oscillating nature of the error. Subsequent growth is also a consequence of the soliton losing its initial shape. Shrinking the tolerance to \(10^{-4}\) reduces the error growth rate significantly. However, the growth is still superlinear and reaches the same plateau seen with the larger tolerance. Yet another tolerance reduction to \(10^{-5}\) leads to considerably improved results. The error grows linearly throughout the simulation time. However, the quadratic invariant is still not conserved. By instead applying relaxation after every time step and using the original tolerance \(10^{-3}\), the error growth rate is reduced even further. Additionally, the entropy is conserved up to machine precision. Thus, entropy conservation yields a better numerical solution even with a two orders of magnitude larger tolerance compared to the non-conservative scheme. Figure 5 shows the results using the same space discretization but the fourth-order, three-stage Lobatto IIIC method in time with \(\Delta t=0.1\). Since this method is \(B\)-stable, it dissipates the quadratic entropy when the stage equations are solved exactly. Here, we again apply Newton-GMRES with different tolerances.
In this case, we set the absolute and relative tolerances to the same value, again chosen from the set \(\{10^{-3},10^{-4},10^{-5}\}\). The \(L^{2}\) error after one time step is approximately \(10^{-3}\). The resulting scheme is anti-dissipative with tolerances \(10^{-3}\) and \(10^{-4}\). The error displays similar characteristics to that of the midpoint rule with an initially superlinear growth. With the tolerance \(10^{-5}\) the entropy error is negative. The \(L^{2}\) error shows a hump, suggesting that the numerical solution first drifts out of phase in one direction, then turns around and drifts the other way. Eventually the error starts to grow superlinearly as seen for the other tolerances. As for the midpoint rule, relaxation leads to entropy conservation, a slower error growth and an overall better numerical solution. Again, this holds even when the tolerance is two orders of magnitude larger than the non-conservative scheme.

Figure 4: Error of numerical solutions and entropy error of the KdV equation (14) with periodic boundary conditions using the implicit midpoint rule with time step size \(\Delta t=0.05\). The implicit equations are solved with Newton-GMRES and different relative tolerances.

## 6 Benjamin-Bona-Mahony equation

An alternative to the stiff dispersive term of the KdV equation has been proposed by Benjamin, Bona, and Mahony (BBM) in [3], leading to the BBM equation \[u_{t}+u_{x}+uu_{x}-u_{txx}=0. \tag{16}\] Due to the mixed-derivative term \(-u_{txx}\), the system is not stiff, but a linear elliptic equation must be solved to evaluate the time derivative \(u_{t}\). Assuming periodic boundary conditions, the functionals \[J_{1}^{\text{BBM}}(u) =\int u,\] \[J_{2}^{\text{BBM}}(u) =\frac{1}{2}\int(u^{2}+(u_{x})^{2})=\frac{1}{2}\int u(\operatorname{I}-\partial_{x}^{2})u, \tag{17}\] \[J_{3}^{\text{BBM}}(u) =\int(u+1)^{3},\] are invariants of solutions to the BBM equation [37]. The error growth under time discretization has been analyzed in [1]: For methods conserving the linear invariant and one of the nonlinear invariants, the error of solitary waves grows linearly in time while general methods have a quadratically growing error (both at leading order). We use this example to investigate the behavior of the methods also for non-quadratic entropies.

### Numerical methods

We introduce numerical methods conserving important invariants of the BBM equation (16) with periodic boundary conditions. To broaden the scope of the following derivations, we assume that a discrete inner product is given by a diagonal, symmetric, and positive-definite matrix \(\boldsymbol{\mathsf{M}}\). For the classical finite-difference methods described above, \(\boldsymbol{\mathsf{M}}=\Delta x\boldsymbol{\mathsf{I}}\). Next, we need derivative operators \(\boldsymbol{\mathsf{D}}_{1,2}\) approximating the first and second derivative operators. We require compatibility with integration by parts, i.e., we need that \(\boldsymbol{\mathsf{D}}_{1}\) is skew-symmetric with respect to \(\boldsymbol{\mathsf{M}}\) and that \(\boldsymbol{\mathsf{D}}_{2}\) is symmetric and negative semidefinite with respect to \(\boldsymbol{\mathsf{M}}\). This is satisfied by classical central finite differences and Fourier collocation methods but also by appropriate continuous and discontinuous Galerkin methods; see e.g. [44].
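In this discrete setting with mass matrix \(\boldsymbol{\mathsf{M}}=\Delta x\boldsymbol{\mathsf{I}}\), the three invariants (17) take only a few lines of Julia. The sketch below uses our own names; `D2` stands for a periodic, symmetric second-derivative matrix with the properties required above:

```julia
using LinearAlgebra

# Discrete BBM invariants from (17) with M = Δx I. Sketch only.
J1(u, dx)     = dx * sum(u)                     # ∫ u
J2(u, dx, D2) = dx * dot(u, u .- D2 * u) / 2    # ½ ∫ u (I - ∂ₓ²) u
J3(u, dx)     = dx * sum((1 .+ u) .^ 3)         # ∫ (u + 1)³
```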
Figure 5: Error of numerical solutions and entropy error of the KdV equation (14) with periodic boundary conditions using the fourth-order, three-stage Lobatto IIIC method with time step size \(\Delta t=0.1\). The implicit equations are solved with Newton-GMRES and different absolute and relative tolerances.

For brevity, let \(\mathbf{u}^{2}=\mathrm{diag}(\mathbf{u})\mathbf{u}\) denote the elementwise square of \(\mathbf{u}\). The convention can similarly be extended to other elementwise powers as necessary. Using such periodic derivative operators, the semidiscretization \[\mathbf{u}_{t}+(\mathbf{I}-\mathbf{D}_{2})^{-1}\left(\frac{1}{3}\mathbf{D}_{1}\mathbf{u}^{2}+\frac{1}{3}\mathrm{diag}(\mathbf{u})\mathbf{D}_{1}\mathbf{u}+\mathbf{D}_{1}\mathbf{u}\right)=\mathbf{0} \tag{18}\] conserves both the linear and the quadratic invariant [44], which are discretely represented as \[J_{1}^{\mathrm{BBM}}(\mathbf{u})=\mathbf{1}^{\top}\mathbf{M}\mathbf{u},\qquad J_{2}^{\mathrm{BBM}}(\mathbf{u})=\mathbf{u}^{\top}(\mathbf{I}-\mathbf{D}_{2})\mathbf{u}.\] All symplectic Runge-Kutta methods, such as the implicit midpoint rule, conserve these linear and quadratic invariants [19]. Next, we construct a method conserving the cubic invariant. **Theorem 2**.: _The semidiscretization_ \[\mathbf{u}_{t}+(\mathbf{I}-\mathbf{D}_{2})^{-1}\mathbf{D}_{1}\left(\frac{1}{2}\mathbf{u}^{2}+\mathbf{u}\right)=\mathbf{0} \tag{19}\] _conserves the linear invariant \(J_{1}^{\mathrm{BBM}}(\mathbf{u})\) and the cubic invariant_ \[J_{3}^{\mathrm{BBM}}(\mathbf{u})=\mathbf{1}^{\top}\mathbf{M}(\mathbf{1}+\mathbf{u})^{3}\] _for commuting periodic derivative operators \(\mathbf{D}_{1}\), \(\mathbf{D}_{2}\)._ Proof.: Conservation of the linear invariant follows from Lemma 2.1 and Lemma 2.2 of [44] since \[\mathbf{1}^{\top}\mathbf{M}\mathbf{u}_{t}=-\mathbf{1}^{\top}\mathbf{M}(\mathbf{I}-\mathbf{D}_{2})^{-1}\mathbf{D}_{1}\left(\frac{1}{2}\mathbf{u}^{2}+\mathbf{u}\right)=-\mathbf{1}^{\top}\mathbf{D}_{1}\left(\frac{1}{2}\mathbf{u}^{2}+\mathbf{u}\right)=0.\] Given conservation of the linear invariant, conservation of the cubic invariant is equivalent to conservation of the Hamiltonian \[\mathcal{H}(\mathbf{u})=\mathbf{1}^{\top}\mathbf{M}\left(\frac{1}{6}\mathbf{u}^{3}+\frac{1}{2}\mathbf{u}^{2}\right).\] This Hamiltonian is conserved, since \(\mathbf{M}(\mathbf{I}-\mathbf{D}_{2})^{-1}\mathbf{D}_{1}\) is skew-symmetric [44, Lemma 2.3] and \[\mathcal{H}_{t}(\mathbf{u})=\mathcal{H}^{\prime}(\mathbf{u})\mathbf{u}_{t}=-\left(\frac{1}{2}\mathbf{u}^{2}+\mathbf{u}\right)^{\top}\mathbf{M}(\mathbf{I}-\mathbf{D}_{2})^{-1}\mathbf{D}_{1}\left(\frac{1}{2}\mathbf{u}^{2}+\mathbf{u}\right)=0.\] The average vector field (AVF) method [34] for an ODE \(\mathbf{u}^{\prime}(t)=f\big{(}\mathbf{u}(t)\big{)}\) is given by \[\mathbf{u}^{n+1}=\mathbf{u}^{n}+\Delta t\int_{0}^{1}f\big{(}s\mathbf{u}^{n+1}+(1-s)\mathbf{u}^{n}\big{)}\mathrm{d}s.\] It conserves the Hamiltonian \(\mathcal{H}\) of Hamiltonian systems \(\mathbf{u}^{\prime}(t)=S\,\mathcal{H}^{\prime}\big{(}\mathbf{u}(t)\big{)}\) with a constant (in time) skew-symmetric operator \(S\) [38]. For quadratic Hamiltonians, it is equivalent to the implicit midpoint rule; for (up to) quartic Hamiltonians, it is equivalent to \[\mathbf{u}^{n+1}=\mathbf{u}^{n}+\frac{\Delta t}{6}\left(f(\mathbf{u}^{n})+4f\bigg{(}\frac{\mathbf{u}^{n+1}+\mathbf{u}^{n}}{2}\bigg{)}+f(\mathbf{u}^{n+1})\right); \tag{20}\] see [10]. 
Thus, we obtain **Proposition 2**.: _The time integration method (20) applied to the semidiscretization (19) of the BBM equation with periodic boundary conditions conserves the linear and cubic invariants under the conditions given in Theorem 2._

### Numerical results

We consider the traveling wave solution \[u(t,x)=A\,\mathrm{sech}\big{(}K(x-ct)\big{)}^{2},\quad A=3(c-1),\quad K=\frac{1}{2}\sqrt{1-1/c}, \tag{21}\] with speed \(c=1.2\) in the periodic domain \((-90,90]\). We apply the semidiscretizations described above with Fourier collocation methods [24, Chapter 4] using \(2^{6}\) nodes. The time integration methods use fixed time steps \(\Delta t=0.25\). The nonlinear systems are solved with the same Newton-GMRES method as for the KdV equation. The relative tolerances are, as before, chosen as \(tol\in\{10^{-3},10^{-4}\}\). Applying relaxation to preserve a quadratic invariant yields a linear equation for the relaxation parameter that we solve analytically. For a cubic invariant, the relaxation parameter is determined by a quadratic equation that we solve analytically; we always choose the solution closer to unity. The results of these numerical experiments are shown in Fig. 6. The discrete \(L^{2}\) error after the first time step is of the order \(10^{-3}\). Using a relative tolerance of \(10^{-3}\) for Newton-GMRES results in a discrete \(L^{2}\) error that grows superlinearly in time. Reducing the relative tolerance of Newton's method to \(10^{-4}\) reduces the error growth rate to linear, resulting in significantly better results for long-time simulations. Applying relaxation after each time step with a relative tolerance of \(10^{-3}\) for Newton's method also yields a linear error growth rate and even slightly smaller discrete \(L^{2}\) errors for long-time simulations.

Figure 6: Discrete \(L^{2}\) error of numerical methods for the BBM equation (16) with periodic boundary conditions. The implicit equations are solved with Newton-GMRES and different relative tolerances.

## 7 Conclusion

We have analyzed entropy properties of iterative solvers in the context of time integration methods for nonlinear conservation laws. Continuing recent work on linear invariants in [6, 29], we have focused on the conservation of nonlinear functionals. In particular, we have considered combinations of space and implicit time discretization that result in entropy conservative and entropy dissipative schemes when the arising equation systems were solved exactly. In practice, the iterative solver is terminated once a tolerance is reached. This tolerance is chosen such that the iteration error is smaller than the time integration error. We have demonstrated that, in this situation, Newton's method can result in a qualitatively wrong behavior of the entropy, both for entropy conservative and dissipative schemes. Based on an analysis for Burgers' equation, we have explored several possible entropy fixes for Newton's method. Of these, an idea stemming from the recently developed relaxation methods is the most performant. These methods are designed as small modifications of time integration schemes that are able to preserve the correct evolution of nonlinear functionals. Here we have shown that as long as the iteration error is small enough in the sense described above, they can also be used within implicit time integrators with inexact solves.
We have demonstrated that Newton's method with inexact linear solves and reasonable tolerances combines well with the relaxation approach, in particular for nonlinear dispersive wave equations. The numerical results show that, for the problems considered here, entropy conservation leads to smaller errors than non-conservative methods, even when the tolerance of the iterative method is an order of magnitude larger.

## Acknowledgments

We thank Gregor Gassner for stimulating discussions about this research topic and comments on an early draft of the manuscript.

## Statements and Declarations

### Funding

Viktor Linders was partially funded by The Royal Physiographic Society in Lund. Hendrik Ranocha was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project number 513301895) and the Daimler und Benz Stiftung (Daimler and Benz foundation, project number 32-10/22).

### Code availability

We have set up a reproducibility repository [32] for this article, containing all Julia source code required to fully reproduce the numerical experiments discussed in this article.

## Appendix A Entropy analysis for Lobatto IIIC

The purpose of this appendix is to provide details of the entropic behavior of the Lobatto IIIC method used in the numerical experiments in Section 2. The analysis will be kept general enough to encompass methods of arbitrary order of accuracy. It utilizes the Summation-By-Parts (SBP) property [36, 7] satisfied by the RK method. The same argument can be made for any RK method associated with the SBP framework such as Radau IA and IIA [39]. See [31, 50] for details about the properties and implementation of such methods.

### The fully discrete scheme

For a system of ordinary differential equations \(\mathbf{u}_{t}=\mathbf{f}(\mathbf{u})\), the Runge-Kutta stage equations and update are given by \[\begin{split}\mathbf{y}^{i}&=\mathbf{u}^{n}+\Delta t_{n}\sum_{j=1}^{s}a_{ij}\,\mathbf{f}\big{(}\mathbf{y}^{j}\big{)},\qquad i\in\{1,\ldots,s\},\\ \mathbf{u}^{n+1}&=\mathbf{u}^{n}+\Delta t_{n}\sum_{i=1}^{s}b_{i}\,\mathbf{f}\big{(}\mathbf{y}^{i}\big{)}.\end{split} \tag{22}\] In the experiment in Section 2, the 3-stage Lobatto IIIC method is used, which is associated with the Butcher tableau \[\begin{array}{c|ccc}0&1/6&-1/3&1/6\\ 1/2&1/6&5/12&-1/12\\ 1&1/6&2/3&1/6\\ \hline&1/6&2/3&1/6\\ \end{array}.\] It will be convenient for our purposes to express (22) in a vector format as \[\begin{split}\mathbf{F}(\mathbf{U}):&=\mathbf{U}-\mathbf{1}\otimes\mathbf{u}^{n}+\Delta t_{n}(\boldsymbol{A}\otimes\mathbf{I})\mathbf{f}(\mathbf{U})=\mathbf{0},\\ \mathbf{u}^{n+1}&=\mathbf{y}^{s}.\end{split} \tag{23}\] Here, \(\boldsymbol{A}=\{a_{ij}\}_{i,j=1}^{s}\) is the Butcher coefficient matrix, \(\mathbf{1}\) is the \(s\)-element vector of all ones, \(\mathbf{U}\) is the (column) vector containing the stacked stage vectors \(\mathbf{y}^{1},\ldots,\mathbf{y}^{s}\) and \(\otimes\) denotes the Kronecker product. The update \(\mathbf{u}^{n+1}\) can be computed as in (23) due to the fact that the final row of \(\boldsymbol{A}\) is identical to the vector \(\boldsymbol{b}^{\top}\). This property can be generalized by considering methods for which there is a vector \(\mathbf{v}\in\mathbb{R}^{s}\) such that \(\boldsymbol{A}^{\top}\boldsymbol{v}=\boldsymbol{b}\), in which case \(\mathbf{u}^{n+1}=(\boldsymbol{v}^{\top}\otimes\mathbf{I})\mathbf{U}\). For the 3-stage Lobatto IIIC method, we have \(\boldsymbol{v}=(0,0,1)^{\top}\). 
We now revisit the spatial discretization of Burgers' equation in (3) by setting \(\mathbf{f}(\mathbf{u})=-2(\mathbf{D}\mathrm{diag}(\mathbf{u})\mathbf{u}+\mathrm{diag}(\mathbf{u})\mathbf{D}\mathbf{u})\). The entropy behavior of the fully discrete scheme can be analyzed with the aid of the SBP property, which can be expressed as \[\boldsymbol{B}\boldsymbol{A}^{-1}+\boldsymbol{A}^{-\top}\boldsymbol{B}=\mathrm{diag}(1,0,\ldots,0,1)=\boldsymbol{e}_{1}\boldsymbol{e}_{1}^{\top}+\boldsymbol{e}_{s}\boldsymbol{e}_{s}^{\top}. \tag{24}\] Here, \(\boldsymbol{B}=\mathrm{diag}(\boldsymbol{b})\) and \(\boldsymbol{e}_{j}\) denotes the \(j\)th column of the \(s\times s\) identity matrix. In this context, \(\boldsymbol{A}^{-1}\) can be viewed as a difference operator adjoined with an initial condition, and \(\boldsymbol{B}\) as a quadrature rule [30]. The SBP property (24) is thus a discrete version of integration by parts. We begin by left-multiplying the stage equations in (23) by \(\Delta x\mathbf{U}^{\top}(\boldsymbol{B}\boldsymbol{A}^{-1}\otimes\mathbf{I})\). This is a well-defined operation since \(\boldsymbol{A}\) is invertible for any \(s\) [28]. By a derivation identical to (6), it holds that \((\mathbf{y}^{i})^{\top}\mathbf{f}(\mathbf{y}^{i})=0\) for each \(i=1,\ldots,s\). Since \(\boldsymbol{B}\) is diagonal it follows that \(\Delta x\mathbf{U}^{\top}(\boldsymbol{B}\otimes\mathbf{I})\mathbf{f}(\mathbf{U})=0\), and consequently \[\Delta x\mathbf{U}^{\top}(\boldsymbol{B}\boldsymbol{A}^{-1}\otimes\mathbf{I})\mathbf{U}=\Delta x\mathbf{U}^{\top}(\boldsymbol{B}\boldsymbol{A}^{-1}\mathbf{1}\otimes\mathbf{u}^{n}). \tag{25}\] The left-hand side of (25) is a quadratic form and is therefore equal to its symmetric part. The right-hand side is simplified by the relation \(\boldsymbol{B}\boldsymbol{A}^{-1}\boldsymbol{1}=\boldsymbol{e}_{1}\); see [31, Lemma 3]. The identity (25) therefore reduces to \[\Delta x\boldsymbol{\mathsf{U}}^{\top}\left(\frac{\boldsymbol{B}\boldsymbol{A}^{-1}+\boldsymbol{A}^{-\top}\boldsymbol{B}}{2}\otimes\boldsymbol{\mathsf{I}}\right)\boldsymbol{\mathsf{U}}=\Delta x\boldsymbol{\mathsf{U}}^{\top}(\boldsymbol{e}_{1}\otimes\boldsymbol{\mathsf{u}}^{n}).\] Simplification using the SBP property (24) allows us to express this in terms of the first and last stages as \[\frac{1}{2}\|\boldsymbol{\mathsf{y}}^{1}\|^{2}+\frac{1}{2}\|\boldsymbol{\mathsf{y}}^{s}\|^{2}=\Delta x(\boldsymbol{\mathsf{y}}^{1})^{\top}\boldsymbol{\mathsf{u}}^{n}.\] By adding and subtracting \(\frac{1}{2}\|\boldsymbol{\mathsf{u}}^{n}\|^{2}\) from the right-hand side, the entropy, given by \(\eta(\boldsymbol{\mathsf{u}}^{n+1})=\frac{1}{2}\|\boldsymbol{\mathsf{y}}^{s}\|^{2}\), can be expressed as \[\eta(\boldsymbol{\mathsf{u}}^{n+1}) =\frac{1}{2}\|\boldsymbol{\mathsf{u}}^{n}\|^{2}-\frac{1}{2}\|\boldsymbol{\mathsf{y}}^{1}\|^{2}+\Delta x(\boldsymbol{\mathsf{y}}^{1})^{\top}\boldsymbol{\mathsf{u}}^{n}-\frac{1}{2}\|\boldsymbol{\mathsf{u}}^{n}\|^{2}\] \[=\eta(\boldsymbol{\mathsf{u}}^{n})-\eta(\boldsymbol{\mathsf{y}}^{1}-\boldsymbol{\mathsf{u}}^{n}).\] Lobatto IIIC therefore dissipates entropy. This result is independent of the number of stages and holds more generally for RK methods with the SBP property. A slight generalization (to the so-called gSBP property [15]) is necessary for the analysis to hold for some RK methods such as the Radau family. 
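Both the SBP property (24) and the relation \(\boldsymbol{B}\boldsymbol{A}^{-1}\boldsymbol{1}=\boldsymbol{e}_{1}\) used above can be confirmed directly from the 3-stage Lobatto IIIC tableau; a short Julia check (ours):

```julia
using LinearAlgebra

# 3-stage Lobatto IIIC Butcher coefficients from the tableau above.
A = [1/6 -1/3 1/6; 1/6 5/12 -1/12; 1/6 2/3 1/6]
b = [1/6, 2/3, 1/6]
B = Diagonal(b)

@show B / A + A' \ B ≈ Diagonal([1.0, 0.0, 1.0])   # SBP property (24)
@show (B / A) * ones(3) ≈ [1.0, 0.0, 0.0]          # B A⁻¹ 1 = e₁
```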
### Newton's method

The Jacobian \(\boldsymbol{\mathsf{F}}^{\prime}\) is explicitly given by \[\boldsymbol{\mathsf{F}}^{\prime}(\boldsymbol{\mathsf{U}}^{(k)})=\boldsymbol{\mathsf{I}}+\Delta t_{n}(\boldsymbol{A}\otimes\boldsymbol{\mathsf{I}})\boldsymbol{\mathsf{f}}^{\prime}(\boldsymbol{\mathsf{U}}^{(k)}), \tag{26}\] where \(\boldsymbol{\mathsf{f}}^{\prime}\) is the Jacobian of the spatial discretization. The form of \(\boldsymbol{\mathsf{f}}^{\prime}\) is identical to that seen for the midpoint rule, except that it is now repeated for each stage \(\boldsymbol{\mathsf{y}}^{i}\) in a block-diagonal fashion. This leads to an equation for \(\boldsymbol{\mathsf{U}}^{(k+1)}\) of the form \[\boldsymbol{\mathsf{U}}^{(k+1)}-\boldsymbol{1}\otimes\boldsymbol{\mathsf{u}}^{n}+\Delta t_{n}(\boldsymbol{A}\otimes\boldsymbol{\mathsf{I}})\boldsymbol{\mathsf{f}}(\boldsymbol{\mathsf{U}}^{(k)})+2\Delta t_{n}(\boldsymbol{A}\otimes\boldsymbol{\mathsf{I}})\tilde{\boldsymbol{\mathsf{M}}}\Delta\boldsymbol{\mathsf{U}}=\boldsymbol{0}. \tag{27}\] The matrix \(\tilde{\boldsymbol{\mathsf{M}}}\) is block-diagonal with each block identical to the case for the midpoint rule, but evaluated at the individual stages. As before, (27) represents a linearization of the fully discrete scheme around the iterate \(\boldsymbol{\mathsf{U}}^{(k)}\), perturbed by a term arising from the spatial Jacobian. By an analysis completely analogous to that of the previous subsection, the entropy relation evaluates to \[\eta(\boldsymbol{\mathsf{u}}^{n+1})=\eta(\boldsymbol{\mathsf{u}}^{n})-\eta(\boldsymbol{\mathsf{y}}^{1}-\boldsymbol{\mathsf{u}}^{n})-2\Delta t_{n}\Delta x\Delta\boldsymbol{\mathsf{U}}^{\top}\boldsymbol{\mathsf{M}}\Delta\boldsymbol{\mathsf{U}},\] where \(\boldsymbol{\mathsf{M}}=(\boldsymbol{B}\otimes\boldsymbol{\mathsf{D}})\text{diag}(\boldsymbol{\mathsf{U}}^{(k)})+\text{diag}((\boldsymbol{B}\otimes\boldsymbol{\mathsf{D}})\boldsymbol{\mathsf{U}}^{(k)})\). The entropy error induced by Newton's method is thus of the same form as for the midpoint rule (for which \(\boldsymbol{B}=1\)), but includes all \(s\) stages of the Runge-Kutta scheme.
2310.13768
PACE: Human and Camera Motion Estimation from in-the-wild Videos
We present a method to estimate human motion in a global scene from moving cameras. This is a highly challenging task due to the coupling of human and camera motions in the video. To address this problem, we propose a joint optimization framework that disentangles human and camera motions using both foreground human motion priors and background scene features. Unlike existing methods that use SLAM as initialization, we propose to tightly integrate SLAM and human motion priors in an optimization that is inspired by bundle adjustment. Specifically, we optimize human and camera motions to match both the observed human pose and scene features. This design combines the strengths of SLAM and motion priors, which leads to significant improvements in human and camera motion estimation. We additionally introduce a motion prior that is suitable for batch optimization, making our approach significantly more efficient than existing approaches. Finally, we propose a novel synthetic dataset that enables evaluating camera motion in addition to human motion from dynamic videos. Experiments on the synthetic and real-world RICH datasets demonstrate that our approach substantially outperforms prior art in recovering both human and camera motions.
Muhammed Kocabas, Ye Yuan, Pavlo Molchanov, Yunrong Guo, Michael J. Black, Otmar Hilliges, Jan Kautz, Umar Iqbal
2023-10-20T19:04:14Z
http://arxiv.org/abs/2310.13768v1
# PACE: Human and Camera Motion Estimation from in-the-wild Videos ###### Abstract We present a method to estimate human motion in a global scene from moving cameras. This is a highly challenging task due to the coupling of human and camera motions in the video. To address this problem, we propose a joint optimization framework that disentangles human and camera motions using both foreground human motion priors and background scene features. Unlike existing methods that use SLAM as initialization, we propose to tightly integrate SLAM and human motion priors in an optimization that is inspired by bundle adjustment. Specifically, we optimize human and camera motions to match both the observed human pose and scene features. This design combines the strengths of SLAM and motion priors, which leads to significant improvements in human and camera motion estimation. We additionally introduce a motion prior that is suitable for batch optimization, making our approach significantly more efficient than existing approaches. Finally, we propose a novel synthetic dataset that enables evaluating camera motion in addition to human motion from dynamic videos. Experiments on the synthetic and real-world RICH datasets demonstrate that our approach substantially outperforms prior art in recovering both human and camera motions. ## 1 Introduction Jointly estimating global human and camera motion from dynamic RGB videos is an important problem with numerous applications in areas such as robotics, sports and mixed reality. However, it is a very challenging task because the observed human and camera motions in the video are entangled. Estimating human motion by itself from videos is highly under-constrained since subject and camera motion are interchangeable. Analogously, camera motion estimation is more challenging in dynamic scenes due to spurious correspondences. Finally, pure monocular approaches can only estimate camera trajectories up to scale. There are only a few works that address the problem of global pose estimation [55, 106, 107]. These methods leverage the insight that the global human root trajectory is correlated with the local body movements; _e.g_., observing a running motion is indicative of forward motion. Hence, they suggest that global root trajectories can be estimated by exploiting learned motion priors [107] or by enforcing physics-based constraints on the reconstructed human motion [55, 106]. While this idea can help to estimate global human trajectories, motion priors or physical constraints are not enough to fully resolve the ambiguity in the mapping from local motion to global trajectories, especially under root rotations. Others utilize SLAM methods (_e.g_., COLMAP) to estimate camera poses [63, 105], then keep the camera poses fixed and estimate the global scale. However, in-the-wild videos often contain moving objects which can degrade the camera pose localization and subsequently affect the human motion estimates. In this paper, we propose a novel approach, called PACE (Person And Camera Estimation), to tackle the above problems. We formulate the problem as a global optimization and jointly optimize human and camera motions, leveraging a bundle adjustment objective to match both human pose and background scene features. In this way, the SLAM algorithm uses mostly static scene features, that do not correspond to human motion. 
Simultaneously, the human motion prior helps correct inaccurate camera trajectories that are incompatible with the local body movements, and informs about the global scale based on human motion statistics. We show that this formulation provides robustness to inaccurate initial human or camera motion estimates. A further contribution lies in the human motion prior itself. Commonly used human priors _e.g_., HuMoR [86] are typically autoregressive and become prohibitively slow when incorporated in a per-frame optimization, in particular for long motion sequences. In this work, we show that neural motion field (NeMF [30]) can be used to design a parallel motion prior that drastically improves computational efficiency. We divide the entire sequence into overlapping clips and maximize the likelihood of the human motion under the prior. This results in a significantly more efficient implementation without compromising reconstruction quality. Notably, the parallel motion prior allows the runtime of PACE to grow sub-linearly w.r.t. the sequence length in contrast to the linear rate in prior work. Since it is difficult to obtain ground-truth human and camera poses for in-the-wild videos, we also propose a new synthetic dataset for benchmarking human and camera motion estimation from dynamic videos called the Human and Camera Motion (HCM) dataset. It is the first dataset that provides ground-truth human and camera motion information for this task. We will make the dataset publicly available to facilitate research in this direction. We evaluate PACE on two datasets: the newly proposed synthetic HCM dataset and the RICH dataset [34], which contains a moving camera with ground truth 3D human pose and shape. Results show that our method substantially outperforms state-of-the-art (SOTA) approaches in accurately recovering human motions from dynamic cameras. Notably, our method also significantly improves camera motion estimation over SOTA SLAM algorithms for this task, which demonstrates the advantage of our global optimization framework. Additionally, we conduct extensive ablation studies to validate the impact of various design choices on performance. In summary, our contributions are as follows: * We present a novel approach for precise global human and camera motion estimation from dynamic cameras, which tightly integrates human motion priors and SLAM into a unified optimization framework that leverages both human pose and scene information. * We propose a parallel motion prior optimization scheme, which significantly improves efficiency without sacrificing accuracy, and allows the runtime to grow sub-linearly w.r.t. the sequence length. * We introduce HCM, a synthetic dataset for benchmarking global human and camera motion estimation. * Our method outperforms the SOTA methods significantly in recovering both human and camera motions, achieving 52% and 74% improvements respectively, which fully demonstrate the synergy of our unified approach. ## 2 Related Work **Camera-Space Human Pose Estimation.** Due to the difficulty in monocular depth estimation, most existing methods estimate human poses in the coordinate frame centered around the pelvis of the human body [3, 7, 10, 11, 12, 41, 43, 44, 48, 50, 51, 52, 53, 54, 55, 56, 57, 58, 80, 86, 88, 91, 92, 94, 100, 103, 113, 117, 122, 128]. These methods adopt an orthographic camera projection model and ignore the absolute 3D translation of the person with respect to the camera. 
To overcome this limitation, recent methods estimate human meshes in the camera coordinates [37, 40, 58, 63, 82, 85, 89, 101, 114, 116, 118]. Some methods use an optimization framework to recover the absolute translation of the person [70, 71, 72, 87, 115] or exploit various scene constraints to improve depth prediction [99, 114]. Others employ physics-based constraints to ensure the physical plausibility of the estimated poses [13, 21, 38, 89, 101, 112], use limb-length constraints [36] or approximate depth using the bounding box size [40, 74, 118]. Several approaches employ inverse kinematics to estimate human meshes with absolute translations in the camera coordinates [37, 58]. Heatmap-based representations have also been used to directly predict the absolute depths of multiple people [19, 93, 126]. A few methods learn to also predict the camera parameters from the image, which are used for absolute pose regression in the camera coordinates [49, 60, 116]. While these methods achieve impressive results for camera-relative pose estimation, they fail to decouple human and camera motions from dynamic videos, and therefore cannot recover global human trajectories as our method does. **Global Human Pose Estimation.** The majority of current methods for estimating 3D poses in world coordinates rely on synchronized, calibrated, and static multi-view capture setups [6, 14, 15, 16, 18, 33, 42, 84, 85, 123, 124, 127]. Huang _et al_. [8] use uncalibrated cameras but still assume time synchronization and static camera setups. Hasler _et al_. [27] handle unsynchronized moving cameras but assume multi-view input and rely on an audio stream for synchronization. Recently, Dong _et al_. [17] proposed to recover 3D poses from unaligned internet videos of different actors performing the same activity from unknown cameras, assuming that multiple viewpoints of the same pose are available in the videos. Luvizon _et al_. [67] estimate the global human poses of multiple people using the scene point cloud for static cameras. In contrast, our approach estimates human meshes in global coordinates from _monocular_ videos recorded with dynamic cameras. Several methods rely on additional IMU sensors or pre-scanned environments to recover global human motions [25, 79, 98], which is impractical for large-scale adoption. Another line of work has recently focused on estimating accurate human-scene interaction [29, 34, 66, 106]. Recent work uses human motion priors [107] and physics-based constraints [55, 106] to decouple human and camera motions but does not consider background scene features, which limits performance on in-the-wild videos. Liu _et al_. [63] obtain global human pose using SLAM and convert the pose from the camera to global coordinates. BodySLAM [31] uses features of both humans and scenes, but it only demonstrates results of a single unoccluded person slowly walking in an indoor scene. Along this line, a recent work [105] obtains initial camera trajectories with SLAM and optimizes the scale of the camera trajectories using a human motion prior [86]. In contrast, our approach tightly integrates SLAM and human motion priors into a joint optimization framework, where the entire SLAM camera trajectories (not only scale) are optimized jointly to match observed human pose and background scene features. This not only leads to more accurate human trajectory estimation but also improves full camera trajectory estimation over SLAM significantly, which has not been achieved by prior work.
Additionally, our parallel motion optimization scheme also makes our approach substantially (50 times) faster than [105] for a sequence of 1000 frames. Our parallel scheme also allows PACE's time cost to grow sub-linearly w.r.t. sequence length in contrast to the linear rate of [105]. **Human Motion Prior.** There has been a significant amount of research on 3D human dynamics for various tasks, including motion prediction and synthesis [4, 5, 9, 20, 23, 28, 39, 61, 69, 81, 108, 109, 110, 104, 108, 109, 111, 100, 112, 101, 102, 103, 104, 108, 105, 106, 107, 108, 109, 110, 111, 10]. Recently, human pose estimation methods have started to incorporate learned human motion priors to help resolve pose ambiguity [48, 86, 121]. Motion-infilling approaches have also been proposed to generate complete motions from partially observed motions [26, 32, 45, 46]. Diffusion models [90] have also been used as priors for motion synthesis and infilling [35, 96, 111, 119]. Recently, He _et al_. [30] proposed the neural motion field (NeMF), which expresses human motion as a time-conditioned continuous function and demonstrates superior motion synthesis performance. Our approach extends NeMF by leveraging it as a motion prior for human pose estimation. Additionally, our proposed parallel motion optimization scheme enables efficient optimization of human motions. ## 3 Method The input to PACE is an in-the-wild RGB video \(\mathbf{I}{=}\{\mathbf{I}_{1},\cdots,\mathbf{I}_{T}\}\) with \(T\) frames captured by a moving camera. Our goal is to estimate both the camera motion and the motion of all visible people in the video in a global world coordinate system. The camera motion \(\{\mathbf{R}_{t},\mathbf{T}_{t}\}_{t=1}^{T}\) consists of the camera rotation \(\mathbf{R}_{t}\in\mathbb{R}^{3\times 3}\) and translation \(\mathbf{T}_{t}\in\mathbb{R}^{3}\) for every timestep \(t\) in the video. The global motion \(\mathbf{Q}^{i}{=}\{Q_{t}^{i}{=}(\Phi_{t}^{i},\tau_{t}^{i},\theta_{t}^{i},\beta^{i})\}_{t=s^{i}}^{e^{i}}\) for person \(i\) consists of the global translation \(\tau_{t}^{i}\in\mathbb{R}^{3}\), global orientation \(\Phi_{t}^{i}\in\mathbb{R}^{3\times 3}\), and the body pose parameters \(\theta_{t}^{i}\in\mathbb{R}^{23\times 3}\) for all time steps \(t\in\{s^{i}\cdots e^{i}\}\), where \(s^{i}\) and \(e^{i}\) correspond to the first and last frame in which person \(i\) is visible. The body shape parameters \(\beta^{i}\) are shared across all time steps. We use the SMPL body model [64] to obtain the articulated body meshes \(\mathbf{V}^{i}{=}\{V_{t}^{i}\}_{t=s^{i}}^{e^{i}}\) from \(\mathbf{Q}^{i}\). Specifically, SMPL consists of a linear function \(\mathcal{M}(\Phi,\tau,\theta,\beta)\) that maps the body motion \(Q_{t}^{i}{=}(\Phi_{t}^{i},\tau_{t}^{i},\theta_{t}^{i},\beta_{t}^{i})\) to a triangulated body mesh \(V_{t}^{i}\in\mathbb{R}^{6890\times 3}\) with \(6890\) vertices. In the rest of this paper, we drop the superscript \(i\) from all variables for brevity but always assume the visibility of multiple people. Our key insight is to harness the complementary properties of SLAM and human motion priors. The human motion prior can be used to explain foreground human motion, which typically is dynamic and therefore has been treated as unwanted noise in existing SLAM algorithms. Leveraging the motion prior in a joint optimization regularizes the camera trajectories to be in agreement with plausible human motion and provides information about the global scale.
On the other hand, SLAM leverages mostly static background features, which provide information about the camera motion and can be leveraged to resolve ambiguity in the motion space of the human motion priors. We introduce a novel unified framework, illustrated in Fig. 2, that simultaneously recovers the camera and human motion using a joint optimization objective (Sec. 3.3). Since this is a highly ill-posed problem, we exploit data-driven models to initialize our objective (Sec. 3.1) and use human motion priors to constrain the solution space (Sec. 3.2). ### Initialization We start by obtaining bounding box sequences for all visible subjects using an off-the-shelf multi-object tracking and re-identification algorithm [125]. We then estimate body pose information for each detected bounding box using the state-of-the-art method HybrIK [58]. HybrIK provides body poses in the camera coordinate frame which we represent as \(\hat{Q}_{t}^{c}{=}(\hat{\Phi}_{t}^{c},\hat{\tau}_{t}^{c},\hat{\theta}_{t},\hat{ \beta}_{t})\). The super-script \(c\) corresponds to the camera coordinate frame. Note that the local body pose \(\theta_{t}\) and shape \(\beta_{t}\) are agnostic to camera motion. For videos recorded with dynamic cameras, the estimated translation \(\hat{\tau}_{t}^{c}\) and root orientation \(\hat{\Phi}_{t}^{c}\) must be transformed from camera coordinates to a consistent world coordinate frame. This requires knowledge of the per-frame camera-to-world transforms \(\{R_{t},T_{t}\}_{t=1}^{T}\). For this, we leverage a data-driven SLAM method, namely DROID-SLAM [95], which uses the information of the static scene to estimate per-frame camera-to-world transforms \(\{\hat{R}_{t},\hat{T}_{t}\}_{t=1}^{T}\). SLAM methods, however, provide camera translations \(\hat{T}_{t}\) up to scale. Hence, at this stage, we only use the camera rotation information to obtain a person's root orientation in the world coordinate frame: \(\hat{\Phi}_{t}=\hat{R}_{t}^{-1}\hat{\Phi}_{t}^{c}\). We then use a neural network similar to [56, 30] to estimate the initial global root translations \(\{\hat{\tau}_{t}\}_{t=s}^{e}\) from the local pose parameters \(\{\hat{\Phi}_{t},\hat{\theta}_{t}\}_{t=s}^{e}\). We use a single value for shape parameters \(\beta\) for each person that we initialize with the average of the per-frame estimates from HybrIK i.e., \(\hat{\beta}{=}\frac{\sum_{t=s}^{e}\beta_{t}}{e-s}\). This forms our initial estimate of the global human motion \(\hat{Q}{=}\{\hat{Q}_{t}{=}(\hat{\Phi}_{t},\hat{\tau}_{t},\hat{\theta}_{t},\hat {\beta})\}_{t=s}^{e}\) in the world coordinate frame. In the remainder of this paper, our goal is to refine these initial estimates via human motion priors and the background scene features, while recovering accurate global camera trajectories. ### Human Motion Prior Our goal is to develop a human motion prior that ensures that the estimated human motion is plausible and also helps constrain the solution space during joint optimization of human and camera motion. For this, we use a variational autoencoder (VAE) [47], which learns a latent representation \(\mathbf{z}\) of human motion and regularizes the distribution of the latent code to be a normal distribution. We want the decoder \(\mathcal{D}\) of the VAE to be non-autoregressive for faster sampling while not sacrificing accuracy. This is important because we want to use the motion prior in an iterative optimization, and auto-regressive motion priors (e.g., HuMoR [86]) are prohibitively slow when processing large motion sequences. 
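Stepping back to the initialization above, the following sketch (ours, under our own conventions, not the released PACE code) illustrates the two mechanical steps it involves: rotating HybrIK's camera-frame root orientations into the world frame with the SLAM rotations and averaging per-frame shapes into a single \(\beta\), plus the forward-Euler integration of predicted root velocities that is detailed further in Sec. 3.2 below. Here `root_vel` and `root_height` are stand-ins for the trajectory network's outputs.

```python
import numpy as np

def init_world_motion(Phi_cam, R_cam, betas):
    """Phi_cam: (T,3,3) camera-frame root orientations (HybrIK);
    R_cam: (T,3,3) camera-to-world rotations (DROID-SLAM);
    betas: (T,10) per-frame SMPL shape estimates.
    Returns world-frame orientations Phi_t = R_t^{-1} Phi_t^c and one shared beta."""
    Phi_world = np.einsum('tji,tjk->tik', R_cam, Phi_cam)  # R^{-1} = R^T
    return Phi_world, betas.mean(axis=0)

def integrate_root_velocity(root_vel, root_height, dt, tau0):
    """Forward Euler: tau_{t+1} = tau_t + v_t * dt; the predicted root height
    overwrites the integrated z to avoid floating/sinking drift."""
    T = root_vel.shape[0]
    tau = np.zeros((T, 3))
    tau[0] = tau0
    for t in range(T - 1):
        tau[t + 1] = tau[t] + root_vel[t] * dt
    tau[:, 2] = root_height
    return tau
```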
In contrast to such autoregressive priors, a non-autoregressive decoder can be evaluated for the entire sequence in parallel. To this end, we adopt a Neural Motion Field (NeMF) [30] based decoder to represent body motion as a continuous vector field of body poses via a NeRF-style MLP [73]. In Sec. 3.3, we show that NeMF can be extended to a parallel motion prior that enables efficient optimization. We follow [30] and only model the local body motion via the prior. Specifically, \(\mathcal{D}\) is an MLP that takes the latent codes \(\{\mathbf{z}_{\Phi},\mathbf{z}_{\theta}\}\) and a time step \(t\) as input and produces the orientation \(\hat{\Phi}_{t}\), local body pose \(\hat{\theta}_{t}\), and joint contacts \(\hat{\kappa}_{t}\) for a given time step: \[\mathcal{D}:(t,\mathbf{z}_{\Phi},\mathbf{z}_{\theta})\rightarrow(\hat{\Phi}_{t},\hat{\theta}_{t},\hat{\kappa}_{t}), \tag{1}\] where \(\mathbf{z}_{\Phi}\) and \(\mathbf{z}_{\theta}\) control the root orientation \(\Phi\) and the local body pose \(\theta\) of the person, respectively. For a given pair of \(\mathbf{z}_{\Phi}\) and \(\mathbf{z}_{\theta}\), the entire sequence can be sampled in parallel by simply varying the values of \(t\). Figure 2: **PACE overview.** Given a video with dynamic human and camera motions, we first use off-the-shelf methods to obtain initial 2D human pose, 3D human motion, and camera motions. We propose a unified optimization framework that optimizes the global human motions and full camera trajectories to reduce 2D pose errors, increase motion likelihood under the human motion prior, and match background features. The final output is coherent human and camera motion in global space. To incorporate the motion priors during global optimization, we optimize the latent codes \(\{\mathbf{z}_{\Phi},\mathbf{z}_{\theta}\}\) instead of directly optimizing the local body motion \(\{\Phi_{t},\theta_{t}\}_{t=s}^{e}\). We initialize the latent codes using the pre-trained encoders of the VAE; _i.e_., \(\mathbf{z}_{\Phi}{=}\mathcal{E}_{\Phi}(\{\Phi_{t}\}_{t=s}^{e})\) and \(\mathbf{z}_{\theta}{=}\mathcal{E}_{\theta}(\{\theta_{t}\}_{t=s}^{e})\). We refer to [30] for training details. **Global Translation Estimation.** We use a fully convolutional network to generate the global translation \(\tau_{t}^{i}\) of the root joint, based on the local joint positions, velocities, rotations, and angular velocities as inputs. All of these quantities can be computed from joint rotations. Our approach, which is similar to [57, 129], takes into account the fact that the subject's global translation is conditioned on its local poses. In order to avoid any ambiguity in the output, we predict the root velocity \(\dot{\tau}_{t}\) rather than \(\tau_{t}\) directly, and then integrate the velocity using the forward Euler method to obtain \(\tau_{t+1}{=}\tau_{t}+\dot{\tau}_{t}\Delta t\). We also predict the height of the root joint using the same convolutional network to prevent any cumulative errors that could cause the subject to float above or sink into the ground. Since changing the latent codes \(\{\mathbf{z}_{\Phi},\mathbf{z}_{\theta}\}\) also impacts the global translations \(\tau_{t}\), for simplicity, we refer to the mapping from latent codes to global human motion as \[\mathcal{P}:(t,\mathbf{z}_{\Phi},\mathbf{z}_{\theta})\rightarrow(\hat{\Phi}_{t},\hat{\theta}_{t},\hat{\tau}_{t}). \tag{2}\] ### Global Optimization Here we detail the proposed optimization formulation for the joint reconstruction of global human and camera motion.
Our goal is to optimize the latent code \(\mathbf{z}{=}\{\mathbf{z}_{\Phi},\mathbf{z}_{\theta}\}\) and camera-to-world transforms \(\{R_{t},sT_{t}\}\) with the correct scale \(s\). Note that SLAM methods assume the camera at the first frame (\(t=0\)) to be at the origin. To align all coordinate frames, we also optimize the camera height \(h_{0}\) and orientation \(R_{0}\) for the first frame. More specifically, we optimize the following objective function: \[\min_{\begin{subarray}{c}\beta,\mathbf{z}\\ s,h_{0},R_{0},\{R_{t},T_{t}\}_{t=1}^{T}\end{subarray}}E_{\text{body}}+E_{\text{scene}}+E_{\text{camera}}, \tag{3}\] where \[E_{\text{body}} =E_{\text{2D}}+E_{\beta}+E_{\text{pose}}+E_{\text{smooth}}^{\text{b}}\] \[\qquad+E_{\text{VAE}}+E_{\text{consist}},\] \[E_{\text{scene}} =E_{\text{contact}}+E_{\text{height}},\] \[E_{\text{camera}} =E_{\text{PCL}}+E_{\text{smooth}}^{\text{c}}.\] The error term \(E_{\text{body}}\) ensures that the reconstructed human motion is plausible and agrees with the image evidence. \(E_{\text{2D}}\) measures the 2D reprojection error between the estimated 3D motion and the 2D body joints \(\mathbf{x}_{t}\) obtained using a state-of-the-art 2D joint detector [102]: \[E_{\text{2D}}=\sum_{i=1}^{N}\sum_{t=s_{i}}^{e_{i}}\omega_{t}\zeta(\Pi(R_{0}R_{t}J_{t}^{i}+sT_{t}+\begin{bmatrix}0\\ 0\\ h_{0}\end{bmatrix})-\mathbf{x}_{t}^{i}). \tag{4}\] Here \(\omega_{t}\) are the body joint detection confidences, \(\zeta\) is the robust Geman-McClure function [22], \(\Pi\) corresponds to perspective projection using the known camera intrinsic matrix \(K\), and \(J_{t}^{i}\) corresponds to the 3D body joints obtained from the SMPL body mesh via a pre-trained regressor \(\mathcal{W}\): \[J_{t}^{i}=\mathcal{W}(\mathcal{M}(\mathcal{P}(\mathbf{z},t),\beta_{t}^{i})). \tag{5}\] The error term \(E_{\text{pose}}\) penalizes large deviations of the local body pose \(\hat{\theta}_{t}\) from the HybrIK predictions, \(E_{\beta}\) is a prior over body shapes [43], and \(E_{\text{VAE}}\) is a motion prior loss defined as \[E_{\text{VAE}}=-\sum_{i}^{N}\log\mathcal{N}(\mathbf{z}_{\Phi}^{i};\mu_{\Phi}(\{\Phi_{t}^{i}\}),\sigma_{\Phi}(\{\Phi_{t}^{i}\}))+ \tag{6}\] \[\log\mathcal{N}(\mathbf{z}_{\theta}^{i};\mu_{\theta}(\{\theta_{t}^{i}\}),\sigma_{\theta}(\{\theta_{t}^{i}\})).\] The term \(E_{\text{contact}}\) encourages zero velocities for joints that are predicted to be in contact \(\hat{\kappa}_{t}\) with the ground plane: \[E_{\text{contact}}=\sum_{i=1}^{N}\sum_{t=s_{i}}^{e^{i}}\hat{\kappa}_{t}^{i}||J_{t}^{i}-J_{t-1}^{i}||^{2}, \tag{7}\] where \(\hat{\kappa}_{t}^{i}\in\mathbb{R}^{24}\) is the contact probability output from the motion prior decoder \(\mathcal{D}\) for each joint. \(E_{\text{height}}\) prevents in-contact joints from being far away from the ground plane: \[E_{\text{height}}=\hat{\kappa}_{t}^{i}\max(|J_{t}^{i}|-\delta,0). \tag{8}\] The ground plane is kept fixed and assumed to be the \(xy\)-plane, with the \(+z\)-axis as the up direction. This parameterization allows us to optimize all variables in this consistent coordinate frame without the need to optimize an additional ground plane equation. The error term \(E_{\text{camera}}\) in Eq. (3) ensures that the reconstructed camera motion is smooth and consistent with the static scene. Since DROID-SLAM is trained on videos with static scenes only, its estimates can be noisy due to the dynamic humans present in our target videos.
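The reprojection term (4) can be made concrete with a short sketch. This is our reading of the equation, not the authors' implementation: a single person is assumed for brevity, and the robustifier scale `sigma` is an illustrative choice that is not specified in the text.

```python
import numpy as np

def geman_mcclure(res, sigma=100.0):
    """Robustifier zeta(r) = |r|^2 / (|r|^2 + sigma^2), applied per joint."""
    sq = (res ** 2).sum(-1)
    return sq / (sq + sigma ** 2)

def e_2d(J, x2d, conf, R0, h0, s, R_t, T_t, K, sigma=100.0):
    """J: (T,N,3) model joints; x2d: (T,N,2) detections; conf: (T,N) confidences;
    R_t, T_t: (T,3,3)/(T,3) camera trajectory; K: (3,3) intrinsics."""
    p = np.einsum('ij,tjk,tnk->tni', R0, R_t, J)              # R0 R_t J_t
    p = p + (s * T_t + np.array([0.0, 0.0, h0]))[:, None, :]  # + s T_t + [0,0,h0]
    uv = np.einsum('ij,tnj->tni', K, p)
    uv = uv[..., :2] / uv[..., 2:3]                           # perspective projection
    return (conf * geman_mcclure(uv - x2d, sigma)).sum()
```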
We therefore use the point cloud recovered by SLAM as a direct constraint in our optimization, instead of relying directly on the camera predictions. To ensure that points on dynamic humans do not influence camera reconstruction, we remove all points that lie inside the person bounding boxes. The term \(E_{\text{PCL}}\) then computes the re-projection error of the pruned point cloud, similarly to Eq. (4). The term \(E_{\text{smooth}}^{\text{b}}\) ensures that the optimized parameters are temporally smooth. We empirically chose the weights of the different error terms in our objective and provide more details in the appendix (Table 5). **Parallel Motion Optimization.** Our specific choice of human motion prior, NeMF [30], allows us to design a parallel motion prior that is suitable for batch optimization, which significantly enhances the efficiency of our approach. Concretely, we split a motion sequence into overlapping windows of \(T{=}128\) frames. We use 16 overlapping frames to help reduce jitter and discontinuities across windows. Dividing motions into overlapping windows also allows the latent codes of the prior to model a fixed length of motion. Since our motion prior is non-autoregressive, we can optimize all windows in parallel. To ensure smooth transitions between clips, we additionally compute a batch consistency term \(E_{\text{consist}}\), defined as the \(\ell_{2}\) distance between the 3D joints \(J_{t}^{i}\) of overlapping frames. **Multi-Stage Optimization.** The task of reasoning about the camera and human motion from a video is inherently ill-posed, as optimizing both the camera motion \(R_{t},T_{t}\) and the motion prior latent codes \(\{\mathbf{z}_{\Phi},\mathbf{z}_{\theta}\}\) simultaneously can result in local minima. To address this challenge, we adopt a multi-stage optimization pipeline, with different parameters optimized in different stages to avoid bad minima. After obtaining initial camera motion results from SLAM and human motion results from the motion prior, the optimization process is carried out in four stages, as outlined in Table 1. In Stage-1, we optimize only the first-frame camera parameters \((R_{0},h_{0})\), the camera scale \(s\), and the subjects' body shape \(\beta\) based on the initial camera and human motion. In Stage-2, we incorporate the global orientation latent code \(\mathbf{z}_{\Phi}\) to jointly adjust the subjects' global orientation and the camera. In Stage-3, we optimize the local body motion \(\mathbf{z}_{\theta}\) as well. Finally, in Stage-4, we jointly optimize the full camera trajectory along with \(\mathbf{z}_{\Phi}\) and \(\mathbf{z}_{\theta}\). Each stage is run for 500 steps. The \(\lambda\) coefficients used for each objective term can be found in the appendix (Table 5). \begin{table} \begin{tabular}{l|c|c|l} \hline **Stages** & **Opt. Variables** & **Loss Functions** & **Description** \\ \hline Stage-1 & \(s,h_{0},R_{0},\beta\) & \(E_{\text{2D}}+E_{\beta}\) & camera traj. transform \\ \hline Stage-2 & \(s,h_{0},R_{0},\beta,\mathbf{z}_{\Phi}\) & \(E_{\text{body}}+E_{\text{camera}}\) & + global human orientation \\ \hline Stage-3 & \(s,h_{0},R_{0},\beta,\mathbf{z}_{\Phi},\mathbf{z}_{\theta}\) & \(E_{\text{body}}+E_{\text{camera}}\) & + local body pose \\ \hline Stage-4 & \(\beta,\mathbf{z}_{\Phi},\mathbf{z}_{\theta},R_{t},T_{t}\) & \(E_{\text{body}}+E_{\text{scene}}+E_{\text{camera}}\) & + full camera trajectory \\ \hline \end{tabular} \end{table} Table 1: Optimization stages. **Occlusion Handling.** Our approach offers a natural solution for occlusions due to subjects in the scene. We achieve this by excluding error terms for occluded frames during optimization and optimizing the latent codes \(\{\mathbf{z}_{\Phi},\mathbf{z}_{\theta}\}\) solely for visible frames. After optimization, we sample motions from the motion prior to infill the missing poses, which will be consistent with their visible neighbors. ## 4 Experiments We design our experiments to answer the following questions: (1) Can our unified approach, PACE, achieve SOTA human motion estimation performance for dynamic videos? (2) Can PACE improve camera motion estimation of a SOTA SLAM method? (3) What are the critical components in PACE that significantly impact performance?
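Before presenting results, the overlapping-window scheme of the parallel motion optimization above can be sketched as follows. The window length 128 and overlap 16 come from the text; everything else (the window-splitting policy, the stand-in for the per-window decoded joints) is our assumption.

```python
import numpy as np

WIN, OVERLAP = 128, 16

def make_windows(T, win=WIN, overlap=OVERLAP):
    """Window start/end indices covering T frames with the given overlap."""
    starts = list(range(0, max(T - win, 0) + 1, win - overlap))
    if starts[-1] + win < T:
        starts.append(T - win)          # final window flush with the end
    return [(s0, min(s0 + win, T)) for s0 in starts]

def e_consist(joints_per_window, windows):
    """joints_per_window[k]: (win, J, 3) joints decoded for window k.
    Penalizes disagreement of 3D joints on frames shared by adjacent windows."""
    loss = 0.0
    for k in range(len(windows) - 1):
        (a0, a1), (b0, b1) = windows[k], windows[k + 1]
        n = a1 - b0                      # number of shared frames
        if n > 0:
            loss += np.sum((joints_per_window[k][-n:] -
                            joints_per_window[k + 1][:n]) ** 2)
    return loss
```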
### Datasets and Metrics **HCM Synthetic Dataset.** Currently available datasets that provide dynamic videos (_e.g_., [63, 98]) for evaluating human pose and shape estimation have primarily focused on evaluating the accuracy of local body estimation while neglecting the importance of global human motion estimation. Furthermore, evaluation datasets for simultaneous localization and mapping (SLAM) algorithms do not feature humans and do not provide human motion information. As such, there is a need for a comprehensive dataset that provides accurate labels for global human and camera motion. To address this need, we have created the HCM (Human and Camera Motion) dataset, which enables the evaluation of both human and camera motion. We use the characters from the RenderPeople [1] dataset and animate them in scenes obtained from the Unreal Engine marketplace [2]. We obtain motion capture (MoCap) clips from the AMASS dataset [68]. For camera trajectories, we designed heuristics to replicate typical camera movements observed in everyday videos and professional movies. Final images were rendered using NVIDIA Omniverse. Additional information regarding the data generation process can be found in the appendix (Sec. A.3). Some example sequences can be seen in Fig. 3. Figure 3: Some examples of our proposed HCM dataset. **RICH Dataset.** The RICH dataset [34] was collected using a total of 7 static cameras and one moving camera. While ground-truth poses are available for the persons and static cameras, the ground-truth poses of the moving camera are not available. As such, we only assess the performance of global human motion estimation using this dataset. **Metrics.** We report various metrics for both human and camera motion, with an emphasis on those that compute the error in world coordinates. Regarding human motion evaluation, the W-MPJPE metric reports MPJPE after aligning the first frames of the predicted and ground-truth data. The WA-MPJPE metric reports MPJPE after aligning the entire trajectories of the predicted and ground-truth data using Procrustes Alignment. Additionally, the PA-MPJPE metric reports the MPJPE error after aligning every frame of the predicted and ground-truth data. We also include an ACCEL metric that measures the joint acceleration difference between ground-truth and predicted human motion. For camera motion evaluation, we follow SLAM methods and report the average translation error (ATE) after rigidly aligning the camera trajectories, the average translation error without scale alignment (ATE-S), and the CAM ACCEL camera acceleration error. The ATE-S metric provides a more accurate reflection of inaccuracies in the captured scale of the scene.
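For reference, the alignment-based metrics can be sketched as below. This is the standard Procrustes construction implied by the metric names, not code from the paper.

```python
import numpy as np

def procrustes_align(X, Y):
    """Similarity transform (s, R, t) minimizing ||s R x_i + t - y_i||^2."""
    muX, muY = X.mean(0), Y.mean(0)
    X0, Y0 = X - muX, Y - muY
    U, S, Vt = np.linalg.svd(X0.T @ Y0)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = (U @ Vt).T
    s = S.sum() / (X0 ** 2).sum()
    t = muY - s * R @ muX
    return s * X @ R.T + t

def pa_mpjpe(pred, gt):
    """pred, gt: (T, J, 3); align every frame independently."""
    return np.mean([np.linalg.norm(procrustes_align(p, g) - g, axis=-1).mean()
                    for p, g in zip(pred, gt)])

def wa_mpjpe(pred, gt):
    """Align the whole trajectory once, then average the joint error."""
    T, J, _ = pred.shape
    aligned = procrustes_align(pred.reshape(-1, 3), gt.reshape(-1, 3))
    return np.linalg.norm(aligned.reshape(T, J, 3) - gt, axis=-1).mean()
```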
### Comparison with State-of-the-Art Methods **Human Motion Estimation.** We compare PACE with the following baselines on the HCM and RICH datasets: GLAMR [107] and SLAHMR [105], the SOTA global human and camera estimation approaches; and HybrIK [58] + SLAM, which estimates the camera motions using DROID-SLAM [95] and then transforms the human motion estimated by HybrIK from camera to world space. As observed in Tables 2 and 3, PACE outperforms GLAMR, SLAHMR, and HybrIK in human motion estimation significantly. In particular, PACE drastically reduces the global pose errors, _i.e_., decreasing W-MPJPE by 24% and WA-MPJPE by 27% on the HCM dataset, and reducing W-MPJPE by 40% and WA-MPJPE by 52% on the RICH dataset. PACE can also recover accurate local human pose, as indicated by better PA-MPJPE on HCM and competitive PA-MPJPE on RICH. Additionally, PACE estimates much smoother motion, reducing the acceleration error (ACCEL) by 50% on HCM and 56% on RICH. Our ablation studies further show that it is essential to use the background scene features in our unified optimization framework. During our experiments, we also evaluated the case when all variables are optimized from the beginning without stagewise optimization. We found that the optimization does not converge at all in this case. ## 5 Conclusion We presented PACE, a novel approach for accurate global human and camera motion estimation from dynamic cameras. Our approach leverages the complementary benefits of human motion priors and SLAM methods and integrates them into a unified optimization framework that jointly optimizes human and camera motions. We also introduced a new synthetic dataset called HCM for benchmarking global human and camera motion estimation. We demonstrated that our approach achieves superior performance as compared to the state-of-the-art methods in accurately recovering both human and camera motion. Although our method can refine camera trajectories obtained from SLAM, it may not be effective in scenarios where SLAM methods fail catastrophically. We believe that the integration of physics-based constraints to prevent camera errors from overriding human motion priors would be an interesting future direction. Another limitation of our method is the assumption of a planar ground caused by the lack of scene annotation in the AMASS dataset. Also, while our proposed optimization is efficient, it is not real-time and requires batch processing to exploit future and past temporal information. Jointly solving camera and human motion in a real-time, online fashion is a significant challenge. Figure 4: **Qualitative results** on HCM (row 1), RICH (row 2), and in-the-wild videos (rows 3 & 4). PACE can estimate more accurate human and camera motion than the SOTA, GLAMR [107], for both datasets and in-the-wild videos. Figure 5: Comparison of camera motion estimation on HCM dataset. PACE estimates more accurate camera motions compared to GLAMR. ## Appendix A Appendix In this appendix, we provide results on an additional dataset, EgoBody [120], and also provide additional implementation details. ### Experiments on EgoBody dataset EgoBody [120] is a large-scale dataset capturing ground-truth 3D human motions during social interactions in 3D scenes. EgoBody is captured with a head-mounted camera on an interactor, who sees and interacts with a second interactee. The camera moves as the interactor moves, and the ground-truth 3D poses of the interactee are recorded in the world coordinate frame. We follow [105] and use the validation split of the dataset for evaluation.
We use DROID-SLAM with the ground-truth camera intrinsics provided by the dataset. Table 4 compares PACE with the state-of-the-art methods GLAMR [107] and SLAHMR [105]. As the results indicate, PACE significantly outperforms GLAMR while achieving performance on par with SLAHMR in terms of accuracy. However, PACE offers a significant computational advantage over SLAHMR, being up to 50 times faster for a sequence with 1000 frames. Note that the runtime of SLAHMR grows linearly with the sequence length, whereas our runtime increases sub-linearly. This improvement in efficiency demonstrates the potential of PACE as a practical and effective solution for human and camera motion estimation from videos. ### Global optimization implementation details We empirically chose the weights of all error terms involved in the optimization, as summarized in Table 5. ### HCM dataset generation To create our HCM (Human and Camera Motion) dataset we used the characters from the RenderPeople [1] dataset with 3D scenes from the Unreal Engine Marketplace [2]. We manually labeled the navigable areas in each 3D scene _i.e_., sufficiently large, unobstructed flat areas within the scene. To generate a sequence, we randomly selected a 3D scene and a navigable area within it. We also randomly chose the number of people to be animated in the scene, ranging from 1 to 8 individuals. For each person, we selected a motion sequence from the validation set of the AMASS [68] dataset. To ensure that each person's motion sequence was optimized for the scene, we iteratively added one person at a time. We optimized their global translation to ensure that they remained within the bounds of the navigable area and did not intersect with existing people in the scene. We also check the terrain height of the navigable area and adjusted each character's root translation accordingly to ensure they were at the correct height relative to the terrain. Finally, we rendered the animated 3D scene into a video sequence using a moving camera. To generate camera trajectories, we designed heuristics to replicate typical camera movements observed in everyday videos and professional movies. More specifically, we used dolly zoom, random arc motion towards a person, camera motions from the MannequinChallenge dataset [59], cameras tracking a specific person, etc. This approach allowed us to generate a diverse set of sequences with varying numbers of people and diverse body and camera motions. In total, we generated 25 video sequences for evaluation. Some examples can be seen in the project page. We believe our HCM dataset will be extremely useful for evaluating human and camera motion estimation methods and furthering research in this direction.
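The iterative person placement described above might look like the following toy rejection sampler. This is a deliberate simplification: a square navigable area and a single minimum-distance constraint stand in for the paper's per-scene navigable-area labels, translation optimization, and terrain-height adjustment, and all parameter values are our own illustration choices.

```python
import numpy as np

def place_people(num_people, area_half, min_dist=0.8, max_tries=1000,
                 rng=np.random):
    """Place people one at a time at random xy offsets inside a square
    navigable area of half-width area_half (meters), rejecting positions
    closer than min_dist to anyone already placed."""
    placed = []
    for _ in range(num_people):
        for _ in range(max_tries):
            xy = rng.uniform(-area_half, area_half, size=2)
            if all(np.linalg.norm(xy - p) >= min_dist for p in placed):
                placed.append(xy)
                break
        else:
            raise RuntimeError("could not place all people without overlap")
    return np.stack(placed)
```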
2303.02985
Description of inclusive $(d,d^{\prime}x)$ reaction with the semiclassical distorted wave model
The description of deuteron-induced inclusive reactions has been an important subject in direct nuclear reaction studies and nuclear data science. For proton-induced inclusive processes, the semiclassical distorted wave model (SCDW) is one of the most successful models based on quantum mechanics. We improve SCDW for deuteron-induced inclusive processes and clarify the importance of the proper treatment of the kinematics of the deuteron inside a nucleus. The double differential cross section (DDX) of the inclusive deuteron-emission process $(d,d^{\prime}x)$ is described by one-step SCDW. The changes in the kinematics due to the distortion effect, the refraction effect, is taken into account by the local semiclassical approximation (LSCA). The calculated DDXs of $(d,d^{\prime}x)$ reasonably reproduce experimental data in the small energy-transfer region and at forward and middle angles with some exceptions. The angular distributions of $(d,d^{\prime}x)$ are improved by including the refraction effect. The proper treatment of the changes in the kinematics of the deuteron inside a nucleus is necessary in describing the ($d$,$d'x$) reaction. The effect of the changes on the DDX of $(d,d^{\prime}x)$ is significant compared to on the proton-induced inclusive process $(p,p^{\prime}x)$ because of the stronger distortion effect on the deuteron.
Hibiki Nakada, Kazuki Yoshida, Kazuyuki Ogata
2023-03-06T09:25:15Z
http://arxiv.org/abs/2303.02985v2
# Description of inclusive \((d,d^{\prime}x)\) reaction with the semiclassical distorted wave model ###### Abstract **Background**: The description of deuteron-induced inclusive reactions has been an important subject in direct nuclear reaction studies and nuclear data science. For proton-induced inclusive processes, the semiclassical distorted wave model (SCDW) is one of the most successful models based on quantum mechanics. **Purpose**: We improve SCDW for deuteron-induced inclusive processes and clarify the importance of the proper treatment of the kinematics of the deuteron inside a nucleus. **Methods**: The double differential cross section (DDX) of the inclusive deuteron-emission process \((d,d^{\prime}x)\) is described by one-step SCDW. **Results**: The calculated DDXs of \((d,d^{\prime}x)\) reproduce experimental data by taking into account the changes in the kinematics of the deuteron due to the distorting potential, in the small energy-transfer region and at forward angles. **Conclusion**: It is confirmed that the proper treatment of the changes in the kinematics of the deuteron inside a nucleus is necessary to reproduce experimental data. The effect of the changes on the DDX of \((d,d^{\prime}x)\) is significant compared to the proton-induced inclusive process \((p,p^{\prime}x)\) because of the stronger distortion effect on the deuteron. ## I Introduction Deuteron has the smallest binding energy among all stable nuclei. As originated from the idea of Butler [1], the weakly-bound nature of deuteron has been utilized for carrying out one-nucleon transfer reactions to study the single-particle (s.p.) structure of nuclei [2]. Furthermore, deuteron-induced reactions have opened many physics cases to reveal three-body dynamics of reaction systems in which a fragile nucleus is involved [3; 4; 5; 6; 7; 8]. Roles of deuteron breakup channels, in which proton and neutron are in continuum states, have intensively been investigated. The fragileness of deuteron is also important for nuclear data science. The international fusion materials irradiation facility (IFMIF) [9], which aims at using the inclusive \((d,nx)\) reaction at 40 MeV as an intense neutron source, is one of the most well-known international scientific projects using deuteron accelerator. The central idea of IFMIF is that the incident deuteron is broken up by interacting with the target and intense neutron with about half the deuteron incident energy is emitted; statistical decay after forming a compound nucleus is also considered to contribute to the neutron emission for large energy transfer. Quite recently, an integrated code system describing deuteron-induced reactions, which is designated as DEURACS, has been constructed and successfully applied to analysis of \((d,nx)\) reaction data [10; 11; 12]. It was found that the description of deuteron breakup channels is of crucial importance for accurately evaluating the amount of the emitted neutron, its angular and energy distribution in particular. From the viewpoint of direct nuclear reaction study, the most challenging part for describing \((d,nx)\) is the deuteron breakup with exciting the target nucleus A, which is called the nonelastic breakup (NEB). NEB contains a huge number of final states of A and it is almost impossible to describe each nuclear state accurately. 
DEURACS employs the Glauber model [13] to circumvent the difficulty; the eikonal and adiabatic approximations allow one to describe NEB as a combination of neutron elastic and proton nonelastic processes, and the latter can easily be evaluated by using the closure property of the proton scattering matrix [14; 15]. The validity of the Glauber model is, however, rather questionable at low incident energy and/or for large momentum and energy transfer. In fact, the agreement between the result of DEURACS and experimental data for \((d,nx)\) at middle emission angles is slightly flawed compared with that at forward angles [12]. Although the neutron emission cross section is forward-peaked and the "deviation" is not very serious for practical use, the description of NEB of deuteron without using the eikonal and adiabatic approximations will be an important subject of nuclear reaction study. Recently, the Ichimura-Austern-Vincent (IAV) model [16] has successfully been applied to NEB in several cases [17; 8; 18]. It should be noted, however, that in the IAV model for \((d,nx)\), the kinematics of the neutron are not affected at all by the nonelastic processes for which the proton and A undergo. In this sense, the three-body kinematics are not treated in a fully consistent manner in the IAV model. On the other hand, for proton-induced inclusive processes, \((p,p^{\prime}x)\), several quantum-mechanical models [19; 20; 21; 22] have been developed and successfully reproduced experimental data. Among them, the semiclassical distorted wave model (SCDW) [22; 23; 24; 25; 26; 27] has no free adjustable parameter and allows a simple intuitive picture of \((p,p^{\prime}x)\). The original SCDW adopted the local Fermi-gas model (LFG) for initial and final nuclear s.p. states. Although LFG will be totally unrealistic for modeling specific nuclear states, it will reasonably describe the total response of a nucleus to which many initial and final states contribute. It should be noted that, in SCDW, there is no kinematical assumption or restriction for the reaction particles. This idea for treating processes via a huge number of nuclear states is expected to work also for deuteron-induced reactions. Note that the latest version of SCDW adopts the Wigner transform of one-body density matrices calculated with a s.p. model for nuclei [26] instead of LFG; for reducing numerical task, we use LFG in this work. The main purpose of this study is to extend SCDW to deuteron-induced inclusive processes. Although our ultimate goal is to describe \((d,nx)\), as the first step, we focus on the inclusive deuteron-emission process \((d,d^{\prime}x)\). We assume for simplicity that scattering waves of the incoming and outgoing deuteron can be described with a phenomenological optical potential, meaning that deuteron breakup channels do not directly contribute to \((d,d^{\prime}x)\). On the other hand, as in SCDW studies on \((p,p^{\prime}x)\), we respect kinematics of deuteron inside a nucleus, by using the local semiclassical approximation (LSCA) [22] to the deuteron distorted waves. We clarify how the proper treatment of the "refraction" of deuteron by the distorting potential is important to describe \((d,d^{\prime}x)\) experimental data. We include only the one-step process and mainly discuss the small energy-transfer region. The construction of this paper is as follows. In Sec. II we describe SCDW for the inclusive \((d,d^{\prime}x)\) reaction, applying LFG and LSCA. In Sec. 
III we compare the calculated DDXs of the inclusive \((d,d^{\prime}x)\) reaction with experimental data and demonstrate the effect of nuclear refraction. Finally, a summary is given in Sec. IV. ## II Formalism We describe the inclusive \((d,d^{\prime}x)\) reaction by one-step SCDW. The foundation of SCDW is the DWBA series expansion of the transition matrix (_T_ matrix). The _T_-matrix element, for which the target nucleus is excited from the initial single-particle state \(\phi_{\alpha}\) to the final one \(\phi_{\beta}\), is given by \[T_{\beta\alpha}=\Big{\langle}\chi_{f}^{(-)}(\mathbf{r}_{0})\phi_{\beta}(\mathbf{r})\,\Big{|}\,v(\mathbf{r}_{0}-\mathbf{r})\,\Big{|}\,\chi_{i}^{(+)}(\mathbf{r}_{0})\phi_{\alpha}(\mathbf{r})\Big{\rangle}, \tag{1}\] where \(\mathbf{r}_{0}\) and \(\mathbf{r}\) are the coordinates of the incident deuteron and the nucleon inside the target, respectively. \(\chi_{i}\) (\(\chi_{f}\)) is the distorted wave for the deuteron in the initial (final) state. The superscripts \((+)\) and \((-)\) denote the outgoing and incoming boundary conditions for \(\chi\), respectively. \(v\) is the effective interaction between the deuteron and the target nucleon. The double differential cross section (DDX) for the emitted deuteron energy \(E_{f}\) and the solid angle \(\Omega_{f}\) is given by \[\frac{\partial^{2}\sigma}{\partial E_{f}\partial\Omega_{f}}=C\frac{k_{f}}{k_{i}}\sum_{\alpha,\beta}|T_{\beta\alpha}|^{2}\delta(E_{i}+\varepsilon_{\alpha}-E_{f}-\varepsilon_{\beta}), \tag{2}\] where \(C=4\mu^{2}/(2\pi\hbar)^{2}\), \(E_{i}\) is the deuteron incident energy, \(\mu\) is the reduced mass between the deuteron and the target nucleus, and \(k_{i}\) (\(k_{f}\)) is the asymptotic momentum of the incident (emitted) deuteron. \(\varepsilon_{\gamma}\) (\(\gamma=\alpha\) or \(\beta\)) is the kinetic energy of the target nucleon. The summation is taken over all the initial and the final single-particle states, \(\alpha\) and \(\beta\), which are relevant to the inclusive \((d,d^{\prime}x)\) reaction. On expanding the squared modulus in Eq. (2), one obtains \[\frac{\partial^{2}\sigma}{\partial E_{f}\partial\Omega_{f}}=C\frac{k_{f}}{k_{i}}\int d\mathbf{r}_{0}d\mathbf{r}\chi_{f}^{*(-)}(\mathbf{r}_{0})v(\mathbf{r}_{0}-\mathbf{r})\chi_{i}^{(+)}(\mathbf{r}_{0})\] \[\times\int d\mathbf{r}_{0}^{\prime}d\mathbf{r}^{\prime}\chi_{f}^{(-)}(\mathbf{r}_{0}^{\prime})v^{*}(\mathbf{r}_{0}^{\prime}-\mathbf{r}^{\prime})\chi_{i}^{*(+)}(\mathbf{r}_{0}^{\prime})K(\mathbf{r},\mathbf{r}^{\prime}), \tag{3}\] where the kernel \(K(\mathbf{r},\mathbf{r}^{\prime})\) is defined by \[K(\mathbf{r},\mathbf{r}^{\prime}) \equiv\sum_{\alpha}\phi_{\alpha}(\mathbf{r})\phi_{\alpha}^{*}(\mathbf{r}^{\prime})\sum_{\beta}\phi_{\beta}^{*}(\mathbf{r})\phi_{\beta}(\mathbf{r}^{\prime})\] \[\times\delta(E_{i}+\varepsilon_{\alpha}-E_{f}-\varepsilon_{\beta}). \tag{4}\] When a large number of single-particle states are involved, \(K(\mathbf{r},\mathbf{r}^{\prime})\) becomes a short-ranged function of \(|\mathbf{r}-\mathbf{r}^{\prime}|\) [23; 24; 28]. The center-of-mass and relative coordinates of the \(d\)-\(N\) system, \(\mathbf{R}\) and \(\mathbf{s}\), respectively, are given by \[\mathbf{R} =\frac{A_{d}}{A_{d}+1}\mathbf{r}_{0}+\frac{1}{A_{d}+1}\mathbf{r}, \tag{5}\] \[\mathbf{s} =\mathbf{r}_{0}-\mathbf{r}.
\tag{6}\] Inversely, \(\mathbf{r}_{0}\) and \(\mathbf{r}\) are written as \[\mathbf{r}_{0} =\mathbf{R}+\frac{1}{A_{d}+1}\mathbf{s}, \tag{7}\] \[\mathbf{r} =\mathbf{R}-\frac{A_{d}}{A_{d}+1}\mathbf{s}, \tag{8}\] where \(A_{d}\) is the mass number of the deuteron, i.e., \(A_{d}=2\). With the coordinates \(\mathbf{R}\) and \(\mathbf{s}\), one can rewrite Eq. (3) as \[\frac{\partial^{2}\sigma}{\partial E_{f}\partial\Omega_{f}} =C\frac{k_{f}}{k_{i}}\int d\mathbf{R}\,d\mathbf{s}\,d\mathbf{R}^{\prime}\,d\mathbf{s}^{\prime}\] \[\times\chi_{f}^{*(-)}(\mathbf{R}+\mathbf{s}/3)v(\mathbf{s})\chi_{i}^{(+)}(\mathbf{R}+\mathbf{s}/3)\] \[\times\chi_{f}^{(-)}(\mathbf{R}^{\prime}+\mathbf{s}^{\prime}/3)v^{*}(\mathbf{s}^{\prime})\chi_{i}^{*(+)}(\mathbf{R}^{\prime}+\mathbf{s}^{\prime}/3)\] \[\times K(\mathbf{R},\mathbf{s},\mathbf{R}^{\prime},\mathbf{s}^{\prime}), \tag{9}\] where \[K(\mathbf{R},\mathbf{s},\mathbf{R}^{\prime},\mathbf{s}^{\prime}) =\sum_{\alpha}\phi_{\alpha}(\mathbf{R}-2\mathbf{s}/3)\phi_{\alpha}^{*}(\mathbf{R}^{\prime}-2\mathbf{s}^{\prime}/3)\] \[\times\sum_{\beta}\phi_{\beta}^{*}(\mathbf{R}-2\mathbf{s}/3)\phi_{\beta}(\mathbf{R}^{\prime}-2\mathbf{s}^{\prime}/3)\] \[\times\delta(E_{i}+\varepsilon_{\alpha}-E_{f}-\varepsilon_{\beta}). \tag{10}\] Here, we make two approximations to Eq. (9). One is LFG for the nuclear states and the other is LSCA for the distorted waves, as mentioned in Sec. I. In LFG, \(\phi_{\gamma}\) (\(\gamma=\alpha\) or \(\beta\)) is approximated by a plane wave with momentum \(\mathbf{k}_{\gamma}\) within a cell whose size \(|\mathbf{s}|\) is smaller than the range of \(v\). The summation over \(\gamma\) is then expressed as an integral over \(\mathbf{k}_{\gamma}\), where the threshold momentum between channels \(\alpha\) and \(\beta\) is the local Fermi momentum \(k_{F}(\mathbf{R})\), which is related to the nuclear density \(\rho(\mathbf{R})\) through \[\rho(\mathbf{R})=4\frac{4\pi}{3}\frac{k_{F}^{3}(\mathbf{R})}{(2\pi)^{3}}. \tag{11}\] In LSCA, the short-range propagation of the distorted wave \(\chi_{c}\) (\(c=i\) or \(f\)) from a reference point \(\mathbf{R}\) is approximated by a plane wave, i.e., \[\chi_{c}(\mathbf{R}+\mathbf{s}/3)\simeq\chi_{c}(\mathbf{R})e^{i\mathbf{k}_{c}(\mathbf{R})\cdot\mathbf{s}/3}. \tag{12}\] This approximation is valid because the range of the \(d\)-\(N\) interaction \(v\) is short and therefore only a small \(\mathbf{s}\) is relevant to the reaction. In Eq. (12), \(\mathbf{k}_{c}(\mathbf{R})\) is the local momentum of the deuteron. The direction of \(\mathbf{k}_{c}(\mathbf{R})\) is taken to be the same as that of the flux of the distorted wave \(\chi_{c}(\mathbf{R})\). The norm of \(\mathbf{k}_{c}(\mathbf{R})\) is given by the real part of the complex momentum satisfying the local energy conservation [24] \[\frac{\hbar^{2}k_{c}^{2}}{2\mu}=\frac{\hbar^{2}k_{c}^{2}(\mathbf{R})}{2\mu}+U_{c}(\mathbf{R}), \tag{13}\] where \(U_{c}(\mathbf{R})\) (\(c=i\) or \(f\)) is a complex distorting potential for the deuteron. LSCA can incorporate the distortion effect on the kinematics of the incident and emitted particles. This effect can be regarded as refraction due to the distorting potential because the direction of the local momentum changes continuously as a function of \(\mathbf{R}\). To clarify the refraction effect, we also consider the asymptotic momentum approximation (AMA), which replaces \(\mathbf{k}_{c}(\mathbf{R})\) with \(\mathbf{k}_{c}\), i.e., \(\mathbf{k}_{c}(\mathbf{R})\rightarrow\mathbf{k}_{c}\) in Eq. (12).
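Equations (11) and (13) translate directly into code. The sketch below is ours (not part of the paper); it assumes \(\hbar c=197.327\) MeV fm, momenta in fm\(^{-1}\), and energies and potentials in MeV.

```python
import numpy as np

HBARC = 197.327  # hbar*c in MeV fm

def local_fermi_momentum(rho):
    """k_F(R) in fm^-1 from the nucleon density rho(R) in fm^-3, Eq. (11):
    rho = (2 / (3 pi^2)) k_F^3  =>  k_F = (3 pi^2 rho / 2)^(1/3)."""
    return (1.5 * np.pi ** 2 * rho) ** (1.0 / 3.0)

def local_momentum(k_asym, U, mu):
    """|k_c(R)| in fm^-1 from Eq. (13). k_asym: asymptotic momentum (fm^-1);
    U: complex distorting potential (MeV); mu: reduced mass (MeV/c^2).
    The norm is the real part of the complex local momentum."""
    k2 = k_asym ** 2 - 2.0 * mu * U / HBARC ** 2
    return np.sqrt(np.asarray(k2, dtype=complex)).real
```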
The effect of the refraction is discussed in Sec. III.3. The validity of LSCA and AMA is examined in the Appendix. Using LFG and LSCA, one can rewrite Eq. (9) as \[\frac{\partial^{2}\sigma}{\partial E_{f}\partial\Omega_{f}} =\frac{C}{(2\pi)^{3}}\frac{k_{f}}{k_{i}}\int d\mathbf{R}|\chi_{f}^{(-)}(\mathbf{R})|^{2}|\chi_{i}^{(+)}(\mathbf{R})|^{2}\] \[\times\int_{k_{\alpha}\leq k_{F}(\mathbf{R})}d\mathbf{k}_{\alpha}\int_{k_{\beta}>k_{F}(\mathbf{R})}d\mathbf{k}_{\beta}\] \[\times\left|\int d\mathbf{s}v(\mathbf{s})e^{-i\mathbf{q}(\mathbf{R})\cdot\mathbf{s}}\right|^{2}\] \[\times\delta(\mathbf{k}_{i}(\mathbf{R})+\mathbf{k}_{\alpha}-\mathbf{k}_{f}(\mathbf{R})-\mathbf{k}_{\beta})\] \[\times\delta(E_{i}+\varepsilon_{\alpha}-E_{f}-\varepsilon_{\beta}), \tag{14}\] where \(\mathbf{q}(\mathbf{R})\) is the local momentum transfer defined by \(\mathbf{k}_{i}(\mathbf{R})-\mathbf{k}_{f}(\mathbf{R})\). In Eq. (14), Dirac's delta functions and the ranges of the integrations, \(k_{\alpha}\leq k_{F}(\mathbf{R})\) and \(k_{\beta}>k_{F}(\mathbf{R})\), guarantee that the \(d\)-\(N\) elementary process satisfies the Pauli principle and the energy and local momentum conservation in the \((d,d^{\prime}x)\) reaction. We make the on-the-energy-shell approximation to the squared modulus of the matrix element of \(v\): \[\frac{\mu_{dN}^{2}}{(2\pi\hbar^{2})^{2}}\left|\int d\mathbf{s}v(\mathbf{s})e^{-i\mathbf{q}(\mathbf{R})\cdot\mathbf{s}}\right|^{2}\simeq\left(\frac{\partial\sigma_{dN}}{\partial\Omega}\right)_{\theta_{dN}(\mathbf{R}),E_{dN}(\mathbf{R})}, \tag{15}\] where \(\mu_{dN}\) is the reduced mass of the \(d\)-\(N\) system. \(\theta_{dN}(\mathbf{R})\) is the local \(d\)-\(N\) scattering angle between the initial relative momentum \(\mathbf{\kappa}(\mathbf{R})\) and the final one \(\mathbf{\kappa}^{\prime}(\mathbf{R})\), which are defined by \[\mathbf{\kappa}(\mathbf{R}) \equiv\frac{1}{A_{d}+1}\mathbf{k}_{i}(\mathbf{R})-\frac{A_{d}}{A_{d}+1}\mathbf{k}_{\alpha}, \tag{16}\] \[\mathbf{\kappa}^{\prime}(\mathbf{R}) \equiv\frac{A_{d}}{A_{d}+1}\mathbf{k}_{f}(\mathbf{R})-\frac{1}{A_{d}+1}\mathbf{k}_{\beta}. \tag{17}\] The local \(d\)-\(N\) scattering energy \(E_{dN}(\mathbf{R})\) is defined by \[E_{dN}(\mathbf{R})=\frac{\hbar^{2}[\kappa(\mathbf{R})]^{2}}{2\mu_{dN}}. \tag{18}\] By substituting Eq. (15) into Eq. (14) and integrating over \(\mathbf{k}_{\beta}\), one obtains the following closed form of the DDX of the inclusive \((d,d^{\prime}x)\) reaction: \[\frac{\partial^{2}\sigma}{\partial E_{f}\partial\Omega_{f}} =\left[\frac{A_{d}A}{A_{d}+A}\right]^{2}\frac{k_{f}}{k_{i}}\int d\mathbf{R}\] \[\times|\chi_{f}^{(-)}(\mathbf{R})|^{2}|\chi_{i}^{(+)}(\mathbf{R})|^{2}\left[\frac{\partial^{2}\sigma}{\partial E_{f}\partial\Omega_{f}}\right]_{\mathbf{R}}\rho(\mathbf{R}), \tag{19}\] where \(A\) is the mass number of the target nucleus. The DDX of the elementary process averaged over \(\mathbf{k}_{\alpha}\) at \(\mathbf{R}\) in the Fermi sphere of radius \(k_{F}(\mathbf{R})\) is given by \[\left[\frac{\partial^{2}\sigma}{\partial E_{f}\partial\Omega_{f}}\right]_{\mathbf{R}} =\frac{1}{(4\pi/3)k_{F}^{3}(\mathbf{R})}\left[\frac{A_{d}+1}{A_{d}}\right]^{2}\] \[\times\int_{k_{\alpha}\leq k_{F}(\mathbf{R})}d\mathbf{k}_{\alpha}\left(\frac{\partial\sigma_{dN}}{\partial\Omega}\right)_{\theta_{dN}(\mathbf{R}),E_{dN}(\mathbf{R})}\] \[\times\delta(E_{i}+\varepsilon_{\alpha}-E_{f}-\varepsilon_{\beta}).
\tag{20}\] ## III Results and Discussion ### Numerical inputs We assume the Woods-Saxon shaped global optical potential by An and Cai [31] for the deuteron scattering off target nuclei. The effect of the nonlocality of the deuteron distorting potentials is taken into account by multiplying the scattering waves by the Perey factor [32] \(F_{c}(R)=[1-\mu\beta^{2}/(2\hbar^{2})U_{c}(R)]^{-1/2}\), where \(\mu\) is the reduced mass of the deuteron and the target. The range of nonlocality \(\beta\) for the deuteron is taken to be 0.54 fm [33]. We assume the Woods-Saxon form for the nuclear density, \[\rho(R)=\frac{\rho_{0}}{1+\exp\left(\frac{R-R_{\rho}}{a_{\rho}}\right)}, \tag{21}\] where the radial parameter is given by \(R_{\rho}=r_{\rho}A^{1/3}\) with \(r_{\rho}=1.15\) fm, the diffuseness parameter is set to \(a_{\rho}=0.5\) fm, and \(A\) is the mass number of the target nucleus. The constant \(\rho_{0}\) is determined so that the integral of \(\rho(R)\) is normalized to \(A\). The local Fermi momentum is calculated from the nucleon density as in Eq. (11). For the free \(d\)-\(N\) scattering cross section, we use a numerical table fitted with several Gaussian functions to reproduce the experimental data of \(p\)-\(d\) scattering from 5 to 800 MeV [34]. In this table, the cross section does not diverge at \(0^{\circ}\) because we neglect the Coulomb elastic scattering. For the free \(p\)-\(N\) scattering cross section used in the calculation of the \((p,p^{\prime}x)\) process, we use the nucleon-nucleon \(t\) matrix by Franey and Love [35; 36]. ### Results of the SCDW calculation for \((d,d^{\prime}x)\) reactions and comparison with data We show the DDXs of the inclusive \((d,d^{\prime}x)\) reaction calculated with SCDW using LSCA and AMA, and compare them with experimental data. Below we discuss the DDXs as a function of the emission energy of the deuteron, \(E_{f}\), with fixed \(\Omega_{f}\). Figure 1 shows the calculated DDXs as a function of \(E_{f}\) at several scattering angles \(\theta\). The DDXs for \({}^{58}\)Ni and \({}^{27}\)Al are calculated at the incident energies \(E_{i}=100\) and 80 MeV, and those for \({}^{93}\)Nb (\({}^{90}\)Zr) are calculated at 100 MeV (70 MeV). The experimental data at 100 MeV are taken from Ref. [29] and those at \(E_{i}=80\) and 70 MeV are taken from Ref. [30]. In the experimental data at 80 and 70 MeV, the sharp increase at very large \(E_{f}\) is due to elastic scattering events. The solid (dashed) line represents the DDX using LSCA (AMA). From Fig. 1, one can see that the present calculations of the DDXs with LSCA reproduce the experimental data well in the small energy-transfer region, \(\omega=E_{i}-E_{f}\lesssim 15\) MeV, where the one-step process is considered to be dominant. It is known that the multi-step processes, which are not included in the present calculations, become more important as \(\omega\) or \(\theta\) increases [24]. For this reason, the present calculations undershoot the data in all the cases when \(E_{f}\) is small or \(\theta\) is large. The undershooting for \({}^{90}\)Zr\((d,d^{\prime}x)\) at 70 MeV is more pronounced, possibly because of the lower incident energy and the heavy target nucleus. In contrast to the DDXs with LSCA, those with AMA underestimate the experimental data even in the small \(\omega\) region. In particular, at \(\theta\gtrsim 30^{\circ}\), the DDXs calculated with AMA cannot reproduce the experimental data in all the cases.
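For concreteness, the numerical inputs of Sec. III.1 can be sketched as follows. This is our illustration: the radial grid and the trapezoidal normalization are assumptions, while \(r_{\rho}\), \(a_{\rho}\), and \(\beta=0.54\) fm are the values quoted above.

```python
import numpy as np

HBARC = 197.327  # MeV fm

def woods_saxon_density(A, r_rho=1.15, a_rho=0.5, r_max=20.0, n=2000):
    """Woods-Saxon density of Eq. (21), with rho_0 fixed by normalizing the
    volume integral of rho(R) to the mass number A. Returns (r, rho) in fm."""
    R_rho = r_rho * A ** (1.0 / 3.0)
    r = np.linspace(1e-6, r_max, n)
    shape = 1.0 / (1.0 + np.exp((r - R_rho) / a_rho))
    rho0 = A / np.trapz(4.0 * np.pi * r ** 2 * shape, r)
    return r, rho0 * shape

def perey_factor(U, mu, beta=0.54):
    """Perey factor F_c(R) = [1 - mu beta^2 U(R) / (2 hbar^2)]^(-1/2);
    U in MeV (may be complex), mu in MeV/c^2, beta in fm."""
    return (1.0 - mu * beta ** 2 * U / (2.0 * HBARC ** 2)) ** -0.5
```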
Comparing the DDXs with LSCA and AMA, one can see that the inclusion of the nuclear refraction, i.e., the change in the kinematics of the deuteron inside the nucleus, is necessary to reproduce the experimental data.

Figure 1: Comparison of the experimental data and calculated DDXs of the inclusive \((d,d^{\prime}x)\) reaction on \({}^{58}\)Ni and \({}^{27}\)Al at 100 MeV and 80 MeV, \({}^{93}\)Nb at 100 MeV, and \({}^{90}\)Zr at 70 MeV, for different deuteron emission angles. The solid (dashed) line represents the DDXs with LSCA (AMA). The experimental data at 100 MeV are taken from Ref. [29] and those at 80 and 70 MeV are from Ref. [30].

### The effect of the refraction

Below we discuss the nuclear refraction effect on the DDX of the \({}^{27}\)Al\((d,d^{\prime}x)\) at 100 MeV. In Fig. 2, the solid (dashed) line represents the DDX using LSCA (AMA) as a function of \(\theta\) with \(\omega=10\) MeV. One can see that there are mainly two effects of the nuclear refraction. One is the extension of the allowed region of \(\theta\). For the DDX with AMA, the \((d,d^{\prime}x)\) reaction is only allowed in \(\theta=2^{\circ}\)–\(52^{\circ}\). On the other hand, the DDX with LSCA extends up to \(\theta=73^{\circ}\) and does not drop off at very small \(\theta\). This is because kinematics forbidden in AMA become allowed in LSCA by the refraction of the momentum of the deuteron in the target nucleus. It should be noted that, as one may find from Eq. (19), the \(\mathbf{R}\) dependence of the kinematics of the deuteron due to the refraction dictates the averaged local cross section of the \(d\)-\(N\) elementary process. In other words, whether the \(d\)-\(N\) process can take place or not depends on \(\mathbf{R}\) through \(\mathbf{q}(\mathbf{R})\) and \(k_{F}(\mathbf{R})\). To see this more clearly, we show in Fig. 3(a) \(\mathbf{q}(\mathbf{R})\) calculated with LSCA, and in Figs. 3(b) and (c) the kinematically-allowed reaction regions of the elementary process with LSCA and AMA, respectively, at \(\theta=60^{\circ}\) for the case of Fig. 2. In Figs. 3(b) and (c), a color bar value of "1" indicates the regions where the \(d\)-\(N\) processes are allowed, while "0" indicates the regions where they are not. In AMA, which does not include the nuclear refraction, \(\mathbf{q}(\mathbf{R})\) is the same as the asymptotic momentum transfer \(\mathbf{q}\). Figure 3(c) shows that there are no kinematically-allowed reaction regions; it is found that this is because \(q\) is too large to allow the elementary process. On the other hand, with LSCA, there is a region in which the \(d\)-\(N\) process is allowed because \(\mathbf{q}(\mathbf{R})\) is dispersed by the nuclear refraction and may have smaller values. The other effect of refraction is the decrease in the DDX at forward angles. This is because LSCA makes the kinematically-allowed reaction regions narrower. Figures 3(d), (e), and (f) are the same as Figs. 3(a), (b), and (c), respectively, but at \(\theta=10^{\circ}\). In Fig. 3(f), one can see that the \(d\)-\(N\) process is kinematically allowed in a very broad region when AMA is used. On the other hand, in Fig. 3(e), the reaction region with LSCA becomes narrower than that with AMA. This is because \(\mathbf{q}(\mathbf{R})\) is dispersed and may have too large values, as in Fig. 3(d), to kinematically allow the \(d\)-\(N\) process.
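As a rough, magnitude-only illustration of this dispersion (the actual SCDW calculation obtains the local momenta from the distorted waves; here the potential depth, geometry, and reduced mass are assumed for illustration, and the change of direction of the local momenta is ignored), one can evaluate a semiclassical local momentum \(k(R)=\sqrt{k_{\infty}^{2}-2\mu U(R)/\hbar^{2}}\) in an attractive Woods-Saxon potential and see how \(q(\mathbf{R})\) shifts away from the asymptotic momentum transfer. This toy only captures the forward-angle narrowing (larger \(q\) inside the nucleus); the opening of backward angles relies on the direction change, which is beyond this sketch.

```python
import numpy as np

HBARC = 197.327            # MeV fm
MU = 1813.0                # d-58Ni reduced mass (MeV), illustrative

def U(R, V0=-80.0, R0=4.5, a=0.6):
    """Assumed attractive Woods-Saxon distorting potential (MeV)."""
    return V0 / (1.0 + np.exp((R - R0) / a))

def k_local(k_inf, R):
    """Semiclassical local wave number: hbar^2 k^2 / 2mu = E - U(R)."""
    return np.sqrt(k_inf**2 - 2.0 * MU * U(R) / HBARC**2)

def q(k_i, k_f, theta_deg):
    """Momentum-transfer magnitude at a fixed scattering angle."""
    c = np.cos(np.radians(theta_deg))
    return np.sqrt(k_i**2 + k_f**2 - 2.0 * k_i * k_f * c)

k_i_inf, k_f_inf = 3.1, 2.9   # illustrative asymptotic momenta (fm^-1)
for theta in (10.0, 60.0):
    q_asym = q(k_i_inf, k_f_inf, theta)
    for R in (1.0, 3.0, 5.0, 8.0):
        q_loc = q(k_local(k_i_inf, R), k_local(k_f_inf, R), theta)
        print(f"theta={theta:4.0f} deg R={R:3.0f} fm  q_asym={q_asym:.2f}  q(R)={q_loc:.2f} fm^-1")
```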
From these results, we conclude that the two effects of the nuclear refraction can be understood as the changes in the kinematically-allowed reaction regions associated with the dispersion of \(\mathbf{q}(\mathbf{R})\). Figure 4 is the same as Fig. 2 but with \(\omega=40\) MeV. By comparing Figs. 4 and 2, one can see that the two effects of the refraction remain when \(\omega\) is large. Figures 5(a) and (b) show the DDXs of \({}^{27}\)Al\((d,d^{\prime}x)\) and \({}^{27}\)Al\((p,p^{\prime}x)\), respectively, at 50 MeV per nucleon with \(\omega=10\) MeV. One sees that the effects of the refraction are more significant in the \((d,d^{\prime}x)\) reaction than in the \((p,p^{\prime}x)\) reaction. This is mainly because the distorting potential between the deuteron and the target is deeper than that between the proton and the target. Although the importance of the nuclear refraction has been pointed out in the preceding studies of \((p,p^{\prime}x)\) reactions with SCDW [22], its effect there was found to be not very significant. For \((d,d^{\prime}x)\), as shown in Fig. 1, the nuclear refraction completely changes the behavior of the DDX. To analyze the \((d,d^{\prime}x)\) reaction data, inclusion of the nuclear refraction will be necessary.

Figure 2: DDXs of the \({}^{27}\)Al\((d,d^{\prime}x)\) at 100 MeV as a function of the scattering angle. The solid (dashed) line corresponds to the calculation with LSCA (AMA).

Figure 3: (a) The local momentum transfer with LSCA of \({}^{27}\)Al\((d,d^{\prime}x)\) at 100 MeV with \(\omega=10\) MeV at \(\theta=60^{\circ}\). (b) The reaction region with LSCA. A color bar value of “1” means the region in which the \(d\)-\(N\) process is allowed, while “0” means the region in which the process is not allowed. (c) Same as (b) but with AMA. (d), (e), and (f) same as (a), (b), and (c), respectively, but at \(\theta=10^{\circ}\).

## IV Summary

We have extended SCDW to the inclusive \((d,d^{\prime}x)\) reaction. The calculated DDXs of the \((d,d^{\prime}x)\) were compared with the experimental data on various targets at several deuteron emission angles. The calculated DDXs with LSCA reproduce the experimental data well in the regions where the one-step process is dominant, i.e., for small energy transfer and at forward angles. On the other hand, the DDXs with AMA, which does not include changes in the kinematics of the deuteron due to the distorting potential, undershoot the experimental data even in those regions. By comparing the LSCA and AMA results, it was found that including the nuclear refraction effect on the \(d\)-\(N\) elementary process is necessary to reproduce the experimental data. We have shown two effects of the refraction by comparing the DDXs with LSCA and AMA as a function of the scattering angle. One is the extension of the kinematically-allowed scattering angles to the backward region. The other is the decrease in the DDX at forward angles. Both effects can be understood through the changes in the regions where the \(d\)-\(N\) elementary processes are allowed. It was confirmed that the refraction effect is more significant for \((d,d^{\prime}x)\) than for \((p,p^{\prime}x)\) by comparing the changes in the DDXs of the \((d,d^{\prime}x)\) and the \((p,p^{\prime}x)\) with LSCA and AMA. To reproduce experimental data for the inclusive \((d,d^{\prime}x)\) reactions in the large energy-transfer region, it will be necessary to extend the present SCDW model to multistep processes.
Another future work will be to consider the deuteron breakup, which is not explicitly treated in this study, to describe the inclusive \((d,nx)\) reaction that is important in nuclear data science.

###### Acknowledgements.
The authors thank Y. Chazono for providing us with the \(d\)-\(N\) scattering cross section. H.N. and K.O. thank Y. Watanabe for fruitful discussions. This work has been supported in part by Grants-in-Aid of the Japan Society for the Promotion of Science (Grants No. JP20K14475, No. JP21H00125, and No. JP21H04975). The computation was carried out with the computer facilities at the Research Center for Nuclear Physics, Osaka University.

Figure 5: (a) DDXs of \({}^{27}\)Al\((d,d^{\prime}x)\) at 50 MeV per nucleon as a function of the momentum transfer. (b) Same as (a) but of \({}^{27}\)Al\((p,p^{\prime}x)\).

## Appendix A Validity of LSCA and AMA

The validity of LSCA in nucleon scattering has been examined in Refs. [37; 24]. In these papers, it is shown that LSCA works well for the propagation up to about 1.5 fm at energies above 50 MeV. The validity of LSCA for the \(\alpha\) particle was also verified in Ref. [38]. For the deuteron, however, its validity has not been confirmed. In Fig. 6, we examine the validity of LSCA and AMA for the \(d\)-\({}^{58}\)Ni distorted wave \(\chi_{i}^{(+)}\) at 50 MeV per nucleon. Figure 6 shows the propagation in the radial direction from (a) \(\mathbf{R}_{a}\equiv(6\ {\rm fm},\ 120^{\circ},\ 0^{\circ})\) and (b) \(\mathbf{R}_{b}\equiv(2\ {\rm fm},\ 90^{\circ},\ 0^{\circ})\) in the spherical coordinate representation. The solid, dashed, and dotted lines show, respectively, the real part of the exact wave function, that with LSCA, and that with AMA. In Fig. 6(a), both approximations reproduce well the propagation up to about 0.7 fm. It should be noted that the range of the interaction between the deuteron and the nucleon is about 2.2 fm, and from the factor \(1/(A_{d}+1)=1/3\) for \(\mathbf{s}\) in Eq. (7), LSCA and AMA are required to be valid for the propagation up to about 0.7 fm. In Fig. 6(b), on the other hand, while LSCA reproduces the propagation of the wave function well, AMA does not. This is because the propagation direction \(\mathbf{s}\) from \(\mathbf{R}_{b}\) is orthogonal to the asymptotic momentum \(\mathbf{k}_{c}\) of the deuteron, i.e., \(\mathbf{k}_{c}\cdot\mathbf{s}=0\) in Eq. (12). These results show that the kinematics of the deuteron at \(\mathbf{R}_{b}\) are significantly different from the asymptotic ones due to the distorting potential; thus LSCA is essential to trace the deuteron momentum inside the target nucleus.
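To convey the idea behind this comparison, here is a schematic one-dimensional toy (not the 3D distorted-wave calculation of Fig. 6): the accumulated WKB-like phase \(\int k(x)\,dx\) is compared with the LSCA phase \(k(R_{0})\,s\) (local momentum frozen at the starting point) and the AMA phase \(k_{\infty}s\) (asymptotic momentum). The potential depth, geometry, and reduced mass are assumed for illustration only.

```python
import numpy as np
from scipy.integrate import quad

HBARC, MU, E = 197.327, 1813.0, 100.0  # MeV fm; illustrative d-58Ni reduced mass and energy

def U(x):
    """Assumed attractive Woods-Saxon distorting potential (MeV)."""
    return -80.0 / (1.0 + np.exp((x - 4.5) / 0.6))

def k(x):
    """Local semiclassical wave number (fm^-1)."""
    return np.sqrt(2.0 * MU * (E - U(x))) / HBARC

k_inf = np.sqrt(2.0 * MU * E) / HBARC  # asymptotic wave number

for R0 in (6.0, 2.0):                  # analogues of R_a (surface) and R_b (interior)
    for s in (0.35, 0.7):
        wkb, _ = quad(k, R0, R0 + s)   # reference accumulated phase
        lsca = k(R0) * s               # LSCA: local momentum frozen at R0
        ama = k_inf * s                # AMA: asymptotic momentum everywhere
        print(f"R0={R0} fm, s={s} fm: WKB={wkb:.3f}, LSCA={lsca:.3f}, AMA={ama:.3f}")
```

At the surface point both approximations track the reference phase, while deep inside the potential only the frozen local momentum does, mirroring the behavior seen in Figs. 6(a) and 6(b).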
2304.09877
Neutrino Constraints and the ATOMKI X17 Anomaly
Recent data from the ATOMKI group continues to confirm their claim of the existence of a new $\sim17$ MeV particle. We review and numerically analyze the data and then put into context constraints from other experiments, notably neutrino scattering experiments such as the latest reactor anti-neutrino coherent elastic neutrino nucleus scattering data and unitarity constraints from solar neutrino observations. We show that minimal scenarios are disfavored and discuss the model requirements to evade these constraints.
Peter B. Denton, Julia Gehrlein
2023-04-19T18:00:00Z
http://arxiv.org/abs/2304.09877v2
# Neutrino Constraints and the ATOMKI X17 Anomaly

###### Abstract

Recent data from the ATOMKI group continues to confirm their claim of the existence of a new \(\sim 17\) MeV particle. We review and numerically analyze the data and then put into context constraints from other experiments, notably neutrino scattering experiments such as the latest reactor anti-neutrino coherent elastic neutrino nucleus scattering data and unitarity constraints from solar neutrino observations. We show that minimal scenarios are disfavored and discuss the model requirements to evade these constraints.

+ Footnote †: preprint: CERN-TH-2023-053

## I Introduction

Understanding the validity and meaning of any particle physics anomaly requires a careful understanding of the data pointing towards the anomaly, studies of the new physics scenarios compatible with the anomaly, a confirmation that any such scenario is consistent with all other data, and finally predictions for upcoming experiments, all within a statistical framework. In the following we will focus on data from the ATOMKI collaboration, which has reported evidence for an anomaly in a suite of measurements looking at the angular distributions of the decays of excited light nuclei to \(e^{+}e^{-}\), each of which is individually preferred over the Standard Model (SM) at \(>5\sigma\) [1, 2, 3, 4]; for a recent summary of the status see [5]. In [6, 7] nuclear physics explanations of the anomaly have been put forward; however, an explanation due to unknown nuclear physics has been deemed to be unlikely, strengthening the case for a particle physics explanation of the data. Similarly, explanations within the Standard Model based on the presence of new exotic QCD states [8, 9, 10] or so-far unaccounted-for Standard Model effects [11, 12, 13] have up to now also not led to a conclusive explanation of the ATOMKI data. Therefore we turn our focus to explanations beyond the Standard Model; indeed all the data seems to be pointing to a new state with a mass of about 17 MeV based on a straightforward examination of the kinematics of the data. The validity of the anomaly and the nature of the state are not yet fully understood. Nonetheless, some facts about it seem to be increasingly clear. After careful analyses of a variety of scenarios, the data seems to prefer a vector mediator [14, 15, 16, 17, 18], although an axial-vector mediator may also be allowed, depending on the exact treatment of other data sets and our understanding of nuclear physics [19, 20, 21, 22]. In addition, some early analyses found a preference for protophobic structures [14]; however, this statement will be reexamined here. Such an MeV-scale boson can be probed in neutrino scattering experiments, notably via the coherent elastic neutrino nucleus scattering (CEvNS) process [23]. In fact, CEvNS experiments provide strong bounds on new light mediators which couple to neutrinos and neutrons [24, 25, 26, 27, 28, 29, 30, 31, 32]. Crucial constraints on a 17 MeV mediator will come from reactor CEvNS experiments, at which there has not yet been a definitive detection. Nonetheless, several experiments have limits close enough to the expected signal to constrain the relevant parameter space. Recently several reactor CEvNS experiments have reported constraints close enough to the SM prediction to derive key constraints on the coupling of light mediators to nucleons and neutrinos [33; 34; 35; 36; 37].
In the following, we will use this new data to constrain explanations of ATOMKI, which we will show provides important requirements on complete descriptions of the anomaly. We perform a new statistical analysis of the parameters preferred by the latest ATOMKI data in the context of the vector mediator solution in section II. In particular, we examine the self-consistency of the data. We then discuss the generic constraints on such a scenario, including the latest neutrino data from reactor CEvNS experiments, in section III. We then turn to model specifics with an aim of understanding the minimal particle content required to explain the ATOMKI data beyond a new spin-1 boson at \(\sim 17\) MeV in section IV. We discuss future tests of the anomaly and conclude in section V.

## II ATOMKI hints for new physics

Over the last several years, the ATOMKI collaboration reported several statistically significant excesses in the opening angle distributions of \(e^{+}e^{-}\) pairs produced in the decays of excited states of Be [1], He [2, 3], and C [4], with multiple individual significances of \(>5\sigma\) each. These results have been interpreted as a hint for a new boson coupling to nucleons and electrons with a mass of \(m_{X}\approx 17\) MeV. Previous studies of the anomaly in Be and He showed that it is difficult to simultaneously explain these results with a scalar or pseudoscalar boson [17]. An axial vector solution benefits from avoiding the strong constraint on its coupling to protons from \(\pi^{0}\) decays, but struggles due to large theory uncertainties [17], although see also [22]. In any case, we show that these constraints, when considered numerically along with the ATOMKI data, are not as limiting as previously thought. Therefore we will focus in the following on a vector boson solution, which as we will show can explain the excess in all three elements. We consider a model with several free parameters, some to be constrained by the details of the production of \(X\) and others by the necessary decay requirements. Constraints and preferred values on these parameters from other experiments will be considered in the next section. The Lagrangian of the model reads \[\mathcal{L}\supset\mathrm{i}X_{\mu}e\varepsilon_{i}\overline{f}_{i}\gamma^{\mu}f_{i}\,, \tag{1}\] where \(X_{\mu}\) is a new vector field which couples with coupling strength \(\varepsilon_{i}\) to fermions \(i\), \(i=n,\ p,\ e,\ \nu_{e}\), as minimally required by the ATOMKI data, and \(e\) is the elementary charge. We now turn to our numerical analysis. The ATOMKI data is compelling because there is a fairly self-consistent picture of new physics at \(\sim 17\) MeV coupling to protons and/or neutrons and electrons from data from different angular distributions, widths, and elements. The ATOMKI data comes in two dimensions: the angle at which the \(e^{+}e^{-}\) excess over the background begins, and the rate leading to the excess. These can be parameterized in the quantities \(\theta^{\mathrm{min}}_{e^{+}e^{-}}\) and \(\Gamma_{X}/\Gamma_{\gamma}\), where the second parameter is the ratio of partial widths to the new \(X\) boson and to a photon, a ratio that is both experimentally and theoretically convenient to take1. We use the calculations of the kinematics for the angle and the widths in the vector case from [17] to compare with the data.
Footnote 1: Note that this ratio is often confusingly referred to as a “branching ratio.” Since \(\Gamma_{\gamma}\ll 1\) for all three elements, this ratio is quite different from the branching ratio to \(X\).

For the angular data we use the data from [3; 4; 38] as extracted in [39], including three measurements with He, four measurements with Be, and four measurements with C, see fig. 1. For the width data we use [40] for the Be data, [4] for the C data, and [3] for the He data. For the He data we also include the theory uncertainty on \(\Gamma_{E0}\) [41], the width normalization used for He coming from the \(0^{+}\to 0^{+}\) transition. We perform a simple statistical \(\chi^{2}\) test of all the data from multiple experimental runs of each of the three elements, including width and angular information, to compute the preferred parameters and the internal goodness-of-fit of the model using the procedure outlined in [17]. We do not perform a model comparison test between new physics and the Standard Model, as this requires more intimate knowledge of the experimental details and since new physics is preferred over no new physics at very high significance \(\gg 5\sigma\). An analysis of the angular data alone from 11 different measurements finds that the data is well described by a new particle of mass \(m_{X}=16.85\pm 0.04\) MeV with an internal goodness-of-fit of \(1.8\sigma\) calculated from Wilks’ theorem at \(\chi^{2}/dof=17.3/10\). We use only the best fit and uncertainty of the maximum of the angular distribution; a more complete angular distribution might slightly modify the results due to fluctuations in the data. The data is compatible with the expected signature from a \(\sim 17\) MeV mediator, so we find it unlikely that this would significantly shift the results. The angular distributions are only sensitive to the mass of the particle, which makes them a useful starting point in analyzing the ATOMKI measurements. Next, we add to the analysis the latest width information from each element and include a prior on \(\varepsilon_{p}\), since \(X\) needs to couple to protons and/or neutrons on the production side. There is a stronger constraint on the coupling of \(X\) to protons from measurements of \(\pi^{0}\) decays than the constraint on the coupling to neutrons. We will include a prior on the coupling to protons \(|\varepsilon_{p}|\lesssim 1.2\times 10^{-3}/\sqrt{\mathrm{Br}(X\to e^{+}e^{-})}\) at 90% C.L. [42; 14]; see the next section for more information. We find an acceptable fit to the data at the same mass \(m_{X}=16.83\) MeV, \(\varepsilon_{n}=\pm 5.8\times 10^{-3}\), and \(\varepsilon_{p}=\pm 2.4\times 10^{-3}\), see fig. 2. We note that the signs of \(\varepsilon_{n}\) and \(\varepsilon_{p}\) must be the same due to the non-trivial degeneracy structure shown clearly in the \(\varepsilon_{n}\)–\(\varepsilon_{p}\) panel of fig. 2. We have confirmed that the mass constraint is dominated by the angular data and is only weakly affected by the width data. The internal goodness-of-fit is \(2.9\sigma\), indicating modest tension in the data within the explanation using a vector boson.

Figure 1: Measured opening angles of the \(e^{+}e^{-}\) pairs using the mass differences between different excited states and the ground state of He (blue), Be (orange), C (green). We show contours of different \(m_{X}\) using the relation \(\theta^{\mathrm{min}}_{e^{+}e^{-}}\approx 2\arcsin(m_{X}/(m_{N^{*}}-m_{N}))\) [17].
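Two quick numerical cross-checks of these numbers (a sketch only; the transition energies \(m_{N^{*}}-m_{N}\) below are our rounded, illustrative values rather than the fit inputs): the minimum opening angles implied by the relation in the caption of Fig. 1, and the Be/He enhancement versus C suppression from the \((\varepsilon_{n}\pm\varepsilon_{p})^{2}\) width scalings discussed just below.

```python
import numpy as np

m_X = 16.85  # MeV, best fit from the angular data

# theta_min ~ 2 arcsin(m_X / (m_N* - m_N)); transition energies are illustrative
transitions = {"8Be* (18.15 MeV)": 18.15,
               "12C* (17.23 MeV)": 17.23,
               "4He* (~20.5 MeV)": 20.49}
for name, dE in transitions.items():
    print(f"{name}: theta_min ~ {2 * np.degrees(np.arcsin(m_X / dE)):.0f} deg")

# widths scale as (eps_n + eps_p)^2 for Be/He and (eps_n - eps_p)^2 for C,
# so a same-sign eps_p enhances Be/He and suppresses C relative to eps_p = 0
eps_n, eps_p = 5.8e-3, 2.4e-3
print(f"Be/He rate x {((eps_n + eps_p) / eps_n) ** 2:.2f}")
print(f"C rate x {((eps_n - eps_p) / eps_n) ** 2:.2f}")
```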
We see that the preferred value of \(\varepsilon_{p}=\pm 2.4\times 10^{-3}\) is pulled somewhat above the existing 90% limit of \(1.2\times 10^{-3}\). The data prefers this larger value of \(|\varepsilon_{p}|\) because the rate measured by ATOMKI with carbon is lower than would be expected from the Be and He measurements if \(\varepsilon_{p}=0\). This difference can be partially accommodated because the widths for Be and He are proportional to \((\varepsilon_{n}+\varepsilon_{p})^{2}\) while the width for C is proportional to \((\varepsilon_{n}-\varepsilon_{p})^{2}\); thus the inclusion of a non-zero \(\varepsilon_{p}\) leads to a partial cancellation reducing the C rate, while it enhances the rates for Be and He. To summarize our analysis of the ATOMKI data, we find that the data is in excellent agreement on the mass of the mediator, which is dominated by the angular data. The rate measurements, which provide the information about the couplings \(\varepsilon_{p}\) and \(\varepsilon_{n}\), are not in perfect agreement, but the tension is not so large compared to the overall preference for new physics.

## III Constraints

The interactions of a new mediator with \(\mathcal{O}(\text{MeV})\) mass scale can be probed with low-energy experiments. Below we summarize the dominant constraints on the couplings of a vector boson \(X\); appendix A contains further sub-dominant constraints coming from electron-neutrino scattering, invisible decays of \(X\), and the lifetime of \(X\). As briefly mentioned in the previous section, constraints on the couplings of \(X\) to quarks come from the search for rare pion decays \(\pi^{0}\to\gamma X,\ X\to e^{+}e^{-}\), where NA48/2 currently provides the strongest bound [42].

Figure 2: The parameter estimation at 1, 2, 3\(\sigma\) of \(m_{X}\), \(\varepsilon_{n}\), and \(\varepsilon_{p}\) using 11 separate angular measurements and the three latest width measurements from ATOMKI, in addition to a prior on \(\varepsilon_{p}\) from \(\pi^{0}\to\gamma X\) constraints. The single ellipses on the left two panels are the 3\(\sigma\) ellipses. The parameter not shown is minimized over in each panel. The colors correspond to the preferred parameters from individual ATOMKI measurements and the black curves are the result of the combined fit. The red cross shows the SM value of \(\varepsilon_{n,p}=0\). We assume that \(\text{BR}(X\to e^{+}e^{-})=1\). The data prefers \(m_{X}=16.83\) MeV, \(\varepsilon_{n}=\pm 0.0058\), and \(\varepsilon_{p}=\pm 0.0024\) (correlated signs), and has only a modest internal tension at the \(2.9\sigma\) level (\(\Delta\chi^{2}/dof=27.9/11\)).

We follow [14] to translate the bound to obtain the bound on the coupling to protons \(|2\varepsilon_{u}+\varepsilon_{d}|=|\varepsilon_{p}|<((0.8-1.2)\times 10^{-3})/\sqrt{\text{BR}(X\to e^{+}e^{-})}\) at 90% C.L., where the range in the constraint comes from the fast oscillating nature of the bounds around \(m_{X}=17\) MeV. We somewhat optimistically take the \(1.2\times 10^{-3}\) number, as the ATOMKI data prefers \(|\varepsilon_{p}|\) on the larger side. We note that the coupling to protons is really a combination of the couplings to up and down quarks, which will be discussed in further detail below. Since \(X\) must decay to \(e^{+}e^{-}\), it must also couple to electrons. A new light mediator coupling to electrons leads to a contribution to the electron \(g_{e}-2\) [43].
Recently a new measurement of this quantity has been reported [44], which deviates from the SM expectation using the measured value of the electromagnetic fine structure constant by \(\sim 3\sigma\) [45; 46]2. Using the SM prediction from [45], this discrepancy leads to a mild preference for a new mediator with \(\varepsilon_{e}=(7.0\pm 1.5)\times 10^{-4}\) at \(m_{X}=17\) MeV, but also disfavors \(\varepsilon_{e}>1.2\times 10^{-3}\) at 90% C.L.

Footnote 2: Note that there are two independent measurements of the fine structure constant which disagree at \(5.4\sigma\) [45; 46].

A lower limit on the coupling of \(X\) to electrons comes from searches using the bremsstrahlung reaction \(e^{-}Z\to e^{-}ZX\) and the subsequent decay of \(X\) into an electron-positron pair. From the null results of this search at the NA64 experiment [47] we get \(|\varepsilon_{e}|>(6.3\times 10^{-4})/\sqrt{\text{BR}(X\to e^{+}e^{-})}\) for \(m_{X}=17\) MeV at 90% C.L. Combined with the \(g_{e}-2\) constraint this leads to an allowed range of \(\varepsilon_{e}\in[0.63,1.2]\times 10^{-3}\) for \(\text{BR}(X\to e^{+}e^{-})=1\). For smaller electron couplings \(X\) escapes the detector and no bounds can be derived at terrestrial experiments (\(|\varepsilon_{e}|<10^{-7}/\sqrt{\text{BR}(X\to e^{+}e^{-})}\) [48]). However, from the absence of electromagnetic signals from the decay of a dark photon near the surface of a supernova progenitor star3 we get the very strong constraint \(|\varepsilon_{e}|<10^{-12}/\sqrt{\text{BR}(X\to e^{+}e^{-})}\) [53], leading to a second allowed region for small electron couplings of \(X\).

Footnote 3: Other analyses [49; 50; 51; 52] do not include this effect and find constraints weaker by two orders of magnitude at 17 MeV. Even if this effect is not included, neither our discussion here nor the unitarity issue presented later changes.

Depending on the details of the model, the mediator may well couple to neutrinos in addition to electrons. Note that a vector mediator which couples to charged leptons automatically also couples to neutrinos. A new light mediator which couples to neutrinos and neutrons is constrained by CEvNS with reactor neutrinos [29; 31; 54; 34]4. From Dresden-II data, we obtain the following constraints at \(m_{X}=17\) MeV: \(\sqrt{|\varepsilon_{n}\varepsilon_{\nu_{e}}|}<13.6\times 10^{-5}\) for \(\varepsilon_{n}\varepsilon_{\nu_{e}}>0\) and \(\sqrt{|\varepsilon_{n}\varepsilon_{\nu_{e}}|}<8.0\times 10^{-5}\) for \(\varepsilon_{n}\varepsilon_{\nu_{e}}<0\) at 90% C.L. Similar constraints exist from COHERENT and CONUS [29; 32; 36; 54; 56]. Since the ATOMKI data is equally explained for nucleon couplings of either sign, we take the more conservative of the two constraints: the positive product constraint. This then sets the constraint on the neutrino coupling that must be avoided.

Footnote 4: Note that the constraints from reactor experiments only apply to the coupling with electron neutrinos. Nevertheless, the constraints on the coupling to muon neutrinos are only slightly less stringent, while the coupling to tau neutrinos is about an order of magnitude less constrained [55].
To summarize this section, a model with a vector mediator explaining the ATOMKI anomaly at a minimum needs to fulfill the following requirements:

* feature a vector mediator with mass \(m_{X}\approx 17\) MeV,
* \(X\) needs to couple to neutrons with strength \(|\varepsilon_{n}|\approx 0.0058\),
* \(X\) needs to couple to protons with strength \(|\varepsilon_{p}|\approx 0.0024\),
* the product of the neutron and proton couplings of \(X\) needs to fulfill \(\varepsilon_{n}\varepsilon_{p}>0\),
* the coupling of \(X\) to electrons needs to be either \(|\varepsilon_{e}|\in[0.63,1.2]\times 10^{-3}\) or \(|\varepsilon_{e}|<10^{-12}\) for \(\text{BR}(X\to e^{+}e^{-})=1\), and
* the coupling of \(X\) to electron neutrinos needs to be smaller than \(|\varepsilon_{\nu_{e}}|<3\times 10^{-6}\).

While we have considered the constraints in terms of \(\varepsilon_{n}\) and \(\varepsilon_{p}\), we can recast the constraints in terms of up and down quarks. We find that, for our best fit values, \(\varepsilon_{d}=\pm 0.0031\) and \(\varepsilon_{u}=\mp 0.00033\), where the signs are again correlated (see the short numerical check at the end of this section). From these constraints we see that any model to explain the anomaly needs to violate \(SU(2)_{L}\) invariance, as the required couplings to electrons and neutrinos do not follow the expectation \(2\varepsilon_{\nu_{e}}=\varepsilon_{e}\). Similarly the couplings to up and down quarks need to be unequal. In fact, an “upphobic” (\(\varepsilon_{u}=0\)) scenario, where the coupling of \(X\) to up quarks is suppressed, fits the data about as well as the general scenario. Finally, a new mediator that explains the ATOMKI anomaly is only required to couple to first generation fermions; if it also couples to the other generations, potentially more constraints need to be taken into account. The scenario with \(\varepsilon_{e}=10^{-3}\) and \(\varepsilon_{\nu_{e}}=3\times 10^{-6}\) also leads to non-standard neutrino interactions (NSI) that affect neutrino oscillation experiments [57; 58]. Given \(\varepsilon_{\nu_{e}}=3\times 10^{-6}\) at the limit from CEvNS, we find that at \(m_{X}=16.8\) MeV the relevant NSI parameter is \(\varepsilon_{ee}^{d}=\pm 0.09\), which is currently allowed by fits to oscillation data [59]. As the least constrained NSI parameter, improving constraints on \(\varepsilon_{ee}\) is a top priority for oscillation experiments. Future probes of NSI, comparing measurements of \(\Delta m_{21}^{2}\) from JUNO [60] with solar neutrinos at DUNE [61; 62] [31], will be sensitive to \(\varepsilon_{ee}^{d}=0.019\) at \(1\sigma\) and thus provide a \(\gtrsim 4\sigma\) means of probing this scenario. While COHERENT and other \(\pi\)-DAR CEvNS experiments lose sensitivity in this mediator mass range [63], improved measurements of CEvNS with reactor neutrinos will improve upon these constraints as well [64]. Future probes of the parameter space will narrow it down substantially, as shown in fig. 3, increasing the challenges of building a viable model. Therefore the experimental progress should also be accompanied by model building advances to find a viable model to explain the anomaly.
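As a quick check of the quark-coupling recast quoted above (a sketch; we use \(\varepsilon_{p}=2\varepsilon_{u}+\varepsilon_{d}\), as in the text, together with the standard valence-quark counting \(\varepsilon_{n}=\varepsilon_{u}+2\varepsilon_{d}\), which is our assumption here):

```python
import numpy as np

# eps_p = 2*eps_u + eps_d and eps_n = eps_u + 2*eps_d (valence-quark counting)
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eps_p, eps_n = 0.0024, 0.0058          # best-fit values, same-sign branch
eps_u, eps_d = np.linalg.solve(M, [eps_p, eps_n])
print(f"eps_u = {eps_u:+.5f}, eps_d = {eps_d:+.5f}")
# -> eps_u ~ -0.00033, eps_d ~ +0.0031: the "upphobic" structure noted above
```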
## IV Scenarios

Following the general requirements on models to explain ATOMKI, we face potential model building challenges to realize small neutrino and up quark couplings while allowing for sizable electron and down quark couplings. In the following we outline model scenarios which achieve this feat. We split the scenarios into two main categories: those with large \(\varepsilon_{e}\sim 10^{-3}\) and those with small \(\varepsilon_{e}\lesssim 10^{-7}\).

### Large \(\varepsilon_{e}\) Scenarios

We set \(\varepsilon_{e}=10^{-3}\), in between the \(g_{e}-2\) and NA64 constraints. In fact, this region may be slightly preferred by the \(g_{e}-2\) measurements, and thus a discovery could be imminent at both \(g_{e}-2\) measurements and NA64-like experiments. This leads to \(\text{BR}(X\to e^{+}e^{-})=1\). The BR to \(e^{+}e^{-}\) is independent of the coupling to neutrons; since we have \(|\varepsilon_{n}|=0.0058\), the upper limit on \(|\varepsilon_{\nu_{e}}|\) is \(3\times 10^{-6}\) from CEvNS at 90% C.L.

#### IV.1.1 Flavor non-universal \(U(1)_{X}\) or anomalous \(U(1)_{B}\)

We are aware of two possible ways to proceed. The first is to set \(\varepsilon_{\nu}=0\), for example via a flavor non-universal \(U(1)_{X}\) model [16], where the charges of the first and second quark generations are identical and different from those of the third generation quarks, while the lepton charges are universal. A charge assignment can be found which allows all anomalies to be cancelled within the SM particle content, and no new fermions need to be introduced. In this model the new gauge boson mixes with the hypercharge gauge boson, which leads to \(\varepsilon_{\nu_{e}}=0\). Alternatively, a \(U(1)_{B}\) model could be introduced, which is however anomalous [14]. In this scenario the new boson (which will become the 17 MeV \(X\) state) mixes with the photon, allowing for different \(\varepsilon_{p}\) and \(\varepsilon_{n}\). There is a body of literature on the additional particle content required to cancel the anomalies; any such method can be applied [69; 70; 71; 72; 73; 74; 75; 76; 77].

#### IV.1.2 Anomaly Free \(U(1)_{B-L}\)

If one chooses to avoid an anomalous model but wants to make use of an accidental global symmetry of the SM like baryon number or \(B-L\), we quantitatively present here an anomaly free model that allows for different \(\varepsilon_{p}\) and \(\varepsilon_{n}\) in the same way as in the \(U(1)_{B}\) model. To be concrete, we focus on a broken \(U(1)_{B-L}\) model as described in [14], which immediately leads to \(\varepsilon_{p}\neq\varepsilon_{n}=-\varepsilon_{\nu_{e}}\) via a kinetic mixing between the new boson (which will become the 17 MeV \(X\) state) and the photon, as in the \(U(1)_{B}\) model above. We update this model to comply with additional neutrino constraints, which leads to two possible ways of proceeding. Mass in the dark sector is generated via a new \(B-L\) Higgs boson with a vev of 3.4 GeV that gives mass to \(X\). Since we need \(|\varepsilon_{\nu}|\) much smaller than \(\varepsilon_{n}\), we again follow [14] with the suggested extension of including an additional vectorlike leptonic \(SU(2)_{L}\) doublet. After diagonalizing the mass matrix of the various neutrinos, we find that the remaining contribution to the neutrino coupling to \(X\) is \[\varepsilon_{\nu}=-\varepsilon_{n}\cos 2\theta\,, \tag{2}\] where \(\theta\) is the mixing between the active neutrino and the new vectorlike neutrino \(\nu_{4}\). Thus we require \(|1-\tan\theta|<5\times 10^{-4}\) to be consistent with neutrino scattering data, which implies a fairly specific relation between seemingly unrelated parameters in the model. The mixing angle depends on the number of new fermions and their masses.
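A short numerical check of how finely tuned this is (a sketch combining Eq. (2) with the CEvNS limit quoted in Sec. III):

```python
import numpy as np

eps_n, eps_nu_max = 0.0058, 3e-6        # best-fit |eps_n|; CEvNS limit on |eps_nu|

cos2t_max = eps_nu_max / eps_n          # Eq. (2): |eps_nu| = |eps_n| |cos(2 theta)|
theta = 0.5 * np.arccos(cos2t_max)      # mixing angle just saturating the bound
print(f"|cos(2 theta)| < {cos2t_max:.1e}")
print(f"theta = {np.degrees(theta):.4f} deg (within {45 - np.degrees(theta):.4f} deg of 45)")
print(f"|1 - tan(theta)| = {abs(1 - np.tan(theta)):.1e}")  # ~5e-4, as quoted in the text
```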
For \(N\) new neutrinos their masses must be given by this expression: \[\sqrt{\tan\theta}=\left(\frac{60~\text{GeV}}{m_{\nu_{4}}}\right)\left(\frac{0.006}{|\varepsilon_{n}|}\right)\left(\frac{\sqrt{N}\lambda}{4\pi}\right)\simeq 1\,, \tag{3}\] where the coupling \(\lambda\) between the active neutrino and the \(\nu_{4}\) state mediated by the new Higgs boson can be as large as \(4\pi\). Smaller values of \(\lambda\) lead to smaller physical masses \(m_{\nu_{4}}\). This implies that we must have a new neutrino with a mass \(\lesssim 60\) GeV. This new state cannot be lighter than \(m_{Z}/2\) [78]: it must be heavier than \(\sim 50\) GeV, as otherwise it would contribute to the well measured \(Z\) width. In addition, since the mixing angle with the light neutrino needs to be very close to \(45^{\circ}\), this predicts very large unitarity violation of the \(\nu_{e}\) row of the measurable \(3\times 3\) PMNS matrix. This can be constrained by comparing theoretical predictions for the reactor, solar, or radioactive source neutrino fluxes with measurements. The measurement of \({}^{7}\)Be neutrinos is in good agreement with the predicted flux, which, combined with shape information from KamLAND [79], provides a fairly direct constraint on the unitarity of the \(\nu_{e}\) row at the few % level. Reactor neutrinos had a hint of a \(\sim 10\%\) deviation between the theoretical prediction and the measurement [80], although careful measurements of the relative fluxes from different isotopes indicate that a nuclear physics issue may explain this tension [81; 82]. Finally, there exists an unresolved tension in the comparison of the expected rate of neutrinos from \({}^{37}\)Ar and \({}^{51}\)Cr decays and the measurements [83; 84; 85; 86; 87; 88; 89], which seems to predict quite large mixing at the \(\sim 40\%\) level, although in tension with solar results. The strongest solar neutrino bound that directly contains the electron neutrino row normalization is the \({}^{7}\)Be measurement, which is in the low energy vacuum regime. It is measured at the 8% level at 90% C.L. and is consistent with the expectation at \(<1\sigma\) [79]. Since \({}^{7}\)Be is mostly in the vacuum dominated regime, the probability, without assuming unitarity, is \[P_{ee,D,vac}=(|U_{e1}|^{2}+|U_{e2}|^{2}+|U_{e3}|^{2})^{2}-2|U_{e1}|^{2}|U_{e2}|^{2}\,, \tag{4}\] up to small \(|U_{e3}|^{2}\) corrections. Using the measurement from KamLAND and the theory prediction of the flux, this implies an uncertainty on the electron row normalization of 4% at 90% C.L., strongly disfavoring a maximal active-sterile mixing angle. That is, at 90% C.L. the deviation is constrained to be \(\delta_{e}\equiv 1-(|U_{e1}|^{2}+|U_{e2}|^{2}+|U_{e3}|^{2})<0.04\) from solar neutrino measurements5.

Figure 3: Constraints on \(\varepsilon_{\nu_{e}}\) and \(\varepsilon_{e}\) for \(m_{X}=17\) MeV. The dark cyan region shows the current constraint from CEvNS setting \(\varepsilon_{n}=0.0058\), the lighter cyan region shows the future constraints from NSI at upcoming oscillation experiments. The red and pink regions show the excluded regions from NA64 and \(g_{e}-2\), the lighter pink region to the left shows the constraint from SN. The purple hatched region shows the preferred region for \(\varepsilon_{e}\) from \(g_{e}-2\). The currently allowed region of parameter space is shown in white. The allowed region for \(\varepsilon_{e}\) can be probed with future collider and beam dump experiments [65; 66; 67; 68].
Footnote 5: Constraints from fits to a large number of oscillation observables also exist [90; 91; 92]; however, these analyses are not truly global, as they include neither all available data nor anomalous results. In addition, some of the analyses assume that other experiments measured exactly the standard prediction, even when they did not. Nonetheless, a more comprehensive analysis may well lead to stronger constraints on \(\delta_{e}\) than that quoted here.

**Following the gallium anomaly**: If we take the gallium anomaly's \(>5\sigma\) hint of large unitarity violation in the \(\nu_{e}\) row seriously, then the above model is valid and we are already seeing the large predicted unitarity violation. The model also predicts a Majorana mass term for the SM singlet, right handed neutrinos, which explains the mass of the active neutrinos through a seesaw via the same \(B-L\) Higgs boson. This Majorana mass needs to be \(m_{M}\lesssim 10\) GeV at the largest coupling allowed by unitarity to get the known active neutrino masses. Thus we need a Dirac mass contribution of \[m_{D}=14\text{ keV}\sqrt{\frac{m_{M}}{10\text{ GeV}}}\,, \tag{5}\] which gives a mass for the active neutrino of \(m_{\nu_{L}}=0.01\) eV, and an additional neutrino exists at \(m_{\nu_{R}}=m_{M}\lesssim 10\) GeV. Thus the active neutrino mixes with the right handed sterile neutrino at the level \[\psi^{2}=1.4\times 10^{-6}\sqrt{\frac{10\text{ GeV}}{m_{M}}}\,. \tag{6}\] At the maximum value of \(m_{M}\), this mixing angle is allowed but will be tested by LHCb, ATLAS, and CMS [93]. For \(2\text{ MeV}<m_{\nu_{R}}<2\) GeV existing data from CHARM, T2K, PIENU, and Borexino already rule this out [94; 95; 96; 97; 98]. Sterile neutrinos with \(m_{\nu_{R}}\) below 2 MeV are allowed. The entire region below 1 GeV is in (model dependent) tension with cosmological results from BBN as well as combined CMB and BAO results [99; 100].

**Following the neutrino unitarity constraints**: While the gallium result could be the first hint of the very large mixing this scenario predicts, we now continue as if it is disfavored, as suggested by solar neutrino data6. This constraint cannot be evaded by increasing the number of new neutrinos, e.g. increasing \(N\) in eq. 3, either: the analog of eq. 2 for \(N\) steriles with identical charge \(B-L=1\) always reads \[\varepsilon_{\nu}=-\varepsilon_{n}(1-2\delta_{e}) \tag{7}\] independent of the number of steriles. However, one way to circumvent the unitarity violation constraints is to change the charge assignments of the new particles. Since we need to cancel the neutrino charge, which has \(B-L=-1\), instead of adding in a new vectorlike neutrino with \(B-L=1\), we assign the new neutrino a charge \(z_{\nu_{4}}>1\), which modifies eq. 2 to \[\varepsilon_{\nu}=-\varepsilon_{n}(\cos^{2}\theta-z_{\nu_{4}}\sin^{2}\theta)\,. \tag{8}\] Since the new fermions are all vectorlike, the anomalies are automatically cancelled. In this scenario the new Higgs scalar has charge \(z_{\nu_{4}}+1\). We also see that, while this new Higgs scalar in the previous case automatically provided a Majorana mass term to the right handed neutrino mixing with the active neutrinos to give a traditional seesaw mass to those neutrinos, with the larger charge assignments \(z_{\nu_{4}}>1\) this is no longer possible.

Footnote 6: See e.g. [101; 102] for scenarios with a sterile neutrino compatible with the gallium measurements that also evade the solar neutrino constraints.
The active neutrinos can still get their masses from any number of scenarios, including Dirac masses only, or via a seesaw with a third Higgs boson. Given a maximum deviation on the unitarity of the \(\nu_{e}\) row of \(\delta_{e}\), the charge of \(\nu_{4}\) must be greater than \[z_{\nu_{4}}\geq\frac{|\varepsilon_{\nu}|/|\varepsilon_{n}|+(1-\delta_{e})}{\delta_{e}}\approx\frac{1}{\delta_{e}}\,, \tag{9}\] where the approximation applies when \(|\varepsilon_{\nu}|\ll|\varepsilon_{n}|\) and \(\delta_{e}\ll 1\). Thus we need \(z_{\nu_{4}}\gtrsim 24\), which allows one to evade the unitarity constraints on \(\delta_{e}\) and neutralize the neutrino charge to below the CEvNS limit on \(\varepsilon_{\nu}\). The behavior of eq. 9 is shown in fig. 4.
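Plugging in the numbers quoted above (a minimal sketch of eq. 9):

```python
eps_n, eps_nu_max = 0.0058, 3e-6   # best-fit |eps_n| and the CEvNS limit on |eps_nu|
delta_e = 0.04                     # 90% C.L. nu_e-row unitarity bound from solar data

# Eq. (9): minimum B-L charge of nu_4 needed to neutralize the neutrino coupling
z_min = (eps_nu_max / eps_n + (1.0 - delta_e)) / delta_e
print(f"z_nu4 >= {z_min:.1f} (compare the approximation 1/delta_e = {1 / delta_e:.0f})")
```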
The small mixing angle required by the unitarity constraint can be easily achieved by pushing \(m_{\nu_{4}}\) up to \(\sim 135\) GeV in eq. 3, which is safe from constraints. Future solar neutrino measurements from DUNE and HK as well as reactor measurements from JUNO will improve this unitarity constraint and either detect a deviation or further constrain the \(\nu_{e}\) row normalization unitarity, thus increasing the required charge. On the other hand, improvements to the constraint on \(\varepsilon_{\nu}\) from e.g. CEvNS will not increase the required charge, since it is already known that \(|\varepsilon_{\nu}|\ll|\varepsilon_{n}|\). There may be additional ways to suppress the \(\varepsilon_{\nu}\) mixing in a \(U(1)_{B-L}\) scenario without the addition of new neutrinos; however, these scenarios tend to be even more baroque. Alternatively, one could study a different gauge symmetry instead of \(U(1)_{B-L}\); however, also in this case the neutrino couplings need to be suppressed via the introduction of additional fermions [103], as a vector boson which couples directly to charged leptons automatically also couples to neutrinos.

### Small \(\varepsilon_{e}\) Scenarios

One could attempt to set \(|\varepsilon_{e}|\) much lower, below the limit from E137 [48] and from supernova [53]. This would automatically ensure that the neutrino bounds are evaded in e.g. a \(U(1)_{B-L}\) model, as typically the neutrino coupling is similar to the electron coupling, without relying on the specifics of any additional model building. Such scenarios experience other problems, however. If we consider the strongest constraints on \(\varepsilon_{e}\) in this region, we find that the largest allowed value of \(\varepsilon_{e}\) is \(10^{-12}\), at the limit from supernova [53]. In this case \(X\) would not decay in time for the ATOMKI experiments, which require the dominant \(\varepsilon\) contributing to its decay width to be \(\gtrsim 10^{-5}\). Thus we must introduce a new dark fermion \(\psi\) that couples to \(X\) with \(\varepsilon_{\psi}=10^{-5}\). While this satisfies the lifetime constraint, a new problem arises. Since \[\text{BR}(X\to e^{+}e^{-})=\frac{\varepsilon_{e}^{2}}{\varepsilon_{\psi}^{2}}=10^{-14}\,, \tag{10}\] we must increase \(\varepsilon_{n}\) and \(\varepsilon_{p}\) by a factor of \(10^{7}\) to get the correct widths to explain ATOMKI shown in fig. 2, at which point the couplings are well past the unitarity limit. If the strong supernova constraints are ignored, as may be the case in the presence of additional new physics, at the limit from E137 we have \(\varepsilon_{e}=10^{-7}\) [48]. In this case we would require increasing \(\varepsilon_{n}\) and \(\varepsilon_{p}\) by only a factor of \(10^{2}\), which remains below the unitarity limit. Nonetheless, \(\varepsilon_{n}\) and \(\varepsilon_{p}\) are now too large. The constraint on \(\varepsilon_{p}\) from \(\pi^{0}\) decays is no longer relevant due to the \(1/\sqrt{\text{BR}(X\to e^{+}e^{-})}\) factor, and we require \(\varepsilon_{p}\sim 0.2\). Then, we require \(\varepsilon_{n}\sim 0.6\), which is in considerable tension with the constraint from neutron-lead scattering [14; 104], which is \(\varepsilon_{n}<0.02\) for a mediator at 17 MeV. To summarize this section, we show that, while it is possible to realize an “upphobic” scenario by, for example, gauging baryon number or \(B-L\) and invoking kinetic mixing with the photon, these models face severe constraints, largely from the neutrino sector and the fact that \(X\) must decay within the detector. In addition, we confirm previous results that small \(\varepsilon_{e}\) scenarios cannot lead to viable models given existing constraints. We find that one should consider one of the following three scenarios to achieve a viable model:

1. A flavor non-universal \(U(1)_{X}\) model without the introduction of new fermions, or an anomalous \(U(1)_{B}\) scenario which requires additional quarks to cancel the anomalies.
2. A \(U(1)_{B-L}\) scenario that explains neutrino masses with an additional heavy neutrino at \(50\text{ GeV}\lesssim m_{\nu_{4}}\lesssim 60\) GeV and large mixing consistent with the gallium anomaly, but in tension with solar neutrinos. Additionally a Majorana neutrino with MeV-GeV mass is predicted, which can be tested with upcoming experiments.
3. A \(U(1)_{B-L}\) scenario with an additional heavy neutrino at \(m_{\nu_{4}}\gtrsim 135\) GeV and large \(B-L\) charges.

Additional, more involved models are likely possible as well, see e.g. [105; 106].

Figure 4: The minimum required charge \(z_{\nu_{4}}\) on the new Dirac neutrino in the anomaly free \(U(1)_{B-L}\) scenario, needed to sufficiently neutralize the active neutrino charge below the limit from CEvNS while remaining consistent with unitarity probes, see eq. 9. The regions above and to the right of the dotted lines are disfavored.

## V Conclusions

ATOMKI has reported several measurements that indicate new physics at high significance. While their results have not been directly tested elsewhere, they are compelling due to their agreement in the implied mass of the particle from the measurements of the opening angles. While it is unambiguous that they seem to point to a new particle with a mass just below 17 MeV, the nature of that particle is unclear, as is any new dark sector it may provide a window into. We provide an up-to-date statistical test of the data. We include angular and width data from measurements of three separate targets and separately constrain the coupling to protons and neutrons as well as the new particle's mass. We find that there are some non-trivial degeneracies. We also find that, while the different measurements do not perfectly agree with each other, the internal tension is not too large compared to the large preference for new physics over the Standard Model. Contrary to previous work, we allowed for the possibility of couplings to protons, which leads to the realization that the data seems to prefer couplings to down quarks, i.e. an “upphobic” flavor structure. Reviewing other constraints on MeV scale physics makes it clear that the model building space is fairly constrained. Notably, the latest reactor neutrino measurements and the unitarity of the neutrino mixing matrix place key constraints.
We find that it is not possible to consistently explain the ATOMKI data with just one new particle, and we outline a set of relatively minimal scenarios in several different directions to generally illustrate the minimal model building requirements to explain the anomaly. In addition, since the parameter space is somewhat tightly constrained, we anticipate that a confirmation could happen elsewhere soon, or the constraints will require even more complicated models to explain the ATOMKI data. In fact, measurements of \(g_{e}-2\) show a slight anomaly in the relevant region of parameter space. In the future, constraints from LHCb [65], DarkQuest [66], FASER [67], NA64 [107], Mu3e phase II [68], BESIII [108], and experiments using rare pion or kaon decays [109] will further test the couplings of \(X\) to quarks and electrons, potentially even closing the whole allowed parameter space. Also, upcoming neutrino oscillation experiments as well as CEvNS experiments will improve the constraints on light mediators coupling to neutrinos and improve the bounds on neutrino unitarity, making it more and more challenging to develop self-consistent anomaly free models that explain the ATOMKI anomalies. Furthermore, several experiments are planned to directly test the ATOMKI anomaly, like DarkLight at the TRIUMF ARIEL e-linac [110; 111], a recently approved electron scattering experiment at Jefferson Lab [112], as well as the PADME experiment [113; 114]; see [5] for a discussion of ongoing and upcoming efforts to test this anomaly. While the model building to explain ATOMKI is somewhat involved, given the relatively compelling nature of the anomalies we anticipate that a compelling story will evolve in the coming years, regardless of the outcome.

###### Acknowledgements.
We thank I. Brivio and J. Feng for helpful comments. PBD acknowledges support by the United States Department of Energy under Grant Contract No. DE-SC0012704. JG thanks the HET group at BNL for kind hospitality during the writing of the paper.

## Appendix A Further constraints on \(X\)

In sec. III we collected the dominant constraints on \(X\). Here we mention sub-dominant constraints which are nevertheless important for the validity of the model. A constraint on the coupling of the \(X\) boson to electrons comes from the required lifetime of \(X\) in the ATOMKI experiment. Following [14], we use that the distance between the target where the excited nuclear state is formed and the detector is \(\mathcal{O}(\text{cm})\). We then require that \(X\) propagates no more than 1 cm from its production point before it decays into electrons, which leads to a constraint on its coupling to electrons of \(\varepsilon_{e}>1.3\times 10^{-5}\times\sqrt{\text{BR}(X\to e^{+}e^{-})}\). NA64 also conducted a search for \(X\) using its invisible decays in the process \(e^{-}Z\to e^{-}ZX\), \(X\to\text{invisible}\) [115]. The constraint is \(\varepsilon_{e}<(5.2\times 10^{-5})/\sqrt{\text{BR}(X\to\text{inv})}\) for \(m_{X}=17\) MeV at 90% C.L. A constraint from neutrino-electron scattering experiments bounds the product \(\varepsilon_{e}\varepsilon_{\nu_{e}}\). The TEXONO experiment provides the strongest constraints for \(m_{X}\approx 17\) MeV [116]: \(\sqrt{|\varepsilon_{e}\varepsilon_{\nu_{e}}|}<7\times 10^{-5}\) for \(\varepsilon_{e}\varepsilon_{\nu_{e}}>0\) and \(\sqrt{|\varepsilon_{e}\varepsilon_{\nu_{e}}|}<3\times 10^{-4}\) for \(\varepsilon_{e}\varepsilon_{\nu_{e}}<0\) at 90% C.L.
These constraints are not dominant in the context of ATOMKI, but are orthogonal and depend on a different combination of parameters than the leading constraints. As these constraints improve in the future, the dominant constraints may change in nontrivial ways. Additional model-dependent constraints may also exist.
2306.11699
GenPlot: Increasing the Scale and Diversity of Chart Derendering Data
Vertical bars, horizontal bars, dot, scatter, and line plots provide a diverse set of visualizations to represent data. To understand these plots, one must be able to recognize textual components, locate data points in a plot, and process diverse visual contexts to extract information. In recent works such as Pix2Struct, Matcha, and Deplot, OCR-free chart-to-text translation has achieved state-of-the-art results on visual language tasks. These results outline the importance of chart-derendering as a pre-training objective, yet existing datasets provide a fixed set of training examples. In this paper, we propose GenPlot; a plot generator that can generate billions of additional plots for chart-derendering using synthetic data.
Brendan Artley
2023-06-20T17:25:53Z
http://arxiv.org/abs/2306.11699v1
# GenPlot: Increasing the Scale and Diversity of Chart Derendering Data

###### Abstract

Vertical bars, horizontal bars, dot, scatter, and line plots provide a diverse set of visualizations to represent data. To understand these plots, one must be able to recognize textual components, locate data points in a plot, and process diverse visual contexts to extract information. In recent works such as Pix2Struct, Matcha, and Deplot, OCR-free chart-to-text translation has achieved state-of-the-art results on visual language tasks. These results outline the importance of chart-derendering as a pre-training objective, yet existing datasets provide a fixed set of training examples. In this paper, we propose GenPlot; a plot generator that can generate billions of additional plots for chart-derendering using synthetic data.

## 1 Introduction

Traditionally, OCR-aware methods such as LayoutLM (Xu et al., 2020), PresSTU (Kil et al., 2022), PaLI (Chen et al., 2023), and ChartBERT (Akhtar et al., 2023) have been used to extract information from plots. While these models can accurately extract text, they require a dataset of labeled components, which can be expensive to obtain. Additionally, plots do not always represent numerical components exactly (e.g., scientific notation, percentages, etc.), and therefore post-processing is required to extract numerical information. In recent works like Donut (Kim et al., 2022), Pix2Struct (Lee et al., 2022), Matcha (Liu et al., 2023b), and Deplot (Liu et al., 2023a), OCR-free chart-to-text translation methods are used. Donut focuses on document understanding, whereas Pix2Struct aims to provide a generic pre-trained checkpoint for many downstream visual language tasks. Matcha and Deplot are concurrent to Pix2Struct, as the models use the same underlying architecture. The models all require comprehensive datasets for pre-training tasks to achieve state-of-the-art results on the PlotQA (Methani et al., 2020) and ChartQA (Masry et al., 2022) benchmarks. It is computationally expensive to obtain these large datasets for pre-training, and this is where GenPlot can help. GenPlot provides a framework to generate billions of possible plot combinations for chart-derendering tasks. GenPlot is a standalone Python script built using Matplotlib (Hunter, 2007) that can generate bar, scatter, line, and dot plots. The configuration of the hyperparameters in the script can be modified as each user sees fit. By default, we use similar categorical labels determined using GloVe (Pennington et al., 2014) embeddings and randomly generated numerical data. We also provide a pre-generated set of 500,000 plots that can be used in place of the generator. GenPlot provides a way for researchers to quickly increase the scale of data for chart-derendering tasks.

## 2 Related Work

Recent works such as PlotQA (Methani et al., 2020), ChartQA (Masry et al., 2022), Matcha (Liu et al., 2023), and DVQA (Kafle et al., 2018) generate plots with a focus on Visual Question Answering (VQA) (Agrawal et al., 2016). PlotQA provides bar, line, and scatter plots. ChartQA provides bar, line, and pie plots, and DVQA provides bar plots. It is unclear which plot types are generated for Matcha pretraining, as the data is not available in the public domain. The result of these works is high-quality question-answer pairs, rather than a means to generate large-scale datasets. An existing work for chart generation is FigureQA (Kahou et al., 2018). This source provides synthetically generated data, bounding boxes, and question-answer pairs.
FigureQA also provides 4 different plot types, and the authors have released the code for data generation. The generation process is limited to 100 colors, and fixed chart components such as gridlines, labels, and legends. GenPlot extends the work of FigureQA by providing a means to generate a larger and more diverse set of plots. It does this through random color sampling, and variability in margins, grids, ticks, labels, plot sizes, and more. To our knowledge, there is no existing framework to generate plots with such a high degree of variability, which motivated us to build this framework. In table 1, we compare our dataset with existing works. "Generate+" indicates whether the source provides the ability to generate new plots, "# Tables" is the number of plots in the dataset, and "Unique Plot Types" counts the number of unique plot types (i.e., bar, line, dot, pie, etc.). The orientation of the chart type is not considered; for example, horizontal bar plots and vertical bar plots are counted as one.

## 3 Metadata and Generation

Metadata for each plot is stored in a similar format to the Pix2Struct chart-to-table models (Lee et al., 2022). For example, the string may look like "0 | 1 <0x0A> 1 | 2 <0x0A> 2 | 7". In this string, there are 3 data points: (0,1), (1,2), and (2,7). The first value in each pair is the x value, and the second value is the y value. The x and y values are separated by the "|" character in the string. Each pair is then separated by the "<0x0A>" character sequence. All data points are stored in order from left to right, with the exception of horizontal bar plots, which are stored in a top-down fashion. Regardless of the data type on each axis, the format of the metadata stays the same.

### Text Generation

Plots can contain text in the title, subtitles, and as categorical labels. To obtain groups of related words we utilized GloVe (Pennington et al., 2014) embeddings. First, we set a predefined list of common objects like "toothbrush", "coffee", "notebook", etc. Then, we sampled the 25 most similar words for each of the objects and added these to a vocabulary list. Any words that contained non-ASCII characters were not included. We performed this step twice, which resulted in 42,744 groups of similar words. We sampled in this way to ensure that label groups were related rather than randomly selected words. For each plot, the main title and x-axis title contain a sample of 3 to 7 words from a random group. The y-axis title contains a sample of 1-4 words. We set this range to reduce the chance of title and tick label overlap during the plot generation process. For categorical labels, we use a list of place names, months, days, and part-numerical strings. The place names include a list of 2123 countries, regions, counties, and states. Months and days can be generated in numerous ways and occasionally contain numerical values. For example, the months and days could appear in the following formats: "Jan", "December", "Jan-Feb", "Apr-10", "Tues", "Friday", etc. The part-numerical strings appear as follows: "10-20", "30-40", etc. We included part-numerical strings as categorical labels to create "difficult" examples.

### Numerical Generation

The numerical data generation is done using a random polynomial sampler and a random linear sampler implemented in Numpy (Harris et al., 2020). Each sampler generates sequences of integers or floating point numbers and adds a small degree of Gaussian noise to the set of values. The values are then scaled by a factor between 0.01 and 1,000,000.
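A minimal sketch of these two pieces — a noisy polynomial sampler and the metadata string format — might look like the following (function names and the exact noise/coefficient choices are our own illustration, not GenPlot's actual implementation):

```python
import numpy as np

def sample_series(n_points, rng):
    """Noisy random-polynomial sampler in the spirit of Sec. 3.2 (illustrative)."""
    coeffs = rng.uniform(-1.0, 1.0, size=rng.integers(1, 4))  # random low-order polynomial
    x = np.arange(n_points)
    y = np.polyval(coeffs, x / max(n_points - 1, 1))
    y = y + rng.normal(0.0, 0.05, size=n_points)              # small Gaussian noise
    y = y * rng.uniform(0.01, 1_000_000)                      # global scale factor
    return x, y

def to_metadata_string(xs, ys):
    """Serialize points in the format described above: 'x | y' pairs joined by '<0x0A>'."""
    return " <0x0A> ".join(f"{x} | {y:.2f}" for x, y in zip(xs, ys))

rng = np.random.default_rng(7)
xs, ys = sample_series(3, rng)
print(to_metadata_string(xs, ys))  # e.g. "0 | 812.41 <0x0A> 1 | 903.07 <0x0A> 2 | 1010.55"
```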
## 4 Plot Types and Styles

In this section, we outline the default settings and parameter combinations in the plot generation process. Each plot is generated using Matplotlib (Hunter, 2007). We select 8 unique styles and 9 font families as default parameters. At the time of generation, the style and font family are randomly selected. We also randomly remove ticks, grids, and spines from each plot. To generate the data-point colors, we randomly sample RGB values between 40 and 200. This ensures that the data points can be seen on light and dark backgrounds, and gives more color variability to the plots. Finally, we enforce graph conventions from the Benetech - Making Graphs Accessible competition (Andrews et al., 2023). These conventions are in place so that generated plots are representative of non-generated plots.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline **Dataset** & **Generate+** & **\# Tables** & **Bar** & **Line** & **Pie** & **Scatter** & **Dot** & **Unique Plot Types** \\ \hline ChartQA & No & 22k & 1 & 1 & 1 & 0 & 0 & 3 \\ PlotQA & No & 224k & 1 & 1 & 0 & 1 & 0 & 3 \\ MATCHA & No & 270k & 1 & 1 & 1 & 0 & 0 & 3 \\ DVQA & No & 300k & 1 & 0 & 0 & 0 & 0 & 1 \\ FigureQA & No & 100k & 1 & 1 & 1 & 1 & 0 & 5 \\ \hline GenPlot (ours) & Yes & 500k & 1 & 1 & 0 & 1 & 1 & 5 \\ \hline \end{tabular} \end{table} Table 1: Dataset Statistics

Figure 1: Vertical Bar Plot

### Bar plot

There are 200,000 bar plots in the pre-generated data. Half of these are vertical bar plots, and the other half are horizontal bars. These plot types are very similar, with only the axes being flipped. We discuss the bar plots in the context of vertical bars for the remainder of this section. The number of generated bars is between 2 and 20, with oversampling to sizes of around 6 bars. Variation is added to the spacing between bars and the plot margins. The X-axis labels are integers or strings, and the Y-axis values are numerical. No Y-axis value can be more than 200 times the minimum value in its set of values. Each X-axis label corresponds to a single bar in the plot.

### Scatter plot

There are 100,000 scatter plots in the pre-generated data. The number of generated points in each plot is between 3 and 86. The two main variations of the scatter plot are randomly sampled points and points that follow a path-like pattern. For a high number of randomly sampled points, values are likely to overlap; therefore, we implement a custom sampler to ensure that no two points completely overlap. The line-like scatter plot is generated using the polynomial sampler and provides a version of the plot type similar to the line plot. Both the X-axis and Y-axis labels are numerical.

### Line plot

There are 100,000 line plots in the pre-generated data. The number of generated points is between 2 and 20, with oversampling to sizes of around 7 values. Values found on the X-axis are either dates or integer values in ascending order. The Y-axis is strictly numerical. One unique characteristic of the line plot is that X-axis labels do not always correspond to data points. This is done to ensure that OCR-free systems do not just read the data labels, and instead learn from the entire context. Additionally, dots are randomly added to the line, and random smoothing is occasionally applied. This is done to increase variability.

### Dot Plot

There are 100,000 dot plots in the pre-generated data. The number of generated points is between 2 and 21 values. Values on the Y-axis are always integers between 1 and 10, and the X-axis values can be strings or numerical. A unique characteristic of the dot plots is that numerical X-axis labels are not always present, as they can be inferred from surrounding labels. Additionally, Y-axis ticks are sometimes completely removed, as counts can be inferred from the plot. See Figure 5 for an example of this. Also, as of this writing, there is no standardized Matplotlib function for dot plots, so we implement a custom dot plot function using the scatter function.
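Since Matplotlib lacks a built-in dot plot, a routine along the following lines can emulate one with the scatter function, while also demonstrating the randomized styling described at the start of this section. This is a sketch under our own assumptions (style list, probabilities, and helper names), not GenPlot's exact implementation.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng()

def dot_plot(ax, labels, counts, color):
    # Stack one scatter marker per unit count above each category label.
    for i, c in enumerate(counts):
        ax.scatter(np.full(c, i), np.arange(1, c + 1), s=80, color=color)
    ax.set_xticks(range(len(labels)))
    ax.set_xticklabels(labels)
    ax.set_ylim(0, counts.max() + 1)

plt.style.use(rng.choice(["ggplot", "bmh", "fast", "grayscale"]))  # random style
fig, ax = plt.subplots()
color = rng.integers(40, 201, size=3) / 255        # RGB channels in [40, 200]
labels = ["Jan", "Feb", "Mar", "Apr"]
counts = rng.integers(1, 11, size=len(labels))     # Y values: integers 1..10
dot_plot(ax, labels, counts, color)
if rng.random() < 0.5:                             # sometimes drop Y ticks, since
    ax.yaxis.set_visible(False)                    # counts are inferable from dots
fig.savefig("dot_plot.png")
```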
Figure 2: Horizontal Bar Plot

Figure 3: Scatter Plot

Figure 4: Line Plot

Figure 5: Dot Plot

## 5 Conclusion

We have proposed a framework for large-scale plot generation for chart derendering. We use a variety of coloring, spacing, and sampling options to yield a diverse set of generated plots across 4 plot types. We provide a pre-generated set of 500,000 plots, which exceeds the size of existing datasets. The code for GenPlot can be found on GitHub (link), along with instructions on how to get started. The dataset containing the pre-generated plots can be found on Kaggle.

### Limitations

Though we built a diverse plot generator, there are still many plot types that we did not implement. For example, we did not include pie plots, histograms, or treemaps. It remains up for debate whether adding more plot types would be beneficial for chart derendering as a pre-training task. Additionally, we added safeguards to reduce the likelihood of label overlap, but there is still a possibility of this happening in newly generated plots. We were unable to find a way to validate this without a human in the loop.

### Ethics Statement

During the data generation process, we considered several ethical issues. Firstly, we acknowledge the gender and race biases found in GloVe embeddings, in which occupations and degrees of similarity between words reflect stereotypes. We do not condone the stereotypes found in this model, and recognize that they are a result of its training data.
2301.02124
Rényi entropies for one-dimensional quantum systems with mixed boundary conditions
We present a general method for calculating Rényi entropies in the ground state of a one-dimensional critical system with mixed open boundaries, for an interval starting at one of its ends. In the conformal field theory framework, this computation boils down to the evaluation of the correlation function of one twist field and two boundary condition changing operators in the cyclic orbifold. Exploiting null-vectors of the cyclic orbifold, we derive ordinary differential equations satisfied by these correlation functions. In particular, we obtain an explicit expression for the second Rényi entropy valid for any diagonal minimal model, but with a particular set of mixed boundary conditions. In order to compare our results with numerical data for the Ising and three-state Potts critical chains, we also identify and compute the leading finite size corrections.
Benoit Estienne, Yacine Ikhlef, Andrei Rotaru
2023-01-05T16:01:36Z
http://arxiv.org/abs/2301.02124v1
# Rényi entropies for one-dimensional quantum systems with mixed boundary conditions

###### Abstract

We present a general method for calculating Rényi entropies in the ground state of a one-dimensional critical system with mixed open boundaries, for an interval starting at one of its ends. In the conformal field theory framework, this computation boils down to the evaluation of the correlation function of one twist field and two boundary condition changing operators in the cyclic orbifold. Exploiting null-vectors of the cyclic orbifold, we derive ordinary differential equations satisfied by these correlation functions. In particular, we obtain an explicit expression for the second Rényi entropy valid for any diagonal minimal model, but with a particular set of mixed boundary conditions. In order to compare our results with numerical data for the Ising and three-state Potts critical chains, we also identify and compute the leading finite size corrections.

###### Contents

* 1 Introduction
* 2 The cyclic orbifold
  * 2.1 The cyclic orbifold on the Riemann sphere
    * 2.1.1 Symmetry algebra and operator content
    * 2.1.2 Null vectors for untwisted operators
    * 2.1.3 The induction procedure
  * 2.2 The cyclic orbifold on the upper half plane
  * 2.3 Operator algebra of the cyclic orbifold BCFT
    * 2.3.1 Calculation of boundary-boundary structure constants
    * 2.3.2 Orbifold bulk-boundary structure constants
* 3 Differential equations in the \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{3}\) orbifold BCFT
  * 3.1 Setup for the calculations
  * 3.2 The function \(\langle\Psi_{12}\cdot\sigma\cdot\Psi_{12}\rangle\) in a generic \(\mathbb{Z}_{2}\) orbifold
  * 3.3 The function \(\langle\Psi_{12}\cdot\sigma_{h}\cdot\Psi_{12}\rangle\) in a generic \(\mathbb{Z}_{2}\) orbifold
  * 3.4 The function \(\langle\Psi_{12}\cdot\sigma\cdot\Psi_{12}\rangle\) in a generic \(\mathbb{Z}_{3}\) orbifold
  * 3.5 The function \(\langle\Psi_{12}\cdot\sigma_{13}\cdot\Psi_{12}\rangle\) in the \(\mathbb{Z}_{3}\) orbifold of the Ising model
  * 3.6 More hypergeometric differential equations in the Ising cyclic orbifold BCFTs
* 4 Numerical checks and finite-size corrections in quantum chains
  * 4.1 The Ising quantum chain with mixed BC
  * 4.2 The three-state Potts quantum chain with mixed BC
* 5 Conclusion
* A Mother BCFT conventions
* B Computation of orbifold structure constants
  * B.1 Composite twist one-point structure constant in the \(\mathbb{Z}_{N}\) orbifold BCFT
  * B.2 Bulk-boundary structure constant in the \(\mathbb{Z}_{2}\) orbifold CFT
* C Orbifold Ward identities for bulk fields
* D Rényi entropies for the critical Ising chain with mixed fixed BC
* E Hypergeometric differential equation
* F Fusion rules in the \(\mathbb{Z}_{N}\) orbifold
* G Derivation of differential equation in the \(\mathbb{Z}_{3}\) orbifold BCFT
* H Numerical implementation of the Frobenius method

## 1 Introduction

The understanding of quantum entanglement has proved to be a research topic of continued and central interest for physicists working in domains as diverse as high energy physics, condensed matter theory and quantum information. Entanglement measures have turned out to be useful diagnostic tools for tensor network algorithms, quantities of interest for the AdS/CFT correspondence, and, most relevantly for the present work, a powerful tool for probing the physics of quantum many-body systems.
With respect to the latter, the study of entanglement has proved crucial to the study of phase transitions in one-dimensional quantum systems, by allowing their detection and the characterization of their critical exponents and corresponding central charge [1, 2, 3, 4]. Important applications of entanglement are found in higher dimensions too. We mention, for two-dimensional systems, the establishment of intrinsic topological order and various anyonic quantum dimensions [5, 6] and the detection and counting of critical Dirac fermions [7, 8, 9, 10]. Finally, entanglement can also be used, in two [11, 12, 13, 14, 15] or higher dimensions [16, 17], to reveal gapless interface modes.

The basic setup is as follows: we consider a quantum system in a pure state \(|\Psi\rangle\), and a spatial bipartition of said system into two complementary subregions \(A\) and \(B\). The entanglement between them is then encoded in the reduced density matrix \(\rho_{A}=\mathrm{Tr}_{B}|\Psi\rangle\langle\Psi|\), and it can be quantified through entanglement measures, such as the _Rényi entanglement entropies_ [18, 19, 20, 21, 22]

\[S_{n}(A)=\frac{1}{1-n}\log\mathrm{Tr}_{A}\left(\rho_{A}^{n}\right)\,, \tag{1.1}\]

and in particular the \(n\to 1\) case corresponding to the well-known _von Neumann entropy_:

\[S(A)=-\mathrm{Tr}_{A}\left(\rho_{A}\log\rho_{A}\right)\,. \tag{1.2}\]

While the focus on entanglement entropies has been mostly theoretical, in recent years experimental proposals as well as actual experiments have been designed to measure them [23, 24, 25, 26, 27, 28]. For strongly correlated quantum systems, the theoretical computation of entanglement entropies is a technically challenging endeavour. However, if these systems are one-dimensional and critical, the formidable toolbox of two-dimensional Conformal Field Theory (CFT) is available to tackle such computations.

The calculation of entanglement entropies through such methods rests on two crucial insights. The first insight is that, for integer values of \(n\), and a subsystem \(A=\cup_{i}[u_{i},v_{i}]\) built as the union of some disjoint intervals, the moments of the reduced density matrix \(\mathrm{Tr}_{A}\left(\rho_{A}^{n}\right)\) can be expressed as the partition function of an \(n\)-sheeted Riemann surface with conical singularities corresponding to the endpoints of the intervals \([u_{i},v_{i}]\) [29, 2]. Such partition functions have been evaluated, with significant toil, for free theories and some special cases of interacting models [4, 30, 31, 32, 33, 34, 35, 36, 37, 38]. In general, however, a second insight is needed to make progress: the replication of the _spacetime_ of the theory can be "exchanged" for the replication of the _target space_ of the CFT [39, 40, 41]. Such a construction, known in the literature as the _cyclic orbifold CFT_ [40], is built from the permutation symmetric product of \(n\) copies of the original CFT (referred to as _the mother CFT_), by modding out the discrete subgroup \(\mathbb{Z}_{n}\) of cyclic permutations. In this framework, the conical singularities of the mother CFT defined on the replicated surface are accounted for by insertions of _twist fields_ [39] in cyclic orbifold correlators. Thus, by computing correlators of twist operators, one can evaluate \(\mathrm{Tr}_{A}\left(\rho_{A}^{n}\right)\) for a variety of setups.
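Both (1.1) and its \(n\to 1\) limit (1.2) follow directly from the spectrum of \(\rho_{A}\). As a minimal numerical illustration (with a random pure state standing in for an actual many-body ground state):

```python
import numpy as np

rng = np.random.default_rng(0)

# A pure state on 8 qubits, bipartitioned into A and B (4 sites each):
# writing Psi_{ab}, the reduced density matrix is rho_A = Psi Psi^dagger.
psi = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
psi /= np.linalg.norm(psi)
rho_A = psi @ psi.conj().T

lam = np.linalg.eigvalsh(rho_A)
lam = lam[lam > 1e-12]                        # discard numerical zeros
for n in (2, 3, 4):                           # Renyi entropies, eq. (1.1)
    print(n, np.log(np.sum(lam ** n)) / (1 - n))
print("S_vN:", -np.sum(lam * np.log(lam)))    # von Neumann limit, eq. (1.2)
```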
To give a few examples, one can easily adapt the twist field formalism to encode modified initial conditions around the branch points [42], which is fitting for computations of more refined entanglement measures such as the symmetry-resolved entanglement entropy [43, 44, 45, 46, 47, 48] or for explorations of entanglement in non-unitary systems [42, 49]. Arguably the most renowned result obtained in this framework is [1, 2, 50, 51, 52, 53]

\[S_{n}(\ell)\underset{\ell\to\infty}{\sim}\frac{c}{6}\,\frac{n+1}{n}\,\log\ell\,, \tag{1.3}\]

which gives the _universal_ asymptotic behaviour for the ground state entanglement entropy of an interval of length \(\ell\) in an infinite system (with \(c\) the central charge of the critical system).

In this article, we consider the Rényi entanglement entropy in an open system with _mixed boundary conditions_, when the subregion \(A\) is a single interval _touching the boundary_: we take the boundary condition (BC) at one end of the chain to be different from the BC at the other end (see Figure 1). In the scaling limit, such an open critical system is described by a Boundary Conformal Field Theory (BCFT), with a well-understood [54, 55, 56, 57] correspondence between the chiral Virasoro representations and the _conformal boundary conditions_ allowed by the theory, and an algebra of boundary operators that interpolate between them.

The more accessible setup of an interval touching one of two _identical_ boundaries has been thoroughly analysed using either conformal field theory methods [58, 59, 60, 2, 57, 52] or exact free fermion techniques [62, 63, 64]. Such configurations are also well-handled numerically, through density-matrix renormalization group (DMRG) techniques [65, 66, 67, 68]. In that setup, the subsystem \(A\) is at the end of a finite system with the same boundary condition \(\alpha\) on both sides. The computation of the Rényi entanglement entropies rests on the evaluation of a twist one-point function on the upper half-plane. Such a correlation function is straightforwardly fixed by conformal invariance, and as a consequence the entanglement entropy exhibits a simple dependence on the interval and system sizes. Explicitly, in the case of an interval of length \(\ell\) at the end of a system of size \(L\), one finds the leading universal behaviour [2]:

\[S_{n}(\ell)\sim\frac{c}{12}\frac{n+1}{n}\log\left[\frac{2L}{\pi a}\sin\left(\frac{\pi\ell}{L}\right)\right]+\log g_{\alpha}\,, \tag{1.4}\]

where \(a\) is the lattice spacing and \(g_{\alpha}\) is the _universal boundary entropy_ [69] associated to the boundary condition \(\alpha\).

When one studies systems with mixed BC, at the level of the BCFT one has to introduce _boundary condition changing operators_ (BCCOs), and thus the corresponding correlators are more complicated. The core idea of this framework is that the singular behaviour associated to the change in boundary conditions can be encoded in the form of operators placed on the boundary, which interpolate between regions of different BC \(\alpha\neq\beta\). Thus, to compute the Rényi entropy \(S_{n}\) in this setup, we will evaluate _three-point functions_ with one twist operator and two BCCO insertions. Such setups have already been studied for the Ising and XX chains in [58], at the level of the CFT on the replicated surface; those calculations rely on the knowledge of relatively simple closed-form expressions for the \(2n\)-point correlator of BCCOs on the unit disk.
However, such knowledge is the exception, rather than the norm, for generic BCFTs.

Figure 1: An interval of length \(\ell\) in a 1d critical chain with mixed BC (\(\alpha\beta\)) and length \(L\).

In this work, we present a general method to compute such twist correlation functions with mixed BCs. The most technically demanding part of this framework is finding Ordinary Differential Equations (ODEs) that the correlators satisfy. According to Cardy's doubling trick [53], in the half-plane geometry, the three-point functions of interest obey the same Ward identities as a four-point conformal block with the corresponding operators, where the bulk twist operator \(\sigma(z,\bar{z})\) is replaced by the insertion of \(\sigma(z)\sigma^{\dagger}(\bar{z})\). Thus, in an adaptation of the method of [42], we can derive a differential equation by combining knowledge of the null-vector conditions obeyed by the twisted and untwisted fields under the symmetry algebra of the cyclic orbifold [40] with the derivation of well-chosen Ward identities obtained from current insertions in the correlators of interest. The final ingredient is the determination of a subset of the (bulk and boundary) structure constants of the cyclic orbifold BCFT, which fix the specific linear combination of solutions of the differential equation that gives the sought correlator.

We have illustrated this approach with a variety of BCFT setups, which share a common assumption: in the mother CFT, the mixed boundary conditions (\(\beta\alpha\)) are implemented by a BCCO which is degenerate at level two under the Virasoro algebra. With this restriction, in the \(\mathbb{Z}_{2}\) orbifold of a generic BCFT, we have derived a second-order and a fourth-order ODE, respectively for the _bare_ and _composite_1 twist correlator. Under the same restrictions, in the \(\mathbb{Z}_{3}\) orbifold of a generic BCFT, we have determined a third-order ODE for the bare twist correlator. We have also worked out, for the case of the \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{3}\) cyclic orbifolds of the Ising BCFT, a variety of lower-order ODEs. The latter calculations were found compatible with the results of [57], and have been tested against numerical results for the critical Ising chain, for all possible combinations of BCs, to excellent agreement. Finally, we have also considered the \(\mathbb{Z}_{2}\) orbifold of the three-state Potts model, and compared it against lattice data for the critical three-state Potts model with states \(\{R,G,B\}\), with less accurate but consistent results. We quote here the leading behaviour of the second Rényi entropy of the critical three-state Potts chain of size \(L\) for mixed fixed \(R\) and _restricted_ \(GB\) boundary conditions:

\[S_{2}^{(R,GB)}(\ell)\sim\frac{c_{Potts}}{8}\log\frac{2L}{\pi a}\sin\left(\frac{\pi\ell}{L}\right)+\log g_{R}-\log\left[\eta^{-2h_{12}}{}_{2}\mathrm{F}_{1}\left(-8/5,-9/10;-9/5\mid 1-\eta\right)\right] \tag{1.5}\]

with \(\eta=e^{2\pi i\ell/L}\), \(c_{Potts}=4/5\) the central charge, \(h_{12}=2/5\) the scaling dimension of the BCCO, and \(g_{R}=[(5-\sqrt{5})/30]^{1/4}\) the ground state degeneracy associated to the fixed \(R\) BC [69].

Footnote 1: obtained by fusing the bare twist operator with an untwisted operator \(\phi\).
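For readers who wish to evaluate (1.5) numerically, a minimal sketch is given below. It assumes mpmath's built-in analytic continuation of \({}_{2}\mathrm{F}_{1}\) beyond its convergence disk (see the remark after (1.6) below); we restrict to \(\ell<L/2\), where the principal branch agrees with the continuation along the physical path, and we take the modulus of the hypergeometric factor to discard an overall phase coming from branch conventions.

```python
import mpmath as mp

# Parameters of the critical three-state Potts chain, as quoted in eq. (1.5).
c = mp.mpf(4) / 5                                  # central charge
h12 = mp.mpf(2) / 5                                # dimension of the BCCO
g_R = ((5 - mp.sqrt(5)) / 30) ** mp.mpf("0.25")    # boundary entropy of fixed-R BC

def S2_potts(ell, L, a=1):
    eta = mp.exp(2j * mp.pi * ell / L)
    chord = 2 * L / (mp.pi * a) * mp.sin(mp.pi * ell / L)
    # mpmath continues 2F1 analytically outside |1 - eta| < 1.
    block = eta ** (-2 * h12) * mp.hyp2f1(mp.mpf(-8) / 5, mp.mpf(-9) / 10,
                                          mp.mpf(-9) / 5, 1 - eta)
    # abs(): discard a constant phase due to branch conventions.
    return c / 8 * mp.log(chord) + mp.log(g_R) - mp.log(abs(block))

for ell in (4, 8, 12):        # keep ell < L/2; beyond that the principal
    print(ell, S2_potts(ell, 32))   # branch must be continued across its cut
```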
The expression (1.5) is, in fact, only a particular case of a more general result obtained in this paper, which applies to any critical system described by a BCFT based on a minimal model \(\mathcal{M}(p,p^{\prime})\) with mixed conformal BC \((\alpha,\beta)\) chosen such that the most relevant BCCO interpolating between them is \(\psi_{1,2}^{(\alpha\beta)}\) and there is no boundary operator \(\psi_{1,3}^{(\beta\beta)}\) allowed in the theory. Under these conditions, the second Rényi entropy of an interval \(A=[0,\ell]\), in a finite system of size \(L\) touching the boundary \(\beta\), is:

\[S_{2}^{(\alpha,\beta)}(\ell)\sim\frac{c}{8}\log\frac{2L}{\pi a}\sin\left(\frac{\pi\ell}{L}\right)+\log g_{\beta}-\log\left[\eta^{-2h_{12}}{}_{2}\mathrm{F}_{1}\left(2-3\frac{p}{p^{\prime}},\frac{3}{2}-2\frac{p}{p^{\prime}};3-4\frac{p}{p^{\prime}}\mid 1-\eta\right)\right] \tag{1.6}\]

where \(c\), \(g_{\beta}\) and \(h_{12}\) generalize the notation in (1.5). In both (1.5) and (1.6), one should keep in mind, especially for the purpose of numerical studies, that the hypergeometric function in the third term of these equations converges inside the unit circle centred at \(\eta=1\), which only overlaps with the subinterval \(\mathrm{Arg}(\eta)\in(0,\pi/3)\cup(5\pi/3,2\pi)\) of the parameter space of interest \(\mathrm{Arg}(\eta)\in[0,2\pi]\). Thus, to evaluate the expressions for \(\mathrm{Arg}(\eta)\in(\pi/3,5\pi/3)\), it is necessary to analytically continue the third term to this range.

We give here the outline of the article. In Section 2, we give a review of the cyclic orbifold construction, with a focus on its implementation on the upper half-plane. We discuss in this section the bulk and boundary operator algebra, and show how some orbifold bulk and boundary structure constants can be expressed in terms of mother BCFT quantities by unfolding and factorizing arguments. We dedicate Section 3 to the derivation of ODEs for the different setups described above. On top of the announced derivations involving orbifold Ward identities, we also use the results on the fusion rules of the \(\mathbb{Z}_{N}\) cyclic orbifold of [70], and some mathematical facts about the hypergeometric differential equation, to derive low-order differential equations for the Ising case. Section 4 contains a comparison of our analytical results with lattice data, for both the Ising and three-state Potts critical chains. Finally, we have relegated the more technical derivations to the Appendix, to avoid congesting the logical flow of the paper.

## 2 The cyclic orbifold

In this section, we will present the construction of the cyclic orbifold BCFT on the upper half-plane \(\mathbb{H}\). After reviewing a few essential features of the \(\mathbb{Z}_{N}\) orbifold on the Riemann sphere, we will discuss conformal boundary conditions, boundary operators, as well as bulk-boundary and boundary-boundary operator algebras.

### The cyclic orbifold on the Riemann sphere

To build a cyclic orbifold CFT, one starts from any mother CFT \(\mathcal{M}\) and constructs the tensor product theory \(\mathcal{M}^{\otimes N}\). Then one considers all the \(\mathbb{Z}_{N}\) equivalent ways of connecting the copies of the product theory, which creates \(N\) different sectors, each with its corresponding operator families and labelled by a \(\mathbb{Z}_{N}\) _twist charge_ \([k]\). The spectrum of the cyclic orbifold \(\mathcal{M}_{N}\) is then built as a union of the operator families from all the sectors \([k]\).
#### 2.1.1 Symmetry algebra and operator content

In \(\mathcal{M}_{N}\), each copy \(a\) of the mother CFT carries the components of the stress-energy tensor \(T_{a}(z),\bar{T}_{a}(\bar{z})\). We define the discrete Fourier modes of these currents as

\[T^{(r)}(z)=\sum_{a=0}^{N-1}\omega^{ar}\,T_{a}(z)\,,\qquad\bar{T}^{(r)}(\bar{z})=\sum_{a=0}^{N-1}\omega^{ar}\,\bar{T}_{a}(\bar{z})\,, \tag{2.1}\]

where \(r\) is considered modulo \(N\), and we have used the notation \(\omega=\exp(2i\pi/N)\). They satisfy the OPEs

\[\begin{split} T^{(r)}(z)T^{(s)}(w)&=\frac{\delta_{r+s,0}\,Nc/2}{(z-w)^{4}}+\frac{2T^{(r+s)}(w)}{(z-w)^{2}}+\frac{\partial T^{(r+s)}(w)}{z-w}+\operatorname{reg}_{z\to w}\,,\\ \bar{T}^{(r)}(\bar{z})\bar{T}^{(s)}(\bar{w})&=\frac{\delta_{r+s,0}\,Nc/2}{(\bar{z}-\bar{w})^{4}}+\frac{2\bar{T}^{(r+s)}(\bar{w})}{(\bar{z}-\bar{w})^{2}}+\frac{\partial\bar{T}^{(r+s)}(\bar{w})}{\bar{z}-\bar{w}}+\operatorname{reg}_{\bar{z}\to\bar{w}}\,,\end{split} \tag{2.2}\]

where the Kronecker symbols \(\delta_{r+s,0}\) are understood modulo \(N\). The symmetric modes \(T^{(0)}(z)\) and \(\bar{T}^{(0)}(\bar{z})\) are the components of the stress-energy tensor of \(\mathcal{M}_{N}\), with central charge \(Nc\), whereas the other Fourier modes \(T^{(r)}(z),\bar{T}^{(r)}(\bar{z})\) with \(r\neq 0\) should be regarded as additional conserved currents. Altogether, these Fourier modes encode an extended conformal symmetry. The modes associated to these currents are defined in the usual way through:

\[\begin{split} L^{(r)}_{m}&=\frac{1}{2i\pi}\oint dz\,z^{m+1}\,T^{(r)}(z)\,,\\ \bar{L}^{(r)}_{m}&=\frac{1}{2i\pi}\oint d\bar{z}\,\bar{z}^{m+1}\,\bar{T}^{(r)}(\bar{z})\,.\end{split} \tag{2.3}\]

In the sector of twist charge \([k]\), one has the following mode decompositions

\[\begin{split} T^{(r)}(z)&=\sum_{m\in-kr/N+\mathbb{Z}}z^{-m-2}\,L^{(r)}_{m}\\ \bar{T}^{(r)}(\bar{z})&=\sum_{m\in+kr/N+\mathbb{Z}}\bar{z}^{-m-2}\,\bar{L}^{(r)}_{m}\end{split} \tag{2.4}\]

and the commutation relations

\[\begin{split}\left[L^{(r)}_{m},L^{(s)}_{n}\right]&=(m-n)L^{(r+s)}_{m+n}+\frac{Nc}{12}m(m^{2}-1)\,\delta_{m+n,0}\,\delta_{r+s,0}\,,\\ \left[\bar{L}^{(r)}_{m},\bar{L}^{(s)}_{n}\right]&=(m-n)\bar{L}^{(r+s)}_{m+n}+\frac{Nc}{12}m(m^{2}-1)\,\delta_{m+n,0}\,\delta_{r+s,0}\,.\end{split} \tag{2.5}\]

Hermitian conjugation of the modes acts as:

\[\left(L^{(r)}_{n}\right)^{\dagger}=L^{(-r)}_{-n}\,,\qquad\left(\bar{L}^{(r)}_{n}\right)^{\dagger}=\bar{L}^{(-r)}_{-n}\,. \tag{2.6}\]

Orbifold _primary operators_ are, by definition, annihilated by the action of all the positive modes of \(\mathrm{OVir}\otimes\overline{\mathrm{OVir}}\), the orbifold Virasoro algebra generated by the modes (2.3). Descendant operators with respect to this algebra are constructed by the action of the negative modes \(m<0\). We establish the notation for descendants of a scaling (primary or not) operator \(\mathcal{O}\):

\[\begin{split}\left(L^{(r)}_{m}\cdot\mathcal{O}\right)(z,\bar{z})&:=\frac{1}{2i\pi}\oint_{\mathcal{C}_{z}}dw\,(w-z)^{m+1}\,T^{(r)}(w)\mathcal{O}(z,\bar{z})\,,\\ \left(\bar{L}^{(r)}_{m}\cdot\mathcal{O}\right)(z,\bar{z})&:=\frac{1}{2i\pi}\oint_{\mathcal{C}_{z}}d\bar{w}\,(\bar{w}-\bar{z})^{m+1}\,\bar{T}^{(r)}(\bar{w})\mathcal{O}(z,\bar{z})\,,\end{split} \tag{2.7}\]

where the contour \(\mathcal{C}_{z}\) encloses the point \(z\).
It will be useful to work with the primary operator spectrum with respect to the _neutral subalgebra_ \(A\otimes\bar{A}\) generated by the algebra elements

\[L^{(r_{1})}_{m_{1}}\dots L^{(r_{p})}_{m_{p}}\quad\text{and}\quad\bar{L}^{(r_{1})}_{m_{1}}\dots\bar{L}^{(r_{p})}_{m_{p}}\,,\qquad\text{with }r_{1}+\dots+r_{p}=0\mod N\,. \tag{2.8}\]

One can classify all \(\mathbb{Z}_{N}\)-symmetric operators of \(\mathcal{M}_{N}\) into representations of \(A\otimes\bar{A}\). This organization, described in detail by the authors of the present work in [70], distinguishes between three types of operators. First, we have identified the _untwisted non-diagonal operators_ \(\Phi_{[j_{1}\dots j_{N}]}\). These operators are built from \(\mathbb{Z}_{N}\)-symmetrized combinations of products of mother CFT primary operators \(\phi_{j}\) (with \(j=1\) referring to the identity operator \(\mathbf{1}\)):

\[\Phi_{[j_{1}\dots j_{N}]}:=\frac{1}{\sqrt{N}}\sum_{a=0}^{N-1}(\phi_{j_{1+a}}\otimes\dots\otimes\phi_{j_{N+a}})\,, \tag{2.9}\]

in which at least one pair satisfies \(j_{i}\neq j_{k}\). Its conformal dimension is given by \(h_{[j_{1}\dots j_{N}]}=\sum_{s}h_{j_{s}}\). The second type of primary operators under the neutral algebra are the _untwisted diagonal fields_ \(\Phi^{(r)}_{j}\), where the Fourier replica index \(r\) takes values in \(\mathbb{Z}_{N}\). The \(r=0\) diagonal fields are defined to be:

\[\Phi^{(0)}_{j}=\Phi_{j}:=\phi_{j}\otimes\cdots\otimes\phi_{j}\,, \tag{2.10}\]

while for \(r\neq 0\), they are constructed as:

\[\Phi^{(r)}_{j}:=\frac{1}{2Nh_{j}}L^{(r)}_{-1}\bar{L}^{(-r)}_{-1}\cdot\Phi_{j}\,,\qquad\mathbf{1}^{(r)}:=\frac{2}{Nc}L^{(r)}_{-2}\bar{L}^{(-r)}_{-2}\cdot\Phi_{\mathbf{1}}\,. \tag{2.11}\]

The conformal dimension of a diagonal operator \(\Phi^{(r)}_{j}\) is then generically given by

\[h^{(r)}_{j}=Nh_{j}+(1-\delta_{r,0})\left(1+\delta_{j,1}\right)\,. \tag{2.12}\]

We should note that the diagonal operators with \(r=0\) and the non-diagonal operators are also primary under \(\mathrm{OVir}\otimes\overline{\mathrm{OVir}}\).

Finally, we have to consider twist operators, which come in distinct flavours. For the purposes of this paper, we will mostly work with twist operators with Fourier replica index \(r=0\); thus, just as for the diagonal fields, we will drop this index when the context makes it clear, to decongest the notation. We first consider the ubiquitous bare twist operators [39, 2, 30, 71], which are denoted in our conventions \(\sigma^{[k]}=\sigma^{[k]}_{\mathbf{1}}\), or, in light notation, \(\sigma=\sigma^{[1]}\) and \(\sigma^{\dagger}=\sigma^{[-1]}\). We also have the composite twist fields \(\sigma^{[k]}_{j}\), which can be defined through point-splitting as in [49]:

\[\sigma^{[k]}_{j}(z,\bar{z}):=\mathcal{A}_{j}\,\lim_{\epsilon\to 0}\left[\epsilon^{2(1-N^{-1})h_{j}}\Phi_{[j,\mathbf{1},\dots,\mathbf{1}]}(z+\epsilon,\bar{z}+\bar{\epsilon})\cdot\sigma^{[k]}(z,\bar{z})\right]\,, \tag{2.13}\]

where the constant \(\mathcal{A}_{j}=N^{-2(1-N^{-1})h_{j}-1/2}\) ensures that non-vanishing two-point functions of twist operators are normalized to one. If \(N\) and \(k\) are coprime, the conformal dimension of the bare twist operator is

\[h_{\sigma}=\frac{c}{24}\left(N-\frac{1}{N}\right)\,, \tag{2.14}\]

while for composite twist operators one has:

\[h_{\sigma_{j}}=h_{\sigma}+\frac{h_{j}}{N}\,. \tag{2.15}\]
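As a quick sanity check of (2.14)-(2.15), the twist dimensions can be tabulated exactly in rational arithmetic; for instance, for the Ising mother CFT (\(c=1/2\), \(h_{\epsilon}=1/2\)):

```python
from fractions import Fraction as F

def h_sigma(c, N):
    # Bare twist dimension, eq. (2.14): h_sigma = (c/24)(N - 1/N).
    return c / 24 * (N - F(1, N))

def h_sigma_j(c, N, h_j):
    # Composite twist dimension, eq. (2.15): h_sigma + h_j / N.
    return h_sigma(c, N) + h_j / N

c_ising, h_eps = F(1, 2), F(1, 2)
for N in (2, 3):
    print(N, h_sigma(c_ising, N), h_sigma_j(c_ising, N, h_eps))
# N=2 gives h_sigma = 1/32 and h_{sigma_eps} = 9/32
```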
Having established the primary operator spectrum of the orbifold, we will now review how the null-vectors of the diagonal and twisted fields in \(\mathcal{M}_{N}\) are inferred from the ones of the mother theory \(\mathcal{M}\).

#### 2.1.2 Null vectors for untwisted operators

Let us consider a generic mother CFT \(\mathcal{M}\), with central charge

\[c=1-\frac{6(1-g)^{2}}{g}\,,\qquad 0<g\leq 1\,. \tag{2.16}\]

The conformal dimensions of degenerate primary operators are given by the Kac formula

\[h_{rs}=\frac{(r-sg)^{2}-(1-g)^{2}}{4g}\,, \tag{2.17}\]

where \(r,s\) are positive integers. The corresponding operator \(\phi_{rs}\) is degenerate at level \(rs\). If the parameter \(g\) is rational, i.e. \(g=p/p^{\prime}\) with coprime \(p\) and \(p^{\prime}\), then the set of operators \(\phi_{rs}\) with \(1\leq r\leq p-1\) and \(1\leq s\leq p^{\prime}-1\) generates a closed operator algebra, and the related CFT is the minimal model \({\cal M}_{p,p^{\prime}}\). While we do employ this parametrization extensively, in the present work we will consider a more generic mother CFT, and we _do not assume_ that it is a minimal model, unless explicitly indicated.

Consider the situation when the mother CFT includes the degenerate operator \(\phi_{12}\), with null-vector condition

\[\left(L_{-2}-\frac{1}{g}L_{-1}^{2}\right)\phi_{12}=0\,. \tag{2.18}\]

In the untwisted sector of the orbifold CFT, we have

\[L_{n}^{(r)}=\sum_{a=1}^{N}e^{2i\pi ra/N}\left(1\otimes\ldots 1\otimes\underset{(a-{\rm th})}{L_{n}}\otimes 1\otimes\ldots 1\right)\,,\qquad n\in\mathbb{Z}\,, \tag{2.19}\]

and the diagonal untwisted operator associated to \(\phi_{12}\) is

\[\Phi_{12}=\phi_{12}\otimes\cdots\otimes\phi_{12}\,. \tag{2.20}\]

Using an inverse discrete Fourier transform, one easily finds, for any \(r\in\mathbb{Z}_{N}\),

\[\left[L_{-2}^{(r)}-\frac{1}{Ng}\sum_{s=0}^{N-1}L_{-1}^{(s)}L_{-1}^{(r-s)}\right]\cdot\Phi_{12}=0\,. \tag{2.21}\]

When inserted into a correlation function, the modes \(L_{m}^{(0)}\) act as linear differential operators. The treatment of the modes \(L_{m}^{(r)}\) with \(r\neq 0\) introduces an additional difficulty, which we will address case by case, with the help of orbifold Ward identities.

#### 2.1.3 The induction procedure

The null-vectors of the mother CFT also determine the null-vector conditions on twist operators in \({\cal M}_{N}\), through the _induction procedure_ [40]. In the present work, we shall only be concerned with the twist sectors with charges \([\pm 1]\). In the notations of [70], induction can be expressed in terms of a norm-preserving, invertible linear map \(\Theta\) from the Hilbert space of the mother CFT to that of the twist sector \([1]\), defined by

\[\Theta|\phi\rangle=|\sigma_{\phi}\rangle\,,\qquad\Theta L_{m}\Theta^{-1}=N\left(L_{m/N}^{(-m)}-h_{\sigma}\,\delta_{m0}\right)\,, \tag{2.22}\]

where \(\phi\) is any primary operator in the mother CFT, and \(\sigma_{\phi}\) is the associated composite twist operator in the orbifold CFT. The simplest application to null-vectors is the case of the identity:

\[L_{-1}\cdot{\bf 1}=0\qquad\Rightarrow\qquad L_{-1/N}^{(1)}\cdot\sigma=0\,. \tag{2.23}\]

For a degenerate operator at level two, applying the induction map on (2.18) yields

\[\left[L_{-2/N}^{(2)}-\frac{N}{g}(L_{-1/N}^{(1)})^{2}\right]\cdot\sigma_{12}=0\,. \tag{2.24}\]

The corresponding null-vector conditions for the operators \(\sigma^{\dagger}\) and \(\sigma_{12}^{\dagger}\) are easily obtained by conjugation.
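For concreteness, the parametrization (2.16)-(2.17) introduced above can be evaluated exactly for the two minimal models compared against lattice data in Section 4; a minimal sketch:

```python
from fractions import Fraction as F

def central_charge(g):
    # Eq. (2.16): c = 1 - 6 (1 - g)^2 / g.
    return 1 - 6 * (1 - g) ** 2 / g

def h_kac(r, s, g):
    # Kac formula (2.17): h_rs = ((r - s g)^2 - (1 - g)^2) / (4 g).
    return ((r - s * g) ** 2 - (1 - g) ** 2) / (4 * g)

for name, g in (("Ising M(3,4)", F(3, 4)), ("Potts M(5,6)", F(5, 6))):
    print(name, central_charge(g), h_kac(1, 2, g), h_kac(1, 3, g))
# Ising: c = 1/2, h_12 = 1/16, h_13 = 1/2
# Potts: c = 4/5, h_12 = 1/8,  h_13 = 2/3
```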
### The cyclic orbifold on the upper half plane

To construct the cyclic orbifold BCFT, we will work on the upper half-plane \(\mathbb{H}\), with the boundary along the real axis. We parametrize \(\mathbb{H}\) by \(z=x+iy\) with \(x\in\mathbb{R}\) and \(y>0\), and we impose the gluing condition on the boundary for the stress-energy tensor components:

\[T^{(0)}(x)=\bar{T}^{(0)}(x)\quad\text{for}\quad x\in\mathbb{R}\,, \tag{2.25}\]

which ensures that the boundary is conformal, i.e., preserves a copy of the Virasoro algebra [72]. The \(\mathbb{Z}_{N}\) orbifold, however, has an extended symmetry, and we must choose if and how the components of the additional currents \(T^{(r\neq 0)}\) are glued at the boundary. Our usage of the replica trick provides a clear indication for these choices: since we are considering \(N\) copies of the _same_ mother BCFT, we must impose the gluing condition \(T_{a}(x)=\bar{T}_{a}(x)\) on each of them. By taking the Fourier transform of this relation, we find that in the orbifold CFT we are effectively imposing:

\[T^{(r)}(x)=\bar{T}^{(r)}(x)\quad\text{for}\quad x\in\mathbb{R}\,, \tag{2.26}\]

for all the discrete Fourier modes of the stress-energy tensor components defined in (2.1). This implies that the boundary preserves a full copy of the OVir algebra. By the same reasoning on CFT replicas, the orbifold boundary states we are interested in correspond to having the same conformal BC on the \(N\) copies of the mother CFT. They are simply given by \(|\alpha\rangle^{\otimes N}\) and \(|\beta\rangle^{\otimes N}\). On the upper half-plane, we will set the conformal BC \(\alpha\) on the positive real axis \(x>0\) and the conformal BC \(\beta\) on \(x<0\).

To implement such mixed conformal BC in a BCFT, we will have to work with the formalism of _boundary condition changing operators_ [53]. These operators, restricted to live on the boundary, are placed at the points of suture of regions of different BC. The full operator algebra of a BCFT is then formed by considering the OPEs between both BCCOs and bulk operators, as detailed in Appendix A. For a given pair of conformal BCs \((\alpha,\beta)\), there can be several primary BCCOs implementing the change \(\alpha\to\beta\): we denote such an operator \(\psi_{h}^{(\alpha\beta)}\), where \(h\) specifies its conformal dimension. The most relevant BCCO implementing \(\alpha\to\beta\) is simply referred to as \(\psi^{(\alpha\beta)}\).

In the \(\mathbb{Z}_{N}\) orbifold CFT, we will be concerned with the calculation of correlators with insertions of _diagonal BCCOs_, namely:

\[\Psi_{h}^{(\alpha\beta)}=\underbrace{\psi_{h}^{(\alpha\beta)}\otimes\dots\otimes\psi_{h}^{(\alpha\beta)}}_{\text{N times}}\,. \tag{2.27}\]

Then, orbifold correlators with mixed BC are obtained by inserting the most relevant diagonal BCCO:

\[\langle\mathcal{O}_{1}(z_{1},\bar{z}_{1})\dots\mathcal{O}_{n}(z_{n},\bar{z}_{n})\rangle_{\mathbb{H}}^{\alpha\beta}=\langle\Psi^{(\alpha\beta)}(\infty)\,\mathcal{O}_{1}(z_{1},\bar{z}_{1})\dots\mathcal{O}_{n}(z_{n},\bar{z}_{n})\Psi^{(\beta\alpha)}(0)\rangle_{\mathbb{H}}\,.
\tag{2.28}\]

By Cardy's doubling trick [72, 54], such \((n+2)\)-point correlators satisfy the same Ward identities as any of the \((2n+2)\)-point conformal blocks on the Riemann sphere \(\mathbb{C}\) with external operators

\[\Phi(\infty),\mathcal{O}_{1}(z_{1}),\overline{\mathcal{O}}_{1}(\bar{z}_{1}),\dots,\mathcal{O}_{n}(z_{n}),\overline{\mathcal{O}}_{n}(\bar{z}_{n}),\Phi(0)\,, \tag{2.29}\]

where \(\overline{\mathcal{O}_{i}}(\bar{z})\) is the antiholomorphic counterpart of \(\mathcal{O}_{i}(z)\), and \(\Phi(z)\) is the holomorphic part of the diagonal primary operator defined in (2.10), with the conformal dimension of \(\Psi^{(\alpha\beta)}\). In more precise terms, \(\overline{\mathcal{O}_{i}}\) is the operator conjugate to \(\mathcal{O}_{i}\) with respect to the symmetry algebra preserved by the boundary [73]. For \(\mathbb{Z}_{N}\) twist operators, conjugation acts as \(\overline{\sigma}_{i}=\sigma_{i}^{\dagger}\) [39], so that the one-twist function

\[\langle\sigma_{i}(z,\bar{z})\rangle_{\mathbb{H}}^{(\alpha\beta)}=\langle\Psi^{(\alpha\beta)}(\infty)\sigma_{i}(z,\bar{z})\Psi^{(\beta\alpha)}(0)\rangle_{\mathbb{H}} \tag{2.30}\]

satisfies the same Ward identities as the functions \(\bar{z}^{-2h_{\sigma_{i}}}\times\mathcal{F}_{k}(z/\bar{z})\), where \(\mathcal{F}_{k}\) is the rescaled conformal block \(\mathcal{F}_{k}(\eta)\) of equation (2.31).

[The diagrammatic definition (2.31) of \(\mathcal{F}_{k}(\eta)\), together with the remainder of this subsection, was rendered as an image in the source and could not be recovered.]

### Operator algebra of the cyclic orbifold BCFT

[The opening of this subsection, including the bulk-boundary OPE relations (2.32)-(2.34), could not be recovered from the source.] We next
consider the OPEs of orbifold boundary operators. For generic diagonal BCCOs, this takes the form

\[\Psi^{(\alpha\beta)}_{i_{1}}(x_{1})\Psi^{(\beta\gamma)}_{i_{2}}(x_{2})\underset{x_{1}\to x_{2}}{\sim}\sum_{j}\mathcal{B}^{(\alpha\beta\gamma)\Psi_{j}}_{\Psi_{i_{1}},\Psi_{i_{2}}}(x_{1}-x_{2})^{-h_{i_{1}}-h_{i_{2}}+h_{j}}\Psi^{(\alpha\gamma)}_{j}(x_{2}) \tag{2.35}\]

with the index \(j\) running over all the orbifold BCCOs interpolating between the conformal boundary conditions \(\alpha\) and \(\gamma\). We have denoted the _boundary-boundary structure constants_ by \(\mathcal{B}^{(\alpha\beta\gamma)\Psi_{j}}_{\Psi_{i_{1}},\Psi_{i_{2}}}\). To calculate the structure constants of the OPEs that are relevant for the present work, we will need to use factorization and unfolding arguments for the correlators that determine them, along the lines of [70], [75] and [71].

#### 2.3.1 Calculation of boundary-boundary structure constants

Let us consider the calculation of boundary-boundary structure constants of the type \(\mathcal{B}^{(\beta\beta\alpha)\Psi_{k}}_{\Psi_{*},\Psi_{j}}\), where \(\Psi_{*}\) denotes a generic untwisted orbifold primary BCCO. We can express this as a three-point function on the upper half-plane \(\mathbb{H}\):

\[\mathcal{B}^{(\beta\beta\alpha)\Psi_{k}}_{\Psi_{*},\Psi_{j}}=\langle\Psi^{(\alpha\beta)}_{k}(\infty)\Psi^{(\beta\beta)}_{j}(1)\Psi^{(\beta\alpha)}_{*}(0)\rangle_{\mathbb{H}}\,. \tag{2.36}\]

Since there are no twist insertions in the above correlator, it just factorizes into a linear combination of products of mother BCFT three-point functions. Let us first consider the case of a diagonal BCCO, with \(\Psi^{(\beta\alpha)}_{*}=\Psi^{(\beta\alpha)}_{i}\). Then, the orbifold correlator factorizes into mother CFT three-point functions as:

\[\mathcal{B}^{(\beta\beta\alpha)\Psi_{k}}_{\Psi_{i},\Psi_{j}}=\left(\langle\psi^{(\alpha\beta)}_{k}(\infty)\psi^{(\beta\beta)}_{j}(1)\psi^{(\beta\alpha)}_{i}(0)\rangle_{\mathbb{H}}\right)^{N}\,, \tag{2.37}\]

so we find a simple expression for these coefficients, in terms of mother BCFT boundary-boundary structure constants:

\[\boxed{\mathcal{B}^{(\beta\beta\alpha)\Psi_{k}}_{\Psi_{i},\Psi_{j}}=\left(B^{(\beta\beta\alpha)\,k}_{ij}\right)^{N}\,.} \tag{2.38}\]

By similar considerations, the structure constants involving a non-diagonal BCCO \(\Psi^{(\beta\alpha)}_{[i_{1}\dots i_{N}]}\) can be expressed as:

\[\boxed{\mathcal{B}^{(\beta\beta\alpha)\Psi_{k}}_{\Psi_{[i_{1}\dots i_{N}]},\Psi_{j}}=\sqrt{N}\prod_{a=1}^{N}B^{(\beta\beta\alpha)\,k}_{i_{a}j}} \tag{2.39}\]

The rest of the boundary-boundary structure constants of untwisted BCCOs can similarly be expressed in terms of mother BCFT quantities, but we will not need them in this work.

#### 2.3.2 Orbifold bulk-boundary structure constants

The first bulk-boundary structure constant we need to calculate is \(\mathcal{A}^{(\alpha)}_{\sigma,\Psi_{1}}\), where \(\Psi^{(\alpha\alpha)}_{\mathbf{1}}\) is just the identity boundary field. This can be expressed as the one-point function on the unit disk \(\mathbb{D}\):

\[\mathcal{A}^{(\alpha)}_{\sigma,\Psi_{1}}=\langle\sigma(0,0)\rangle^{\alpha}_{\mathbb{D}}\,, \tag{2.40}\]

which is just the ratio of mother CFT partition functions:

\[\langle\sigma(0,0)\rangle^{\alpha}_{\mathbb{D}}=\frac{\mathcal{Z}^{(\alpha)}_{\mathbb{D}_{N}}}{\left[\mathcal{Z}^{(\alpha)}_{\mathbb{D}}\right]^{N}}\,, \tag{2.41}\]

where \(\mathbb{D}_{N}\) denotes the \(N\)-th covering of the unit disk with branch points at \(0\) and \(1\).
As shown in [76], we can express (2.40) in terms of the _ground state degeneracy_ \(g_{\alpha}=\langle 0|\alpha\rangle\) [69] (which is defined as the overlap between the vacuum state \(|0\rangle\) and the boundary state \(|\alpha\rangle\) in the mother BCFT):

\[\mathcal{A}^{(\alpha)}_{\sigma,\Psi_{1}}=g_{\alpha}^{1-N}\,. \tag{2.42}\]

Using this result, we can calculate the one-point structure constants of composite twist operators \(\sigma_{i}\), by using the definition (2.13) and the relation between twist correlators on the disk \(\mathbb{D}\) and the mother CFT partition function on \(\mathbb{D}_{N}\), which simply gives:

\[\mathcal{A}^{(\alpha)}_{\sigma_{i},\Psi_{1}}=\mathcal{A}^{(\alpha)}_{\sigma,\Psi_{1}}\,A^{\alpha}_{\phi_{i}}\,, \tag{2.43}\]

where \(A^{\alpha}_{\phi_{i}}\) is the mother CFT one-point structure constant of \(\phi_{i}\) with conformal boundary condition \(\alpha\). The proof is relegated to Appendix B.1.

Extending these results to more complicated bulk-boundary structure constants \(\mathcal{A}^{(\alpha)}_{\sigma_{i}^{[k]},\Psi_{j}}\) for generic choices of mother CFT and cyclic group \(\mathbb{Z}_{N}\) is not usually straightforward and depends on our knowledge of correlation functions in the mother CFT. For example, in Appendix B.2 we calculate the structure constant \(\mathcal{A}^{(\alpha)}_{\sigma,\Psi_{13}}\) in the \(\mathbb{Z}_{2}\) orbifold BCFT, since it can be expressed in terms of a two-point function of boundary operators in the mother CFT. For generic \(N\) and composite twist operator \(\sigma_{i}^{[k]}\), knowledge of higher-point correlators in the mother CFT is required to compute such structure constants through the same unfolding methods.

## 3 Differential equations in the \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{3}\) orbifold BCFT

### Setup for the calculations

We consider the case of a generic BCFT, with central charge \(c\). The model is defined on the upper half plane, with conformal boundary conditions \(\alpha\) and \(\beta\) set on the negative \(\Re(z)<0\) and positive \(\Re(z)>0\) parts of the real axis, respectively. We will work, for the entirety of this section, under the assumption that the most relevant BCCO interpolating between these boundary conditions is \(\psi_{12}^{(\alpha\beta)}\), with conformal dimension \(h_{12}\). This implies that the BCCO has a null-vector at level \(2\). Of course, our results also apply to the case where the BCCO is \(\psi_{21}^{(\alpha\beta)}\), up to changing \(g\to 1/g\).

In the \(\mathbb{Z}_{N}\) orbifold of this theory, we will consider one-point correlators of generic _composite twist operators_ \(\sigma_{i}\) of twist charge \([k=1]\), in a background with mixed BC \(\alpha\) and \(\beta\), corresponding to the replicated boundary conditions of the mother BCFT. The change in boundary conditions in the orbifold theory will be implemented by the diagonal BCCO \(\Psi_{12}^{(\alpha\beta)}\) defined in (2.27), with conformal dimension \(h_{\Psi_{12}}=Nh_{12}\). Since we will aim to compare our CFT results with lattice data in Section 4, we will define our twist correlator on an infinite strip \(\mathbb{S}\) of width \(L\), parametrized by the complex coordinate \(w=u+iv\), with \(u\in[0,L]\) and \(v\in\mathbb{R}\). The conformal boundary conditions on the \(u=0\) and \(u=L\) sides of the strip are set to be \(\alpha\) and \(\beta\), respectively.
## 3 Differential equations in the \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{3}\) orbifold BCFT

### Setup for the calculations

We consider the case of a generic BCFT, with central charge \(c\). The model is defined on the upper half plane, with conformal boundary conditions \(\alpha\) and \(\beta\) set on the negative \(\Re(z)<0\) and positive \(\Re(z)>0\) parts of the real axis, respectively. We will work, for the entirety of this section, under the assumption that the most relevant BCCO interpolating between these boundary conditions is \(\psi_{12}^{(\alpha\beta)}\), with conformal dimension \(h_{12}\). This implies that the BCCO has a null-vector at level \(2\). Of course, our results also apply to the case where the BCCO is \(\psi_{21}^{(\alpha\beta)}\), up to changing \(g\to 1/g\). In the \(\mathbb{Z}_{N}\) orbifold of this theory, we will consider one-point correlators of generic _composite twist operators_ \(\sigma_{i}\) of twist charge \([k=1]\), in a background with mixed BC \(\alpha\) and \(\beta\), corresponding to the replicated boundary conditions of the mother BCFT. The change in boundary conditions in the orbifold theory will be implemented by the diagonal BCCO \(\Psi_{12}^{(\alpha\beta)}\) defined in (2.27), with conformal dimension \(h_{\Psi_{12}}=Nh_{12}\). Since we will aim to compare our CFT results with lattice data in Section 4, we will define our twist correlator on an infinite strip \(\mathbb{S}\) of width \(L\), parametrized by the complex coordinate \(w=u+iv\), with \(u\in[0,L]\) and \(v\in\mathbb{R}\). The conformal boundary conditions on the \(u=0\) and \(u=L\) sides of the strip are set to be \(\alpha\) and \(\beta\), respectively.

We will consider correlators with a twist \(\sigma_{i}\) inserted at \(w=\ell\): \[\langle\sigma_{i}(\ell,\ell)\rangle_{\mathbb{S}}^{\alpha\beta}\,, \tag{3.1}\] where \(\ell\) is measured from the boundary \(\beta\), in accordance with Figure 1. This correlator is now mapped to the upper half plane, through: \[w=\frac{-iL}{\pi}\ln z\,, \tag{3.2}\] and expressed, using (2.28), as: \[\langle\sigma_{i}(z,\bar{z})\rangle_{\mathbb{H}}^{\alpha\beta}=\langle\Psi_{12}^{(\alpha\beta)}(\infty)\,\sigma_{i}(z,\bar{z})\Psi_{12}^{(\beta\alpha)}(0)\rangle_{\mathbb{H}}\,, \tag{3.3}\] with \(z=\exp i\pi\ell/L\) in terms of strip coordinates. Using the information about the operator algebra of the orbifold BCFT we have presented in Section 2.3, we can write the following block expansion for (3.1) \[\langle\sigma_{i}(\ell,\ell)\rangle_{\mathbb{S}}^{\alpha\beta}=\mathcal{J}\sum_{\ell}\mathcal{A}_{\sigma_{i},\Psi_{\ell}}^{(\beta)}\mathcal{B}_{\Psi_{\ell},\Psi_{12}}^{(\beta\beta\alpha)\Psi_{12}}\widetilde{\mathcal{F}}_{\ell}(\eta)\,, \tag{3.4}\] where \(\eta=z/\bar{z}=\exp\left(2\pi i\ell/L\right)\), and \(\mathcal{J}=(L\bar{z}/\pi)^{-2h_{\sigma_{i}}}\) is the combined Jacobian associated to the Mobius map \(\zeta\mapsto\zeta/\bar{z}\) that takes \((0,z,\bar{z},\infty)\mapsto(0,\eta,1,\infty)\) and the map \(w\mapsto z\) from the strip to the upper half plane. We recall that the \(\widetilde{\mathcal{F}}_{\ell}\)'s are the conformal blocks in the channel \(\eta\to 1\). As per Cardy's doubling argument [72], the functions \(\widetilde{\mathcal{F}}_{\ell}(\eta)\) are four-point conformal blocks (2.31) with \(\Phi=\Phi_{12}\). To proceed, we need to determine the differential equation satisfied by these functions. To this end, we will use a combination of the null-vector conditions and the orbifold Ward identities derived in the Appendix.

### The function \(\langle\Psi_{12}\cdot\sigma\cdot\Psi_{12}\rangle\) in a generic \(\mathbb{Z}_{2}\) orbifold

Following the general approach described above, the function \(\langle\Psi_{12}(\infty)\sigma(z,\bar{z})\Psi_{12}(0)\rangle_{\mathbb{H}}\) is given, up to the overall factor \(\bar{z}^{-2h_{\sigma}}\), by a linear combination of the conformal blocks \[\mathcal{F}_{k}(\eta)=\langle\Phi_{12}|\sigma(1)\mathcal{P}_{k}\sigma(\eta)|\Phi_{12}\rangle\,. \tag{3.5}\] It turns out that this family of conformal blocks was already studied in [42], for the calculation of the single-interval Renyi entropy in the excited state \(|\Phi_{12}\rangle\) with periodic BC. Let us recall how the derivation of the corresponding ODE goes. We use the null-vectors at level two of the untwisted chiral state \(|\Phi_{12}\rangle\), given by: \[\begin{split}&\left[L_{-2}^{(0)}-\frac{1}{2g}\left(L_{-1}^{(0)}\right)^{2}-\frac{1}{2g}\left(L_{-1}^{(1)}\right)^{2}\right]\cdot|\Phi_{12}\rangle\equiv 0,\\ &\left[L_{-2}^{(1)}-\frac{1}{g}L_{-1}^{(0)}L_{-1}^{(1)}\right]\cdot|\Phi_{12}\rangle\equiv 0\,,\end{split} \tag{3.6}\] and the null vector at level \(1/N\) of the bare twist operator \(\sigma\): \[L_{-1/2}^{(1)}\cdot\sigma\equiv 0\,. \tag{3.7}\] We combine these with the orbifold Ward identity for the _chiral_ correlator: \[\mathcal{G}^{(1)}(w,\eta)=\langle\Phi_{12}|\,\sigma(1)\mathcal{P}_{k}\sigma(\eta)T^{(1)}(w)L_{-1}^{(1)}\,|\Phi_{12}\rangle\, \tag{3.8}\] with \((m_{1},m_{2},m_{3},m_{4})=(0,-1/2,-1/2,-1)\) in the notation of (C.1).
This gives, after taking into account (3.7): \[\sum_{p=0,1,2}d_{p}\,\langle\Phi_{12}|\,\sigma(1){\cal P}_{k}\sigma(\eta)L^{(1)}_{-p+2}L^{(1)}_{-1}\,|\Phi_{12}\rangle=0\,, \tag{3.9}\] with the \(d_{p}\) calculated from the series (C.6). By substituting the null vectors (2.18)–(3.7) and employing the identity (C.7), one obtains the differential equation: \[\begin{array}{l}64g^{2}\eta^{2}(\eta-1)^{2}\,\partial_{\eta}^{2}{\cal F}+16g\eta(\eta-1)\,\left[(-14g^{2}+23g-6)\eta+2g(1-4g)\right]\,\partial_{\eta}{\cal F}\\ +\,(3g-2)\,\left[3(5g-6)(1-2g)^{2}\eta^{2}+12g(1-2g)\eta+16g^{2}(g-1)\right]\,{\cal F}=0\,,\end{array} \tag{3.10}\] whose Riemann scheme is given by: \[\begin{array}{c c c}0&1&\infty\\ \hline-2h_{12}&-2h_{\sigma}&2h_{\sigma}-2h_{12}\\ -2h_{12}+h_{13}/2&-2h_{\sigma}+2h_{13}&2h_{\sigma}-2h_{12}+h_{13}/2\end{array}\] This corresponds to the intermediary states \(\{\sigma,\sigma_{13}\}\) in the channels \(\eta\to 0\) and \(\eta\to\infty\), and \(\{{\bf 1},\Phi_{13}\}\) in the channel \(\eta\to 1\). Note that, when the mother CFT is a minimal model \({\cal M}_{p,p^{\prime}}\), one can check for various values of \((p,p^{\prime})\) that these are exactly the intermediary states allowed by the orbifold fusion rules given in Appendix F, and that they all have multiplicity one. To proceed, one can define the shifted function \(f(\eta)\): \[f(\eta)=(1-\eta)^{2h_{\sigma}}\eta^{2h_{12}}{\cal F}(\eta)\,, \tag{3.11}\] and substitute in (3.10) to find that \(f(\eta)\) satisfies a second order hypergeometric equation (E.1) with parameters: \[a=2-3g\,,\qquad b=\frac{3}{2}-2g\,,\qquad c=\frac{3}{2}-g\,. \tag{3.12}\] We can work with the basis (E.3) of solutions around \(\eta=1\) for this hypergeometric equation; in the block expansion (3.4), this corresponds to bringing the twist operator close to the boundary \(\beta\). Thus, the conformal blocks \(\widetilde{\cal F}_{\ell}(\eta)\) we seek are: \[\begin{array}{l}\boxed{\widetilde{\cal F}_{{\bf 1}}(\eta)=(1-\eta)^{-2h_{\sigma}}\eta^{-2h_{12}}{}_{2}{\rm F}_{1}(a,b;a+b-c+1\mid 1-\eta)\,,}\\ \widetilde{\cal F}_{13}(\eta)=(1-\eta)^{-2h_{\sigma}+2h_{13}}\eta^{-2h_{12}}{}_{2}{\rm F}_{1}(c-b,c-a;c-a-b+1\mid 1-\eta)\,,\end{array} \tag{3.13}\] and they are normalized to be of the form \(\widetilde{\cal F}_{\ell}(\eta)\sim(1-\eta)^{-2h_{\sigma}+h}(1+\dots)\) as \(\eta\to 1\), where \(h\) is the conformal dimension of the internal operator of the conformal block in this channel. The bulk-boundary fusion rules corresponding to the exponents are \[\sigma\Big{|}_{\beta}\to\Psi_{\bf 1}+\Psi_{13}\,, \tag{3.14}\] as \(\sigma\) is brought close to the conformal boundary \(\beta\).
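Since the blocks (3.13) are ordinary hypergeometric functions, they are straightforward to evaluate numerically. Below is a small sketch using `mpmath`; the closed-form Kac dimensions \(h_{12}=(3g-2)/4\) and \(h_{13}=2g-1\) used here are our own shorthand, consistent with the parametrization (3.12) (e.g. \(g=6/5\) gives \(h_{12}=2/5\) and \(h_{13}=7/5\) for \(\mathcal{M}(6,5)\)):

```python
import mpmath as mp

def blocks_Z2(eta, g):
    """Conformal blocks (3.13) for the bare twist in the Z_2 orbifold."""
    g = mp.mpf(g)
    b = mp.sqrt(g)
    c_central = 1 - 6 * (1/b - b) ** 2               # mother central charge, cf. (3.31)
    h_sigma = (c_central / 24) * (2 - mp.mpf(1)/2)   # bare twist dimension for N = 2
    h12, h13 = (3*g - 2) / 4, 2*g - 1                # Kac dimensions (our shorthand)
    a, bb, cc = 2 - 3*g, mp.mpf(3)/2 - 2*g, mp.mpf(3)/2 - g   # parameters (3.12)
    F_id = (1 - eta) ** (-2*h_sigma) * eta ** (-2*h12) \
        * mp.hyp2f1(a, bb, a + bb - cc + 1, 1 - eta)
    F_13 = (1 - eta) ** (-2*h_sigma + 2*h13) * eta ** (-2*h12) \
        * mp.hyp2f1(cc - bb, cc - a, cc - a - bb + 1, 1 - eta)
    return F_id, F_13

print(blocks_Z2(mp.mpf('0.3'), g=mp.mpf(6)/5))   # three-state Potts value g = 6/5
```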
Substituting the blocks and the expressions for the orbifold BCFT structure constants in (3.4), one finds the expression of the one-point twist correlator \[\langle\sigma(z,\bar{z})\rangle^{\alpha\beta}_{\mathbb{H}}=\bar{z}^{-2h_{\sigma}}\left[{\cal A}^{(\beta)}_{\sigma,\Psi_{\bf 1}}{\cal B}^{(\beta\beta\alpha)\Psi_{12}}_{\Psi_{\bf 1},\Psi_{12}}\tilde{\cal F}_{{\bf 1}}(\eta)+{\cal A}^{(\beta)}_{\sigma,\Psi_{13}}{\cal B}^{(\beta\beta\alpha)\Psi_{12}}_{\Psi_{13},\Psi_{12}}\tilde{\cal F}_{13}(\eta)\right]\,, \tag{3.15}\] where the various structure constants are expressed in terms of the mother BCFT data as \[\mathcal{A}^{(\beta)}_{\sigma,\Psi_{\bf 1}}=\mathcal{A}^{(\beta)}_{\sigma,\Psi_{13}}=g_{\beta}^{-1}\,,\qquad\mathcal{B}^{(\beta\beta\alpha)\Psi_{12}}_{\Psi_{\bf 1},\Psi_{12}}=1\,,\qquad\mathcal{B}^{(\beta\beta\alpha)\Psi_{12}}_{\Psi_{13},\Psi_{12}}=\left(B^{(\beta\beta\alpha)\,\psi_{12}}_{\psi_{13}\psi_{12}}\right)^{2}\,. \tag{3.16}\] It is interesting now to observe that, for some pairs of conformal BCs \((\alpha,\beta)\), the block expansion (3.15) greatly simplifies because the boundary-boundary structure constant \(\mathcal{B}^{(\beta\beta\alpha)\Psi_{12}}_{\Psi_{13},\Psi_{12}}\) vanishes. At the level of the mother BCFT, this is equivalent to demanding that the operator \(\psi_{13}^{(\beta\beta)}\) is not allowed in the theory. For BCFTs based on \(A\)-series minimal models \(\mathcal{M}(p,p^{\prime})\), this holds for any pair of mixed conformal BCs \((\alpha,\beta)\equiv(\phi_{r2},\phi_{r1})\), labelled by bulk primary fields with \(1\leq r<p\). One can use well-established results about fusion rules in such models [77] to check that: \[\phi_{12}\in\phi_{r1}\times\phi_{r2} \tag{3.17}\] so that these BCs are interpolated by \(\psi_{12}^{(\alpha\beta)}\), and \[\phi_{13}\not\in\phi_{r1}\times\phi_{r1} \tag{3.18}\] which implies that \(\psi_{13}^{(\beta\beta)}\) is not in the boundary operator spectrum of the BCFT. Under these conditions, the correlator in (3.15) simplifies to: \[\langle\sigma(z,\bar{z})\rangle_{\mathbb{H}}^{\alpha\beta}=\eta^{h_{\sigma}}\left[g_{\phi_{r1}}^{-1}\tilde{\mathcal{F}}_{\mathbf{1}}(\eta)\right]\,, \tag{3.19}\] so that one finds the second Renyi entropy in the strip setup to be: \[S_{2}^{(\alpha,\beta)}(\ell)\sim\frac{c}{8}\log\left[\frac{2L}{\pi a}\sin\left(\frac{\pi\ell}{L}\right)\right]+\log g_{\phi_{r1}}-\log\left[\eta^{-2h_{12}}{}_{2}\mathrm{F}_{1}\left(2-3g,3/2-2g;3-4g\mid 1-\eta\right)\right] \tag{3.20}\] It is now interesting to check that the theoretical prediction for this kind of mixed BC has the expected behaviour near the \(\phi_{r1}\) and \(\phi_{r2}\) boundaries. It is known [2] that the second Renyi entropy of an interval \(\ell\) touching one of the _identical_ boundaries of a finite system of size \(L\) is given by: \[S_{2}^{(\alpha,\alpha)}([0,\ell])=\frac{c}{8}\log\left[\frac{2L}{\pi}\sin\left(\frac{\pi\ell}{L}\right)\right]+\log g_{\alpha} \tag{3.21}\] We then anticipate that the mixed BC result (3.20) will interpolate between \(S_{2}^{(\phi_{r1},\phi_{r1})}\) and \(S_{2}^{(\phi_{r2},\phi_{r2})}\) as \(\ell\to 0\) and \(\ell\to L\). Furthermore, suppose we consider the _difference_ in second Renyi entropies between two mixed BC setups \((\phi_{r2},\phi_{r1})\) and \((\phi_{r^{\prime}2},\phi_{r^{\prime}1})\) for the same bulk CFT.
We then find the following universal result: \[\boxed{\Delta S_{2}=S_{2}^{(\phi_{r^{\prime}2},\phi_{r^{\prime}1})}-S_{2}^{(\phi_{r2},\phi_{r1})}=\log\frac{g_{\phi_{r1}}}{g_{\phi_{r^{\prime}1}}}=\log\frac{g_{\phi_{r2}}}{g_{\phi_{r^{\prime}2}}}} \tag{3.22}\] where the latter equation follows from the expression of \(g_{\phi_{r,s}}=S_{\phi_{r,s},\phi_{0}}/\sqrt{S_{\mathbf{1},\phi_{0}}}\) [69] in terms of S-matrix elements of minimal models [77]; here \(\phi_{0}\) denotes the field with the lowest conformal dimension of the _diagonal_ bulk CFT. This entropy difference is thus determined, in these cases, by the identical BC expressions (3.21) that describe the asymptotic behaviour near the boundaries of the mixed BC setup. We illustrate these ideas graphically for the \(A\)-series minimal model \(\mathcal{M}(6,5)\) in Figure 2, by plotting the second Renyi entropies shifted by \(L^{2h_{\sigma}}\) for two mixed BC setups \((\phi_{12},\phi_{11})\) and \((\phi_{32},\phi_{31})\) for the BCFTs based on \(\mathcal{M}(6,5)\), together with the relevant identical BC setups for each case.

### The function \(\langle\Psi_{12}\cdot\sigma_{h}\cdot\Psi_{12}\rangle\) in a generic \(\mathbb{Z}_{2}\) orbifold

From the perspective of critical quantum chains, the result in (3.15) only determines the leading contribution to the second Renyi entropy. To understand finite-size corrections to this result, we should also study the one-point function of subleading primary twist operators, namely \(\langle\sigma_{h}(z,\bar{z})\rangle_{\mathbb{H}}^{\alpha\beta}\). We shall derive an ODE for the conformal blocks \[\mathcal{F}_{k}(\eta)=\langle\Phi_{12}|\sigma_{h}(1)\mathcal{P}_{k}\sigma_{h}(\eta)|\Phi_{12}\rangle\,. \tag{3.23}\] Here we consider the case of a generic composite twist operator \(\sigma_{h}\) with conformal dimension \[\widehat{h}=h_{\sigma}+\frac{h}{N}\,, \tag{3.24}\] and thus we do not assume any null-vector condition on \(\sigma_{h}\). Besides the null-vector conditions at level two (3.6), we will need the null-vector at level three in the module of \(|\Phi_{12}\rangle\): \[L_{-2}^{(1)}L_{-1}^{(1)}\,|\Phi_{12}\rangle=\left[-L_{-3}^{(0)}+\frac{1}{g}\left(2gL_{-1}^{(0)}L_{-2}^{(0)}-\left(L_{-1}^{(0)}\right)^{3}\right)\right]|\Phi_{12}\rangle\, \tag{3.25}\] and two Ward identities obtained from: \[\langle\Phi_{12}|\,\sigma_{h}(1)\mathcal{P}_{k}\sigma_{h}(\eta)T^{(1)}(z)L_{-1}^{(1)}\,|\Phi_{12}\rangle\, \tag{3.26}\] with \((m_{1},m_{2},m_{3},m_{4})=(-1,1/2,1/2,-2)\): \[a_{0|1}\,\langle\Phi_{12}|\,L_{1}^{(1)}\sigma_{h}(1)\mathcal{P}_{k}\sigma_{h}(\eta)L_{-1}^{(1)}\,|\Phi_{12}\rangle=\sum_{p=0}^{3}d_{p|1}\,\langle\Phi_{12}|\,\sigma_{h}(1)\mathcal{P}_{k}\sigma_{h}(\eta)L_{-2+p}^{(1)}L_{-1}^{(1)}\,|\Phi_{12}\rangle \tag{3.27}\]

Figure 2: Shifted second Rényi entropies for the mixed BC setups \((\phi_{12},\phi_{11})\) and \((\phi_{32},\phi_{31})\) for the BCFTs based on \(\mathcal{M}(6,5)\). The difference between the two curves is constant with respect to the interval size, and thus fixed by their respective asymptotic behaviours around \(\ell\to 0\) and \(\ell\to L\).
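The two expressions for \(\Delta S_{2}\) in (3.22) can be checked directly from the modular S-matrix of \(\mathcal{M}(6,5)\). The sketch below uses a standard minimal-model S-matrix convention, which may differ from the one in [77] by relabelings that drop out of the ratios entering (3.22):

```python
import numpy as np

p, pq = 5, 6   # M(6,5): p' = 6

def S(r, s, rho, sig):
    """Modular S-matrix element of M(p,p'), up to convention-dependent relabelings."""
    return (2 * np.sqrt(2 / (p * pq)) * (-1) ** (1 + s * rho + r * sig)
            * np.sin(np.pi * p * r * rho / pq) * np.sin(np.pi * pq * s * sig / p))

def g_factor(r, s):
    # phi_0 is the identity (1,1) for the unitary model M(6,5)
    return S(r, s, 1, 1) / np.sqrt(S(1, 1, 1, 1))

# Delta S_2 between the setups (phi_{12}, phi_{11}) and (phi_{32}, phi_{31}):
print(np.log(g_factor(1, 1) / g_factor(3, 1)))   # via the phi_{r1} boundaries
print(np.log(g_factor(1, 2) / g_factor(3, 2)))   # same number, via the phi_{r2} ones
```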
and with \((m_{1},m_{2},m_{3},m_{4})=(-2,1/2,1/2,-1)\): \[a_{0|2}\,\langle\Phi_{12}|\,L_{2}^{(1)}\sigma_{h}(1)\mathcal{P}_{k}\sigma_{h}(\eta)L_{-1}^{(1)}\,|\Phi_{12}\rangle+a_{1|2}\,\langle\Phi_{12}|\,L_{1}^{(1)}\sigma_{h}(1)\mathcal{P}_{k}\sigma_{h}(\eta)L_{-1}^{(1)}\,|\Phi_{12}\rangle=\\ =d_{0|2}\,\langle\Phi_{12}|\,\sigma_{h}(1)\mathcal{P}_{k}\sigma_{h}(\eta)L_{-1}^{(1)}L_{-1}^{(1)}\,|\Phi_{12}\rangle+d_{1|2}\,\langle\Phi_{12}|\,\sigma_{h}(1)\mathcal{P}_{k}\sigma_{h}(\eta)L_{0}^{(1)}L_{-1}^{(1)}\,|\Phi_{12}\rangle+\\ +d_{2|2}\,\langle\Phi_{12}|\,\sigma_{h}(1)\mathcal{P}_{k}\sigma_{h}(\eta)L_{1}^{(1)}L_{-1}^{(1)}\,|\Phi_{12}\rangle \tag{3.28}\] Putting everything together, and applying the change of function \[\mathcal{F}(\eta)=\eta^{-2h_{12}}\,(1-\eta)^{4h_{12}-2\widehat{h}}\,f(\eta)\,, \tag{3.29}\] we obtain the fourth-order ODE \[(\eta-1)^{4}\eta^{3}\,\partial_{\eta}^{4}f+\frac{1}{2}(\eta-1)^{3}\eta^{2}\left[(2g+13)\eta+(2g-11)\right]\,\partial_{\eta}^{3}f\\ -\frac{1}{8}(\eta-1)^{2}\eta\left[(16g\widehat{h}+6g^{2}-45g-60)\eta^{2}+(20g^{2}+34g+96)\eta+(16g\widehat{h}+6g^{2}+3g-36)\right]\,\partial_{\eta}^{2}f\\ -\frac{g}{16}(\eta-1)\left[(48\widehat{h}+18g-75)\eta^{3}+(16g^{2}-18g-112\widehat{h}+167)\eta^{2}\right.\\ \left.+(80\widehat{h}+16g^{2}-74g-53)\eta+(-16\widehat{h}-6g+9)\right]\,\partial_{\eta}f\\ +\frac{g}{8}\left[(16g^{2}\widehat{h}+6g^{3}-13g^{2}+4)\eta^{2}+(-32g^{2}\widehat{h}+12g^{3}+34g^{2}-64g+24)\eta\right.\\ \left.+(24+16g^{2}\widehat{h}+6g^{3}-13g^{2}+4)\right]\,f=0\,. \tag{3.30}\] At this stage, it will be convenient to use the Coulomb-Gas parametrization to analyse the local exponents of the ODE. Recall the relation between the mother CFT central charge and the parameter \(g\): \[c=1-24Q^{2}\,,\qquad Q=\frac{1}{2}(1/b-b)\,,\qquad b=\sqrt{g}\,. \tag{3.31}\] The conformal dimensions in the mother CFT are parametrized by the _vertex charge_ \(\alpha\) as \[h_{\alpha}=\alpha(\alpha-2Q)\,, \tag{3.32}\] and we use the shorthand notation for the conformal dimension of composite twisted operators \[\widehat{h}_{\alpha}=h_{\sigma}+\frac{h_{\alpha}}{N}\,. \tag{3.33}\] In this parametrization, the Riemann scheme for \(\mathcal{F}(\eta)\) is given in Table 1. \begin{table} \begin{tabular}{c c c} \(0\) & \(1\) & \(\infty\) \\ \hline \(-2h_{12}\) & \(-2\widehat{h}_{\alpha}\) & \(2\widehat{h}_{\alpha}-2h_{12}\) \\ \(-2h_{12}+\frac{1}{2}\) & \(-2\widehat{h}_{\alpha}+h_{13}\) & \(2\widehat{h}_{\alpha}-2h_{12}+\frac{1}{2}\) \\ \(-\widehat{h}_{\alpha}-2h_{12}+\widehat{h}_{\alpha+b}\) & \(-2\widehat{h}_{\alpha}+2h_{13}\) & \(\widehat{h}_{\alpha}-2h_{12}+\widehat{h}_{\alpha+b}\) \\ \(-\widehat{h}_{\alpha}-2h_{12}+\widehat{h}_{\alpha-b}\) & \(-2\widehat{h}_{\alpha}+2h_{13}+2\) & \(\widehat{h}_{\alpha}-2h_{12}+\widehat{h}_{\alpha-b}\) \\ \end{tabular} \end{table} Table 1: Singular exponents around \(\eta=0,1,\infty\) These exponents correspond to the intermediary states (counted with their multiplicities): \[\begin{array}{ll}\{{\bf 1},[{\bf 1},\phi_{13}],\Phi_{13},\Phi_{13}\}&\mbox{in the channel $\eta\to 1$}\,,\\ \{\sigma_{h},L^{(1)}_{-1/2}\cdot\sigma_{h},\sigma_{h^{\prime}},\sigma_{h^{\prime\prime}}\}&\mbox{in the channels $\eta\to 0$ and $\eta\to\infty$}\,.\end{array} \tag{3.34}\] Here, we have defined \(h^{\prime}=h_{\alpha+b}\) and \(h^{\prime\prime}=h_{\alpha-b}\).
Recall that the conformal blocks are labelled by primary operators under the neutral subalgebra \(A\), and that \(L^{(1)}_{-1/2}\cdot\sigma_{h}\) is one of these operators. When the mother CFT is a minimal model, one can check on various examples that the orbifold fusion rules derived in [70] from the Verlinde formula are consistent with these intermediary states. While an analytic solution to the differential equation is not known, one can determine the conformal blocks \({\cal F}_{k}(\eta)\) around \(\eta=0\) and \(\widetilde{\cal F}_{\ell}(\eta)\) around \(\eta=1\) numerically to arbitrary precision. Assuming this step has been performed, all that is left is to calculate the structure constants in the block expansion (3.4). The boundary-boundary structure constants are calculated from (2.38) and (2.39), while the bulk-boundary structure constants can be calculated analytically through unfolding, as shown for some cases in Appendices B.1 and B.2. For numerical studies, however, it is simpler to bootstrap some of the coefficients in the block expansion (3.4) rather than to calculate all of them analytically. To implement this method, as detailed in [78], one needs to compare the block expansion in (3.4) with the block expansion corresponding to sending the twist field to the part of the boundary endowed with the \(\alpha\) BC. The crucial point here is that the conformal blocks \({\cal F}_{k}(\eta)\) are branched functions on \(\mathbb{C}\), with branch points \(\{0,1,\infty\}\), and thus sending the twist field to the boundary with BC \(\alpha\) is equivalent to crossing to the other branch of the function. This is marked by appending the phase factor \(e^{2\pi i}\) to the variable \(\eta\) to get: \[\langle\sigma_{h}(z,\bar{z})\rangle_{\mathbb{S}}^{\alpha\beta}={\cal J}_{SL(2,\mathbb{C})}\sum_{\ell}{\cal A}^{(\alpha)}_{\sigma_{h},\Psi_{\ell}}{\cal B}^{(\alpha\alpha\beta)\Psi_{12}}_{\Psi_{\ell},\Psi_{12}}\widetilde{\cal F}_{\ell}(e^{2\pi i}\eta)\,. \tag{3.35}\] To proceed, one needs to find the _monodromy matrix_ \(X\) around zero for the basis \(\widetilde{\cal F}_{\ell}(\eta)\), which encodes the behaviour of the conformal blocks as the branch cut is crossed: \[\widetilde{\cal F}_{\ell}(e^{2\pi i}\eta)=\sum_{m}X_{\ell m}\widetilde{\cal F}_{m}(\eta)\,. \tag{3.36}\] Since the monodromy of the blocks \(\widetilde{\cal F}_{\ell}\) around zero is non-diagonal, we can use the fusing matrix \(P_{ij}\) to express the blocks \(\widetilde{\cal F}_{\ell}\) in terms of a basis of the blocks \({\cal F}_{k}\), which have diagonal monodromy around \(\eta=0\): \[\widetilde{\cal F}_{\ell}(\eta)=\sum_{k}P_{\ell k}{\cal F}_{k}(\eta)\,. \tag{3.37}\] The blocks \({\cal F}_{k}(\eta)\) simply acquire a phase under \(\eta\to e^{2\pi i}\eta\), so their monodromy matrix \(Y\) is diagonal: \[{\cal F}_{k}(e^{2i\pi}\eta)=\sum_{j}Y_{kj}\,{\cal F}_{j}(\eta)\,,\qquad Y_{kj}=\delta_{kj}\,\exp\left[2\pi i\left(-\widehat{h}_{\alpha}-2h_{12}+\widehat{h}_{k}\right)\right], \tag{3.38}\] where the exponents in the exponential above are simply read off from the Riemann scheme.
Then, the monodromy matrix of the blocks \(\widetilde{\cal F}_{\ell}(\eta)\) is found from the matrix product: \[X=P\cdot Y\cdot P^{-1}\,, \tag{3.39}\] which allows us to compare the block expansions in (3.4) and (3.35) to find a duality relation of the type presented in [78]: \[\mathcal{A}^{(\beta)}_{\sigma_{h},\Psi_{i}}\mathcal{B}^{(\beta\beta\alpha)\Psi_{i}}_{\Psi_{12},\Psi_{12}}=\sum_{j}\mathcal{A}^{(\alpha)}_{\sigma_{h},\Psi_{j}}\mathcal{B}^{(\alpha\alpha\beta)\Psi_{j}}_{\Psi_{12},\Psi_{12}}X_{ji}\,. \tag{3.40}\] Using the numerical determinations for \(\mathcal{F}_{k}(\eta)\) and \(\widetilde{\mathcal{F}}_{\ell}(\eta)\), one can find a good estimate for the fusing matrix \(P_{ij}\), and, consequently, \(X_{ij}\). A more fleshed out example of how the determination of \(P_{ij}\) works has been relegated to Appendix H, where this equation is used for the case of the \(\mathbb{Z}_{2}\) orbifold of the three-state Potts model BCFT. After solving the linear system in (3.40) one can evaluate the unknown structure constants \(\mathcal{A}^{(\beta)}_{\sigma_{h},\Psi_{i}}\) and \(\mathcal{A}^{(\alpha)}_{\sigma_{h},\Psi_{i}}\). At this point, we stress that (3.40) gives, at most, _four_ constraints between the unknown structure constants. To fully determine all these quantities, one should calculate the remaining four structure constants through other methods.

### The function \(\langle\Psi_{12}\cdot\sigma\cdot\Psi_{12}\rangle\) in a generic \(\mathbb{Z}_{3}\) orbifold

Here the relevant conformal blocks are \[\mathcal{F}_{k}(\eta)=\langle\Phi_{12}|\sigma(1)\mathcal{P}_{k}\sigma^{\dagger}(\eta)|\Phi_{12}\rangle\,. \tag{3.41}\] We give the null-vectors of \(|\Phi_{12}\rangle\) at levels two and three: \[L^{(r)}_{-2}\,|\Phi_{12}\rangle=\frac{1}{3g}\sum_{s=0}^{2}L^{(r-s)}_{-1}L^{(s)}_{-1}\,|\Phi_{12}\rangle\, \tag{3.42}\] \[L^{(3-r)}_{-1}L^{(r)}_{-2}\,|\Phi_{12}\rangle=\frac{1}{3g}\left[2L^{(0)}_{-1}L^{(1)}_{-1}L^{(2)}_{-1}+\left(L^{(3-r)}_{-1}\right)^{3}\right]|\Phi_{12}\rangle\, \tag{3.43}\] for \(r\in\{0,1,2\}\). We will also need the null-vectors for the out-state \(\langle\Phi_{12}|\), which can be obtained by Hermitian conjugation (2.6). To derive an ODE for the conformal blocks, we had to employ seven orbifold Ward identities, together with six of the null-vector conditions above. To not overload the presentation of this section with technical details, we relegate the specifics of the derivation to Appendix G. We apply the change of function \[\mathcal{F}(\eta)=\eta^{-8h_{12}/3}\,(1-\eta)^{16h_{12}/3-2h_{\sigma}}\,f(\eta)\,. \tag{3.44}\]
The function \(f\) satisfies the ODE \[\begin{split}&(\eta-1)^{3}\eta^{2}\,\partial_{\eta}^{3}f+(\eta-1)^{2}\eta\left[(g+3)\eta+(g-3)\right]\,\partial_{\eta}^{2}f\\ &+\frac{2}{9}(\eta-1)\left[2(2+3g)\eta^{2}-2(7-15g+18g^{2})\eta+(4-3g)\right]\,\partial_{\eta}f\\ &+\frac{4}{27}(1-6g)(2-3g)(\eta+1)\,f=0\,.\end{split} \tag{3.45}\] The Riemann scheme for \(\mathcal{F}\) is \[\begin{array}{ccc}0&1&\infty\\ \hline-\frac{8}{3}h_{12}&-2h_{\sigma}&2h_{\sigma}-\frac{8}{3}h_{12}\\ -\frac{8}{3}h_{12}+\frac{1}{3}&-2h_{\sigma}+2h_{13}&2h_{\sigma}-\frac{8}{3}h_{12}+\frac{1}{3}\\ -3h_{12}+\frac{h_{14}}{3}&-2h_{\sigma}+3h_{13}&2h_{\sigma}-3h_{12}+\frac{h_{14}}{3}\end{array}\] The local exponents correspond to the intermediary states: \[\begin{split}\{\mathbf{1},[\mathbf{1},\phi_{13},\phi_{13}],\Phi_{13}\}&\quad\text{in the channel $\eta\to 1$}\,,\\ \{\sigma_{12},L^{(1)}_{-1/3}\cdot\sigma_{12},\sigma_{14}\}&\quad\text{in the channels $\eta\to 0$ and $\eta\to\infty$}\,.\end{split} \tag{3.46}\] In the orbifold BCFT, this translates into the following fusion rules for the twist operator with the boundary \(\beta\): \[\sigma_{\mathbf{1}}\Bigg{|}_{\beta}\to\Psi^{(\beta\beta)}_{\mathbf{1}}+\Psi^{(\beta\beta)}_{[\mathbf{1},\phi_{13},\phi_{13}]}+\Psi^{(\beta\beta)}_{13}\,. \tag{3.47}\] The analytic solutions to the differential equation (3.45) are not known, but they can be evaluated numerically, to arbitrary precision. Then, one can use the bootstrap to determine some relations between the unknown structure constants in the expansion (3.4), as outlined in the previous section, and determine the rest analytically, by unfolding methods, to complete the calculation of the mixed BC correlator of the bare twist. We note that a fourth order differential equation that the correlator (2.28) satisfies has already been found in [75], where it plays a role in the determination of the leading contribution to the third Renyi entropy of an excited state in a periodic 1D critical chain. As predicted in [75], there is no degeneracy in the exponents in the more constraining third order differential equation we have found here. Note that these exponents are the ones expected from the orbifold fusion rules [70].

### The function \(\langle\Psi_{12}\cdot\sigma_{13}\cdot\Psi_{12}\rangle\) in the \(\mathbb{Z}_{3}\) orbifold of the Ising model

In this section, we will work with the \(\mathbb{Z}_{3}\) orbifold of the Ising BCFT. The bulk primary fields of this BCFT are \(\phi_{11}\equiv\mathbf{1}\), \(\phi_{12}\equiv s\) and \(\phi_{13}\equiv\varepsilon\) with \(h_{s}=1/16\) and \(h_{\varepsilon}=1/2\). We will keep labelling the fields by their Kac indices, to not overcomplicate the notation. We will provide here an alternative method for finding a differential equation for the one-point function: \[\langle\sigma_{13}(z,\bar{z})\rangle_{\mathbb{H}}^{f+}\,, \tag{3.48}\] where the orbifold conformal boundary conditions \(\alpha=f\) and \(\beta=+\) correspond to setting free and fixed BC, respectively, on all the copies of the Ising mother BCFT. The diagonal BCCO \(\Psi_{12}^{(f+)}\) is the one interpolating between them in the orbifold, since in the Ising BCFT only the \(\psi_{12}^{(f+)}\) primary boundary field can change between the CBCs \((+)\leftrightarrow(f)\) [78]. As in the previous sections, we aim to find a differential equation satisfied by the conformal blocks \[\mathcal{F}_{k}(\eta)=\langle\Phi_{12}|\sigma_{13}(1)\mathcal{P}_{k}\sigma_{13}^{\dagger}(\eta)|\Phi_{12}\rangle\,. \tag{3.49}\]
First, we use the fusion numbers in (F.1) to infer the dimension of the space of conformal blocks for (3.48): \[\sum_{i}\mathcal{N}^{i}_{\sigma_{13}\sigma_{13}^{\dagger}}\mathcal{N}^{\Phi_{12}}_{i,\Phi_{12}}=2\,, \tag{3.50}\] which means the differential equation we seek should be of second order. By using the null-vectors induced on \(\sigma_{13}\) together with the right combination of Ward identities, one should be able to rigorously derive it. Instead, we will assume this equation exists and is of Fuchsian type: a linear homogeneous ODE whose three singular points are regular. The latter assumption is based on the observation that the method exploited in the previous sections relies on expressing orbifold modes \(L_{m}^{(r\neq 0)}\) in terms of Virasoro generators, whose combined differential action on correlators is well understood in the literature [77], [79] to be of Fuchsian type. Now, using the fusion numbers of (F.1), the fusion rules are \[\begin{split}\sigma_{13}\times\Phi_{12}&\to\sigma_{12}+L_{-2/3}^{(2)}\cdot\sigma_{12}\,,\\ \sigma_{13}\times\sigma_{13}^{\dagger}&\to\mathbf{1}+[\mathbf{1},\phi_{13},\phi_{13}]\,,\end{split} \tag{3.51}\] so we can determine the asymptotic behaviour of the solutions around the regular singular points \(\eta\in\{0,1,\infty\}\) of the differential equation and infer the Riemann scheme: \[\begin{array}{c|c|c}0&1&\infty\\ \hline-h_{\sigma_{13}}-3h_{12}+h_{\sigma_{12}}&-2h_{\sigma_{13}}&h_{\sigma_{13}}-3h_{12}+h_{\sigma_{12}}\\ -h_{\sigma_{13}}-3h_{12}+h_{\sigma_{12}}+\frac{2}{3}&-2h_{\sigma_{13}}+2h_{13}&h_{\sigma_{13}}-3h_{12}+h_{\sigma_{12}}+\frac{2}{3}\end{array}\] One can readily check that the entries of this Riemann scheme sum up to one, so, by virtue of a general theorem on Fuchsian ODEs (see [80]), there is a unique second-order Fuchsian ODE with this set of singular exponents. If we define the shifted function \(f(\eta)\): \[f(\eta)=\eta^{h_{\sigma_{13}}+3h_{12}-h_{\sigma_{12}}}(1-\eta)^{2h_{\sigma_{13}}}\mathcal{F}(\eta)\,, \tag{3.52}\] we find, by the same considerations, that it should satisfy a second-order Fuchsian differential equation with the Riemann scheme \[\begin{array}{c|c|c}0&1&\infty\\ \hline 0&0&-2h_{\sigma_{13}}-6h_{12}+2h_{\sigma_{12}}\\ \frac{2}{3}&2h_{13}&-2h_{\sigma_{13}}-6h_{12}+2h_{\sigma_{12}}+\frac{2}{3}\end{array}\] This is just the canonical Riemann scheme of a hypergeometric differential equation (E.1), with coefficients: \[\begin{split}a&=-2h_{\sigma_{13}}-6h_{12}+2h_{\sigma_{12}}=-2/3\,,\\ b&=-2h_{\sigma_{13}}-6h_{12}+2h_{\sigma_{12}}+\frac{2}{3}=0\,,\\ c&=\frac{1}{3}\,,\end{split} \tag{3.53}\] in the conventions of Appendix E. We notice that the exponents in the \(\eta\to 1\) channel are spaced by one, so we will have to deal with the _degenerate exponents_ to arrive at a closed form solution. To do this, we will use the basis of solutions in the \(\eta\to 0\) channel, given in (E.2), to construct a linearly independent basis of solutions around \(\eta\to 1\). The solutions for \(f\) can be simplified, in this case, to: \[I_{1}(\eta)=1\,,\qquad I_{2}(\eta)=\eta^{2/3}\,, \tag{3.54}\] which gives the conformal blocks around \(\eta\to 0\): \[\mathcal{F}_{1}(\eta)=\eta^{-1/3}(1-\eta)^{-4/9}\,,\qquad\mathcal{F}_{2}(\eta)=\eta^{1/3}(1-\eta)^{-4/9}\,, \tag{3.55}\] in our normalisation convention.
To build the basis of solutions around \(\eta=1\), we look for the linear combinations \(\tilde{\cal F}_{i}(\eta)=\sum_{j}P_{ij}{\cal F}_{j}(\eta)\) that have the following series expansion around \(\eta=1\): \[\tilde{\cal F}_{1}(\eta)=(1-\eta)^{-4/9}\left(1+{\cal O}[(1-\eta)^{2}]\right)\,,\qquad\tilde{\cal F}_{2}(\eta)\sim(1-\eta)^{5/9}\,, \tag{3.56}\] since the power series associated to the orbifold identity should have no \((1-\eta)\) term due to the null-vectors \(L_{-1}^{(r)}\cdot{\bf 1}\equiv 0\), and both solutions should have the leading coefficient normalised to one, in our convention for the conformal blocks. With these requirements, one finds the fusing matrix \(P_{ij}\) to be: \[P=\frac{1}{2}\left(\begin{array}{cc}1&1\\ 3&-3\end{array}\right)\,. \tag{3.57}\] Thus, the conformal blocks of (3.48) around \(\eta=1\) are found to be: \[\tilde{\cal F}_{1}(\eta)=\frac{\eta^{-1/3}+\eta^{1/3}}{2(1-\eta)^{4/9}}\,,\qquad\tilde{\cal F}_{2}(\eta)=\frac{3(\eta^{-1/3}-\eta^{1/3})}{2(1-\eta)^{4/9}}\,. \tag{3.58}\] For the physical correlation function, we write \[\langle\sigma_{13}(z,\bar{z})\rangle_{\mathbb{H}}^{f+}=\bar{z}^{-2h_{\sigma_{13}}}\left[\mathcal{A}^{(+)}_{\sigma_{13},\Psi_{\mathbf{1}}}\mathcal{B}^{(++f)\Psi_{12}}_{\Psi_{\mathbf{1}},\Psi_{12}}\tilde{\mathcal{F}}_{1}(\eta)+\mathcal{A}^{(+)}_{\sigma_{13},\Psi_{[\mathbf{1},\phi_{13},\phi_{13}]}}\mathcal{B}^{(++f)\Psi_{12}}_{\Psi_{[\mathbf{1},\phi_{13},\phi_{13}]},\Psi_{12}}\tilde{\mathcal{F}}_{2}(\eta)\right]\,. \tag{3.59}\] Finally, we observe that \(B_{\psi_{13}\psi_{12}}^{(++f)\psi_{12}}\) vanishes, and hence \(\mathcal{B}^{(++f)\Psi_{12}}_{\Psi_{[\mathbf{1},\phi_{13},\phi_{13}]},\Psi_{12}}=0\), so the expression (3.59) simplifies to: \[\boxed{\langle\sigma_{13}(z,\bar{z})\rangle_{\mathbb{H}}^{f+}=2^{5/9}\times\frac{\cos(2\theta/3)}{(r\sin\theta)^{4/9}}\,,\qquad z=re^{i\theta}} \tag{3.60}\] where we have also used: \[{\cal A}_{\sigma_{13},\Psi_{\mathbf{1}}}^{(+)}=g_{+}^{-2}\,,\qquad{\cal B}_{\Psi_{\mathbf{1}},\Psi_{12}}^{(++f)\Psi_{12}}=1\,, \tag{3.61}\] and the value of the ground-state degeneracy for fixed BC \(g_{+}=1/\sqrt{2}\) in the Ising BCFT [69].
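This \(\mathbb{Z}_{3}\)-Ising example makes the monodromy construction of (3.36)–(3.40) completely explicit, since the blocks (3.55) are elementary. A minimal numerical check of the consistency of (3.36)–(3.39), with our own variable names:

```python
import numpy as np

# Local exponents of the blocks (3.55) at eta = 0 are -1/3 and +1/3,
# so their monodromy around 0 is diagonal, cf. (3.38):
Y = np.diag(np.exp(2j * np.pi * np.array([-1/3, 1/3])))

P = 0.5 * np.array([[1, 1], [3, -3]])    # fusing matrix (3.57)
X = P @ Y @ np.linalg.inv(P)             # monodromy of the eta -> 1 basis, eq. (3.39)

eta = 0.2
F = np.array([eta ** (-1/3), eta ** (1/3)]) * (1 - eta) ** (-4/9)   # blocks (3.55)
tildeF = P @ F                 # blocks (3.58)
tildeF_continued = P @ (Y @ F)  # continuation eta -> e^{2 pi i} eta, applied by hand
print(np.allclose(tildeF_continued, X @ tildeF))   # True, as in (3.36)
```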
### More hypergeometric differential equations in the Ising cyclic orbifold BCFTs

We have managed, in Sections 3.3 and 3.4, to obtain differential equations for cyclic orbifolds of generic mother BCFTs, but have not been able to provide analytic solutions for them. One can, however, find second order differential equations for particular choices of \({\cal M}_{p,p^{\prime}}\) and composite twist fields (for the correlators of Section 3.3), in the manner presented in Section 3.5, which allow us to _exactly_ determine the correlators. Since we want to compare the results of this section with lattice data of the critical Ising spin chain with mixed BC, it will be particularly satisfying to find such equations for the cyclic orbifolds of the Ising BCFT.

Let us first consider the correlator: \[\langle\sigma_{1,3}(z,\bar{z})\rangle_{N=2}^{\alpha\beta} \tag{3.62}\] in the \(\mathbb{Z}_{2}\) Ising orbifold BCFT, which should satisfy, up to a Mobius map, the same differential equation as: \[\langle\Phi_{1,2}|\,\sigma_{1,3}(1)\sigma_{1,3}(\eta)\,|\Phi_{1,2}\rangle \tag{3.63}\] The orbifold fusion rules of [40] imply that the space of conformal blocks is two-dimensional since: \[\sum_{i}{\cal N}^{i}_{\sigma_{1,3},\sigma_{1,3}}{\cal N}^{\Phi_{1,2}}_{i,\Phi_{1,2}}=2 \tag{3.64}\] By the same type of arguments and assumptions as in Section 3.5, we infer that (3.63) satisfies a second order Fuchsian differential equation with the following Riemann scheme: \[\begin{array}{c|c|c}0&1&\infty\\ \hline-h_{\sigma_{1,3}}-2h_{1,2}+h_{\sigma_{\mathbf{1}}}&-2h_{\sigma_{1,3}}&h_{\sigma_{1,3}}-2h_{1,2}+h_{\sigma_{\mathbf{1}}}\\ -h_{\sigma_{1,3}}-2h_{1,2}+h_{\sigma_{1,3}}+\frac{1}{2}&-2h_{\sigma_{1,3}}+2h_{1,3}&h_{\sigma_{1,3}}-2h_{1,2}+h_{\sigma_{1,3}}+\frac{1}{2}\end{array}\] so that we eventually find the one-point twist correlator to be \[\langle\sigma_{1,3}(z,\bar{z})\rangle^{\alpha\beta}_{(N=2)}=\bar{z}^{-2h_{\sigma_{1,3}}}g_{+}^{-1}\tilde{\cal F}^{N=2}_{\Psi_{\bf 1}}(\eta) \tag{3.65}\] with \[\boxed{\tilde{\cal F}^{(N=2)}_{\Psi_{\bf 1}}(\eta)=\frac{1+\eta^{3/4}}{2(1-\eta)^{9/16}\eta^{3/8}}} \tag{3.66}\] Finally, we can find an exact expression for the bare twist correlator: \[\langle\sigma_{\bf 1}(z,\bar{z})\rangle^{\alpha\beta}_{N=3} \tag{3.67}\] in the \(\mathbb{Z}_{3}\) Ising orbifold BCFT since it also satisfies a second order differential equation with Riemann scheme: \[\begin{array}{c|c|c}0&1&\infty\\ \hline-h_{\sigma_{\bf 1}}-3h_{1,2}+h_{\sigma_{1,2}}&-2h_{\sigma_{\bf 1}}&h_{\sigma_{\bf 1}}-3h_{1,2}+h_{\sigma_{1,2}}\\ -h_{\sigma_{\bf 1}}-3h_{1,2}+h_{\sigma_{1,2}}+\frac{1}{3}&-2h_{\sigma_{\bf 1}}+2h_{1,3}&h_{\sigma_{\bf 1}}-3h_{1,2}+h_{\sigma_{1,2}}+\frac{1}{3}\end{array}\] We find: \[\langle\sigma_{\bf 1}(z,\bar{z})\rangle^{\alpha\beta}_{N=3}=\bar{z}^{-1/9}g_{+}^{-2}\tilde{\cal F}^{N=3}_{\Psi_{\bf 1}}(\eta) \tag{3.68}\] with \[\boxed{\tilde{\cal F}^{N=3}_{\Psi_{\bf 1}}(\eta)=\frac{1+\eta^{1/3}}{2(1-\eta)^{1/9}\eta^{1/6}}} \tag{3.69}\]

Other results for the Ising BCFT. We have also obtained results specific to the \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{3}\) orbifolds of the Ising BCFT with fixed mixed BC with \(\alpha=+\) and \(\beta=-\), for which the most relevant primary BCCO is \(\psi_{2,1}^{(+-)}\). Since these results are not based on deriving differential equations, it felt thematically appropriate to leave their presentation for Appendix D.

## 4 Numerical checks and finite-size corrections in quantum chains

To provide an independent appraisal of the validity of our CFT results, we have performed a numerical analysis on the Ising and three-state Potts open quantum chains for different settings of mixed BC. Once finite-size effects are properly accounted for, the validity of the CFT results becomes apparent. We should note that the Renyi entropies in the Ising case have already been obtained, for generic \(N\), in the work of [57], through a different approach. We found that our analytical calculations (for \(N=2,3\)) are compatible with their results. Furthermore, by studying the finite-size corrections to their result, we manage to quantitatively understand the deviation of the chain data from the leading CFT prediction in the DMRG numerical analysis of [57], even for relatively large system sizes \(M\sim 10^{2}\).
Thus, when the subleading CFT contribution to the Renyi entropy is taken into account, as our analysis shall show, the agreement with the lattice data is excellent, even for the small system sizes \(M\sim 26\) accessible to exact diagonalization.

### The Ising quantum chain with mixed BC

The Hamiltonian of the Ising quantum chain with open BC, describing \(M\) spins with generic BC at the boundary, is given by: \[H_{\alpha\beta}=-\sum_{j=1}^{M-1}s_{j}^{z}s_{j+1}^{z}-h\,\sum_{j=1}^{M}s_{j}^{x}-h_{\alpha}s_{1}^{z}-h_{\beta}s_{M}^{z}\,, \tag{4.1}\] where \(s_{j}^{x,y,z}\) denote Pauli spin operators acting non-trivially at site \(j\), and as identity at all the other sites. We denote the lattice spacing by \(a\), so that the length of the chain is \(L=Ma\). The parameters \(h_{\alpha},h_{\beta}\) denote external fields (in the \(z\) direction) acting at the boundary sites \(j=1\) and \(j=M\). The ground state of this Hamiltonian is then found by _exact diagonalization_ (ED) for system sizes \(M\leq 26\), and from it, the Renyi entropies are extracted. To take the _scaling limit_ of the critical chain, we send \(M\to\infty,a\to 0\) while keeping \(L\) fixed. In this limit, criticality is achieved in the bulk for \(h=1\), while each boundary admits three _critical points_ \(h_{\alpha},h_{\beta}\in\{0,\pm\infty\}\). From a CFT perspective, the scaling limit of the critical Ising chain with open boundaries is very well understood. It is described by the BCFT with central charge \(c=1/2\) and a bulk operator spectrum consisting of three primary operators (the identity \(\mathbf{1}\), the energy \(\varepsilon\) and the spin \(s\)) and their descendants [77]. The three boundary critical points correspond to the three conformal boundary conditions for the Ising BCFT, which, in the framework of radial quantization on the annulus, allow the construction of the following physical boundary states [77, 53]: \[|f\rangle=|\mathbf{1}\rangle\!\rangle-|\varepsilon\rangle\!\rangle\qquad\text{_(free BC),}\] (4.2) \[|\pm\rangle=\frac{1}{\sqrt{2}}|\mathbf{1}\rangle\!\rangle+\frac{1}{\sqrt{2}}|\varepsilon\rangle\!\rangle\pm\frac{1}{2^{1/4}}|s\rangle\!\rangle\qquad\text{_(fixed BC),}\] (4.3) where \(|i\rangle\!\rangle\) denotes the Ishibashi state [53, 82] corresponding to the primary operator \(i\). The physical boundary states \(|\alpha\rangle\) are in one-to-one correspondence with the primary fields of the bulk CFT 2: \(|f\rangle\leftrightarrow s\) and \(|\pm\rangle\leftrightarrow\mathbf{1}/\varepsilon\). The boundary fields that interpolate between two conformal BCs can be inferred from this correspondence, as shown in [53], [78]. Thus, the spectrum of primary boundary fields \(\psi_{i}^{(\alpha\beta)}\) of the Ising BCFT is the one of Table 2. Footnote 2: This statement is strictly true if the bulk CFT is diagonal, see [82] for a detailed discussion. On the discrete side, we are calculating the one-point correlator of the _lattice twist operator_ \(\widehat{\sigma}(m,n)\), where \((m,n)\) are square-lattice coordinates. In the scaling limit with \(a\to 0\), \(\widehat{\sigma}(m,n)\) admits a _local_ expansion into scaling operators of the corresponding orbifold CFT. The two most _relevant_ terms in this expansion are: \[\widehat{\sigma}(m,n)=A\,a^{2h_{\sigma}}\sigma_{\bf 1}(w,\bar{w})+B\,a^{2h_{\sigma_{\varepsilon}}}\sigma_{\varepsilon}(w,\bar{w})+\mbox{less relevant terms}\,, \tag{4.4}\] with the composite twist operator \(\sigma_{\varepsilon}\) defined in (2.13) and \(h_{\sigma_{\varepsilon}}=h_{\sigma}+h_{\varepsilon}/N\).
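For readers who want to reproduce the lattice side, the following is a minimal exact-diagonalization sketch of (4.1) in dense `numpy` (so only small \(M\) is reachable); the fixed critical point \(h_{\beta}=\infty\) is approximated here by a large boundary field, which is an assumption of this sketch rather than necessarily the implementation behind the paper's data:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op_at(op, j, M):
    """Embed a single-site operator at site j (0-indexed) in an M-site chain."""
    out = np.array([[1.]])
    for k in range(M):
        out = np.kron(out, op if k == j else np.eye(2))
    return out

def ising_chain(M, h=1.0, h_alpha=0.0, h_beta=50.0):
    """Hamiltonian (4.1); h_alpha = 0 is free BC, large h_beta mimics fixed BC."""
    H = np.zeros((2 ** M, 2 ** M))
    for j in range(M - 1):
        H -= op_at(sz, j, M) @ op_at(sz, j + 1, M)
    for j in range(M):
        H -= h * op_at(sx, j, M)
    H -= h_alpha * op_at(sz, 0, M) + h_beta * op_at(sz, M - 1, M)
    return H

def second_renyi(psi, m, M):
    """S_2 of the interval [0, m] from the ground state psi."""
    psi_mat = psi.reshape(2 ** m, 2 ** (M - m))
    rho_A = psi_mat @ psi_mat.conj().T      # reduced density matrix of the interval
    return -np.log(np.trace(rho_A @ rho_A).real)

M = 10
psi0 = np.linalg.eigh(ising_chain(M))[1][:, 0]
for m in range(1, M):
    print(m, second_renyi(psi0, m, M))
```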
The integers \((m,n)\) parametrize the lattice, and they are related to the continuum coordinate on the strip as \(w=(m+in)a\), \(\bar{w}=(m-in)a\). We can take advantage of the translation invariance in the \(n\) direction to fix the "time" coordinate of the lattice twist operators to be \(n=0\). We will then denote their continuum coordinate by \(\ell=ma\). The amplitudes \(A\) and \(B\) in (4.4) are not universal quantities, so we cannot determine them by CFT techniques. However, they are also independent of the global properties of the system (e.g. choice of BC) so they can be found from a numerical analysis of the infinite Ising chain. Here one can employ the free fermion techniques of [1] and the well-known analytical results for the Renyi entropy of an interval in an infinite system [50, 2] to fit the values of \(A\) and \(B\) with great accuracy. We can now express the lattice one-point twist correlator with generic mixed BC as an expansion of CFT correlators: \[\langle\widehat{\sigma}(m,0)\rangle^{\alpha\beta}=Aa^{2h_{\sigma}}\langle\sigma(\ell,\ell)\rangle^{\alpha\beta}_{\mathbb{S}_{L}}+Ba^{2h_{\sigma_{\varepsilon}}}\langle\sigma_{\varepsilon}(\ell,\ell)\rangle^{\alpha\beta}_{\mathbb{S}_{L}}+\ldots \tag{4.5}\] Using the map (3.2), we can make the dependence on system size in (4.5) explicit: \[\langle\widehat{\sigma}(m,0)\rangle^{\alpha\beta}=A\left(\frac{M}{\pi}\right)^{-2h_{\sigma}}\langle\sigma(z,\bar{z})\rangle^{\alpha\beta}_{\mathbb{H}}+B\left(\frac{M}{\pi}\right)^{-2h_{\sigma_{\varepsilon}}}\langle\sigma_{\varepsilon}(z,\bar{z})\rangle^{\alpha\beta}_{\mathbb{H}}+\ldots \tag{4.6}\] where \(z=\exp(i\pi\ell/L),\bar{z}=\exp(-i\pi\ell/L)\). In our computational setup, the system sizes accessible through exact diagonalization are limited to \(M\leq 26\) and, since twist operators are placed _between lattice sites_, we have only considered even system sizes. With system sizes of this order of magnitude, finite-size corrections are quite strong. The most relevant corrections we have found arise from the subleading scaling of the lattice twist operator, given in equation (4.6). The relative scaling of the subleading term with respect to the leading one is \({\cal O}\left(M^{-2h_{\varepsilon}/N}\right)\). Since we do not have access, numerically, to system sizes large enough to suppress these corrections, we had to take into account the first two terms in the expansion (4.6) to find a good agreement with the lattice data. Furthermore, as the work of [57] suggests, the finite-size effects are still important, even at the much larger system sizes \(M\sim 100\) accessible through DMRG methods. We mention that such subleading contributions to the lattice twist operator, which have been identified here from the operator spectrum of the \(\mathbb{Z}_{2}\) cyclic orbifold, have previously been understood, through the path integral formalism on the corresponding replicated surface, under the name of "unusual corrections" [83, 84].
\begin{table} \begin{tabular}{|c|c|c|c|} \hline \((\alpha\beta)\) & \(+\) & \(-\) & \(f\) \\ \hline \(+\) & \(\psi_{\bf 1}\) & \(\psi_{\varepsilon}\) & \(\psi_{s}\) \\ \hline \(-\) & \(\psi_{\varepsilon}\) & \(\psi_{\bf 1}\) & \(\psi_{s}\) \\ \hline \(f\) & \(\psi_{s}\) & \(\psi_{s}\) & \(\psi_{\bf 1},\psi_{\varepsilon}\) \\ \hline \end{tabular} \end{table} Table 2: Boundary operator spectrum of the Ising BCFT

We give now the results in the \(\mathbb{Z}_{2}\) orbifold for the correlators appearing in the expansion (4.5), for _mixed fixed_ BC with \(\alpha=+\), \(\beta=-\) (calculated in Appendix D) and mixed free-fixed BC with \(\alpha=+\) and \(\beta=f\): \[\begin{split}\langle\sigma_{\mathbf{1}}(\ell,\ell)\rangle_{\mathbb{S}_{L}}^{+-}&=2^{-5/2}\left(\frac{2L}{\pi}\right)^{-1/16}\frac{7+\cos\frac{2\pi\ell}{L}}{\left(\sin\frac{\pi\ell}{L}\right)^{1/16}}\,,\\ \langle\sigma_{\varepsilon}(\ell,\ell)\rangle_{\mathbb{S}_{L}}^{+-}&=2^{-5/2}\left(\frac{2L}{\pi}\right)^{-9/16}\frac{1-9\cos\frac{2\pi\ell}{L}}{\left(\sin\frac{\pi\ell}{L}\right)^{9/16}}\,,\\ \langle\sigma_{\mathbf{1}}(\ell,\ell)\rangle_{\mathbb{S}_{L}}^{+f}&=2^{1/2}\left(\frac{2L}{\pi}\right)^{-1/16}\frac{\cos\frac{\pi\ell}{4L}}{\left(\sin\frac{\pi\ell}{L}\right)^{1/16}}\,,\\ \langle\sigma_{\varepsilon}(\ell,\ell)\rangle_{\mathbb{S}_{L}}^{+f}&=-2^{1/2}\left(\frac{2L}{\pi}\right)^{-9/16}\frac{\cos\frac{3\pi\ell}{4L}}{\left(\sin\frac{\pi\ell}{L}\right)^{9/16}}\,,\end{split} \tag{4.7}\] where the interval \(\ell\) starts at the \(\alpha=+\) boundary. The expressions for the bare twist correlators are in accord with the equivalent results obtained in [57]. With this said, we show in Figure 3 the remarkable agreement between our CFT calculations for the two terms contributing to the second Renyi entropy \(S_{2}^{\alpha\beta}=-\log\langle\widehat{\sigma}(m,0)\rangle^{(\alpha\beta)}\) of the interval \([0,m]\) on the lattice, and the numerical results for the critical Ising chain from the exact diagonalization of the Hamiltonian. Figure 3a illustrates the case of different \((\pm)\) fixed BC on the two sides of the chain, while Figure 3b corresponds to leaving the \(m=0\) site free and applying a magnetic field at the boundary site \(m=M-1\). To illustrate the large amplitude of finite-size effects, we show in Figure 4 how the CFT prediction fares against the lattice results with and without the incorporation of the subleading term. Even for the curve including both subleading and leading terms in (4.6), the agreement with lattice data is not perfect close to the boundary. This can be traced to the presence of corrections from _descendants_ of twist operators, which introduce terms of \(\mathcal{O}(M^{-h_{\varepsilon}-1})\) relative to the bare twist contribution.
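For concreteness, the CFT curve that is compared with the ED data can be assembled directly from (4.6) and (4.7). In the sketch below, `A` and `B` are the non-universal amplitudes of (4.4); the numerical values shown are placeholders to be fitted on infinite-chain data, not the fitted values used for the figures:

```python
import numpy as np

h_sig, h_sig_eps = 1/32, 9/32   # Z_2 twist dimensions in the Ising orbifold

def S2_cft(m, M, A, B):
    """Leading + subleading prediction for S_2 with (+, f) mixed BC, eqs. (4.6)-(4.7)."""
    x = m / M                   # = ell / L
    s = np.sin(np.pi * x)
    lead = np.sqrt(2) * (2 * M / np.pi) ** (-2 * h_sig) \
        * np.cos(np.pi * x / 4) / s ** (1 / 16)
    sub = -np.sqrt(2) * (2 * M / np.pi) ** (-2 * h_sig_eps) \
        * np.cos(3 * np.pi * x / 4) / s ** (9 / 16)
    return -np.log(A * lead + B * sub)

M = 26
for m in range(2, M, 2):
    print(m, S2_cft(m, M, A=0.8, B=0.05))   # placeholder amplitudes
```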
Figure 3: Plots of the second Rényi entropy \(S_{2}^{\alpha\beta}([m/M])\) in the critical Ising chain with two types of mixed BC for a chain of size \(M=26\). The interval is grown from the \(\beta=+\) boundary.

We can repeat the same kind of analysis for the third Renyi entropy, related to the \(\mathbb{Z}_{3}\)-orbifold one-point function by \(S_{3}^{\alpha\beta}=-\frac{1}{2}\log\langle\widehat{\sigma}(m,0)\rangle^{(\alpha\beta)}\). The Ising orbifold correlators in this case are given by: \[\begin{split}\langle\sigma_{\mathbf{1}}(\ell,\ell)\rangle_{\mathbb{S}_{L}}^{+-}&=3^{-2}\left(\frac{2L}{\pi}\right)^{-1/9}\frac{7+2\cos\frac{2\pi\ell}{L}}{\left(\sin\frac{\pi\ell}{L}\right)^{1/9}}\,,\\ \langle\sigma_{\varepsilon}(\ell,\ell)\rangle_{\mathbb{S}_{L}}^{+-}&=3^{-2}\left(\frac{2L}{\pi}\right)^{-4/9}\frac{1+8\cos\frac{2\pi\ell}{L}}{\left(\sin\frac{\pi\ell}{L}\right)^{4/9}}\,,\\ \langle\sigma_{\mathbf{1}}(\ell,\ell)\rangle_{\mathbb{S}_{L}}^{+f}&=2\left(\frac{2L}{\pi}\right)^{-1/9}\frac{\cos\frac{\pi\ell}{3L}}{\left(\sin\frac{\pi\ell}{L}\right)^{1/9}}\,,\\ \langle\sigma_{\varepsilon}(\ell,\ell)\rangle_{\mathbb{S}_{L}}^{+f}&=2^{1/9}\left(\frac{2L}{\pi}\right)^{-4/9}\frac{\cos\frac{2\pi\ell}{3L}}{\left(\sin\frac{\pi\ell}{L}\right)^{4/9}}\,.\end{split} \tag{4.8}\] In Figure 5, we once again compare our CFT calculations (including both the leading and subleading term) with the critical chain results for the third Renyi entropy \(S_{3}^{\alpha\beta}\), to good agreement for mixed fixed BC (Fig. 5a) and mixed free fixed BC (Fig. 5b). As for the \(\mathbb{Z}_{2}\) results, including the CFT subleading contribution to \(S_{3}^{\alpha\beta}\) is necessary to find a satisfying match with the lattice results. Further finite-size corrections in this case decay as \(\mathcal{O}\left(M^{-\frac{2h_{\varepsilon}}{3}-1}\right)\). As advertised in the beginning of the section, our results for the bare twist correlators (for all configurations of mixed BC) are compatible with the ones of [57]. The subleading contribution to the Renyi entropies from the excited twist correlator is largely responsible for the mismatch between the lattice and CFT data in the aforementioned article. Finite-size corrections of this magnitude can be suppressed only with much larger system sizes \(M\sim 10^{3}\), as the authors of the present work have shown in [60].

Figure 5: Plots of the third Rényi entropy \(S_{3}^{\alpha\beta}([m/M])\) in the critical Ising chain with two types of mixed BC for a chain of size \(M=26\). The interval is grown from the \(\alpha=+\) boundary.

### The three-state Potts quantum chain with mixed BC

A natural extension of the Ising chain, the three-state Potts model allows the spins at each site to take one of three possible values \(\{R,G,B\}\), which we can also conveniently parametrize by third roots of unity \(\{1,\omega,\omega^{2}\}\), with \(\omega=\exp(2\pi i/3)\). The Hamiltonian of the three-state Potts model, tuned to its bulk critical point [85, 69, 86], is given by: \[H_{\alpha\beta}=-\zeta\left[\sum_{j=1}^{M-1}\left(Z_{j}Z_{j+1}^{\dagger}+Z_{j}^{\dagger}Z_{j+1}\right)+\sum_{j=2}^{M-1}\left(X_{j}+X_{j}^{\dagger}\right)+H_{1}^{(\alpha)}+H_{M}^{(\beta)}\right]\,, \tag{4.9}\] where \(\zeta=\frac{\sqrt{3}}{2\pi^{3/2}}\) is the conformal normalization factor [86] and the operators \(Z_{j}\) and \(X_{j}\) act at site \(j\) as: \[Z=\left(\begin{array}{ccc}1&0&0\\ 0&\omega&0\\ 0&0&\omega^{2}\end{array}\right)\,,\qquad X=\left(\begin{array}{ccc}0&1&0\\ 0&0&1\\ 1&0&0\end{array}\right)\,. \tag{4.10}\] The terms \(H_{1}^{(\alpha)}\) and \(H_{M}^{(\beta)}\) set the BCs at the ends of the chain. For the purpose of this analysis, we will set _fixed_ BC of type \(R\) at site 1 and _restricted_ boundary conditions of type \(\{G,B\}\) at site \(M\): the spin at site \(M\) is forbidden from taking the value \(R\).
This is implemented through the boundary terms: \[H^{(R)}=h\left(\begin{array}{ccc}-1&0&0\\ 0&0&0\\ 0&0&0\end{array}\right)\,,\qquad H^{(GB)}=h\left(\begin{array}{ccc}1&0&0\\ 0&0&-1\\ 0&-1&0\end{array}\right)\,. \tag{4.11}\] The critical points of interest for the boundaries correspond to \(h=+\infty\). However, for any \(h>0\), the boundaries will flow towards the same critical points, up to irrelevant boundary perturbations [87]. These are typically inconsequential for large positive \(h\). Furthermore, in our numerical analysis we can, in fact, implement \(|h|\to\infty\) by restricting the local Hilbert spaces of the boundary sites to exclude the \(\{G,B\}\) and \(\{R\}\) configurations on the left and right boundary, respectively.

Figure 4: Comparison of the second Rényi entropy in the critical Ising chain of size \(M=26\) with mixed free fixed BC with CFT results. Inclusion of the subleading term in the expansion (4.6) is crucial for obtaining a satisfying agreement with lattice data.

The scaling limit \(M\to\infty,\,a\to 0\) (with \(L=Ma\) fixed) of this critical chain is also well understood. It is given by the D-series BCFT \({\cal M}_{6,5}\) with central charge \(c=4/5\) and a bulk primary operator spectrum that contains the scalar fields given in Table 3 as well as the non-diagonal fields \(\{\phi_{2/5,7/5},\phi_{7/5,2/5},\phi_{3,0},\phi_{0,3}\}\) whose labels indicate their respective holomorphic and antiholomorphic conformal dimensions. One can, as shown in Table 3, assign to the scalar fields (and their respective conformal families) a \(\mathbb{Z}_{3}\) charge that is consistent with the fusion rules between them. The \(\dagger\) in Table 3 is thus used to differentiate the fields with the same conformal dimension, but different \(\mathbb{Z}_{3}\) charge. In the scaling limit, the fixed and restricted boundary critical points will correspond, naturally, to the fixed and restricted3 conformal boundary states [53, 88]. Footnote 3: In [53] they are referred to as "mixed" BC.
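A sketch of the lattice operators (4.10) and the bulk part of the Hamiltonian (4.9), with the \(h\to+\infty\) boundary conditions imposed by projecting the edge Hilbert spaces; this is our own simple, memory-hungry implementation, and the variable names are ours:

```python
import numpy as np

omega = np.exp(2j * np.pi / 3)
Zop = np.diag([1, omega, omega ** 2])    # eq. (4.10)
Xop = np.roll(np.eye(3), -1, axis=0)     # cyclic shift matrix of eq. (4.10)

def op_at(op, j, M):
    out = np.array([[1. + 0j]])
    for k in range(M):
        out = np.kron(out, op if k == j else np.eye(3))
    return out

def potts_chain(M):
    """Bulk part of (4.9); the boundary terms are replaced by projections below."""
    zeta = np.sqrt(3) / (2 * np.pi ** 1.5)
    H = np.zeros((3 ** M, 3 ** M), dtype=complex)
    for j in range(M - 1):
        zz = op_at(Zop, j, M) @ op_at(Zop.conj().T, j + 1, M)
        H -= zeta * (zz + zz.conj().T)
    for j in range(1, M - 1):
        xj = op_at(Xop, j, M)
        H -= zeta * (xj + xj.conj().T)
    return H

def restrict_R_GB(H, M):
    """Keep site 1 fixed to R and forbid R at site M (the |h| -> infinity limit)."""
    keep = np.array([i for i in range(3 ** M)
                     if i // 3 ** (M - 1) == 0 and i % 3 != 0])
    return H[np.ix_(keep, keep)]

M = 6
H = restrict_R_GB(potts_chain(M), M)
print(np.linalg.eigvalsh(H)[0])   # ground-state energy of the restricted chain
```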
\[|\mathbf{1}\rangle=\mathcal{N}\left[(|\mathbf{1}\rangle\!\rangle+|\psi\rangle\!\rangle+|\psi^{\dagger}\rangle\!\rangle)+\lambda(|\varepsilon\rangle\!\rangle+|s\rangle\!\rangle+|s^{\dagger}\rangle\!\rangle)\right]\] The correlators in (4.15) satisfy the second order (3.10) and fourth order (3.30) ODEs with \(g=6/5\). While the solutions to equation (3.10) are known exactly (3.13), one needs to solve (3.30) numerically to find the conformal blocks in the expansion (3.4) of the excited twist correlator \(\langle\sigma_{\varepsilon}(z,\bar{z})\rangle_{\mathbb{H}}^{\alpha\beta}\). This is done by a standard numerical implementation of the Frobenius method, whose details we leave for Appendix H. As in the case of the Ising BCFT, not all the solutions of these differential equations are needed to build the twist field correlators in (4.15). Crucially, we note that in the three-state Potts mother BCFT, there is no boundary operator \(\psi_{7/5}^{(RR)}\) living on the fixed conformal boundary of type \(R\) [56]. At the level of the operator algebra this translates into the vanishing of the boundary-boundary structure constants \(B^{(R\,R\,GB)\,\psi_{12}}_{\psi_{7/5}\,\psi_{12}}\), as we have checked using the results of [82]. This implies, through the relations between mother BCFT and orbifold structure constants derived in Section 2.3, the vanishing of some of the coefficients in the block expansions (3.4) of the correlators in (4.15). In effect, only the block corresponding to the identity operator contributes to these expressions when the twist fields are sent to the \(\beta\) boundary.
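The Frobenius strategy used for (3.30) is easiest to illustrate on a case where the answer is known in closed form: for a hypergeometric equation, the series recursion is a single line, and one can compare against `mpmath.hyp2f1`. This is only a toy version of the Appendix H implementation:

```python
import mpmath as mp

def frobenius_series(a, b, c, x, nmax=200):
    """Exponent-0 Frobenius solution of the hypergeometric ODE at x = 0, built
    term by term from the recursion t_{n+1}/t_n = (a+n)(b+n) x / ((n+1)(c+n))."""
    term, total = mp.mpf(1), mp.mpf(1)
    for n in range(nmax):
        term *= (a + n) * (b + n) / ((n + 1) * (c + n)) * x
        total += term
    return total

a, b, c = -mp.mpf(8)/5, -mp.mpf(9)/10, -mp.mpf(9)/5   # the g = 6/5 values of (4.18)
x = mp.mpf(1)/3
print(frobenius_series(a, b, c, x))
print(mp.hyp2f1(a, b, c, x))   # agrees to working precision
```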
It corresponds to the following fusion rules for \(\sigma_{\mathbf{1}}\), \(\sigma_{\varepsilon}\): \[\sigma_{\mathbf{1}}\Bigg{|}_{\beta}\to\Psi_{\mathbf{1}}^{(\beta\beta)}\qquad\sigma_{\varepsilon}\Bigg{|}_{\beta}\to\Psi_{\mathbf{1}}^{(\beta\beta)} \tag{4.16}\] Thus, we obtain expressions for the bare twist and excited twist correlators on the UHP: \[\begin{split}\langle\sigma(z,\bar{z})\rangle^{\alpha\beta}&=\bar{z}^{-2h_{\sigma}}\mathcal{A}_{\sigma,\Psi_{\mathbf{1}}}^{(\beta)}\mathcal{B}_{\Psi_{\mathbf{1}},\Psi_{12}}^{(\beta\beta\alpha)\Psi_{12}}\tilde{\mathcal{F}}_{\mathbf{1}}(\eta)\,,\\ \langle\sigma_{\varepsilon}(z,\bar{z})\rangle^{\alpha\beta}&=\bar{z}^{-2h_{\sigma_{\varepsilon}}}\mathcal{A}_{\sigma_{\varepsilon},\Psi_{\mathbf{1}}}^{(\beta)}\mathcal{B}_{\Psi_{\mathbf{1}},\Psi_{12}}^{(\beta\beta\alpha)\Psi_{12}}\tilde{\mathcal{F}}_{\mathbf{1}}^{(\varepsilon)}(\eta)\,,\end{split} \tag{4.17}\] where \(\tilde{\mathcal{F}}_{\mathbf{1}}(\eta)\) is given in (3.13), so in this case we can write an explicit result for the bare twist correlator on the UHP: \[\boxed{\langle\sigma(z,\bar{z})\rangle^{\alpha\beta}=\mathcal{A}_{\sigma,\Psi_{\mathbf{1}}}^{(\beta)}\mathcal{B}_{\Psi_{\mathbf{1}},\Psi_{12}}^{(\beta\beta\alpha)\Psi_{12}}(1-\eta)^{-2h_{\sigma}}\eta^{-2h_{12}+h_{\sigma}}{}_{2}\text{F}_{1}\left(-8/5,-9/10;-9/5\ \mid 1-\eta\right)} \tag{4.18}\] For the excited twist correlator we have: \[\tilde{\mathcal{F}}_{\mathbf{1}}^{(\varepsilon)}(\eta)=J_{1}(u(\eta))=(1-u)^{-2h_{\sigma_{\varepsilon}}}\sum_{n=0}^{\infty}a_{n}(1-u)^{n}\,, \tag{4.19}\] with the coefficients determined by the recursion relation (H.8), derived in Appendix H. The structure constants can be expressed in terms of known quantities for the \(\mathcal{M}(6,5)\) BCFT, also obtained in Appendix B: \[\mathcal{A}_{\sigma_{\varepsilon},\Psi_{\mathbf{1}}}^{(\beta)}=g_{R}^{-1}A_{\varepsilon}^{R}\,,\qquad\mathcal{A}_{\sigma,\Psi_{\mathbf{1}}}^{(\beta)}=g_{R}^{-1}\,,\qquad\mathcal{B}_{\Psi_{\mathbf{1}},\Psi_{12}}^{(\beta\beta\alpha)\Psi_{12}}=1\,, \tag{4.20}\] where the ground state degeneracies \(g_{R}\) and \(g_{GB}\) have been found in [69]: \[g_{R}=\left(\frac{5-\sqrt{5}}{30}\right)^{\frac{1}{4}}\qquad g_{GB}=g_{R}\lambda^{2} \tag{4.21}\] and the bulk-boundary structure constant \(A_{\varepsilon}^{R}\) has been calculated in [88, 89] to be: \[A_{\varepsilon}^{R}=\left(\frac{1+\sqrt{5}}{2}\right)^{\frac{3}{2}}\,. \tag{4.22}\] Putting everything together, we can finally compare the lattice prediction for the second Renyi entropy \(S_{2}^{(R,GB)}=-\log\langle\widehat{\sigma}(m,0)\rangle^{(R,GB)}\) with our analytic results in Figure 6. While the CFT prediction does not satisfyingly match the lattice data at all points, we observe that the inclusion of the subleading term gives an analytic curve that is closer to the lattice data. However, it is not enough to make up for the severe finite-size effects. Firstly, due to the operator content of the D-series \(\mathcal{M}_{6,5}\) CFT, we expect the higher order corrections in (4.15) to have a slower power law decay than in the case of the Ising CFT. We conjecture that the next-to-subleading contribution to (4.15) will decay as \(\sim M^{-2(h_{\sigma_{i}}+1/2)}\). These corrections, we believe, arise from the combined contribution of the correlators \(\langle\sigma_{\phi_{1,3}}(w,\bar{w})\rangle_{\mathbb{S}}^{R,GB}\) and \(\langle L_{-1/2}^{(1)}\bar{L}_{-1/2}^{(1)}\sigma_{\phi_{1,2}}(w,\bar{w})\rangle_{\mathbb{S}}^{R,GB}\).
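Putting (4.18)–(4.22) together numerically is straightforward; note that the phases of the fractional powers depend on branch conventions, so a sketch like the following is best used for magnitudes:

```python
import mpmath as mp

gR = ((5 - mp.sqrt(5)) / 30) ** (mp.mpf(1)/4)       # eq. (4.21)
A_eps_R = ((1 + mp.sqrt(5)) / 2) ** (mp.mpf(3)/2)   # eq. (4.22)
h12, h_sig = mp.mpf(2)/5, mp.mpf(1)/20   # M(6,5): h_{12} = 2/5; h_sigma = (c/24)(N - 1/N)

def bare_twist(ell_over_L):
    """Eq. (4.18) at eta = exp(2 pi i ell / L), with principal branches."""
    eta = mp.exp(2j * mp.pi * ell_over_L)
    return (gR ** -1 * (1 - eta) ** (-2 * h_sig) * eta ** (-2 * h12 + h_sig)
            * mp.hyp2f1(-mp.mpf(8)/5, -mp.mpf(9)/10, -mp.mpf(9)/5, 1 - eta))

val = bare_twist(mp.mpf(1)/3)
print(abs(val), -mp.log(abs(val)))   # |<sigma>| and its contribution to S_2
```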
While the first correlator can be calculated by a repeat of the method employed for the subleading term, the correlator involving the descendant twist field requires the derivation of a new differential equation. Such an endeavour is beyond the scope of this work. Furthermore, the quantum chain sizes we can reach are diminished in the case of the three-state Potts model, since the size of the space of states grows as \(\sim 3^{M}\). This memory constraint prevents us from reaching sizes at which higher order corrections are suppressed, using our computational methods. This limitation can, perhaps, be bypassed through the usage of more sophisticated numerical tools, such as DMRG or tensor network methods, to access system sizes \(M\) for which the unknown higher-order correction terms are further suppressed. Finally, one can use the method of Appendix H, applied this time to the third order ODE of Section 3.4, to derive the leading CFT contribution to the \(S_{3}^{(R,GB)}([0,\ell])\) Renyi entropy. Since in this case we have not derived an ODE for the excited twist correlator, we have no handle on the finite-size corrections to the lattice data, which should be even more severe for \(N=3\).

Figure 6: Comparison of the second Rényi entropy in the critical three-state Potts chain of size \(M=18\) with mixed \((R,GB)\) BC with CFT results.

Instead, we have just checked that the CFT result for mixed BC interpolates between the third Renyi entropies for identical \(R\) and \(GB\) boundaries: \[S_{3}^{(\alpha,\alpha)}([0,\ell])=\frac{c}{9}\log\left[\frac{2L}{\pi}\sin\left(\frac{\pi\ell}{L}\right)\right]+\log g_{\alpha} \tag{4.23}\] Our expectations are met, as Figure 7 confirms.

Figure 7: Comparison of shifted third Rényi entropies for \((R,GB)\), \((R,R)\) and \((GB,GB)\) BC. The mixed BC curve can be seen to interpolate between the identical BC results.

## 5 Conclusion

In this article, we have presented a general method for calculating Renyi entropies \(S_{N}\) in the ground state of a 1D critical system with mixed open boundaries, for an interval starting at one of its ends. This required computing three-point functions of one twist operator and two BCCOs on the upper-half plane \(\mathbb{H}\) with mixed BCs \((\alpha,\beta)\) in the \(\mathbb{Z}_{N}\) cyclic orbifold. For this purpose, we have derived ODEs satisfied by these correlation functions, by exploiting the null-vectors of the twisted and untwisted representations of its symmetry algebra \(\mathrm{OVir}_{N}\), together with Ward identities obtained from the additional conserved currents of the theory. We used a combination of analytical and numerical methods to find a basis of solutions (a.k.a. conformal blocks) of these ODEs. For the examples provided in this work, we have calculated the boundary and bulk-boundary structure constants needed to build the physical correlators as linear combinations of the blocks. Among the setups we have analysed are the leading and subleading contributions to the one-interval second and third Renyi entropies of the Ising model, and the second Renyi entropy for the three-state Potts model. We have also derived differential equations for mixed BC twist field correlators in the \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{3}\) orbifolds of generic BCFTs, and obtained an explicit expression for the second Renyi entropy valid for any diagonal minimal model, but with a particular set of mixed boundary conditions. We have compared the CFT results against critical Ising and three-state Potts spin chain data.
Since finite-size effects are quite significant for open chains, we have included both the leading and subleading contributions to the lattice twist field correlator in our analytical prediction. In the Ising case, the agreement was excellent for all choices of mixed BC, even though the system sizes we could reach were limited. For the three-state Potts chain, however, the finite-size effects are even more severe, and as a consequence the matching is less satisfactory. This could be improved by using more sophisticated numerical techniques such as DMRG [90, 91] or tensor network methods [92]. The clearest limitation of our method, first identified in [42], is that the process for obtaining a differential equation becomes more difficult as \(N\) is increased. We have checked using the fusion rules in Appendix F for \(N>3\) that the expected order of the ODEs increases with \(N\) for generic minimal models, which implies that more orbifold Ward identities will be needed to obtain the ODEs. There are several possible extensions of this work. A possibility would be to generalize the setup for the calculation of Rényi entropies of an interval _contained in the bulk_, with mixed BC. However, in this situation, one would have to find a differential equation that a four-point function with two twist fields and two BCCOs satisfies. Cardy's doubling trick suggests that such a correlator satisfies the same Ward identities as a six-point conformal block on the complex plane, so the corresponding differential equation would be partial instead of ordinary.

## Appendix A Mother BCFT conventions

We will define here our mother BCFT conventions on the upper-half plane \(\mathbb{H}\) parametrized by the coordinate \(z=x+iy\). The boundary is aligned with the real axis. Bulk operators in the mother CFT are denoted by \(\phi_{i}(z,\bar{z})\) while boundary operators are written as \(\psi_{j}^{(ab)}(x)\). The operator algebra consists of three types of OPE, which we write out explicitly to fix the notation for the corresponding structure constants. First, we have the _bulk-bulk OPEs_: \[\phi_{i}(z,\bar{z})\phi_{j}(0,0)=\sum_{\phi_{k}\text{ scaling op.}}C^{k}_{ij}z^{-h_{i}-h_{j}+h_{k}}\bar{z}^{-\bar{h}_{i}-\bar{h}_{j}+\bar{h}_{k}}\,\phi_{k}(0,0)\] (A.1) where \(C^{k}_{ij}\) are the _bulk structure constants_. The second type of OPE is the _boundary-boundary OPE_ between BCCOs interpolating different boundary conditions: \[\psi_{i}^{(ab)}(x)\psi_{j}^{(dc)}(y)=\delta_{bd}\sum_{k}B^{(abc)\psi_{k}}_{\psi_{i}\psi_{j}}(x-y)^{h_{k}-h_{i}-h_{j}}\psi_{k}^{(ac)}(y)\] (A.2) for \(x>y\). The \(B^{(abc)\psi_{k}}_{\psi_{i}\psi_{j}}\) are the _boundary-boundary structure constants_. The Kronecker delta formally expresses the fact that it only makes sense to consider correlations of boundary operators ordered such that their BCs change consistently with their labelling. Finally, we consider the third kind of OPE, between the bulk and the boundary: \[\phi_{i}(z)=\sum_{k}A^{(a)}_{\phi_{i},\psi_{k}}(2y)^{h_{k}-\Delta_{i}}\cdot\psi_{k}^{(aa)}(x)\] (A.3) with \(A^{(a)}_{\phi_{i},\psi_{k}}\) the bulk-boundary structure constants. In [93], [82] all the structure constants \(A^{(a)}_{\phi_{i},\psi_{k}}\) and \(B^{(abc)\psi_{k}}_{\psi_{i}\psi_{j}}\) have been determined for A-series and D-series BCFTs, in terms of fusion matrix elements of bulk CFT four-point functions, and the entries of the modular \(S\) matrix.
Relevant for this paper are the results: \[\boxed{B^{(abc)\psi_{k}}_{\psi_{i}\psi_{j}}=\mathbf{F}_{bk}\begin{bmatrix}a&c\\ i&j\end{bmatrix}}\] (A.4) where the fusion matrix relates bases of conformal blocks around \(z=0\) and \(z=1\) \[\mathcal{I}^{r}_{ia,cj}(z)=\sum_{s}\mathbf{F}_{rs}\begin{bmatrix}a&c\\ i&j\end{bmatrix}\mathcal{J}^{s}_{ij,ac}(1-z)\] (A.5) defined in the bulk. We also give the expressions for the 1-point structure constants of the BCFT in terms of \(S\)-matrix elements of the mother CFT \[\boxed{A^{(a)}_{\phi_{i}}\equiv A^{(a)}_{\phi_{i},\psi_{1}}=\frac{S_{ai}}{S_{a1}}\sqrt{\frac{S_{11}}{S_{i1}}}}\] (A.6)

## Appendix B Computation of orbifold structure constants

### Composite twist one-point structure constant in the \(\mathbb{Z}_{N}\) orbifold BCFT

Assuming the one-point structure constant \(\mathcal{A}^{(\alpha)}_{\sigma_{\mathbf{1}},\psi_{\mathbf{1}}}\) is known, let us consider the correlator: \[\left\langle\sigma_{j}(0,0)\right\rangle_{\mathbb{D}}^{\alpha}=\mathcal{A}^{(\alpha)}_{\sigma_{j},\psi_{\mathbf{1}}}\] (B.1) We now use (2.13) to write the LHS of (B.1) as: \[\left\langle\sigma_{j}(0,0)\right\rangle_{\mathbb{D}}^{\alpha}=\mathcal{A}_{j}\lim_{\epsilon\to 0}\epsilon^{2(1-N^{-1})h_{j}}\left\langle\Phi_{[j,\mathbf{1},\dots,\mathbf{1}]}(\epsilon,\bar{\epsilon})\sigma^{[k]}(0,0)\right\rangle_{\mathbb{D}}^{\alpha}\] (B.2) Substituting the definition (2.9) of non-diagonal fields, we find: \[\left\langle\sigma_{j}(0,0)\right\rangle_{\mathbb{D}}^{\alpha}=N^{-2(1-N^{-1})h_{j}-1}\lim_{\epsilon\to 0}\epsilon^{2(1-N^{-1})h_{j}}\sum_{a=0}^{N-1}\left\langle\left(\phi_{j+a}\otimes\phi_{\mathbf{1}+a}\otimes\dots\phi_{\mathbf{1}+a}\right)(\epsilon,\bar{\epsilon})\sigma^{[k]}(0,0)\right\rangle_{\mathbb{D}}^{\alpha}\] (B.3) Each correlator in the sum above can be written as: \[\left\langle\left(\phi_{j+a}\otimes\phi_{\mathbf{1}+a}\otimes\dots\phi_{\mathbf{1}+a}\right)(\epsilon,\bar{\epsilon})\sigma^{[k]}(0,0)\right\rangle_{\mathbb{D}}^{\alpha}=\frac{Z_{N,a}}{Z_{1,a}^{N}}\langle\phi_{j}(\epsilon,\bar{\epsilon})\rangle_{\mathbb{D}_{N}}=\mathcal{A}^{(\alpha)}_{\sigma_{\mathbf{1}},\psi_{\mathbf{1}}}\langle\phi_{j}(\epsilon,\bar{\epsilon})\rangle_{\mathbb{D}_{N}}^{a}\] (B.4) where \(Z_{N,a}\) denotes the partition function on the \(N\)-sheeted disk with conformal BC \(a\), and branch point at \(0\).
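As an aside, (A.6) is easy to evaluate for a concrete model. A minimal sketch for the Ising BCFT follows (our illustration); the modular \(S\)-matrix used here is the standard Ising one, with primaries ordered \((\mathbf{1},\varepsilon,\sigma)\), supplied by us rather than taken from this paper.

```python
# Sketch: evaluating the 1-point structure constants (A.6) for the Ising BCFT.
import numpy as np

labels = ["1", "eps", "sigma"]
s2 = np.sqrt(2.0)
S = 0.5 * np.array([[1.0,  1.0,  s2],
                    [1.0,  1.0, -s2],
                    [ s2,  -s2, 0.0]])   # standard Ising modular S-matrix

def A(a, i):
    """A^{(a)}_{phi_i} = (S_{ai} / S_{a1}) * sqrt(S_{11} / S_{i1}), cf. (A.6)."""
    return (S[a, i] / S[a, 0]) * np.sqrt(S[0, 0] / S[i, 0])

for a, bc in enumerate(labels):              # Cardy boundary condition labels
    for i, op in enumerate(labels[1:], start=1):
        print(f"A^({bc})_{op} = {A(a, i):+.4f}")
```

For fixed boundary conditions this reproduces the familiar values \(A^{(\pm)}_{\varepsilon}=1\) and \(A^{(\mathrm{free})}_{\varepsilon}=-1\).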
Now, we can unfold the disk correlator through the conformal map \(w\to w^{1/N}\) and substitute back in (B.3) to find: \[\left\langle\sigma_{j}(0,0)\right\rangle_{\mathbb{D}}^{\alpha}=\mathcal{A}^{( \alpha)}_{\sigma_{\mathbf{1}},\psi_{\mathbf{1}}}\langle\phi_{j}(0,0)\rangle_ {\mathbb{D}}^{a}\] (B.5) so that we finally find: \[\boxed{\mathcal{A}^{(\alpha)}_{\sigma_{j},\psi_{\mathbf{1}}}=\mathcal{A}^{( \alpha)}_{\sigma_{\mathbf{1}},\psi_{\mathbf{1}}}A^{a}_{\phi_{j}}}\] (B.6) ### Bulk-boundary structure constant in the \(\mathbb{Z}_{2}\) orbifold CFT In this section we compute the structure constant \(\mathcal{A}^{(\alpha)}_{\sigma_{\mathbf{1}},\psi_{\mathbf{1},3}}\), which is given by the UHP correlator: \[\left\langle\sigma_{\mathbf{1}}(i/2,-i/2)\Psi^{(\alpha\alpha)}_{1,3}(1) \right\rangle_{\mathbb{H}}^{\alpha}=\mathcal{A}^{(\alpha)}_{\sigma_{\mathbf{1 }},\psi_{\mathbf{1},3}}\] (B.7) We can map the LHS of (B.7) to the unit disk through: \[z\rightarrow\frac{z-i/2}{z+i/2}\] (B.8) and then use the partition function expression of the correlator (as in the previous section) to find (after a global rotation): \[\left\langle\sigma_{\mathbf{1}}(0,0)\Psi^{\alpha}_{1,3}(-i)\right\rangle_{ \mathbb{D}}=\left\langle\sigma_{\mathbf{1}}(0,0)\right\rangle_{\mathbb{D}}^{ \alpha}\left\langle\psi^{\alpha}_{1,3}(-i)\psi^{\alpha}_{1,3}(-ie^{2i\pi}) \right\rangle_{\mathbb{D}_{2,a}}\] (B.9) where \(D_{2,a}\) is a 2-sheeted disk with branch point at \(0\). We unfold the correlator of boundary fields through the map \(w\to w^{1/2}\) to find: \[\left\langle\psi^{\alpha}_{1,3}(-i)\psi^{\alpha}_{1,3}(-ie^{2i\pi})\right\rangle _{\mathbb{D}_{2,a}}=\left(2i^{-1/2}\right)^{-2h_{1,3}}\left\langle\psi^{(aa)}_ {1,3}(i^{1/2})\psi^{(aa)}_{1,3}(-i^{1/2})\right\rangle_{\mathbb{D}}=1\] (B.10) so that, by putting everything together, we arrive at: \[\boxed{\mathcal{A}^{(\alpha)}_{\sigma_{1},\Psi_{1,3}}=\mathcal{A}^{(\alpha)}_{ \sigma_{1},\Psi_{1}}}\] (B.11) For generic \(N\), expressing the bulk-boundary structure constant \(\mathcal{A}^{(\alpha)}_{\sigma_{1},\Psi_{1,3}}\) in terms of mother BCFT quantities depends on our ability to calculate \(N\)-point functions of boundary operators. For \(N\geq 5\), this becomes difficult to solve for generic mother BCFTs. 
## Appendix C Orbifold Ward identities for bulk fields

Following [42], we give here the **orbifold Ward identities for 4-point bulk correlators**: \[\begin{split}\sum_{p=0}^{\infty}a_{p}\left\langle\mathcal{O}_{1}\left|L^{(r)}_{-m_{1}-p}\mathcal{O}_{2}(1)\mathcal{O}_{3}(x,\bar{x})\right|\mathcal{O}_{4}\right\rangle&=\sum_{p=0}^{\infty}b_{p}\left\langle\mathcal{O}_{1}\left|\left[L^{(r)}_{m_{2}+p}\mathcal{O}_{2}\right](1)\mathcal{O}_{3}(x,\bar{x})\right|\mathcal{O}_{4}\right\rangle\\ &+\sum_{p=0}^{\infty}c_{p}\left\langle\mathcal{O}_{1}\left|\mathcal{O}_{2}(1)\left[L^{(r)}_{m_{3}+p}\mathcal{O}_{3}\right](x,\bar{x})\right|\mathcal{O}_{4}\right\rangle\\ &+\sum_{p=0}^{\infty}d_{p}\left\langle\mathcal{O}_{1}\left|\mathcal{O}_{2}(1)\mathcal{O}_{3}(x,\bar{x})L^{(r)}_{m_{4}+p}\right|\mathcal{O}_{4}\right\rangle\end{split}\] (C.1) where the _levels_ \(m_{i}\in\mathbb{Z}+rk_{i}/N\) satisfy: \[m_{1}+m_{2}+m_{3}+m_{4}=-2\] (C.2) and the coefficients \(a_{p}\), \(b_{p}\), \(c_{p}\) and \(d_{p}\) are defined from the Taylor series: \[(1-z)^{m_{2}+1}(1-xz)^{m_{3}+1} =\sum_{p=0}^{\infty}a_{p}z^{p}\] (C.3) \[(z-x)^{m_{3}+1}z^{m_{4}+1} =\sum_{p=0}^{\infty}b_{p}(z-1)^{p}\] (C.4) \[(z-1)^{m_{2}+1}z^{m_{4}+1} =\sum_{p=0}^{\infty}c_{p}(z-x)^{p}\] (C.5) \[(z-1)^{m_{2}+1}(z-x)^{m_{3}+1} =\sum_{p=0}^{\infty}d_{p}z^{p}\] (C.6)

#### A useful identity

We give here the following commutation identity [42]: \[\begin{split}&\left\langle\mathcal{O}_{1}\left|\mathcal{O}_{2}(1)\mathcal{O}_{3}(x,\bar{x})L_{n}\right|\mathcal{O}_{4}\right\rangle-\left\langle\mathcal{O}_{1}\left|L_{n}\mathcal{O}_{2}(1)\mathcal{O}_{3}(x,\bar{x})\right|\mathcal{O}_{4}\right\rangle\\ &\quad=\left\{(1-x^{n})\left[x\partial_{x}+(n+1)h_{3}\right]+(h_{4}-h_{1})-n\left(h_{2}+h_{3}\right)\right\}\left\langle\mathcal{O}_{1}\left|\mathcal{O}_{2}(1)\mathcal{O}_{3}(x,\bar{x})\right|\mathcal{O}_{4}\right\rangle\end{split}\] (C.7) where \(\mathcal{O}_{2}\) and \(\mathcal{O}_{3}\) are primary fields and \(\left\langle\mathcal{O}_{1}\right|\), \(\left|\mathcal{O}_{4}\right\rangle\) are generic states. This commutator identity allows one to express insertions of Virasoro modes \(L_{n}\) _inside_ a correlation function in terms of differential operators acting _on_ them.

## Appendix D Rényi entropies for the critical Ising chain with mixed fixed BC

In this section, we will derive the bare and excited twist contributions to the second and third Rényi entropy in the critical Ising chain with fixed mixed BC \(a=+,b=-\). In the Ising BCFT, the boundary field that interpolates between the corresponding conformal BC \(|\pm\rangle\) is the operator \(\psi_{2,1}^{(+-)}\), with conformal dimension \(h_{2,1}=1/2\). In the \(\mathbb{Z}_{N}\) orbifold of this theory, the change in boundary conditions is implemented by the diagonal operator \(\Psi_{2,1}^{(\alpha\beta)}\) defined as in (2.27). The essential observation for the derivation of this section is that the space of conformal blocks is one-dimensional for the chiral correlators \[\left\langle\Phi_{1,3}|\sigma_{j}^{[-k]}(1)\sigma_{j}^{[k]}(\eta)|\Phi_{1,3}\right\rangle\] (D.1) with \(j\in\{\mathbf{1},\phi_{1,3}\}\) in the \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{3}\) Ising orbifold CFTs. The result is obtained, as in the discussion of Section 3, from the fusion rules of these theories, found in [40] and [70]. These fusion rules also imply the leading singular behaviour of the conformal block around the points \(\eta\in\{0,1,\infty\}\). The corresponding exponents are given in Table 4.
In the \(\eta\to 1\) channel, the exponent corresponds to the fusion \[\sigma_{j}^{[k]}\times\sigma_{j}^{[-k]}\rightarrow\Phi_{\mathbf{1}}\] (D.2) for all the chiral correlators we are considering in this section. The diagonal operator \(\Phi_{\mathbf{1}}\) is defined as in (2.10). From the exponents around \(\eta\to 0\) and \(\eta\to 1\) we can determine the generic form of the conformal blocks for the four cases enumerated above to be: \[f_{j}^{(N)}(\eta)=\eta^{-1}(1-\eta)^{-2h_{\sigma_{j}}}P(\eta)\] (D.3) where \(P(\eta)\) is a generic polynomial in \(\eta\). Furthermore, taking into account the singular behaviour of \(f_{j}^{(N)}(\eta)\) around \(\eta\rightarrow\infty\), one can constrain its degree in all four cases to be \(\leq 2\), so that we have: \[f_{j}^{(N)}(\eta)=\eta^{-1}(1-\eta)^{-2h_{\sigma_{j}}}(a_{2}\,\eta^{2}+a_{1}\,\eta+a_{0})\] (D.4) Around \(\eta\to 1\), this function behaves as: \[f_{j}^{(N)}(\eta)\sim(1-\eta)^{-2h_{\sigma_{j}}}\left[(a_{2}+a_{1}+a_{0})+(a_{2}-a_{0})(1-\eta)+a_{2}(1-\eta)^{2}+\dots\right]\] (D.5)

\begin{table} \begin{tabular}{c|c|c|c} & 0 & 1 & \(\infty\) \\ \hline \(N=2,j=\mathbf{1}\) & \(-1\) & \(-\frac{1}{16}\) & \(-\frac{15}{16}\) \\ \hline \(N=2,j=\phi_{1,3}\) & \(-1\) & \(-\frac{9}{16}\) & \(-\frac{7}{16}\) \\ \hline \(N=3,j=\mathbf{1}\) & \(-1\) & \(-\frac{1}{9}\) & \(-\frac{8}{9}\) \\ \hline \(N=3,j=\phi_{1,3}\) & \(-1\) & \(-\frac{4}{9}\) & \(-\frac{5}{9}\) \\ \end{tabular} \end{table} Table 4: Singular behaviour of the conformal block of (D.1) for different \(N\) and twist field insertions \(\sigma_{j}^{[k]}(\eta)\).

To find \(a_{i}\), we will need to consider the first few terms in the module of \(\Phi_{\bf 1}\) from the OPE of twist fields in the \(\mathbb{Z}_{N}\) orbifold: \[\sigma^{[k]}_{j}(\eta)\sigma^{[-k]}_{j}(1)=\Phi_{\bf 1}(1)+\frac{2h_{\sigma_{j}}}{Nc}(1-\eta)^{2}\,T^{(0)}(1)+\ldots\] (D.6) where \(T^{(0)}(z)=L^{(0)}_{-2}\Phi_{\bf 1}(z)\) is the SET of the chiral \(\mathbb{Z}_{N}\) orbifold CFT. The corresponding structure constant has been determined by applying \(L^{(0)}_{2}\) from the left on both sides of the OPE, and matching powers of \((1-\eta)\). Finally, the term at level 1 vanishes because the null vector \(L_{-1}{\bf 1}\equiv 0\) in the mother CFT induces the null-vectors \(L^{(r)}_{-1}\Phi_{\bf 1}\equiv 0\) in the orbifold. Inserting (D.6) into (D.1), one finds the coefficients \[a_{0}=a_{2}=\frac{2h_{\sigma_{j}}}{Nc}\quad a_{1}=1-2a_{0}\] (D.7) with which we fix the conformal blocks for all the cases presented in Table 4. We then use the block expansions for the mixed BC correlators to find: \[\left\langle\sigma^{[k]}_{j}(z,\bar{z})\right\rangle^{\alpha\beta}_{N}=g^{1-N}_{+}f_{j}^{(N)}(\eta)\] (D.8) where we have also used, notably, the results of (2.42) for the 1-point structure constant of twist fields.
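For concreteness, the blocks (D.4) with the coefficients (D.7) can be tabulated in a few lines. In the sketch below (our illustration, not the paper's code), the twist dimensions \(h_{\sigma_{j}}\) are read off from the \(\eta\to 1\) exponents \(-2h_{\sigma_{j}}\) of Table 4, using \(c=1/2\) for the Ising mother CFT.

```python
# Sketch: the conformal blocks (D.4) with coefficients (D.7) for the Ising orbifold.
from fractions import Fraction as F

c = F(1, 2)
# (N, j) -> h_{sigma_j}, from the eta -> 1 column of Table 4 (exponent = -2h).
h = {(2, "1"): F(1, 32), (2, "phi13"): F(9, 32),
     (3, "1"): F(1, 18), (3, "phi13"): F(2, 9)}

def block(N, j, eta):
    """f_j^{(N)}(eta) = eta^{-1} (1-eta)^{-2h} (a2 eta^2 + a1 eta + a0)."""
    a0 = 2 * h[(N, j)] / (N * c)       # a0 = a2 = 2h/(Nc), cf. (D.7)
    a2 = a0
    a1 = 1 - 2 * a0
    poly = float(a2) * eta**2 + float(a1) * eta + float(a0)
    return eta**-1 * (1 - eta) ** float(-2 * h[(N, j)]) * poly

for key in h:
    print(key, block(*key, eta=0.3))
```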
After mapping to the strip through (3.2), we find for \(N=2\): \[\langle\sigma_{\bf 1}(\ell,\ell)\rangle^{+-}_{\mathbb{S}_{L}} =2^{-5/2}\left(\frac{2L}{\pi}\right)^{-1/16}\frac{7+\cos\frac{2\pi\ell}{L}}{\left(\sin\frac{\pi\ell}{L}\right)^{1/16}}\] (D.9) \[\langle\sigma_{\varepsilon}(\ell,\ell)\rangle^{+-}_{\mathbb{S}_{L}} =2^{-5/2}\left(\frac{2L}{\pi}\right)^{-9/16}\frac{1-9\cos\frac{2\pi\ell}{L}}{\left(\sin\frac{\pi\ell}{L}\right)^{9/16}}\] (D.10) and \(N=3\): \[\langle\sigma_{\bf 1}(\ell,\ell)\rangle^{+-}_{\mathbb{S}_{L}} =3^{-2}\left(\frac{2L}{\pi}\right)^{-1/9}\frac{7+2\cos\frac{2\pi\ell}{L}}{\left(\sin\frac{\pi\ell}{L}\right)^{1/9}}\] (D.11) \[\langle\sigma_{\varepsilon}(\ell,\ell)\rangle^{+-}_{\mathbb{S}_{L}} =3^{-2}\left(\frac{2L}{\pi}\right)^{-4/9}\frac{1+8\cos\frac{2\pi\ell}{L}}{\left(\sin\frac{\pi\ell}{L}\right)^{4/9}}\] (D.12)

## Appendix E Hypergeometric differential equation

The hypergeometric differential equation is canonically defined as: \[\eta(\eta-1)f^{\prime\prime}(\eta)+[(a+b+1)\eta-c]f^{\prime}(\eta)+ab\,f(\eta)=0\] (E.1) with the Riemann scheme: \[\begin{array}{ccc}0&1&\infty\\ \hline 0&0&a\\ 1-c&c-a-b&b\end{array}\] The solutions are constructed using the Gauss hypergeometric function \({}_{2}\text{F}_{1}(a,b;c\mid\eta)\). Following the conventions of [94], we give a standard basis of fundamental solutions to (E.1) around the singular point \(\eta=0\): \[\begin{split} I_{1}(\eta)&={}_{2}\text{F}_{1}(a,b;c\mid\eta)\\ I_{2}(\eta)&=\eta^{1-c}{}_{2}\text{F}_{1}(b-c+1,a-c+1;2-c\mid\eta)\end{split}\] (E.2) and around \(\eta=1\): \[\begin{split} J_{1}(\eta)&={}_{2}\text{F}_{1}(a,b;a+b-c+1\mid 1-\eta)\\ J_{2}(\eta)&=(1-\eta)^{c-a-b}{}_{2}\text{F}_{1}(c-b,c-a;c-a-b+1\mid 1-\eta)\end{split}\] (E.3) The two bases of solutions are linearly related as \[I_{i}(\eta)=\sum_{j=1}^{2}P_{ij}J_{j}(\eta)\] (E.4) with the fusing matrix P \[\text{P}=\left[\begin{array}{cc}\frac{\Gamma(c)\Gamma(d)}{\Gamma(c-a)\Gamma(c-b)}&\frac{\Gamma(c)\Gamma(-d)}{\Gamma(a)\Gamma(b)}\\ \frac{\Gamma(2-c)\Gamma(d)}{\Gamma(1-a)\Gamma(1-b)}&\frac{\Gamma(2-c)\Gamma(-d)}{\Gamma(1-c+a)\Gamma(1-c+b)}\end{array}\right]\] (E.5) and its inverse: \[\text{P}^{-1}=\left[\begin{array}{cc}\frac{\Gamma(1-c)\Gamma(1-d)}{\Gamma(1-c+a)\Gamma(1-c+b)}&\frac{\Gamma(c-1)\Gamma(1-d)}{\Gamma(a)\Gamma(b)}\\ \frac{\Gamma(1-c)\Gamma(1+d)}{\Gamma(1-a)\Gamma(1-b)}&\frac{\Gamma(c-1)\Gamma(1+d)}{\Gamma(c-a)\Gamma(c-b)}\end{array}\right]\] (E.6) expressed in terms of Euler's Gamma function \(\Gamma\), with \(d=c-a-b\).

## Appendix F Fusion rules in the \(\mathbb{Z}_{N}\) orbifold

In [70] we have found compact expressions for the fusion numbers of the \(\mathbb{Z}_{N}\) orbifold of a diagonal RCFT. They are given by: \[\begin{split}\mathcal{N}^{[k_{1}\ldots k_{N}]}_{[i_{1}\ldots i_{N}],[j_{1}\ldots j_{N}]}&=\sum_{a,b=0}^{N-1}N^{k_{1}}_{i_{1+a},j_{1+b}}\ldots N^{k_{N}}_{i_{N+a},j_{N+b}}\,,\\ \mathcal{N}^{k^{(r)}}_{[i_{1}\ldots i_{N}],[j_{1}\ldots j_{N}]}&=\sum_{a=0}^{N-1}N^{k}_{i_{1+a},j_{1}}\ldots N^{k}_{i_{N+a},j_{N}}\,,\\ \mathcal{N}^{k^{(s)}}_{[i_{1}\ldots i_{N}],j^{(r)}}&=N^{k}_{i_{1},j}\ldots N^{k}_{i_{N},j}\,,\\ \mathcal{N}^{k^{(t)}}_{i^{(r)},j^{(s)}}&=\delta_{r+s,t}\,N^{k}_{ij}\,.
\\ \mathcal{N}^{[k_{1}\ldots k_{N}]}_{i^{[p](r)},j^{[q](s)}}&=\delta_{p+q,0}\,\sum_{\ell=1}^{M}\frac{S_{i\ell}S_{j\ell}\cdot S_{k_{1}\ell}\ldots S_{k_{N}\ell}}{S^{N}_{1\ell}}\,,\\ \mathcal{N}^{k^{(t)}}_{i^{[p](r)},j^{[q](s)}}&=\frac{\delta_{p+q,0}}{N}\sum_{\ell=1}^{M}\left[\frac{S_{i\ell}S_{j\ell}S^{N}_{k\ell}}{S^{N}_{1\ell}}+\sum_{n=1}^{N-1}\omega^{np(r+s-t)}\frac{(P_{-n})_{i\ell}(P_{n})_{j\ell}S_{k\ell}}{S_{1\ell}}\right]\,,\\ \mathcal{N}^{k^{[m](t)}}_{i^{[p](r)},j^{[q](s)}}&=\frac{\delta_{p+q,m}}{N}\sum_{\ell=1}^{M}\left[\frac{S_{i\ell}S_{j\ell}S_{k\ell}}{S^{N}_{1\ell}}+\sum_{n=1}^{N-1}\omega^{n(r+s-t)}\frac{(P_{pn^{-1}}^{\dagger})_{i\ell}(P_{qn^{-1}}^{\dagger})_{j\ell}(P_{mn^{-1}})_{k\ell}}{S_{1\ell}}\right]\,.\end{split}\] (F.1) where \(\omega=\exp\left(2\pi i/N\right)\), and \(N_{ij}^{k}\), \(S_{ij}\) are the fusion numbers and the modular \(S\)-matrix of the mother CFT. One also needs the matrix \(P_{n}\), which is defined from \[P_{n}=T^{-n/N}\cdot Q_{n}\cdot T^{[[-n^{-1}]]/N}\,,\qquad n\in\mathbb{Z}_{N}^{\times}\,,\] (F.2) where \(T\) is the modular \(T\) matrix of the mother CFT, \([[-n^{-1}]]\) denotes the inverse of \((-n)\) in \(\mathbb{Z}_{N}^{\times}\), with \(0<[[-n^{-1}]]<N\), and \(Q_{n}\) is the matrix representing the linear action of the modular map \[\tau\mapsto q_{n}(\tau)=\frac{n\tau-(n[[-n^{-1}]]+1)/N}{N\tau-[[-n^{-1}]]}\] (F.3) on the characters \(\chi_{j}\) of the mother CFT.

## Appendix G Derivation of differential equation in the \(\mathbb{Z}_{3}\) orbifold BCFT

We present in this section all the orbifold Ward identities and null-vector conditions necessary to derive the third-order differential equation (3.45).

#### The Ward identities

**Ward 1**: The correlator to integrate over is: \[\left\langle\Phi_{12}\right|L_{1}^{(1)}\sigma_{\mathbf{1}}(1)T^{(1)}(z)\tilde{\sigma}_{\mathbf{1}}(\eta)L_{-1}^{(1)}\left|\Phi_{12}\right\rangle\] (G.1) with \((m_{1},m_{2},m_{3},m_{4})=(-1,1/3,-1/3,-1)\), to find: \[a_{0|1}\left\langle\Phi_{12}\right|(L_{1}^{(1)})^{2}\sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)L_{-1}^{(1)}\left|\Phi_{12}\right\rangle+a_{1|1}\left\langle\Phi_{12}\right|L_{1}^{(1)}L_{0}^{(1)}\sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)L_{-1}^{(1)}\left|\Phi_{12}\right\rangle=\] (G.2) \[d_{0|1}\left\langle\Phi_{12}\right|L_{1}^{(1)}\sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)L_{-1}^{(1)}L_{-1}^{(1)}\left|\Phi_{12}\right\rangle+d_{1|1}\left\langle\Phi_{12}\right|L_{1}^{(1)}\sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)L_{0}^{(1)}L_{-1}^{(1)}\left|\Phi_{12}\right\rangle\] **Ward 2**: The correlator to integrate over is: \[\left\langle\Phi_{12}\right|\sigma_{\mathbf{1}}(1)T^{(1)}(z)\tilde{\sigma}_{\mathbf{1}}(\eta)L_{-1}^{(1)}L_{-1}^{(1)}\left|\Phi_{12}\right\rangle\] (G.3) with \((m_{1},m_{2},m_{3},m_{4})=(-1,1/3,-1/3,-1)\) to find: \[a_{0|2}\left\langle\Phi_{12}\right|L_{1}^{(1)}\sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)\left(L_{-1}^{(1)}\right)^{2}\left|\Phi_{12}\right\rangle =d_{0|2}\left\langle\Phi_{12}\right|\sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)\left(L_{-1}^{(1)}\right)^{3}\left|\Phi_{12}\right\rangle\] (G.4) \[+d_{1|2}\left\langle\Phi_{12}\right|\sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)L_{0}^{(1)}\left(L_{-1}^{(1)}\right)^{2}\left|\Phi_{12}\right\rangle +d_{2|2}\left\langle\Phi_{12}\right|\sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)L_{1}^{(1)}\left(L_{-1}^{(1)}\right)^{2}\left|\Phi_{12}\right\rangle\]
\[+d_{3|2}\left\langle\Phi_{12}\right|\sigma_{\mathbf{1}}(1)\tilde {\sigma}_{\mathbf{1}}(\eta)L_{2}^{(1)}\left(L_{-1}^{(1)}\right)^{2}\left|\Phi_{ 12}\right\rangle\] **Ward 3**: The correlator to integrate over is \[\left\langle\Phi_{12}\right|L_{1}^{(1)}L_{1}^{(1)}\sigma_{\mathbf{1} }(1)T^{(1)}(z)\tilde{\sigma}_{\mathbf{1}}(\eta)\left|\Phi_{12}\right\rangle\] (G.5) with \((m_{1},m_{2},m_{3},m_{4})=(-1,1/3,-1/3,-1)\) to find: \[d_{0|3}\left\langle\Phi_{12}\right|\left(L_{1}^{(1)}\right)^{2} \sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)L_{-1}^{(1)}\left|\Phi_{ 12}\right\rangle =a_{0|3}\left\langle\Phi_{12}\right|\left(L_{1}^{(1)}\right)^{3} \sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)\left|\Phi_{12}\right\rangle\] (G.6) \[+a_{1|3}\left\langle\Phi_{12}\right|\left(L_{1}^{(1)}\right)^{2}L_ {0}^{(1)}\sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)\left|\Phi_{12}\right\rangle +a_{2|3}\left\langle\Phi_{12}\right|\left(L_{1}^{(1)}\right)^{2}L_ {-1}^{(1)}\sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)\left|\Phi_{12}\right\rangle\] \[+a_{3|3}\left\langle\Phi_{12}\right|\left(L_{1}^{(1)}\right)^{2}L_ {-2}^{(1)}\sigma_{\mathbf{1}}(1)\tilde{\sigma}_{\mathbf{1}}(\eta)\left|\Phi_{12}\right\rangle\] **Ward 4**: The correlator to integrate over is: \[\left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)T^{(2)}(z)\tilde{\sigma}_{\bf 1}( \eta)L_{-1}^{(1)}\left|\Phi_{12}\right\rangle\] (G.7) with \((m_{1},m_{2},m_{3},m_{4})=(0,-1/3,1/3,-2)\) so we find: \[\begin{array}{l}d_{0|4}\left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)\tilde{ \sigma}_{\bf 1}(\eta)L_{-2}^{(2)}L_{-1}^{(1)}\left|\Phi_{12}\right\rangle+d_{1|4} \left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)\tilde{\sigma}_{\bf 1}(\eta)L_{-1}^{(2)}L_{-1 }^{(1)}\left|\Phi_{12}\right\rangle+\\ d_{2|4}\left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)\tilde{\sigma}_{\bf 1}( \eta)L_{0}^{(2)}L_{-1}^{(1)}\left|\Phi_{12}\right\rangle+d_{3|4}\left\langle \Phi_{12}\right|\sigma_{\bf 1}(1)\tilde{\sigma}_{\bf 1}(\eta)L_{1}^{(2)}L_{-1}^{(1)} \left|\Phi_{12}\right\rangle=0\end{array}\] (G.8) **Ward 5**: The correlator to integrate over is: \[\left\langle\Phi_{12}\right|L_{1}^{(1)}T^{(2)}(z)\sigma_{\bf 1}(1)\tilde{ \sigma}_{\bf 1}(\eta)\left|\Phi_{12}\right\rangle\] (G.9) with \((m_{1},m_{2},m_{3},m_{4})=(-2,-1/3,1/3,0)\) so we find: \[\begin{array}{l}a_{0|5}\left\langle\Phi_{12}\right|L_{1}^{(1)}L_{2}^{(2)} \sigma_{\bf 1}(1)\tilde{\sigma}_{\bf 1}(\eta)\left|\Phi_{12}\right\rangle+a_{1|5} \left\langle\Phi_{12}\right|L_{1}^{(1)}L_{1}^{(2)}\sigma_{\bf 1}(1)\tilde{ \sigma}_{\bf 1}(\eta)\left|\Phi_{12}\right\rangle+\\ a_{2|5}\left\langle\Phi_{12}\right|L_{1}^{(1)}L_{0}^{(2)}\sigma_{\bf 1}(1) \tilde{\sigma}_{\bf 1}(\eta)\left|\Phi_{12}\right\rangle+a_{3|5}\left\langle \Phi_{12}\right|L_{1}^{(1)}L_{-1}^{(2)}\sigma_{\bf 1}(1)\tilde{\sigma}_{\bf 1}(\eta) \left|\Phi_{12}\right\rangle=0\end{array}\] (G.10) **Ward 6**: The correlator to integrate over is: \[\left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)T^{(2)}(z)\tilde{\sigma}_{\bf 1}( \eta)L_{-1}^{(1)}\left|\Phi_{12}\right\rangle\] (G.11) with \((m_{1},m_{2},m_{3},m_{4})=(-1,-1/3,1/3,-1)\) to find: \[\begin{array}{l}a_{0|6}\left\langle\Phi_{12}\right|L_{1}^{(2)}\sigma_{\bf 1 }(1)\tilde{\sigma}_{\bf 1}(\eta)L_{-1}^{(1)}\left|\Phi_{12}\right\rangle =d_{0|6}\left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)\tilde{ \sigma}_{\bf 1}(\eta)L_{-1}^{(2)}L_{-1}^{(1)}\left|\Phi_{12}\right\rangle\\ +d_{1|6}\left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)\tilde{ \sigma}_{\bf 
1}(\eta)L_{0}^{(2)}L_{-1}^{(1)}\left|\Phi_{12}\right\rangle+d_{2|6}\left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)\tilde{\sigma}_{\bf 1}(\eta)L_{1}^{(2)}L_{-1}^{(1)}\left|\Phi_{12}\right\rangle\end{array}\] (G.12) **Ward 7**: The correlator to integrate over is: \[\left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)T^{(1)}(z)\tilde{\sigma}_{\bf 1}(\eta)L_{-1}^{(2)}\left|\Phi_{12}\right\rangle\] (G.13) with \((m_{1},m_{2},m_{3},m_{4})=(-1,1/3,-1/3,-1)\) to find: \[\begin{array}{l}a_{0|7}\left\langle\Phi_{12}\right|L_{1}^{(1)}\sigma_{\bf 1}(1)\tilde{\sigma}_{\bf 1}(\eta)L_{-1}^{(2)}\left|\Phi_{12}\right\rangle=d_{0|7}\left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)\tilde{\sigma}_{\bf 1}(\eta)L_{-1}^{(1)}L_{-1}^{(2)}\left|\Phi_{12}\right\rangle\\ +d_{1|7}\left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)\tilde{\sigma}_{\bf 1}(\eta)L_{0}^{(1)}L_{-1}^{(2)}\left|\Phi_{12}\right\rangle+d_{2|7}\left\langle\Phi_{12}\right|\sigma_{\bf 1}(1)\tilde{\sigma}_{\bf 1}(\eta)L_{1}^{(1)}L_{-1}^{(2)}\left|\Phi_{12}\right\rangle\end{array}\] (G.14)

**The null-vector conditions**

\[L_{-1}^{(1)}L_{-1}^{(2)}\left|\Phi_{12}\right\rangle =\frac{1}{2}\left[3gL_{-2}^{(0)}-\left(L_{-1}^{(0)}\right)^{2}\right]\left|\Phi_{12}\right\rangle\] (G.15) \[\left\langle\Phi_{12}\right|L_{1}^{(1)}L_{1}^{(2)} =\left\langle\Phi_{12}\right|\frac{1}{2}\left[3gL_{2}^{(0)}-\left(L_{1}^{(0)}\right)^{2}\right]\] (G.16) \[2\,L_{-1}^{(0)}L_{-1}^{(2)}L_{-1}^{(1)}\left|\Phi_{12}\right\rangle =\left[3gL_{-1}^{(0)}L_{-2}^{(0)}-\left(L_{-1}^{(0)}\right)^{3}\right]\left|\Phi_{12}\right\rangle\] (G.17) \[2\,L_{-1}^{(0)}L_{-1}^{(2)}L_{-1}^{(1)}\left|\Phi_{12}\right\rangle =\left[3gL_{-1}^{(1)}L_{-2}^{(2)}-\left(L_{-1}^{(1)}\right)^{3}\right]\left|\Phi_{12}\right\rangle\] (G.18) \[2\,\left\langle\Phi_{12}\right|L_{1}^{(0)}L_{1}^{(2)}L_{1}^{(1)} =\left\langle\Phi_{12}\right|\left[3gL_{2}^{(0)}L_{1}^{(0)}-\left(L_{1}^{(0)}\right)^{3}\right]\] (G.19) \[2\,\left\langle\Phi_{12}\right|L_{1}^{(0)}L_{1}^{(2)}L_{1}^{(1)} =\left\langle\Phi_{12}\right|\left[3gL_{2}^{(2)}L_{1}^{(1)}-\left(L_{1}^{(1)}\right)^{3}\right]\] (G.20)

By removing from this linear system of 13 equations all terms containing modes \(L_{n}^{(r)}\) with \(r\neq 0\), one indeed obtains (3.45).

## Appendix H Numerical implementation of the Frobenius method

We want to find a basis of solutions to the differential equation (3.30) that converges on the entire range of interest, the unit circle \(|\eta|=1\). The Fuchsian ODE (3.30) has singular points \(0,1,\infty\). The solutions around \(\eta=0\) and \(\eta=1\) converge on the disks \(|\eta|<1\) and \(|\eta-1|<1\) respectively. Thus, only a portion of the unit semicircle, namely \(0<\text{Arg}(\eta)<\pi/3\), is contained in the convergence disk around \(\eta=1\). We can circumvent this problem by observing that the solutions around \(\eta=\infty\) can be convergent on the whole unit circle \(|\eta|=1\). Even better, we can implement the change of variable \[\eta\mapsto\frac{1+u}{2u}\,,\qquad\partial_{\eta}\mapsto-2u^{2}\partial_{u}\,.\] (H.1) so that the new ODE, in the variable \(u\), has singular points at \(u=0,1,-1\). The original unit circle \(|\eta|=1\) is mapped to \(|u-1/3|=2/3\), which is contained in the convergence disk \(|u|<1\). Hence, applying the Frobenius method and expressing the solutions around \(u=1\) in terms of those around \(u=0\) gives the appropriate numerical evaluation at the desired values of \(\eta\).
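The geometry of this change of variable is easy to confirm numerically; the short check below (ours, not from the paper) verifies that \(u=1/(2\eta-1)\), the inverse of (H.1), indeed sends the unit circle \(|\eta|=1\) onto the circle \(|u-1/3|=2/3\).

```python
# Numerical check that eta = (1+u)/(2u), i.e. u = 1/(2*eta - 1), maps
# the unit circle |eta| = 1 to the circle |u - 1/3| = 2/3, cf. (H.1).
import cmath

for k in range(1, 12):
    eta = cmath.exp(2j * cmath.pi * k / 12)   # sample points on |eta| = 1
    u = 1 / (2 * eta - 1)                     # inverse of the map (H.1)
    print(f"k={k:2d}  |u - 1/3| = {abs(u - 1/3):.12f}")   # always 2/3
```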
Now, as explained in [42], a convenient way of finding power series solutions around a point \(u=u_{0}\) is to rewrite the differential equation (3.30) in terms of the operator \(\theta=(u-u_{0})\partial_{u}\), which satisfies: \[(u-u_{0})^{n}\partial_{u}^{n}=\prod_{k=0}^{n-1}(\theta-k)\] (H.2) Most importantly, we have that any polynomial \(P(\theta)\) satisfies: \[P(\theta)(u-u_{0})^{r}=P(r)(u-u_{0})^{r}\] (H.3) For \(u_{0}=0\), we can then rewrite the equation as: \[\left[\sum_{i=0}^{8}u^{i}P_{i}(\theta)\right]f(u)=0\] (H.4) where: \[P_{0}(\theta) =250\theta^{4}-125\theta^{3}-130\theta^{2}-\theta+6\] \[P_{1}(\theta) =-\theta\left(-2125\theta^{2}+450\theta+997\right)-174\] \[P_{2}(\theta) =-250\theta^{4}+125\theta^{3}+130\theta^{2}-\left(750\theta^{3}+1250\theta^{2}-2415\theta+4264\right)\theta+\theta-3123\] \[P_{3}(\theta) =-\theta\left(4250\theta^{2}+5925\theta+6016\right)+\theta\left(-2125\theta^{2}+450\theta+997\right)-8511\] \[P_{4}(\theta) =5\,\theta\left(150\theta^{3}+575\theta^{2}+93\theta-332\right)+\theta\left(750\theta^{3}+1250\theta^{2}-2415\theta+4264\right)-6000\] (H.5) \[P_{5}(\theta) =2125\,\theta\left(\theta^{2}+3\theta+2\right)+\theta\left(4250\theta^{2}+5925\theta+6016\right)\] \[P_{6}(\theta) =-250\,\theta\left(\theta^{3}+6\theta^{2}+11\theta+6\right)-5\theta\left(150\theta^{3}+575\theta^{2}+93\theta-332\right)\] \[P_{7}(\theta) =-2125\,\theta\left(\theta^{2}+3\theta+2\right)\] \[P_{8}(\theta) =250\,\theta\left(\theta^{3}+6\theta^{2}+11\theta+6\right)\] We now seek power series solutions around \(u=0\) of the form: \[I_{i}(u)=u^{r_{i}}\sum_{n=0}^{\infty}a_{n}u^{n}\quad\text{with}\quad a_{0}=1\] (H.6) where the \(r_{i}\) are the roots of the _characteristic polynomial_ \(P_{0}(r)\): \[r_{1}=-3/10\quad r_{2}=1\quad r_{3}=1/5\quad r_{4}=-2/5\] (H.7) and are the same as the exponents around \(\infty\) in Table 1. By substituting the ansatz (H.6) in the differential equation and employing the identity (H.3), we find the following recursion relation for the coefficients \(a_{n}\) of the solution \(I_{i}(u)\): \[\boxed{P_{0}(r_{i}+n)\,a_{n}=-\sum_{m=1}^{\min\{n,8\}}a_{n-m}\,P_{m}(r_{i}+n-m)\qquad a_{0}=1}\] (H.8) The four series found in this way converge for \(|u|<1\) and can be evaluated numerically to arbitrary precision. We note, at this point, that the solution corresponding to \(r_{4}\) is unphysical, since it corresponds, according to Table 1, to the presence in the operator algebra of the theory of a composite twist field formed with a primary operator that is outside the Kac table, i.e., not present in the \(\mathcal{M}(6,5)\) CFT. This suggests that the physical space of conformal blocks is actually three-dimensional, and thus, that there should be a third-order differential equation satisfied by the excited twist correlator in this setup. One should now repeat the above computation for the solutions \(J_{j}(u)\) around \(u=1\), since these are the ones that appear in the block expansion (3.4) of BCFT correlators. The recursion relation takes the same form as in (H.8), with different roots: \[\lambda_{1}=-1/2\quad\lambda_{2}=9/10\quad\lambda_{3}=23/10\quad\lambda_{4}=23/10+2\] (H.9) A slight complication appears in this case because two of the roots of the corresponding characteristic polynomial differ by an integer, that is, \(\lambda_{4}=\lambda_{3}+2\). This will lead to the truncation of the corresponding recursion relation (H.8) for \(\lambda_{3}\), because at \(n=2\) the coefficient of \(a_{2}\), namely the characteristic polynomial of the \(u=1\) recursion evaluated at \(\lambda_{3}+2\), vanishes.
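The recursion (H.8) is simple to automate. The sketch below (our code, not the authors') implements it for a generic list of polynomials \(P_{0},\dots,P_{8}\); only \(P_{0}\) from (H.5) is filled in here, and the remaining \(P_{i}\) are stubbed with zeros to keep the sketch runnable; they would have to be transcribed from (H.5) in the same way.

```python
# Sketch of the Frobenius recursion (H.8) with exact rational coefficients.
from fractions import Fraction as F

def P0(t):
    # Characteristic polynomial, first line of (H.5).
    return 250*t**4 - 125*t**3 - 130*t**2 - t + 6

def frobenius_coeffs(P, r, n_max):
    """a_0 = 1; P_0(r+n) a_n = -sum_m a_{n-m} P_m(r+n-m), m = 1..min(n, 8)."""
    a = [F(1)]
    for n in range(1, n_max + 1):
        rhs = -sum(a[n - m] * P[m](r + n - m) for m in range(1, min(n, 8) + 1))
        a.append(rhs / P[0](r + n))   # assumes no resonance: P_0(r+n) != 0
    return a

def I_series(P, r, u, n_max=40):
    a = frobenius_coeffs(P, r, n_max)
    return u ** float(r) * sum(float(an) * u**n for n, an in enumerate(a))

# P_1..P_8 must be transcribed from (H.5); zero stubs keep the sketch runnable.
P = [P0] + [(lambda t: F(0))] * 8
print(I_series(P, F(1, 5), 0.2))   # root r_3 = 1/5 from (H.7)
```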
A good basis of solutions in this case is \(\{J_{1}(u),J_{2}(u),J_{3}^{(k)}(u),J_{4}(u)\}\), where: \[\begin{split} J_{i}(u)&=(1-u)^{\lambda_{i}}\sum_{n=0}^{\infty}a_{n}(1-u)^{n}\\ J_{3}^{(k)}(u)&=(1-u)^{\lambda_{3}}[a_{0}+a_{1}(1-u)]+kJ_{4}(u)\end{split}\] (H.10) where \(k\) is a free parameter and the value we choose for it should not change the final result for the physical correlator. We have chosen to set it to \(k_{0}=0.04428171795178596\) and define \(J_{3}(u)=J_{3}^{(k_{0})}(u)\). The reason for this choice becomes apparent when one looks at our solution for the fusing matrix \({\bf M}\): \[{\bf M}=\left(\begin{array}{cccc}0.207411&0.393808&0.152178&0\\ 1.356&-2.30281&-0.444933&0\\ 7.70383&71.6374&-22.7841&0\\ -8986.23&-19156.8&7211.61&5800.8\end{array}\right)\] (H.11) which relates the bases of conformal blocks around \(u=1\) and \(u=0\) as: \[J_{i}(u)=\sum_{j}M_{ij}I_{j}(u)\] (H.12) To obtain this solution, we have generated a linear system of equations for the unknown \(M_{ij}\) from the evaluation of the above relations at different points \(\{u_{i}\}\) in the interval \(0<u<1\) (where both sets of solutions converge). In this context, the parameter \(k_{0}\) was tuned so that the block \(J_{3}(u)\) does not depend on the unphysical solution \(I_{4}(u)\) around \(u=0\). Furthermore, since the matrix elements \((M^{-1})_{i4}\) can be readily checked to vanish for \(i\in\{1,2,3\}\), we can conclude that \(\{J_{1}(u),J_{2}(u),J_{3}(u)\}\) form the physical three-dimensional basis of conformal blocks around \(u=1\).
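The fitting procedure itself can be sketched generically as follows (our illustration; `I_funcs` and `J_funcs` are hypothetical stand-ins for the Frobenius series around \(u=0\) and \(u=1\)). The same approach specializes to the hypergeometric bases of Appendix E, where the answer is known in closed form (E.5).

```python
# Sketch: solving J_i(u) = sum_j M_ij I_j(u) for M by least squares on
# sample points where both bases converge.
import numpy as np

def solve_fusing_matrix(I_funcs, J_funcs, n_samples=40):
    us = np.linspace(0.15, 0.85, n_samples)               # 0 < u < 1
    A = np.array([[I(u) for I in I_funcs] for u in us])   # columns: I_j(u_k)
    M = np.empty((len(J_funcs), len(I_funcs)))
    for i, J in enumerate(J_funcs):
        b = np.array([J(u) for u in us])
        M[i], *_ = np.linalg.lstsq(A, b, rcond=None)      # row-by-row fit
    return M

# Self-test on a case with a known answer.
I_funcs = [lambda u: 1.0, lambda u: u]
J_funcs = [lambda u: 2.0 + 3.0 * u, lambda u: 1.0 - u]
print(solve_fusing_matrix(I_funcs, J_funcs))              # ~[[2, 3], [1, -1]]
```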
2310.15319
Hallucination Detection for Grounded Instruction Generation
We investigate the problem of generating instructions to guide humans to navigate in simulated residential environments. A major issue with current models is hallucination: they generate references to actions or objects that are inconsistent with what a human follower would perform or encounter along the described path. We develop a model that detects these hallucinated references by adopting a model pre-trained on a large corpus of image-text pairs, and fine-tuning it with a contrastive loss that separates correct instructions from instructions containing synthesized hallucinations. Our final model outperforms several baselines, including using word probability estimated by the instruction-generation model, and supervised models based on LSTM and Transformer.
Lingjun Zhao, Khanh Nguyen, Hal Daumé III
2023-10-23T19:36:28Z
http://arxiv.org/abs/2310.15319v1
# Hallucination Detection for Grounded Instruction Generation

###### Abstract

We investigate the problem of generating instructions to guide humans to navigate in simulated residential environments. A major issue with current models is _hallucination_: they generate references to actions or objects that are inconsistent with what a human follower would perform or encounter along the described path. We develop a model that detects these hallucinated references by adopting a model pre-trained on a large corpus of image-text pairs, and fine-tuning it with a contrastive loss that separates correct instructions from instructions containing synthesized hallucinations. Our final model outperforms several baselines, including using word probability estimated by the instruction-generation model, and supervised models based on LSTM and Transformer.

## 1 Introduction

Performance of neural-network-based models on generating navigation instructions is substantially inferior to that of humans Zhao et al. (2023). These models often _hallucinate_, generating references to objects or actions that do not exist or are impossible to execute in the environment. Similar behavior has been observed in language models in other domains of text generation Raunak et al. (2021); Ji et al. (2023); Xiao and Wang (2021); Lee et al. (2018); Guerreiro et al. (2022); Rawte et al. (2023). Instructions containing hallucinations can confuse or misdirect humans, leading to frustration and sometimes even catastrophic mistakes. Detecting hallucinations is therefore essential to improve instruction generation models and inform risk to human users. Nevertheless, ground-truth word-level hallucination labels are typically not readily available in this domain. Meanwhile, hiring crowd-workers to annotate instructions can be very costly Anderson et al. (2018); He et al. (2021); Wang et al. (2022); Gao et al. (2022). We propose a data-efficient weakly supervised approach to hallucination detection. Our approach reduces the necessary supervision in two ways. First, we leverage a pre-trained vision-language model Guhur et al. (2021) that has learned transferable representations of path-instruction pairs through self-supervised learning. Second, we introduce data-augmentation strategies to create synthetic data with "free" hallucination labels. We fine-tune the pre-trained model with the synthesized data using a contrastive learning objective to learn representations that separate positive examples (hallucinations) from negative examples (non-hallucinations). Our model outperforms various baselines in terms of F-1 scores on human-annotated evaluation data, beating LSTM- and Transformer-based models by 6.2 and 10.0 points, respectively. Ablation studies demonstrate the effectiveness of the proposed self-supervised pre-training and contrastive fine-tuning approach. We release the code, models, and data at [https://lingjunzhao.github.io/hallucination_detection.html](https://lingjunzhao.github.io/hallucination_detection.html).

## 2 Related Work

Hallucination detection. Neural sequence-to-sequence models are prone to generating hallucinations, where the outputs are inconsistent with the inputs or the environments Muller et al. (2019); Maynez et al. (2020); Wiseman et al. (2017); Martindale et al. (2019); Durmus et al. (2020); Ji et al. (2023). Recent work largely focuses on text-only domains Wang and Sennrich (2020); Zhou et al. (2020); Chen et al. (2021); Dale et al. (2022); Xu et al. (2023); Nie et al. (2019); Falke et al.
(2019); Kryscinski et al. (2019); Rebuffel et al. (2022); Liu et al. (2021); van der Poel et al. (2022) and image captioning Rohrbach et al. (2018); Dai et al. (2022); Biten et al. (2022); Li et al. (2023); Gunjal et al. (2023). To the best of our knowledge, our work is the first study of hallucination in grounded instruction generation. Grounded Instruction Generation. Instruction generation has been commonly studied in navigation settings [1, 2, 3, 4, 5, 6, 7, 18, 19, 20, 21]. Recent work by Zhao et al. (2023) reveals a significant gap between the performance of models and humans. Our work constructs a model that can be useful for evaluating and enhancing instruction-generation models. Huang et al. (2019) and Zhao et al. (2021) train LSTM-based discriminative models with contrastive learning to score instructions. We follow a similar approach but focus on identifying word-level hallucinations, and effectively leverage a large pre-trained Transformer model.

## 3 Problem Setting

Grounded instruction generation. Our task takes place in an environment, where a speaker model \(S(\mathbf{u}\mid\mathbf{r})\) composes an _instruction_ \(\mathbf{u}\) to communicate an imaginary _trajectory_ \(\mathbf{r}\) to a follower so that the latter can generate the same trajectory in the environment. An instruction is a sequence of words \(u_{i}\), whereas a trajectory is a sequence of observations \(\mathbf{o}_{t}\) and actions \(a_{t}\). We employ the Matterport3D simulator for experiments [1], which embeds a follower in a 3D model of a real-world residential building. The observation \(\mathbf{o}_{t}\) of the follower comprises an RGB image representing the panoramic view at a location in a building, and orientation features encoding the follower's gaze direction. Each action \(a_{t}\) moves the follower to a new location close to where it is standing and changes its observation. Speaker model. We follow Zhao et al. (2023) to train a T5-based [20] speaker model. This model encodes a trajectory into a sequence of hidden vectors and applies multi-headed attention on those vectors to generate an instruction auto-regressively. It is trained on the Room-to-Room (R2R) dataset provided by the Matterport3D simulator. Details about the model are provided in §A.1. Hallucination in grounded instruction. Instructions generated by our speaker model often contain words that are inconsistent with the input trajectory. We refer to those words as _hallucinations_. Similar to prior work [19], we observe two types of hallucinations:

* _Intrinsic hallucination_ is a word that needs to be replaced because it inaccurately describes an observation or action. For example, an instruction says "_Walk past the reception desk and out the door on the right_," but in the described trajectory, the door is on the _left_;
* _Extrinsic hallucination_ is a word that needs to be removed because it has no correspondence in the input trajectory. Our model typically exhibits this type of hallucination by repeatedly generating the same sentence, e.g., "_Walk out of the office. Walk into the hallway and turn left._"

We formulate hallucination detection as _binary classification_: given an input \(\mathbf{x}=(\mathbf{r},\mathbf{u},i)\) consisting of a trajectory \(\mathbf{r}\), an instruction \(\mathbf{u}\), and an index \(i\in\{1,\cdots,|\mathbf{u}|\}\), decide whether the word \(u_{i}\) is a hallucination, i.e., whether it should be replaced or removed to make \(\mathbf{u}\) consistent with \(\mathbf{r}\).
Candidate selection. For each instruction, we identify a set of candidate words for classification, which are (a) directional words like _left_, _right_, etc. (see §A.2 for a full list) as well as (b) nouns identified by the SpaCy part-of-speech tagger [1].

## 4 Hallucination Detection Model

### Architecture

We learn a classifier \(C(y=1\mid\mathbf{x}=(\mathbf{r},\mathbf{u},i))\) to decide whether a word \(u_{i}\) is hallucinated. Our model is based on the Airbert model [1], which inherits the ViLBERT architecture [10]. An overview of the model is given in Figure 1. It implements two Transformers: one encodes the instruction \(\mathbf{u}\) and the other encodes the trajectory \(\mathbf{r}\). We wrap the word to be classified \(u_{i}\) between a pair of special tokens ([\(\mathtt{BH}\)] and [\(\mathtt{EH}\)]). Let \(\mathbf{h}_{\text{lang}}\) be the output of the language-encoding Transformer, and \(\mathbf{h}_{\text{vision}}\) be that of the vision-encoding Transformer. The model computes a score function \(s(\mathbf{x})=s(\mathbf{r},\mathbf{u},i)=w^{\top}(\mathbf{h}_{\text{lang}}\odot\mathbf{h}_{\text{vision}})\), where \(w\) is a learnable vector, and \(\odot\) denotes element-wise multiplication. More details about the model are given in §A.1.

### Learning approach

Self-supervised pre-training. Instead of learning from scratch, we fine-tune a pre-trained checkpoint of the Airbert model. The checkpoint was first trained on a large collection of 1.4M images and 0.7M captions collected from AirBnB. It was subsequently adapted for a trajectory-instruction compatibility estimation task using the Room-to-Room dataset. The objective in each phase combines BERT-style pre-training (mask and pair prediction) with contrastive learning. We refer the readers to the original paper for an elaborate description of the pre-training phase. Contrastive fine-tuning. We assume a dataset of contrastive pairs \((\mathbf{x}^{+},\mathbf{x}^{-})\). The positive and negative examples of a pair have the same trajectory \(\mathbf{r}\) and word index \(i\), but differ in the instruction \(\mathbf{u}\). The classified word in \(\mathbf{x}^{-}\) is a hallucination, whereas that in \(\mathbf{x}^{+}\) is not. For each pair, we compute the model scores \(s(\mathbf{x}^{+})\) and \(s(\mathbf{x}^{-})\), and construct the softmax distribution \(\mathbf{\hat{p}}=\text{Softmax}(\mathbf{s})\) where \(\mathbf{s}=(s(\mathbf{x}^{+}),s(\mathbf{x}^{-}))\). We then train the model to recognize the positive example by minimizing the cross entropy between \(\mathbf{\hat{p}}\) and \(\mathbf{p}^{\star}=(1,0)\). This objective effectively forces the representation of the trajectory to be similar to that of the positive instruction and dissimilar to that of the negative instruction. At inference time, we define the hallucination detection classifier as \(C(\mathbf{x})=1-\sigma(s(\mathbf{x}))\), where \(\sigma\) is the sigmoid function.

### Synthetic data creation

Even for fine-tuning, acquiring human-labeled data can be prohibitively expensive. For evaluation, we manually annotated a small sample of labels (§5). The annotation process was laborious, with an average time of 30 minutes required to annotate just 10 instructions. Based on our calculations, with a compensation of 15 USD per hour, it would cost approximately 9,000 USD to hire crowd workers to annotate all instances (\(\sim\)12,000) in the R2R training set. Thus, we propose a more cost-effective methodology for generating training data.
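Before turning to that methodology, note that the contrastive objective of §4.2 amounts to a two-way classification over the pair of scores. A minimal PyTorch sketch (ours, not the released code) is:

```python
# Sketch of the contrastive objective: softmax over (s(x+), s(x-)) and
# cross entropy against p* = (1, 0), i.e. target index 0 = positive.
import torch
import torch.nn.functional as F

def contrastive_loss(s_pos: torch.Tensor, s_neg: torch.Tensor) -> torch.Tensor:
    """s_pos, s_neg: shape (batch,) scores s(x) = w^T (h_lang * h_vision)."""
    logits = torch.stack([s_pos, s_neg], dim=1)             # p_hat = softmax(logits)
    target = torch.zeros(s_pos.size(0), dtype=torch.long)   # positive has index 0
    return F.cross_entropy(logits, target)

# At inference, the hallucination probability is C(x) = 1 - sigmoid(s(x)).
scores = torch.randn(4)
print(contrastive_loss(scores, torch.randn(4)), 1 - torch.sigmoid(scores))
```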
Synthetic negative examples. We start with a training example \((\mathbf{u}^{+},\mathbf{r})\) in the Room-to-Room training set and modify the human-written instruction \(\mathbf{u}^{+}\) to create instructions with hallucinations. We first extract the candidate words in the instruction (§3). To create an intrinsic hallucination, we choose a candidate word and apply the following procedure:

* If the word is a **direction**, we replace it with an alternative direction, e.g., replacing _down_ with _up_: "_Walk **up** one flight of stairs and stop on the landing._";
* If it is a **room**, we substitute it with another room randomly selected from a pre-composed list, e.g., replacing _bedroom_ with _balcony_: "_Exit the **balcony** via the farthest left. Walk toward the couch. Stop there._";
* Otherwise, we swap it for another word in the instruction that is neither a direction nor a room, e.g., swapping _door_ and _step_: "_Exit the bedroom using the **step** on the left then go straight until you get to the stairs and wait on the second **door**._"

Using this procedure, we first generate an intrinsic hallucination in \(\mathbf{u}^{+}\) to synthesize \(\mathbf{u}^{-}\). Then, with a probability of 0.5, we synthesize another intrinsic hallucination in each of \(\mathbf{u}^{+}\) and \(\mathbf{u}^{-}\). This step makes the training instructions more similar to the test-time inputs, which may contain multiple intrinsic hallucinations as they are generated by imperfect speaker models. To create an instruction with _extrinsic_ hallucinations, we append a sentence, taken from \(\mathbf{u}^{+}\) or another instruction, to the end of a random sentence in \(\mathbf{u}^{+}\). For example: "_Walk out of the office. Walk into the hallway and turn left. **Walk into the hallway and turn left.**_" Every word in the added sentence is considered an extrinsic hallucination. We do not create additional intrinsic hallucinations in the instruction.

Figure 1: Our hallucination detection model, which takes as input an instruction with a target word and determines whether it should be replaced or removed to be consistent with a visual trajectory. To build this model, we fine-tune pre-trained Airbert (Guhur et al., 2021) with a contrastive learning objective.

Alleviating input-distribution shift. A model trained only on human-written instructions may perform poorly on model-generated instructions. Therefore, we also include "high-quality" model-generated instructions on the R2R training set as positive examples and apply the same strategies to generate negative examples. The quality of an instruction is measured by the success rate of an ensemble of VLN\(\circ\)BERT instruction-following agents Hong et al. (2021) in recreating the described trajectory. We consider a model-generated instruction to be of high quality if at least 80% of the ensemble agents can successfully reach the final location in the described trajectory.
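A minimal sketch of the direction-replacement corruption described above is given below (our illustration); the direction vocabulary here is a small assumed subset of the full list in §A.2.

```python
# Sketch of the direction-swap corruption used to synthesize intrinsic
# hallucinations with "free" word-level labels.
import random

DIRECTION_ALTERNATIVES = {          # illustrative subset; full list is in A.2
    "left": ["right"], "right": ["left"],
    "up": ["down"], "down": ["up"],
}

def corrupt_direction(tokens, rng=random):
    """Replace one direction word; returns (corrupted tokens, label index)."""
    candidates = [i for i, t in enumerate(tokens) if t in DIRECTION_ALTERNATIVES]
    if not candidates:
        return tokens, None
    i = rng.choice(candidates)
    corrupted = list(tokens)
    corrupted[i] = rng.choice(DIRECTION_ALTERNATIVES[tokens[i]])
    return corrupted, i             # index i is the synthetic hallucination label

tokens = "walk down one flight of stairs and stop on the landing".split()
print(corrupt_direction(tokens))
```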
We choose the decision threshold of a model to maximize its F-1 score on the development set. Baselines. (i) **random** classifier assigns a label chosen uniformly at random; (ii) **speaker model probability** defines the hallucination probability \(C(\mathbf{x})=1-S(u_{i}\mid\mathbf{r};\mathbf{u}_{<i})\) where \(\mathbf{x}=(\mathbf{r},\mathbf{u},i)\), \(S\) is the speaker model (§3), and \(\mathbf{u}_{<i}\) is the instruction generated up to step \(i-1\) for the input \(\mathbf{r}\); (iii) **LSTM** and (iv) **T5** are binary classifiers learned under a standard maximum-likelihood objective. They implement an encoder-decoder architecture based on LSTM and Transformer, respectively, and are trained using the same synthetic dataset as our proposed model. These models are initialized with random parameters. The detailed implementations and hyperparameters of all models are given in §A.1. Main results (Table 1). The speaker-model-probability baseline is remarkably strong, despite not being trained for hallucination detection. Its performance is on par with that of T5, which is the same model but trained specifically for hallucination detection. The LSTM-based model outperforms the T5-based models. Scaling up the size of the T5 model improves the recall score by 10 points. Our proposed model (fine-tuned Airbert) beats all baselines by wide margins in terms of F-1 score for hallucination labels (+10.0 versus T5-base, +6.2 versus LSTM). It excels in precision compared to the baselines. We also include results on the development set in §A.3.

\begin{table} \begin{tabular}{l c c c} \hline \hline Model & F-1 & Precision & Recall \\ \hline Random & 16.6 & 13.4 & 21.7 \\ Speaker model probability & 29.5 & 20.9 & 50.0 \\ LSTM-based encoder-decoder & 38.7 & 37.4 & 40.2 \\ T5-small (Transformer-based encoder-decoder) & 33.9 & 26.5 & 46.7 \\ T5-base (Transformer-based encoder-decoder) & 34.9 & 25.2 & **56.5** \\ Fine-tuned Airbert (ours) & **44.9** & **42.3** & 47.8 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance on the test set of our proposed hallucination detection model and various baselines. The decision threshold of each model is selected to maximize F-1 score of hallucination labels on the development set.

Figure 2: The effectiveness of self-supervised pre-training and contrastive fine-tuning. Results are F-1 scores of hallucination labels on the test set.

Ablation studies (Figure 2). Our results confirm that self-supervised pre-training and contrastive fine-tuning are essential to the performance of our model. Without pre-training, our model is just as bad as the LSTM-based model. We also compare fine-tuning via contrastive learning with fine-tuning via maximum-likelihood learning. In the latter approach, the model simply takes as input an example \((\mathbf{r},\mathbf{u},i)\) and learns to directly predict the true label. The approach underperforms contrastive learning by 4.9 F-1 points. Our finding aligns with previous work Gunel et al. (2021); Zhang et al. (2021); Goyal et al. (2023), suggesting that contrastive learning is effective not only as a representation learning objective, but also as a classification objective. Error and Qualitative Analysis. In Table 2, we break down the performance of our model by word type. Our model struggles with detecting room and object hallucinations, indicating that its understanding of visually grounded words is lacking.
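As an implementation note, the threshold selection described at the start of this section is a simple grid search; a sketch (ours), assuming scores \(C(\mathbf{x})\in[0,1]\) and binary hallucination labels:

```python
# Sketch: pick the decision threshold maximizing F-1 on the development set.
import numpy as np
from sklearn.metrics import f1_score

def best_threshold(halluc_probs, labels):
    """halluc_probs: C(x) scores in [0, 1]; labels: 1 = hallucination."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.linspace(0.05, 0.95, 19):
        f1 = f1_score(labels, (halluc_probs >= t).astype(int), zero_division=0)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

probs = np.array([0.9, 0.2, 0.7, 0.4]); labels = np.array([1, 0, 1, 1])
print(best_threshold(probs, labels))
```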
In particular, it has relatively low recall on object hallucinations, potentially due to the lack of diversity of this word type in the training data. Figure 3 shows a few success and failure cases of our model.

## 6 Conclusion

This work is an early attempt to address the hallucination issue in grounded instruction generation. We have shown that techniques like self-supervised pre-training on multimodal data and contrastive fine-tuning on synthetic data are promising scalable approaches. We hope that these directions can be further developed in future work.

## Limitations

Despite the effectiveness of the data generation method, this approach requires substantial domain-specific knowledge. Our method, particularly for generating directional hallucinations, is based on heuristics and does not take into account the actual environment. Another limitation is the small size of the evaluation datasets due to the high cost of annotation.

## Acknowledgments

We thank the CLIP Laboratory at Maryland and our reviewers for providing helpful feedback to improve the manuscript.

\begin{table} \begin{tabular}{l c c c} \hline \hline Type & F1 & Precision & Recall \\ \hline Direction & 48.1 & 41.9 & 56.4 \\ Room & 38.9 & 38.9 & 38.9 \\ Object & 38.7 & 50.0 & 31.6 \\ \hline \hline \end{tabular} \end{table} Table 2: Fine-tuned Airbert performance broken down by word type. Results are on the test set.

Figure 3: Some success and failure cases of the fine-tuned Airbert model. The blue arrow indicates the described path, and the green one represents the next location.
2303.15221
Digital Twin of a Network and Operating Environment Using Augmented Reality
We demonstrate the digital twin of a network, network elements, and operating environment using machine learning. We achieve network card failure localization and remote collaboration over 86 km of fiber using augmented reality.
Haoshuo Chen, Xiaonan Xu, Jesse E. Simsarian, Mijail Szczerban, Rob Harby, Roland Ryf, Mikael Mazur, Lauren Dallachiesa, Nicolas K. Fontaine, John Cloonan, Jim Sandoz, David T. Neilson
2023-03-23T19:37:09Z
http://arxiv.org/abs/2303.15221v1
# Digital Twin of a Network and Operating Environment Using Augmented Reality ###### Abstract We demonstrate the digital twin of a network, network elements, and operating environment using machine learning. We achieve network card failure localization and remote collaboration over 86 km of fiber using augmented reality. (c) 2023 The Author(s) ## 1 Introduction A network digital twin is a simulation model of a communication system and its operating environment that enables applications such as the monitoring of network operations in real time, predictive maintenance, and testing "what if" scenarios before implementation on a production network. Recent work on digital twins of optical networks has developed accurate simulation and machine-learning (ML) models of the fiber transmission system [1]. There has been an increased awareness that communication networks are physical systems that interact with and can be used to sense the environment [2], motivating the digital-twin model to include the operating environment. Adding the physical and environmental information has multiple benefits, including improved physical connectivity visibility, a better understanding of shared risk groups, and a better facility security analysis. In this work, we demonstrate an optical network digital twin model based on a graph neural network (GNN) [3] with novel capabilities enabled by models of the physical network elements themselves as well as the operating environment. The network operators interact with the digital twin in real time using a distributed augmented reality (AR) application empowered with ML through remote computing. The AR application relies on a low-latency connection to a remote edge server that performs multiple computational functions. By using a 3-dimensional (3D) map of the network surroundings, 3D models of the network elements, and fault localization on the optical network, we show that the digital twin enables automated guidance of an on-site operator to a network element with a failure condition. When the operator views the network element with AR, a distributed ML-based image classification algorithm indicates the card that has the root-cause alarm condition. Finally, the AR application allows a real-time interactive collaboration session with a second operator remotely connected to a node after 86 km of fiber propagation so that knowledge can be shared across central and dispersed locations. Within the AR session, both operators can manipulate 3D computer-automated design (CAD) models of the network element and card as virtual 3D holograms, thereby enabling collaborative maintenance operations inside a metaverse [4]. ## 2 Digital Twin of the Network and Its Operating Environment We construct a digital twin representation of an optical transport network and the laboratory environment, encompassing the network topology, network equipment 3D models, and a 3D map of the environment. Fig. 1a is a diagram of the optical network consisting of six commercial Nokia 1830 PSS optical transport nodes (TNs) with flexgrid reconfigurable optical add-drop multiplexers (ROADMs). Wavelength (WL) paths and fiber lengths are indicated in the figure. Fig. 1b shows an AR image of a computer-generated downsized virtual hologram of one of the PSS-32 shelves of node TN1. The hologram was generated from 3D CAD models of the network element. Figure 1: a) Optical network topology, b) 3D digital model of a network element, c) image of the network environment and d) 3D mapping of the network environment. Fig. 
1c is a photograph of the surrounding environment of the network, and Fig. 1d is a 3D map of the same location created with a Microsoft HoloLens 2 AR headset (ARH) [5]. Figure 2a shows a diagram of the equipment used for the AR remote collaboration experiment. The optical transport nodes TN1 and TN2 are the same nodes as shown in the optical network diagram of Fig. 1a, and wavelength WL1 is used to carry traffic between the nodes for the experiment. WL1 has a line rate of 200 Gbit/s using 8-QAM modulation format and a low bit-error ratio in both directions. Two Centec V586 OpenFlow (OF) version 1.3 switches with 10G and 100G interfaces connect to 100G client interfaces on D5X500 flexible bitrate transponders at TN1 and TN2 that generate and terminate WL1. An instance of the Open Network Operating System (ONOS) [6] software-defined network (SDN) controller controls the OF switches. Wi-Fi access points (APs) connect the local and remote AR headsets to the network with 2.5G connections to the OF switches at the local and remote sites, respectively. A 100G path through the network carries both the AR traffic from the local ARH to a remote server and a constant bitrate (CBR) stream from a 100G interface of a Spirent SPT-N12U traffic generator (TG) that produces bi-directional competition to the AR traffic. On-demand computing is enabled using a remote server that is equipped with an NVIDIA RTX A6000 graphics processing unit. The server assigns tasks to clients with AR capabilities, processes the requests from the clients through ML models, and synchronizes the operations between the clients. In the experiment, WL2 experiences frame losses and three severe network card alarms, as illustrated in Fig. 2b. The figure illustrates the various card-level network elements such as optical transponder (OT), line amplifier (LA), wavelength selective switch (WSS), array amplifier (AA), multicast switch (MCS), and fiber span. A fault localization model [3], which employs GNNs and natural language processing, is executed on the remote server and successfully identifies the optical transponder of WL2 as the source of the failure by collecting the alarms from the network elements and utilizing the network connectivity graph, as illustrated in Fig. 2b. The AR applications were developed using Unity, OpenXR, and the Mixed Reality Toolkit [7]. An on-site (local) operator wearing an ARH uses a hand-operated menu to choose between applications, set up parameters such as the server IP address, and connect or disconnect from the remote server. ### Lab navigation We developed an AR-based lab navigation application to assist network operators in efficiently navigating to their desired destination, e.g., a rack containing a failed network card. A top view of the 2D lab map is presented in Fig. 2c, where a path from the starting point (P1) to the rack containing the failed transponder (P4) is indicated by a series of blue virtual directional arrows and the destination is marked by a virtual red flag. The on-site ARH operator receives the coordinates of the arrows and the flag from the remote server, which calculates the path using the A-star path-finding algorithm [8] based on the environment's 3D map of Fig. 1d. Each rack has two network equipment shelves, and the height of the flag serves as an indication of the targeted shelf level. Fig. 2(d) shows an AR navigation image captured directly through the display of the local ARH at the P2 location along the navigation path.
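The paper does not include the server-side path-finding code; the following is a minimal, self-contained C++ sketch of grid-based A* of the kind the remote server could run, with the 3D environment map projected onto a 2D occupancy grid. All names (astar, Node, occ) and the grid representation are illustrative assumptions, not taken from the actual application.

```
#include <vector>
#include <queue>
#include <cmath>
#include <algorithm>
#include <utility>

struct Node { int x, y; };

// Returns the cell sequence from start to goal, or an empty vector if no path.
// occ[y][x] == 1 marks a blocked cell (rack, wall, etc.).
std::vector<Node> astar(const std::vector<std::vector<int>>& occ,
                        Node start, Node goal) {
    const int H = static_cast<int>(occ.size());
    const int W = static_cast<int>(occ[0].size());
    auto idx  = [W](int x, int y) { return y * W + x; };
    auto heur = [&goal](int x, int y) {            // admissible Euclidean heuristic
        return std::hypot(double(x - goal.x), double(y - goal.y));
    };
    std::vector<double> g(static_cast<size_t>(W) * H, 1e30);  // cost-so-far
    std::vector<int> parent(static_cast<size_t>(W) * H, -1);
    using QE = std::pair<double, int>;             // (f = g + h, cell index)
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> open;
    g[idx(start.x, start.y)] = 0.0;
    open.push({heur(start.x, start.y), idx(start.x, start.y)});
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        const int cur = open.top().second;
        open.pop();
        const int cx = cur % W, cy = cur / W;
        if (cx == goal.x && cy == goal.y) {        // reconstruct the path
            std::vector<Node> path;
            for (int c = cur; c != -1; c = parent[c])
                path.push_back({c % W, c / W});
            std::reverse(path.begin(), path.end());
            return path;
        }
        for (int k = 0; k < 4; ++k) {              // 4-connected unit-cost moves
            const int nx = cx + dx[k], ny = cy + dy[k];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H || occ[ny][nx]) continue;
            const double ng = g[cur] + 1.0;
            if (ng < g[idx(nx, ny)]) {             // found a cheaper route
                g[idx(nx, ny)] = ng;
                parent[idx(nx, ny)] = cur;
                open.push({ng + heur(nx, ny), idx(nx, ny)});
            }
        }
    }
    return {};                                     // goal unreachable
}
```

With a consistent heuristic on a unit-cost grid, the first time the goal is popped from the priority queue the path is optimal; the resulting cell sequence would then be converted into the virtual arrow and flag coordinates sent to the ARH.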
### Network card identification After locating the shelf, the next step is to indicate to the operator the network cards with alarms and identify the root cause of failure. Fig. 3a shows a diagram of the process for network card identification and indication to the operator through ML-based computation at the remote server. The computation uses the object detection model YoloV7 [9], which was fine-tuned using 824 captured images in order to detect and classify 11 different types of network cards and 2 different shelves. We employ the trainable bag-of-freebies method [10] to improve accuracy and reduce the size of the training dataset. The webcam of the ARH captures images at a rate of 5 frames per second, which are sent to the remote server. YoloV7 outputs the name of the detected card, a bounding box that encompasses the card, and a confidence score. A higher confidence score indicates a greater likelihood that the box contains the object, with a maximum score of 1. The coordinates of the bounding boxes provide the relative positions of the detected cards. Using the card arrangement information retrieved from the network element database, cards with alarms can be determined. Fig. 2: a) Experiment setup for remote AR collaboration, b) illustration of ML-based fault localization, c) top-view 2D map of the network surroundings with virtual navigation markers, and d) image of navigation guidance taken directly through the display of the local (on-site) ARH at the P2 location. This information is then sent back to the ARH with additional color coding to indicate the nature of the alarm. Cards with a failure requiring replacement are indicated in red, while cards with an alarm that are not the main source of failure are indicated in blue. The bounding boxes, along with the card model names and confidence scores, are displayed on the ARH. In the case of the shelf in Fig. 3(a), the flexible-bitrate OT card (model: D5X500Q) is determined to be the main source of failure with a confidence score of 85%, and the LA card (model: ASWG) is also detected as having an alarm, but the GNN ML model of the network determines that it is not the root cause of the alarm. Note that the shelf image in Fig. 3(a) contains four of the same LAs that are not indicated as faulty by the remote server computation. Due to the knowledge of card positions, the LA with the alarm, which is rightmost among the four, has been successfully identified. We tested the robustness of the remote classification computation to network congestion by introducing CBR competition to the AR traffic with the traffic generator shown in Fig. 2(a). We use the OpenFlow [11] meter table to prioritize the AR traffic, which restricts the maximum rate of the competing traffic to 90 Gb/s of the 100 Gb/s total. Fig. 3(b) shows the maximum bitrate between the on-site ARH and the remote edge server and the total round-trip latency for the identification result, which is caused by the ML inference time of YoloV7 on the server and the network round-trip time over 86 km of single-mode fiber and the Wi-Fi link. With prioritization of the AR traffic, we measured a maximum of 330 Mb/s bitrate with iPerf [12], and \(<\)35 ms total round-trip latency for the card identification result. Fig. 3(c) shows histograms of the ML inference time and network round-trip data transfer time. ### Interactive remote collaboration We developed a remote collaboration application so that a local network operator can collaborate in real time with a remote expert.
For example, the local operator can receive guidance for the process of replacing the network card by manually manipulating the virtual 3D models of the transport system cards and shelf that are generated as 3D images in the virtual environments of both the operator and remote expert. The remote edge server synchronizes the position and orientation of the digital 3D models for both participants. Fig. 3(d) shows an AR image in which the expert utilizes mid-air drawing of red circles to indicate the locations of the two latches that must be released prior to removing the card and demonstrates the card-replacement procedure to the local operator. The application also supports real-time voice and video communication between the two headsets. ## 3 Conclusion In this work, we demonstrate a digital twin of a network, network elements, and operating environment utilizing ML and remote edge computation. Through interaction in an AR virtual environment, the digital twin enabled indoor navigation, network card failure identification and localization, and remote collaboration over an 86-km optical link. These innovations demonstrate the potential of AR and ML in network management and maintenance. Supplemental video: [https://youtu.be/RJMDRjCIBFI](https://youtu.be/RJMDRjCIBFI)
2302.02474
MATILDA.FT, a Mesoscale Simulation Package for Inhomogeneous Soft Matter
In this paper we announce the public release of a massively-parallel, GPU-accelerated software package, which is the first to combine both coarse-grained molecular dynamics and field-theoretical simulations in one simulation package. MATILDA.FT (Mesoscale, Accelerated, Theoretically-Informed, Langevin, Dissipative particle dynamics, and Field Theory) was designed from the ground up to run on CUDA-enabled GPUs, with Thrust library acceleration, enabling it to harness massive parallelism to efficiently simulate systems on a mesoscopic scale. MATILDA.FT is a versatile software package, enabling users to use either Langevin dynamics or Field Theory to model their systems - all within the same software. It has been used to model a variety of systems, from polymer solutions and nanoparticle-polymer interfaces, to coarse-grained peptide models and liquid crystals. MATILDA.FT is written in CUDA/C++ and is object oriented, making its source code easy to understand and extend. The software comes with dedicated post-processing and analysis tools, as well as detailed documentation and relevant examples. Below, we present an overview of currently available features. We explain in detail the logic of parallel algorithms and methods. We provide the necessary theoretical background, and present examples of recent research projects which utilized MATILDA.FT as the simulation engine. We also demonstrate how the code can be easily extended, and present the plan for future development. The source code, along with the documentation, additional tools and examples, can be found in the GitHub repository.
Zuzanna M. Jedlinska, Christian Tabedzki, Colin Gillespie, Nathaniel Hess, Anita Yang, Robert A. Riggleman
2023-02-05T20:07:23Z
http://arxiv.org/abs/2302.02474v1
# MATILDA.FT, a Mesoscale Simulation Package for Inhomogeneous Soft Matter ###### Abstract In this paper we announce the public release of a massively-parallel, GPU-accelerated software package, which is the first to combine both coarse-grained molecular dynamics and field-theoretical simulations in one simulation package. MATILDA.FT (Mesoscale, Accelerated, Theoretically-Informed, Langevin, Dissipative particle dynamics, and Field Theory) was designed from the ground up to run on CUDA-enabled GPUs, with Thrust library acceleration, enabling it to harness massive parallelism to efficiently simulate systems on a mesoscopic scale. MATILDA.FT is a versatile software package, enabling users to use either Langevin dynamics or Field Theory to model their systems - all within the same software. It has been used to model a variety of systems, from polymer solutions and nanoparticle-polymer interfaces, to coarse-grained peptide models and liquid crystals. MATILDA.FT is written in CUDA/C++ and is object oriented, making its source code easy to understand and extend. The software comes with dedicated post-processing and analysis tools, as well as detailed documentation and relevant examples. Below, we present an overview of currently available features. We explain in detail the logic of parallel algorithms and methods. We provide the necessary theoretical background, and present examples of recent research projects which utilized MATILDA.FT as the simulation engine. We also demonstrate how the code can be easily extended, and present the plan for future development. The source code, along with the documentation, additional tools and examples, can be found in the GitHub MATILDA.FT repository. GPU, Nvidia, CUDA, Coarse-grained simulations, soft matter, polymers, self-assembly ## I Introduction Polymers are a ubiquitous type of material, important in both biological and industrial settings. "Polymer" is an umbrella term for macromolecules composed of smaller, repeating monomers. In industrial settings, polymers are used extensively in the tire industry and in flexible composite material production, and are utilized as durable adhesives. In addition, they have been exploited in more precise applications, such as drug delivery [1] and the design of artificial catalysis centers [2]. This is possible due to the propensity of polymers to self-assemble into higher-order structures, and their ability to undergo phase separation in solution. Controlled phase separation has been exploited to create nano-capsules with well-defined pore sizes, by first inducing phase separation in the capsule shell and then flushing out one of the components [3]. Similar approaches using non-solvents to induce phase separation in a polymer solution are common methods to produce polymer membranes[4; 5]. Design of these materials requires precise knowledge of the thermodynamics and microstructure of polymer materials under a variety of conditions. Within a biological context, the most important natural polymers are nucleic acids and proteins. The former are responsible for storing and propagating the genetic information, whereas the latter act as enzymes to facilitate multiple biochemical reactions within the cell, participate in active transport of intracellular components, or serve as structural elements of the cytoskeleton. A special class of proteins, called Intrinsically Disordered Proteins (IDPs), has been shown to be a main constituent of the membraneless organelles [6].
The functions of these intracellular compartments vary from nucleic-acid biosynthesis and organization, to participation in stress and immune responses [7]. Just like regular organelles, they have a distinct chemical composition, and thus spatially separate specific biological processes. Unlike regular organelles, however, they are not enclosed by a lipid membrane, and thus membraneless organelles can be easily formed and dissolved as the need arises. The deregulation of these processes has been implicated in cancer development and the onset of neurodegenerative disease [8]. Efficient prediction of the equilibrium thermodynamics of these systems remains an ongoing challenge, although significant recent progress has been made[9]. Given their wide-ranging applications, inhomogeneous (macro- or micro-phase separated) polymers remain a topic of intense investigation. Experimental methods are available to study their properties and aid in novel material design, and the polymer science community has a rich history of close collaboration between experiment, theory, and simulation. Computer-aided approaches have had an ever-increasing importance as a research tool as computational power has become more ubiquitous. Due to this continuously increasing computational speed and processing power, progressively bigger and more complex systems can be simulated, allowing mesoscopic material properties to be predicted before synthesizing physical samples. Various open-source Molecular Dynamics (MD) codes have been released, which are capable of simulating polymeric species on an atomistic or coarse-grained level. Some notable examples include LAMMPS [10], NAMD [11], and GROMACS [12]. LAMMPS can perform all-atom simulations on polymer chains, using available force fields. It is also equipped with biologically-oriented force fields, which enable coarse-grained simulations of biomolecules. In addition, the user can define their own coarse-grained polymer model, and expand it to include the required potentials. On the other hand, both GROMACS and NAMD have been specifically designed to model biological molecules, such as proteins and nucleic acids, on a fully atomistic level. Polymer simulations are challenging in general due to the wide range of length- and time-scales required for accurate simulation. In many soft matter fields, particularly those involving the design of materials using polymers, polymer field theory and related techniques have played a crucial role in the design of new materials and in the interpretation of experimental results[13; 14; 15; 16; 17]. Polymer field theories are developed by beginning with a description of the system in terms of coarse-grained potentials, such as chains obeying Gaussian statistics, Flory contact repulsions governed by a \(\chi\) parameter, partial charges on the various species, etc. One first writes down the partition function for this particle model, then, using one of a variety of transformation techniques[13; 14; 18], decouples the particle interactions and transforms the model to one where one molecule of each type interacts with chemical potential fields generated by the various interaction potentials. The field-theoretic approach is attractive because it enables a variety of analytic analyses, such as the mean-field approximation which gives rise to self-consistent field theory (SCFT) or a variety of loop expansions.
The particle-to-field transformation is formally exact, and there are examples in the literature showing quantitative agreement between the particle and field versions of the model[19]. In more recent years, several methods have been developed to sample the original particle model efficiently[20; 21; 22; 23; 24] that generally fall under the umbrella of theoretically-informed coarse-grained models; in these methods, the underlying particle coordinates are retained, and the particles are mapped to density fields to efficiently calculate the non-bonded forces and energies. However, with a few notable exceptions[25; 26; 27], there is a dearth of simulation packages available to perform general simulations on either particle- or field-based models, particularly those that are designed specifically to run on Graphics Processing Units (GPUs). A single code base that can perform both particle-based and fluctuating field-theoretic simulations of identical (or very similar) molecular models could allow readily switching between dynamic and equilibrium simulations as well as assessment of the importance of fluctuations in any calculated properties. In this work, we present a first version of our code for Mesoscale, Accelerated, Theoretically-Informed, Langevin, Dissipative particle dynamics, and Field Theory, MATILDA.FT. MATILDA.FT is written from the ground up to run on GPUs, and the bulk of the code is written using the CUDA programming language. MATILDA.FT is capable of modeling both systems consisting of a few molecules and those containing millions of particles. Its strength lies specifically in being able to efficiently simulate polymeric and other soft materials (e.g., liquid crystalline systems) on a mesoscopic scale. On this scale, the coarse-grained interactions are typically "soft" (finite at overlap) and the particle density high; in this limit, it becomes more efficient to evaluate the non-bonded interactions using density fields. These large-scale molecular assemblies of interest can correspond to biomolecular coacervates in explicit solvent, artificially synthesized ionomers, block copolymer melts, side-chain liquid crystalline polymers, or polymer-infiltrated nanoparticle packings. The outline of this paper is as follows. In Section I.1 we begin with a brief history of the code development. Next, in Section II we outline the main features of the code and available functionalities. In Section III, we describe the structure of the models being used in the molecular dynamics and field-theoretical simulations. Subsequently, in Section IV, we show how the code is optimized for parallel execution on CUDA-enabled GPUs. In the Methods Section V we present a more in-depth description of selected features available in MATILDA.FT. In Section VI we outline how the code is organized, with focus on its class structure and extensibility options. Next, in Section VII we show results for selected example systems. We end with the planned developments in Section VIII, and conclude in Section IX. ### Brief History The GPU-TILD code evolved out of a series of programs originally referred to as dynamic mean-field theories (DMFT) developed by Chao, Koski, and Riggleman[28] and augmented by several students in subsequent years. Over time, it became clear that the approach was strongly connected to existing methods that went by the moniker theoretically-informed Monte Carlo (TIMC).
The technical differences between the two are that our approach tends to use higher-order particle-to-mesh schemes to ensure UV-convergence and that time evolution is through Langevin dynamics in lieu of Monte Carlo dynamics. As a result we have subsequently referred to the method as theoretically-informed Langevin dynamics (TILD). Our group had several versions of an internal code that was developed using openMP and later openMPI. However, all of these codes suffered from a fairly rigid structure that was tied to simulations of specific systems associated with specific research projects and an unintuitive input file format. During the first summer of the Coronavirus pandemic, RAR began developing a basic TILD simulation package. The goal was to develop code intended to be used on GPUs from the ground up, creating a more modular, object-oriented structure. Once the utility of the code became clear, its use spread throughout our research group, with substantial additions to the code base being made by all of the co-authors of this work, most notably CT and ZMJ. The addition of the field-theoretic simulations portion of the code began in late 2022 by RAR. ## II Feature overview In this section, we provide a brief overview of the features available in MATILDA.FT. They are described in more detail in the following Sections IV and V. MATILDA.FT can perform two types of simulations: TILD or field-theoretic (FT) simulations. The TILD method is a hybrid particle/field approach where explicit coordinates of the molecules are retained and used to calculate bonded interactions while non-bonded forces (e.g. excluded volume and electrostatics) are calculated by mapping the particles to a density field. The highly coarse-grained nature of the interactions leads to a significant speed-up compared to particle-only implementations of the same models, due to the relatively high particle density in such models. The method is closely related to the theoretically-informed coarse-grained modeling techniques developed in the de Pablo group[20; 21] and other related techniques[22; 23]. In an FT simulation, on the other hand, particle coordinates are integrated out completely and one must compute the statistics of the molecular conformations in external fields generated by the other molecules. FT simulations are especially powerful when the equilibrium properties of high molecular weight polymers are of interest, specifically at a large dimensionless polymer concentration \(C=n/(V/R_{g}^{3})\), where \(n\) is the number of chains in a system, \(V\) is the volume, and \(R_{g}\) is the radius of gyration of a chain. The FT and TILD methods are complementary in this sense, as the particle-based TILD approach is much more efficient at lower polymer concentrations, and TILD can capture some aspects of polymer dynamics that FT cannot. ### TILD Branch Next, we move to the overview of the TILD module. Here, the simulations are performed in the NVT ensemble, in a fully-periodic orthogonal box, either in two or three dimensions. Although MATILDA.FT can perform simulations of free particles, it has been designed specifically to efficiently model systems of polymer melts and solutions. Polymers are modeled as discrete Gaussian chains with monomers that are connected through harmonic springs, and the density of each monomer is spread around its center through a convolution with a unit Gaussian. The strength of the repulsive interactions between the species is mediated through the Flory-Huggins \(\chi\) parameter.
Monomers can be either neutral or charged. If they carry a net charge, then in addition to the repulsive potential, they also interact through Coulombic electrostatic forces. Regardless of their net charge, the monomers can be made polarizable through the use of (classical) Drude oscillators as detailed in Section V.1. User-defined groups of particles are the basic structure to which operations in MATILDA.FT are assigned. These groups can be either static or dynamic. Static groups maintain the same set of members over the entire course of the simulation, whereas dynamic groups periodically update their member lists based on a specific membership criterion. The basic set of operations acting on particle groups are applying the repulsive and electrostatic forces, thermostatting, and integration, in order to propagate coordinates in time. Other operations that can be applied to particles include external/biasing forces, spatial confinement, creation and breaking of dynamic bonds, or performing on-the-fly property calculations. Some of these operations require a distance-based neighbour list; MATILDA.FT provides different styles of neighbour lists depending on the application. The user interacts with the code through input scripts, written in a plain-text format. Before the simulation is started, the entire script is read and appropriate variables and data structures are initialized. The maximum number of time steps and the time step size are parameters specified in the input script. Then the system can undergo an optional equilibration period before entering the production run stage. Different input formats are required by the TILD and FT simulations. For the TILD branch, two files need to be provided. The first one is the main input file, providing information about simulation dimensionality, box size, density grid spacing, and interaction potentials between particle types. It also defines the particle groups and assigns integrators and forces to act on them during each time step. The second file contains information about particle coordinates, types, the molecules they belong to, and, optionally, their charge. Currently, this file can be provided in the format consistent with the LAMMPS data file, in either _atomic_ or _charge_ atom style. To allow the use of data generated by other codes, the initial configuration can also be read from a GSD-format file, developed by the Glotzer Lab [29]. On the other hand, FT simulations only require a single input file, as all the fields and molecular information are initialized from within the input script. A sample input script providing basic TILD functionality is shown in Listing 1. Here, the simulation is performed in 2 dimensions, for 10,000 time steps. The script specifies various log and output frequencies. The variable _pmeorder_ controls the order of interpolation for constructing the density fields. The dimensionless time step is set to 0.005. The configuration data is read from the _input.data_ file. An external force _midpush_ (along the y-axis, with a dimensionless magnitude of 0.5) is applied to all particles. The integrator is chosen to be GJF. The lines beginning with _pair_style_ specify the repulsive interactions between the selected particle types. A more detailed description of all available options can be found in the documentation and in Section VI.
```
Dim 2
max_steps 10001
log_freq 1000
binary_freq 1000
pmeorder 1
traj_freq 1000

delt 0.005
read_data input.data

Nx 65
Ny 65

extraforce all midpush 1 0.5

integrator all GJF

pair_style gaussian 1 1 1.5625 1.0
pair_style gaussian 2 2 1.5625 1.0
pair_style gaussian 1 2 3.00 1.0
```
Listing 1: Example input script ### FT Branch As the development of the FTS features of the code only began relatively recently, the feature set is currently somewhat limited, but will expand significantly in the coming months. Currently, FTSs are limited to mean-field calculations as in self-consistent field theory (SCFT) with linear, discrete Gaussian chain models. The molecules can be of arbitrary blockiness with an arbitrary number of components, and the potentials implemented include the Flory contact repulsion and the Helfand weak compressibility[30]. More details about the interactions between the species are provided in Section III. As detailed below, the key elements of the FTS implementation are three classes: **Potentials**, which govern the non-bonded interactions and act on **Species**. The **Species** class stores the total density of each chemical component and is populated by individual **Molecule** classes. For example, an A-homopolymer/B-homopolymer/AB-diblock copolymer blend would have two species (A and B) and three molecules. ### Units All simulations in MATILDA.FT are performed in reduced units. The base units for TILD simulations are _energy_ \([k_{b}T]\), _mass_ \(m\) of a monomer, and _length_, normalized by the statistical segment size \(b\). For the FT simulations, the unit length scale is the radius of gyration computed for an ideal chain of a specified reference chain length, \(N_{r}\), \(R_{g}=\sqrt{(N_{r}-1)/6}\) (a short worked example is given at the end of this section). In addition, the field variables are also scaled by this reference chain length so that the natural interaction parameters become \(\chi N_{r}\), \(\kappa N_{r}\), etc. By substituting appropriate values, other derived units can be obtained, such as time and force. ### Output and Additional Features The user might want to retrieve the information generated during the simulation. By default, a frame of the trajectory file is written with a specified frequency. This trajectory file is consistent with the LAMMPS traj file style. Since this operation slows down the program execution, an alternative binary file can also be saved. Two binary outputs are written out - one storing particle positions and another storing the density fields. These can be later converted to a human-readable format with the post-processing tools that are distributed with the code. The simulation can be initialized from a configuration file, following the LAMMPS _atomic_ or _charge_ input styles. It can also be resumed from a previously generated trajectory file by using a resume command in the input script. Each job run can also be split into equilibration and production sections by explicitly setting the number of steps for each section in the input script. This writes the data to two separate files, each of which can have its write rate modified independently of other sections. This allows for data to be collected during and post equilibration within a single run, without a need to manually resume the simulation. The trajectory data can also be saved in the GSD format. More details about the specific output options can be found in the documentation.
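As the worked example promised above, the FT length unit follows directly from the formula for \(R_{g}\); the choice \(N_{r}=100\) below is purely illustrative:

```latex
% Illustrative only: N_r = 100 is an arbitrary reference chain length.
R_g = \sqrt{\frac{N_r - 1}{6}}\; b, \qquad
N_r = 100 \;\Rightarrow\; R_g = \sqrt{\frac{99}{6}}\, b \approx 4.06\, b .
```

Lengths in an FT run are then naturally measured in units of this \(R_{g}\), and the interaction strengths enter only through the scaled products \(\chi N_{r}\) and \(\kappa N_{r}\).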
## III Structure of theoretically-informed coarse-grained and field-theoretic models In this section we provide the necessary theoretical background to understand the models used in MATILDA.FT, and the logic of the simulation workflow. The starting ingredients for all of the modeling to be handled by MATILDA.FT are highly coarse-grained models for soft-matter systems. For simplicity, we will describe the basic structure in terms of a simple A-B Gaussian chain diblock copolymer melt, though the generalization to other systems will become apparent below. For a polymer melt with \(n\) polymer chains each containing \(N_{A}+N_{B}=N\) monomers, the microscopic polymer densities are \[\hat{\rho}_{K}(\mathbf{r})=\sum_{j}^{n}\sum_{s}^{N_{K}}\delta(\mathbf{r}-\mathbf{r}_{j,s}), \tag{1}\] where \(K\) is either species A or B, and \(\mathbf{r}_{j,s}\) is the position of the \(s^{\text{th}}\) bead on the \(j^{\text{th}}\) chain. The monomers on each chain are typically connected via harmonic bonds, \[\beta U_{0}=\sum_{j}^{n}\sum_{s}^{N-1}\,\frac{3}{2b^{2}}|\mathbf{r}_{j,s}-\mathbf{r}_{j,s+1}|^{2}, \tag{2}\] where \(b\) is the statistical segment size, and we have assumed equal \(b\) for species A and B. The Flory repulsion is written in one of two equivalent forms[19; 31; 32] depending on whether the model is implemented as a field- or particle-based model. In the particle-based approaches, we make the potential non-local as \[\beta U_{1}=\frac{\chi}{\rho_{0}}\int d\mathbf{r}\int d\mathbf{r}^{\prime}\,\hat{\rho}_{A}(\mathbf{r})\,u_{G}(|\mathbf{r}-\mathbf{r}^{\prime}|)\,\hat{\rho}_{B}(\mathbf{r}^{\prime}), \tag{3}\] where \(u_{G}\) is a unit Gaussian potential, \(u_{G}(r)=(2\pi\sigma^{2})^{-\mathbb{D}/2}e^{-r^{2}/2\sigma^{2}}\), \(\mathbb{D}\) is the dimensionality of the system, and \(\sigma\) controls the range of the interactions. The standard Flory-Huggins model is recovered in the limit \(\sigma\to 0\). The final potential penalizes deviations of the local density from the average[33] \(\rho_{0}=nN/V\), \[\beta U_{2}=\frac{\kappa}{2\rho_{0}}\int d\mathbf{r}\int d\mathbf{r}^{\prime}\,\left[\hat{\rho}_{+}(\mathbf{r})-\rho_{0}\right]u_{G}(\mathbf{r}-\mathbf{r}^{\prime})\left[\hat{\rho}_{+}(\mathbf{r}^{\prime})-\rho_{0}\right], \tag{4}\] where \(\hat{\rho}_{+}(\mathbf{r})=\hat{\rho}_{A}(\mathbf{r})+\hat{\rho}_{B}(\mathbf{r})\) is the total microscopic density. With these ingredients in hand, we can write the partition function as \[\mathcal{Z}=z_{0}\int d\mathbf{r}^{nN}\;e^{-\beta U}, \tag{5}\] where \(z_{0}\) is a prefactor that contains all of the self-energy terms, factors accounting for molecular indistinguishability, and the thermal de Broglie wavelengths. In equilibrium particle-based simulations, one is primarily interested in calculating averages of quantities that can be expressed as functions of the particle coordinates, \(M(\mathbf{r}^{nN})\), as \[\langle M\rangle=\frac{1}{\mathcal{Z}}\int d\mathbf{r}^{nN}\,M(\mathbf{r}^{nN})\,e^{-\beta U}, \tag{6}\] and expressions for the usual thermodynamic quantities of interest, such as the average density, energy, and pressure, can be readily obtained from expressions commonly used in molecular dynamics simulations[34].
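To make the particle-side model concrete, below is a minimal serial C++ sketch of the bonded energy and forces of Eq. 2 for a single chain. It is illustrative only: MATILDA.FT evaluates these terms on the GPU with one thread per particle and, as noted in Section IV, deliberately does not invoke Newton's third law there; the sketch below does use the third law for brevity, works in reduced units (\(\beta=1\)), and omits periodic-boundary wrapping.

```
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;

// Accumulates forces into 'force' (pre-sized to r.size(), zero-initialized)
// and returns beta*U_0 = sum_s (3 / 2b^2) |r_s - r_{s+1}|^2 for one chain.
double harmonic_chain(const std::vector<Vec3>& r, double b,
                      std::vector<Vec3>& force) {
    const double k = 3.0 / (2.0 * b * b);   // spring prefactor in k_B T units
    double energy = 0.0;
    for (std::size_t s = 0; s + 1 < r.size(); ++s) {
        Vec3 d{};
        double d2 = 0.0;
        for (int a = 0; a < 3; ++a) {       // bond vector r_s - r_{s+1}
            d[a] = r[s][a] - r[s + 1][a];
            d2 += d[a] * d[a];
        }
        energy += k * d2;
        for (int a = 0; a < 3; ++a) {       // F_s = -2k d, F_{s+1} = +2k d
            force[s][a]     -= 2.0 * k * d[a];
            force[s + 1][a] += 2.0 * k * d[a];
        }
    }
    return energy;
}
```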
For field-theoretic approaches, we use a local potential but render the densities non-local by distributing the point particles over a Gaussian distribution \(h(r)=(2\pi a^{2})^{-\mathbb{D}/2}e^{-r^{2}/2a^{2}}\), and the Gaussian-distributed particle density is given by \[\tilde{\rho}_{K}(\mathbf{r})=\int d\mathbf{r}^{\prime}\;h(\mathbf{r}-\mathbf{r}^{\prime})\,\hat{\rho}_{K}(\mathbf{r}^{\prime})=[h*\hat{\rho}_{K}](\mathbf{r}), \tag{7}\] where the final equality introduces our short-hand notation for a convolution integral. For the choice \(\sigma^{2}=2a^{2}\), we can exactly re-write[19; 31; 32] the non-bonded potentials in Eqs. 3 and 4 as \[\beta U_{1}=\frac{\chi}{\rho_{0}}\int d\mathbf{r}\;\tilde{\rho}_{A}(\mathbf{r})\,\tilde{\rho}_{B}(\mathbf{r}), \tag{8}\] and \[\beta U_{2}=\frac{\kappa}{2\rho_{0}}\int d\mathbf{r}\;[\tilde{\rho}_{+}(\mathbf{r})-\rho_{0}]^{2}. \tag{9}\] Using known Gaussian functional integrals [13; 14], one can then exactly transform the particle partition function in Eq. 5 to a field-theoretic one of the form \[\mathcal{Z}=z_{1}\int\mathcal{D}\{w\}\;e^{-\mathcal{H}[\{w\}]}, \tag{10}\] where \(z_{1}\) contains the constants from \(z_{0}\) as well as the normalizing factors from the Gaussian functional integrals, \(\{w\}=\{w_{+},w_{AB}^{(+)},w_{AB}^{(-)}\}\) is the set of chemical potential fields, and \(\mathcal{H}\) is the effective Hamiltonian governing the weights of the microstates. For the diblock copolymer model considered here, \(\mathcal{H}\) takes the form \[\begin{array}{ll}\mathcal{H}&=\frac{C}{\chi N_{r}}\int d\mathbf{r}\left([w_{AB}^{(+)}(\mathbf{r})]^{2}+[w_{AB}^{(-)}(\mathbf{r})]^{2}\right)\\ &\quad+\frac{C}{2\kappa N_{r}}\int d\mathbf{r}[w_{+}(\mathbf{r})]^{2}-iC\int d\mathbf{r}\;w_{+}(\mathbf{r})\\ &\quad-n_{D}\log Q_{D}[\mu_{A},\mu_{B}],\end{array} \tag{11}\] where the first line contains the potential fields that arise due to the Flory interaction[19], the second line contains the terms that arise from the Helfand potential, and the final line contains the excess chemical potential of the polymers in a given field. The potential fields \(\mu_{A}\) and \(\mu_{B}\) experienced by monomers A and B are computed using \[\begin{split}\mu_{A,c}(\mathbf{r})&=\ \left\{i(w_{+}+w_{AB}^{(+)})-w_{AB}^{(-)}\right\}(\mathbf{r})/N_{r}\\ \mu_{B,c}(\mathbf{r})&=\ \left\{i(w_{+}+w_{AB}^{(+)})+w_{AB}^{(-)}\right\}(\mathbf{r})/N_{r},\end{split} \tag{12}\] with the smeared potential fields appearing in Eq. 11 calculated as \(\mu_{K}(\mathbf{r})=[h\ast\mu_{K,c}](\mathbf{r})\). While the particle implementation can report qualitatively realistic dynamic quantities, the FT implementation is strictly interested in equilibrium quantities. Equilibrium averages are typically expressed as functionals of the potential fields and calculated as \[\left\langle M\right\rangle=\frac{1}{\mathcal{Z}}\int\mathcal{D}\{w\}\,M[\{w\}]\,e^{-\mathcal{H}}. \tag{13}\] Since \(\mathcal{H}\) is typically complex-valued, sampling the integral over the field configurations is non-trivial; this is typically accomplished through the mean-field approximation, leading to SCFT, through complex Langevin (CL) sampling[35, 36, 13], or Monte Carlo sampling[37]. To update the chemical potential fields in either a CL or an SCFT calculation, the effective "forces" on the fields must be obtained as functional derivatives of \(\mathcal{H}\).
These can be obtained through explicit differentiation, \[\mathbf{F}_{w}(\mathbf{r})=-\frac{\delta\mathcal{H}}{\delta w(\mathbf{r})}, \tag{14}\] where \(w(\mathbf{r})\) is one of the three fields \(w_{+}(\mathbf{r}),w_{AB}^{(+)}\), or \(w_{AB}^{(-)}\). In a CL simulation, the fields are sampled using an overdamped Langevin equation, \[\frac{\partial w}{\partial t}=\lambda_{w}\mathbf{F}_{w}(\mathbf{r})+\eta(\mathbf{r},t), \tag{15}\] where \(\eta(t)\) is a stochastic noise term chosen to satisfy the fluctuation-dissipation theorem[38, 13]. An algorithm that drives the system to an SCFT solution is easily obtained from Eq. 15 by simply setting the noise term to zero. ## IV Parallel algorithms, data structures and object-oriented programming with CUDA and C++ The unique feature which sets MATILDA.FT apart from other popular MD codes, such as LAMMPS or NAMD, is that it has been designed from the beginning to execute on CUDA-enabled GPUs. The code is intended for highly coarse-grained models where the non-bonded forces can be evaluated using density fields rather than by summing over neighboring pairs of particles. It is written in the dedicated CUDA/C++ programming language in order to fully harness the parallel capabilities of the GPU. Its model of parallelization differs from the conventional CPU domain decomposition. Whereas on the CPU groups of particles are assigned to different processors based on their spatial arrangement, the GPU parallelization occurs at the level of individual particles or grid locations, where each thread is responsible for processing instructions for the selected particle/grid point. This is handled by assigning a separate thread to each particle, filtering on the thread IDs in the first statement of each kernel call, as provided in Listing 2.
```
int listind = blockIdx.x * blockDim.x + threadIdx.x;
if (listind >= ns)
    return;
```
Listing 2: Kernel statement to assign a thread to a particle id. MATILDA.FT also makes extensive use of the Thrust library, which is an extension of the C++ Standard Template Library (STL) to work with GPUs [39]. The Thrust library provides dedicated storage containers (equivalent to STL vectors in C++), which enable easier host-device communication and avoid the requirement for explicit _cudaMemcpy_ calls. The Thrust library also makes available dedicated parallel algorithms to operate on these containers and achieve better performance. In addition, by avoiding complicated host-device memory transfer syntax, the use of Thrust makes it easy for those who do not have much GPU-programming experience to understand and expand the MATILDA.FT source code. ### Neighbour lists Neighbour lists keep track of the other particles present within a certain distance of the center of the particle of interest. This information is required, for example, when modeling reactive particles that need to search for reaction partners. In this way, each time the force is applied, a comparison of all possible inter-particle distances, an \(\mathcal{O}(N_{tot}^{2})\) operation, where \(N_{tot}\) is the total number of particles, is avoided. Instead, an \(\mathcal{O}(N)\) scaling is achieved during each function call, since only pre-computed neighbours are checked for possible interactions. The process of creating the neighbour list is based on cells into which the simulation box is divided, and thus it also scales as \(\sim N\). Each neighbour list requires two parameters: \(r_{cutoff}\) and \(r_{skin}\).
They correspond to the maximum range of the force using the neighbour list, and the skin radius (\(r_{skin}>r_{cutoff}\)), respectively. The skin radius is the actual radius of the sphere (or a disc in 2D) within which the search for possible neighbours occurs. By specifying the skin radius, some particles which are initially beyond the range of the force are included in the neighbour list, and thus the list can be reused over multiple time steps. The neighbour list can be rebuilt with a specified frequency, or, alternatively, an automatic trigger can be used. Currently, list rebuilding is triggered when a particle moves a distance larger than half the skin radius. The process of neighbour list building is divided into two stages. In the first phase, particle "binning" takes place, where each particle in the group gets assigned to one cell in the box. Finding the corresponding cell is trivially parallelized over all particles, scaling as \(\mathcal{O}(N)\), whereas the assignment to the cell requires an atomic operation. Each cell stores the IDs of its member particles, and keeps track of their total count. Subsequently, the program loops over each particle in the group and checks the distance to other particles present in the cells overlapping with \(r_{skin}\). Each cell is chosen to have a side length equal to half of the specified skin radius, resulting in 125 cells to search in 3D, and 25 in 2D. Only the particles within the specified radius are included as the particle's neighbours. The full neighbour list is stored as a 1D Thrust device vector, so no host-GPU data transfer is required. Another 1D device vector stores the cell contents and their respective occupation counts. This method uses a predefined maximum cell capacity. If the specified capacity is too small, it gets automatically adjusted during the simulation run. The full neighbour list stores all neighbours of each particle, effectively double counting each particle pair. A "half-list" stores, for each particle, only the neighbours with index lower than its own index. Other, specialized lists are also available, and interface with specific particle-particle operations - for example, to aid in dynamic bonding, where each donor particle only stores information about the acceptors present within the skin radius. In a future release, other, more efficient schemes of neighbour lists will be implemented. For example, each particle can be made responsible for the region covering only half of its neighbourhood, such that, when combined, the particles cover the search space in a minimally overlapping manner. ### Bonded interactions Bonded interactions are present whenever polymer chains are modeled. Currently, they represent the harmonic springs connecting adjacent monomers of the same molecule. The calculation of the resulting forces has been parallelized to be performed on the GPU, and Newton's third law is _not_ used, to avoid the use of atomic operations. The contribution of the bonded interactions is calculated individually for each particle by assigning it to a separate thread. MATILDA.FT can also report the resulting bonded energy and the contribution the bonded interactions make to the pressure virial coefficient. These calculations are performed on the CPU. Two common angle potentials are also implemented in MATILDA.FT.
To enable simulations of discrete worm-like chains, we have the cosine form \[u_{wlc}(\theta_{ijk})=\lambda\left[1+\cos(\theta_{ijk})\right], \tag{16}\] where \(\lambda\) controls the stiffness of the potential and \(\theta_{ijk}\) is the _inside_ angle between particles \(i,j\) and \(k\). The second potential implements harmonic angles as \[u_{h}(\theta_{ijk})=k_{\theta}\left(\theta_{ijk}-\theta_{0}\right)^{2}, \tag{17}\] with spring constant \(k_{\theta}\) and equilibrium angle \(\theta_{0}\). The angle styles are specified in the input script along with the type ("wlc" or "harmonic"), followed by the force constant (for both styles) and the equilibrium angle if harmonic angles are used. ### Long-range interactions Long-range interactions include the repulsive interactions mediated by the Flory-Huggins \(\chi\) parameter, and the electrostatic forces acting between the charged monomers. The distinctive feature of MATILDA.FT is the way in which it handles these interactions. While bonded interactions use explicit coordinates to calculate inter-particle distances, long-range interactions use the mass/charge density field to compute the resulting forces. In this process a Particle-to-Mesh (PM) method is used. In this scheme, the box is divided into a discrete grid, with the number of grid points in each direction being a user-defined quantity. At the beginning of the simulation, a Fourier-space representation of the inter-particle potential is calculated using the Fast Fourier Transform (FFT), and is stored for the rest of the simulation. Then, at each time step, every particle assigns its density contribution to nearby grid points, using a spline interpolation scheme with the weights given in the appendix of Ref. [40]. The order of the interpolating spline can be chosen from 1 to 4, with higher order interpolation requiring more computation. Regardless of the form of the pair potential \(u(r)\), forces are given in real space by \[f(\mathbf{r})=-\int d\mathbf{r}^{\prime}\nabla u(\mathbf{r}-\mathbf{r}^{\prime})\rho(\mathbf{r}^{\prime}). \tag{18}\] The convolution in Eq. 18 is evaluated in Fourier space using FFTs, where it becomes a simple multiplication. Then an inverse FFT is used to transform the forces back into real space, where the forces are interpolated back onto the particle centers. The process is illustrated schematically in Figure 1. The current pair potentials implemented in MATILDA.FT include Gaussian forms as well as the nanoparticle-nanoparticle and nanoparticle-monomer forms demonstrated in previous work by some of us[19]. ### Liquid Crystalline Systems We model liquid crystalline interactions through a modified Maier-Saupe (MS) potential that is a discrete version of the McMillan model[41]. In our implementation, the MS interactions involve two pairs of particles: one of each pair is the "center" of the interaction \(i\), and the other becomes a partner particle \(j\) that is used to define the local molecular orientation on particle \(i\), \(\mathbf{u}_{i}=\frac{\mathbf{r}_{i}-\mathbf{r}_{j}}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}\) (see schematic in Fig. 9 below). The local orientation vector is used to define an orientation tensor for each particle \(\mathbf{S}_{i}=\mathbf{u}_{i}\mathbf{u}_{i}-\mathbf{I}/\mathbb{D}\), which is mapped onto an orientation field similar to the density fields, \[\mathbf{S}(\mathbf{r})=\sum_{i}\mathbf{S}_{i}\,\delta(\mathbf{r}-\mathbf{r}_{i}). \tag{19}\] In Equation 19, the sum is over the LC centers.
The orientation field \(\mathbf{S}(\mathbf{r})\) is then used to compute the Maier-Saupe potential energy[42], \[\beta U_{MS}=-\frac{\mu}{\rho_{0}}\int d\mathbf{r}\int d\mathbf{r}^{\prime}\;\mathbf{S}(\mathbf{r}):\mathbf{S}(\mathbf{r}^{\prime})\,u_{G}(|\mathbf{r}-\mathbf{r}^{\prime}|), \tag{20}\] where \(\mu\) is the Maier-Saupe potential parameter and \(u_{G}(r)\) is the Gaussian potential that renders the interactions non-local. The forces are derived by explicit differentiation and are presented in the documentation of the code, and, as we show below in Section VII, this model captures both nematic and smectic A phases. Particles that carry an orientation vector \(\mathbf{u}_{i}\) are specified in an additional input file that is similar in nature to the lists of bonded partners. When specifying that the MS potential is to be used, the name of the additional input file is also provided; contained in this file is a list of pairs of particles \(i\) and \(j\) used to define the orientation vector associated with particle \(i\). This implementation allows for easy creation of either main-chain liquid crystalline polymers or side-chain liquid crystalline polymers, a detailed study of which will be the subject of a forthcoming publication. Furthermore, by making one of the end sites within an LC mesogen a different site type, one can indirectly control anchoring conditions at phase boundaries by making this other type more or less repulsive with a particular species in the nearby phase. ## V Methods In this section, we describe in more detail how selected functionalities are implemented in MATILDA.FT. When possible, we emphasize how the algorithms presented below were optimized to take advantage of the GPU architecture to accelerate performance. The summary of all available functionalities and parameters can be found in the documentation. ### Drude oscillators Polarization can play an important role in the phase behaviour of polymer solutions, especially in a biological context. Molecular polarizability influences polymer solubility, and it can also modify how polymer chains interact with salt ions present in the environment. In MATILDA.FT, polarizability effects are introduced through the use of classical Drude oscillators. In this approach, a "Drude particle" is attached to the parent particle via a harmonic spring with stiffness \(k_{D}\) and zero equilibrium length. This Drude particle is assigned a partial charge \(\delta q_{D}\) and the partner particle \(-\delta q_{D}\), such that the net charge of the two-particle pair remains unchanged. The magnitude of the spring constant \(k_{D}\) can be related to the molecular polarizability, with polarizability decreasing with increasing stiffness. The Drude particle gets assigned a small mass, so that it can be integrated with the other particles using standard equations of motion. This simplification circumvents the issue of treating polarizability effects on the quantum-mechanical level, while still being able to reproduce spatial variations in polarization. Drude particles do not participate in excluded volume interactions, and thus the only forces acting on them are electrostatic in nature. We note that our implementation is different from those typically used in atomistic or more fine-scale coarse-grained models, where the Drude particle is typically thermostatted independently at a low temperature, enabling the polarizability to be estimated with the classical expression \(\alpha=\delta q_{D}^{2}/k_{D}\).
Since our charges are distributed over a unit Gaussian, this expression does not apply, and we have parameterized our effective dielectric constant as a function of the various parameters of the Drude oscillators (\(q_{D},k_{D}\), and the spread of the Gaussian, \(\sigma\)), which is presented below in Example Systems. Figure 1: Schematic illustration of the Particle-to-Mesh (PM) scheme. Specifically, shown here is a first-order spline interpolation, where the particle density is mapped to the two nearest grid points in each dimension. The same spline weights are used to map the forces back to the particles. ### Dynamic bonding and Lewis acid-base pairs In addition to static bonds, which are initialized at the beginning of the simulation and remain unchanged throughout its course, MATILDA.FT allows for dynamic bonds to be formed between particles during the simulation run. Dynamic bonds are created or destroyed based on a user-defined acceptance criterion. Currently, a Metropolis-Hastings acceptance criterion is used, with an optional shift in the reaction energy. The energy of the bond is calculated based on the extension of the harmonic spring assigned to the pair of bonded atoms. Dynamic bonds are created between a donor and an acceptor particle. Whether a particle acts as a donor or an acceptor is specified in an external text file, which maps these roles onto particle indices. To make the process computationally efficient while executing on a GPU, MATILDA.FT utilizes a dedicated neighbour list style. Only the donor particles are allowed to initiate the process of bond making and breaking. They store information only about the acceptor particles present in their vicinity. Two separate lists keep track of bonded and free donor particles. During each bonding/unbonding step, two kernels are launched in a randomly chosen order. One kernel attempts to break some of the existing bonds, while the other creates new ones. These kernels are only dispatched using the indices of the relevant donor particles - free donors for the bonding kernel, and bonded donors for the bond-breaking one. Dynamic bonds can also be used for simulating induced dipoles, for example Lewis acid-base pairs. This requires only a minimal change in the input script, namely, specifying the magnitude of the charge to be assigned to each of the bonding partners. Partial charges of opposite sign get assigned to donors and acceptors upon binding, so the net charge of the system remains constant. ### Hydrodynamics using DPD Thermostat Random noise is introduced into the simulation to account for the lost degrees of freedom that are integrated out during the coarse-graining process. Although Brownian dynamics can be used to capture the behaviour of polymer solutions, the resulting equations of motion do not conserve local momentum. Thus, they do not reproduce correct hydrodynamics. An alternative method, available in MATILDA.FT, which recovers Navier-Stokes hydrodynamic behaviour, is Dissipative Particle Dynamics (DPD) thermostatting. A brief overview of the method is provided below. For a more detailed description, we refer the reader to [43]. In this method, three types of forces act on particles: a conservative force \(\mathbf{f}^{c}\), a frictional (dissipative) force \(\mathbf{f}^{d}\), and a random force \(\mathbf{f}^{r}\). The conservative force corresponds to the Gaussian repulsions, and to the electrostatic interactions. The dissipative and random forces are pairwise additive and thus the local momentum is conserved.
The dissipative force on particle \(i\) is given by Eq. 21, \[\mathbf{f}_{i}^{d}=-\frac{1}{2}\frac{\sigma^{2}}{k_{b}T}\sum_{j}(\omega(|\mathbf{r}_{ij}|))^{2}(\mathbf{v}_{ij}\cdot\mathbf{r}_{ij})\,\mathbf{r}_{ij}, \tag{21}\] whereas the random force is given by \[\mathbf{f}_{i}^{r}=\sigma\sum_{j}(\omega(|\mathbf{r}_{ij}|))\gamma_{ij}\frac{1}{\sqrt{\Delta t}}\hat{\mathbf{r}}_{ij}, \tag{22}\] where \(\gamma_{ij}=\gamma_{ji}\) is a random variable with zero mean and unit variance, and \(\hat{\mathbf{r}}_{ij}\) indicates a unit vector in the direction of \(\mathbf{r}_{ij}\). As DPD involves the use of range-limited forces, it requires a neighbour list. Since the interactions are pairwise-additive and symmetric, only a "half" list is needed, where each particle only stores the information about the particles with an index lower than its own. This reduces the amount of required computation, and ensures optimal GPU thread utilization. The DPD thermostat should be used with the Velocity Verlet integrator. Code performance is affected by the choice of the value of \(\sigma\) and the time step. Strategies for choosing these values and the reasoning behind them are outlined in the documentation. ## VI Code structure, extensibility, and flexibility This section will give a brief overview of the organization of the code and how its different parts cooperate with each other. ### Input Script The first step of the simulation is selecting the method to be used, either TILD or FT. This is passed as a command line argument when the program is called. Using ./MATILDA.FT -particles will run the TILD simulation, whereas the -ft option will initialize an FT run. Additional command line arguments, like the name of the input script to be used, are described in the documentation. Two files are required for a TILD simulation. The main input file is responsible for setting up simulation parameters, such as the dimensionality of the system, size of the simulation box, grid density, time step size, and the number of time steps to perform. The input script also defines the interaction potentials between selected atom types (Potentials), and the parameters used for calculating electrostatic forces - the Bjerrum length and the charge spreading length (currently uniform for the entire system; this will be allowed to vary between different types of particles in a future release). The same file also specifies particle groups, along with the corresponding neighbour lists, integrators, and additional forces to operate on selected particles. The second (data) file provides the initial positions of the particles, their types, and the molecules they belong to. This data file also initializes the static bonds and angles used in the simulation. Currently, the initial atom configuration can be read either from the LAMMPS data file (in angle or charge style) or from a GSD file. ### Code Organization The code takes advantage of the C++ object-oriented programming approach. It is divided into classes, which interact with each other and exchange data as needed. Each class is responsible for handling a particular functionality. The base class serves as an interface used to interact with other parts of the code. Then specialized sub-classes are derived from the base class, to provide a specific functionality. This organization makes extending the code to include new functionalities a relatively easy and straightforward process, with simple integration of the new components into the existing code (Section VI.4). For example, the NeighbourList class is responsible for constructing and storing the neighbour list for the selected group of particles.
For example, the NeighbourList class is responsible for constructing and storing the neighbour list for the selected group of particles. This neighbour list is then used by the additional forces (created as a subclass of the ExtraForce class) to accelerate the operations performed on this group. Depending on the nature of the additional force, specialized neighbour lists can be used in order to further accelerate the performance. A diagram of the class organization is shown in Fig. 2. Below, we provide a brief description of selected classes. The outline of the code structure, along with the detailed description of all class functionalities and options, is provided in the documentation.

#### VI.2.1 Global variables space

The global variable space holds the main data structures used in the simulation. It stores the global arrays containing particle types, positions, forces acting upon them, velocities, and static bonds, arranged according to the particle ID. Reading of the input script is also handled at this level. Before the beginning of the simulation, these arrays are initialized and then periodically updated by other classes during the time-stepping process. In a future release these structures will be placed in a separate Box class, to closely resemble the organization of the FT branch of the code.

#### VI.2.2 Group Class

The **Group** base class provides data structures which store the indices of the member particles. It sets device-specific variables (BLOCK and GRID sizes) that are used in kernel calls dispatched on the group particles. All forces and neighbour lists in MATILDA.FT operate on specific groups. Pointers to each group object are stored in a globally accessible vector array. Each group is assigned a unique name, which is used to pass its pointer to other classes. Groups can be static or dynamic. Static groups are initialized at the beginning of the simulation and their content remains unchanged over the simulation course. Dynamic groups, on the other hand, periodically check and update their members, based on the specified membership criterion. A special group, named _"all"_, is initialized by default at the beginning of the simulation, and contains all particles in the simulation box. Currently, two static group types are available - grouping by particle **type** or by its global **id**. Type-based groups collect all the particles with the same type (as specified in the input.data). Id-based groups require the user to provide an external plain text file which contains the indices of the particles to be included in the group. The currently available dynamic group style, "regions", allows the user to define a separate region in space (along all axes or only a specific axis). Particles found within that region get assigned to the group. These can be easily extended to incorporate user-defined groups, see Section VI.4 below.

#### VI.2.3 Neighbour List Class

Like all operations in MATILDA.FT, neighbour lists act on groups. For the purpose of neighbour list building, the simulation box is assigned a grid which divides it into discrete, non-overlapping cells. Interfacing with a specific group of particles and the division of the simulation box into non-overlapping cells is handled by the **NList** base class.

Figure 2: Schematic outline of the code structure.

Building up on this basic functionality, more specialized sub-classes of neighbour lists are created. The simplest one, _distance_, is the "full" neighbour list, which stores the entire information about particle proximity, and thus double-counts each pair of neighbours. A slightly more elaborate sub-class, _half_distance_, stores only half of this information: each particle only keeps track of the particles which have a lower id than its own.
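A minimal CPU-side sketch of the half-list idea follows; it is illustrative only, since the production code builds its lists on the GPU using the cell grid described above.

```cpp
// Sketch: building a "half" neighbour list, where particle i only stores
// neighbours j with j < i, so each pair is counted exactly once.
// Illustrative CPU version; the production code works on the GPU.
#include <cstdio>
#include <vector>

int main() {
    const double rcut = 1.0, rcut2 = rcut * rcut;
    const std::vector<double> x = {0.0, 0.5, 2.0, 2.4};  // 1D positions
    const int n = static_cast<int>(x.size());
    std::vector<std::vector<int>> half_list(n);
    for (int i = 1; i < n; ++i)
        for (int j = 0; j < i; ++j) {  // only lower indices are stored
            const double dx = x[i] - x[j];
            if (dx * dx < rcut2) half_list[i].push_back(j);
        }
    for (int i = 0; i < n; ++i)
        for (int j : half_list[i]) std::printf("pair (%d, %d)\n", i, j);
}
```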
This method also avoids unnecessary calculations when pairwise interactions are present. Otherwise, a branching statement needs to be included to perform the computation on only one member of the particle pair, or the same calculation is performed for both members. Unfortunately, in the GPU model of execution diverging threads are serialized: a branching statement forces all the threads to wait until the other ones have completed the first branch. This waste of resources is avoided by using an appropriate neighbour list. A dedicated neighbour list has been designed specifically to support the dynamic bonding functionality. Briefly, only the donor particles, which initialize the bonding, are assigned neighbours. These neighbours are then filtered to only include the acceptor particles, as only a donor-acceptor combination can create a valid bonded pair. By pre-calculating the possible pairs at this step, these checks can later be avoided in the kernel calls (preventing possible thread divergence).

#### VI.2.4 ExtraForce Class

In addition to electrostatic and repulsive interactions, selected groups of particles can be subject to additional user-defined forces. These are specified in the input script using the _extraforce_ command. The **ExtraForce** base class is responsible for assigning the force to the specific group of particles, and ensuring it is applied to this group at the specified time steps (either every step or at a user-defined frequency). In addition, some range-limited forces require a neighbour list to restrict the search space only to the particles present within the specific range. Currently available forces are:

* A **wall** force, which enables the particles to be confined within a specific region or to simulate surface interactions. The user can choose from available wall-particle potentials, or specify their own form of interaction by extending the source code (a minimal sketch of such a restraint is given after this list).
* A **Langevin** noise force, which adds random noise to the selected group of particles. It can be used with the Velocity Verlet integrator to simulate Brownian dynamics.
* A force that pushes the selected group of particles towards the center of the box along a specified axis.
* **Dissipative Particle Dynamics (DPD)**. This sub-class of ExtraForce provides an alternative way to introduce random noise into the simulation, and should be used along with the Velocity Verlet integrator. In contrast to the Langevin thermostat, however, it is pairwise additive and conserves local momentum. Thus it is capable of correctly reproducing the hydrodynamic behaviour of the system. Since the force acts over a limited range, a neighbour list needs to be constructed for the particles of interest.
* A **dynamic bonding** force, which can be used to introduce dynamic bonds in the simulation. Whereas static bonds are initialized at the beginning of the simulation and remain unchanged, dynamic bonds can be formed and broken according to the specified acceptance criterion. This force requires a specialized neighbour list (_bonding_) which has been designed to optimize the required computations.

More details about the ExtraForce class can be found in the documentation.
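As referenced in the first item above, here is a minimal sketch of a one-dimensional harmonic wall restraint; the functional form and all names are assumptions made for illustration, not the wall potentials actually shipped with the code.

```cpp
// Sketch: a harmonic restoring force that confines particles to the slab
// x_lo <= x <= x_hi. Illustrative only; the actual wall potentials differ.
#include <cstdio>
#include <vector>

int main() {
    const double k_wall = 10.0, x_lo = 0.0, x_hi = 5.0;
    const std::vector<double> x = {-0.2, 2.5, 5.3};  // particle positions
    std::vector<double> fx(x.size(), 0.0);
    for (std::size_t i = 0; i < x.size(); ++i) {
        if (x[i] < x_lo) fx[i] += k_wall * (x_lo - x[i]);       // push right
        else if (x[i] > x_hi) fx[i] -= k_wall * (x[i] - x_hi);  // push left
    }
    for (std::size_t i = 0; i < x.size(); ++i)
        std::printf("x = %5.2f  f = %6.2f\n", x[i], fx[i]);
}
```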
#### VI.2.5 Compute Class

The _Compute_ class is responsible for performing on-the-fly calculations of properties of the system. This enables the user to monitor the evolution of the system in real time and also saves time spent on post-processing.

* A **structure factor** compute, which provides the average static structure factor of the particle system. The static structure factor, given by \[S(k)=\frac{1}{N}\langle\hat{\rho}_{k}\hat{\rho}_{-k}\rangle,\] (23) is defined as the correlation function of the system density represented in Fourier space. The density is given by \(\rho(r)=\sum_{i=1}^{N}\delta(r-r_{i})\), and in Fourier space it becomes \(\hat{\rho}_{k}=\sum_{i=1}^{N}e^{ik\cdot r_{i}}\)[44]. The compute performs the calculation and writes the data to an external file at a user-specified frequency.
* A **chemical potential** compute, which can be used to calculate the chemical potential of a given species in the system using a chain deletion method. The user provides a range (based on global molecule ID) of molecules to operate on. These molecules are removed at random from the system, and the change in energy upon removal is calculated. This data is written to an external file.

#### VI.2.6 Integrator Class

In order to solve the equations of motion and propagate the particle coordinates in time, numerical integration is required. In MATILDA.FT, three different numerical algorithms are available, briefly described below:

* **Velocity-Verlet (VV)**. Needs to be coupled with an additional thermostat. Available thermostats include Langevin noise and Dissipative Particle Dynamics, both of which are part of the ExtraForce class.
* **Euler-Maruyama (EM)**. Generates the thermal noise internally during the update and serves as the simplest stochastic integration scheme to implement.
* **Gronbech-Jensen and Farago (GJF)**. Generates thermal noise internally during the update; we find that this algorithm allows time steps up to 10x larger than the EM algorithm with no loss of accuracy or stability.

### FTS Implementation

#### VI.3.1 Class structure

As the FTS branch was begun more recently, many planned features are still in development. There is also a difference in class organization between the older (TILD) and the newer (FT) branches. The classes that comprise an FT simulation are more tightly integrated, and the scope of object-oriented organization is larger, as compared to the TILD branch, which still uses global variables. A field-theoretic simulation lives in an **FTS_Box** class, which contains three key classes: **Potentials**, **Molecules**, and **Species** (see Figure 3 for a graphical outline of the FT branch organization). The **Potentials** class performs all of the functions that are related to the various non-bonded interactions, including updating the potential fields associated with a particular interaction. The densities that show up in the effective forces are taken from the **Species** class, which serves as a container for these densities. **Species** generates the unsmeared chemical potential fields, \(\mu_{K,c}(\mathbf{r})\), by looping over the interaction potentials and accumulating the relevant potential fields. Next, the **Molecules** class takes these potential fields, applies any density smearing that may be necessary, and computes both the center and smeared density fields. The _smeared_ density fields are then accumulated into the relevant **Species** class. The general flow of the code is summarized in Figure 4.

#### VI.3.2 Field update schemes

Currently, two schemes have been implemented to update the potential fields, in order to evolve in time equations of motion such as Eq. 15.
The straightforward explicit Euler-Maruyama (EM) integration scheme discretizes the equation in time and uses the forces at the current time to estimate the field configurations at a future time as \[w^{t+\delta t}(\mathbf{r})=w^{t}+\delta t\,\lambda_{w}F_{w}^{t}(\mathbf{r})+\sqrt{2\delta t\lambda_{w}}\zeta_{t}(\mathbf{r}), \tag{24}\] where \(\delta t\) is the size of the time step and \(\zeta_{t}(\mathbf{r})\) is purely real Gaussian noise with unit variance that is uncorrelated in both space and time. As mentioned above, simply neglecting the noise term converts this algorithm to one that drives the system to a mean-field solution. The other algorithm that is implemented is a 1st-order, semi-implicit updating scheme, which has been shown to allow for time steps significantly larger than allowed by the EM scheme[31; 38; 45].

Figure 4: Outline of an FTS simulation as implemented in MATILDA.FT. The termination condition could be convergence to within a prescribed tolerance in SCFT or reaching the maximum desired number of time steps in a CL simulation.

Figure 3: Basic actions and roles of the FTS classes in MATILDA.FT.

In this approach, one derives an approximate expression for the force \(F_{w}^{t,lin}(\mathbf{r})\) that is linear in the potential field. In real space, these expressions take the form of a convolution \[F_{w}^{t,lin}(\mathbf{r})=\int d\mathbf{r}^{\prime}\;A_{w}(\mathbf{r}-\mathbf{r}^{\prime})\,w(\mathbf{r}^{\prime}), \tag{25}\] where \(A_{w}(\mathbf{r})\) is the linear coefficient. As a result of this convolution, the 1S updating scheme is most effectively handled in Fourier space, where we have \[F_{w}^{t,lin}(\mathbf{k})=A_{w}(\mathbf{k})\,w(\mathbf{k}). \tag{26}\] To effect the semi-implicit scheme, Eq. 24 is written in Fourier space and modified by subtracting the linear term at \(t+\delta t\) and adding it at \(t\), giving \[w^{t+\delta t}(\mathbf{k})=\delta t\,\lambda_{w}\left[F_{w}^{t}(\mathbf{k})+A_{w}(\mathbf{k})\,w^{t}(\mathbf{k})-A_{w}(\mathbf{k})w^{t+\delta t}(\mathbf{k})\right]+w^{t}(\mathbf{k})+\sqrt{2\delta t\lambda_{w}}\zeta_{t}(\mathbf{k}). \tag{27}\] We note that \(\zeta_{t}(\mathbf{k})\) is generated as a spatially uncorrelated noise field in real space that is explicitly Fourier transformed. Equation 27 can be readily solved for the field at \(t+\delta t\), giving \[w^{t+\delta t}=\frac{w^{t}+\delta t\,\lambda_{w}\left[F_{w}^{t}+A_{w}\,w^{t}\right]+\sqrt{2\delta t\lambda_{w}}\zeta_{t}}{1+\delta t\lambda_{w}\,A_{w}}, \tag{28}\] where we have suppressed the wavevector dependence for brevity. The functional form of the linear coefficients \(A_{w}\) generally contains one or two contributions that have a stabilizing effect on the time integration[45, 38]. The first arises from the terms that are quadratic in the fields in \(\mathcal{H}\) (e.g., the first two lines in Eq. 11); this term is included for every type of interaction potential. During the initialization of an FT simulation, the **Potentials** class adds this relevant term to \(A_{w}\). The second contributions are the linear approximations of the density operators, which involve convolutions of Debye functions with the potential fields; these contributions are handled by the **Molecules** class during initialization.
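To make the time stepping concrete, the following is a minimal sketch of the explicit EM update of Eq. 24 on a small one-dimensional grid; the force evaluation is a placeholder linear restoring force, not the actual MATILDA.FT force calculation.

```cpp
// Sketch of the Euler-Maruyama field update of Eq. 24 on a 1D grid.
// F_w is a placeholder linear restoring force for illustration only.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const double dt = 0.01, lambda_w = 1.0;
    std::mt19937 rng(0);
    std::normal_distribution<double> zeta(0.0, 1.0);  // unit-variance noise
    std::vector<double> w(64, 1.0);
    for (int step = 0; step < 1000; ++step)
        for (double& wi : w) {
            const double F = -wi;  // placeholder for the actual force F_w
            wi += dt * lambda_w * F
                + std::sqrt(2.0 * dt * lambda_w) * zeta(rng);
        }
    std::printf("w[0] = %.4f\n", w[0]);
    // Dropping the noise term would instead relax w to a mean-field solution.
}
```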
#### VI.3.3 Molecule Types

Currently, the only implemented molecule type is a linear, discrete Gaussian chain with an arbitrary number of blocks. This class handles the calculation of the chain propagators, and during initialization the code automatically checks whether the molecule is symmetric, to avoid calculating the complementary propagator if possible. The other key step taken in the initialization is to accumulate the relevant Debye-function-like contributions to the linear coefficients associated with each potential. From previous studies [45, 38], not all terms that show up in the precise linear expansion of the force are stabilizing; to that end, we do not include the Debye terms in the \(w_{AB}^{(-)}\) field, but they are included in the \(w_{+}\) and \(w_{AB}^{(+)}\) fields.

### Extensibility

MATILDA.FT has been designed to be easily extensible by other users, according to their specific needs. The process of expanding the source code is simplified thanks to the division into classes, consistent with the C++ philosophy of object-oriented programming. Due to this organization, the user only needs to understand how the base class interfaces with other parts of the program, and does not need to rewrite the entire logic. Simple functionalities can easily be added by inheriting the capabilities of the provided base classes or modifying the existing ones.

```
 1 /* kernel function: update group members
 2    based on their position and type */
 3 __global__ void d_CheckGroupMembers(
 4     const float *x,  // position array
 5     thrust::device_ptr<float> d_wall_data,
 6     const int n_walls,  // number of walls
 7     thrust::device_ptr<int> d_all_id,
 8     const int ns,  // group size
 9     const int Dim,  // dimensionality
10     const int *tp) {
11     // tp[] array stores particle types
12     int list_ind = blockIdx.x * blockDim.x + threadIdx.x;
13     if (list_ind >= ns)
14         return;
15     int ind = list_ind;
16     for (int i = 0; i < n_walls; ++i) {
17         int j = int(d_wall_data[3 * i]);
18         float low = d_wall_data[3 * i + 1];
19         float high = d_wall_data[3 * i + 2];
20         float xp = x[ind * Dim + j];
21         d_all_id[ind] = ind;
22         if (xp >= low && xp < high && tp[ind] == 1)
23             // additional type check
24             d_all_id[ind] = ind;
25         else
26             d_all_id[ind] = -1;
27     }  // i < n_walls
28 }
```
Listing 3: Code extensibility example

This simplicity is demonstrated in Listing 3, where a new group is produced. It is based on the dynamic group which only includes particles present in a specific region. Here, in addition to the spatial criterion, only particles of a particular type (here, type 1) are considered group members. Only the modified GPU kernel is shown, for brevity. Two changes to the original code were introduced to achieve the new functionality: the first one on line 10 (an additional parameter in the function call), and another on line 22 (the type check). This file can later be incorporated into a new subclass of the **Group** class, compiled with the rest of the source code, and will work seamlessly in the simulation. Included with the source code is also a pre-made makefile which makes compilation of additional components easy, only requiring that the new files be added to the source list. The full example, along with a step-by-step explanation and a makefile description, is available in the _examples/extend_ folder in the GitHub repository.

## VII Example systems

### Coacervate

In this example, a small system consisting of a total of 434 molecules, each with a degree of polymerization \(N=82\), has been simulated. Half of these molecules carry positively charged monomers, while the other half carry monomers with negative charges of the same magnitude. No explicit solvent is present. The system starts in a random, homogeneous phase and over the course of the simulation phase separates into polymer-rich and polymer-depleted regions as coacervation occurs.
The snapshots from the beginning (left) and the end (right) are shown in Figure 5. On the Nvidia Quadro RTX 5000 GPU, the simulation took 1399 seconds to perform 2,000,000 time steps, a resulting speed of 1429.6 ts/sec. This example can be found in the GitHub repository, in the _examples/Coacervate_ directory. The full movie of the time evolution of the system is available in the supporting information.

### Hydrodynamics using DPD integrators

To demonstrate the effect of using the DPD thermostat, we simulated a simple binary mixture of uncharged particles in a two-dimensional box. The value of \(\kappa\) in the Helfand potential has been chosen to be equal to 1, and \(\chi_{AB}=5\) to induce strong phase separation. The value of \(\sigma\) was arbitrarily set to 0.5, with \(r_{cutoff}=1.0\). The neighbour list update frequency was set to every 6 time steps, with \(r_{skin}=2.5\). On the Nvidia Quadro RTX 5000 GPU, the simulation took 5259 seconds to perform 1,200,000 time steps, a resulting speed of 228.18 ts/sec. In Figure 6, the insets show representative snapshots from the course of the simulation, along with the corresponding structure factor for the red species in the main figure.

### Polarizable diblock co-polymer with explicit solvent

In this example, 1330 diblock co-polymer chains with \(N=74\), and blocks of equal size, were simulated in explicit solvent. Both the solvent and polymer chains were made polarizable through the use of classical Drude oscillators. Since in this system only the polymer concentration of the condensate was of interest, all simulations were started with the polymers in a dense "slab" configuration. In this configuration all particles are biased to migrate towards the middle of the simulation box, creating a homogeneous, dense polymer phase. During the production run this bias is removed, and the slab is allowed to expand.

Figure 5: Initial (left) and final (right) snapshot from the simulation of a coacervating binary mixture. This simulation is a particle-based implementation of the model considered previously as a field theory by one of us[46], with a dimensionless excluded volume parameter \(B=0.05\) and dimensionless Bjerrum length \(E=10000\).

Figure 6: Static structure factor \(S(k)\) calculated on the red species undergoing spinodal decomposition at various time steps (TS) using the DPD integrator. The insets show representative snapshots of the system at the same times.

The parameters for the Drude oscillator attached to the solvent molecules have been chosen so as to reproduce the dielectric constant of water. In this way, the Bjerrum length can be set to the value it has in vacuum, and dielectric screening is then emergent from the polarizable solvent. The calculated dielectric constants for the chosen combinations of parameters are shown in Figure 8. The dielectric constant of water is around 78.4, so the optimal choice of parameters corresponds to \(a_{0}\approx 0.5\) and \(k_{D}\approx 1.0\). In Figure 7 we show the plot of the reduced concentration \(c^{*}\) of the dense and dilute phases, and the corresponding value of \(\chi\) between the polymer monomers and the solvent molecules. We also include corresponding snapshots of the final structure of the system.

### Liquid crystals

A simple model of a pure liquid crystal was simulated, where the mesogen was discretized into three interaction sites with the Maier-Saupe (MS) interaction taken from the center of the mesogen; see the inset in Figure 9.
Bonds between adjacent liquid crystal sites used a force constant \(k_{b}=100\) and an equilibrium distance of 1, and the orientation was maintained with a worm-like chain angle potential with prefactor \(\lambda=50\). Finally, an additional Helfand potential was employed to maintain an approximately uniform density, with \(\kappa=11\) and a total site density \(\rho_{0}=3\). This combination of the anisotropic molecular shape with an MS interaction taken from its center makes the model similar to the McMillan mean-field model [41]. Figure 9 shows the average liquid-crystalline order parameter \(\lambda\), calculated on the fly, as the MS \(\mu\) parameter is increased from 0 to 120. \(\lambda\) is taken as 3/2 times the largest eigenvalue of the average \(\mathbf{S}\) tensor. When \(\mu\approx 45\), a sharp increase in \(\langle\lambda\rangle\) indicates the isotropic-to-nematic transition.

## VIII Planned future developments

In this section we present the near-future and long-term goals regarding the development of MATILDA.FT. For the TILD branch, the main modification currently in progress is converting to a fully object-oriented style, to closely follow the design of the newer FT branch.

Figure 8: Calculated values of the dielectric constant for the solvent molecule, as a function of the charge spreading length, \(a_{0}\), and the spring constant of the Drude oscillator, \(k_{D}\).

Figure 7: Plot of the density obtained in the dilute and dense phases as the value of \(\chi\) between the monomers and solvent is varied. Included are also renders of the three-dimensional structure of the system corresponding to the selected data points. Positively charged monomers are displayed in red, while negatively charged ones are colored blue.

Figure 9: Liquid crystalline order parameter as a function of the strength of the Maier-Saupe parameter \(\mu\), calculated during a simulation where \(\mu\) was continuously ramped throughout.

This will make the two parts of the code operate more seamlessly, and will facilitate future modifications and make extensibility easier. Afterwards, we will focus on optimizing existing algorithms, specifically modifying the calculation of repulsive interactions based on the Gaussian density. The long-term goals for the TILD branch include introducing enhanced sampling techniques, such as Umbrella Sampling and Gibbs Ensemble Monte Carlo. This would enable the study of systems that suffer from being stuck for extended periods of time in metastable potential minima, such as polypeptides. We are also planning on expanding the software to be able to perform rigid body dynamics, which can be used to model patchy particles with specific interactions. Finally, we plan to harness the power of multi-GPU architectures to aid in simulations with multiple boxes. This feature would greatly facilitate simulations using enhanced sampling techniques like replica exchange, parallel tempering, or Gibbs Ensemble MC. A "quality of life" feature we plan to implement is internal routines to generate starting configurations, so that one could simply specify the polymer architecture, length, and density, and the code provides a starting configuration. Finally, the existing neighbour lists could be used to implement slip-springs to capture entanglement effects[47; 48], which would enable studies of, for example, viscoelastic phase separation. As the user community grows, we will try to implement features that are in high demand and would provide useful tools to the scientific community.
We plan to implement several key features for the FT simulation methods that should be available in the near term. While the time evolution equations of the fields described above are presented from the perspective of a complex Langevin (CL) simulation, the CL equations are not yet implemented, and all of the equations of motion are currently noise free. In addition, we plan to implement monomer smearing with unit Gaussians to regularize the models against ultraviolet divergences[19; 31; 46]. Finally, the inclusion of electrostatic interactions, including polar and polarizable polymer monomers[49], is planned for the near future. Further in the future, we plan to implement more exotic polymer architectures such as bottlebrush and star polymers. Finally, a variety of nanoparticles, including anisotropic[19] and polymer-grafted particles[50; 51], as both field-based particles and explicit "hybrid" particles[52], are planned for the future.

## IX Concluding Remarks

In this paper we presented MATILDA.FT, an open-source mesoscale simulation software, which utilizes the GPU architecture and primarily the CUDA/C++ programming language to achieve massive parallelism over thousands or millions of particles. The particle-based simulations are primarily designed for highly coarse-grained potentials that are finite on contact, and for systems where the particle density is relatively high, so that there is a gain in overall efficiency from evaluating the non-bonded interactions using density fields. It has already proven able to simulate a vast array of distinct systems, and can be easily extended to incorporate new ones, thanks to the object-oriented implementation. As far as we are aware, MATILDA.FT is the first published open-source software to combine both coarse-grained Langevin dynamics and field-theoretic simulation frameworks into a single code base. The code has been written with new users in mind, and its use and basic extensibility do not require specialized programming knowledge. This will make it valuable to a broad part of the scientific community, providing a powerful computational tool to both theorists and experimentalists across many fields of materials science and bio-engineering. We plan on continuously extending and improving the existing code, and building a user community where scientists can share their ideas and experience. As with many other open-source codes, such as LAMMPS, this approach is extremely beneficial and allows for faster and more impactful software development, to keep up with upcoming scientific challenges.

## X Acknowledgements

This work used Bridges-2 GPU at Pittsburgh Supercomputing Center through allocation DMR150034 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. This work was supported by the National Science Foundation through grant MRSEC/DMR-1720530 (R.A.R. and partial Z.M.J.), NSF awards OISE-1545884 (C.T.) and CHE-220375 (C.G.), and the Department of Physics and Astronomy at the University of Pennsylvania (Z.M.J.). The images used in this publication were generated using the free visualization software Ovito version 2.9.0 [53].

## XI Data Availability

The source code is open source and available under the GNU public license (GPL2) at www.github.com/rar-ensemble/MATILDA.FT. The "examples" folder contains all of the input files needed to reproduce the simulations presented herein.
## XII Author contributions

**Z.M.J:** writing/original draft (lead); software (supporting); Validation (equal); Visualization (lead). **C.T:** writing/review and editing (supporting); software (supporting). **N.H:** software (supporting). **A.Y:** software (supporting). **N.A:** software (supporting). **R.A.R:** writing/original draft (supporting); review and editing (lead); software (lead); conceptualization (lead); Validation (equal); Visualization (supporting).
2306.10587
Acceleration in Policy Optimization
We work towards a unifying paradigm for accelerating policy optimization methods in reinforcement learning (RL) by integrating foresight in the policy improvement step via optimistic and adaptive updates. Leveraging the connection between policy iteration and policy gradient methods, we view policy optimization algorithms as iteratively solving a sequence of surrogate objectives, local lower bounds on the original objective. We define optimism as predictive modelling of the future behavior of a policy, and adaptivity as taking immediate and anticipatory corrective actions to mitigate accumulating errors from overshooting predictions or delayed responses to change. We use this shared lens to jointly express other well-known algorithms, including model-based policy improvement based on forward search, and optimistic meta-learning algorithms. We analyze properties of this formulation, and show connections to other accelerated optimization algorithms. Then, we design an optimistic policy gradient algorithm, adaptive via meta-gradient learning, and empirically highlight several design choices pertaining to acceleration, in an illustrative task.
Veronica Chelu, Tom Zahavy, Arthur Guez, Doina Precup, Sebastian Flennerhag
2023-06-18T15:50:57Z
http://arxiv.org/abs/2306.10587v2
# Acceleration in Policy Optimization ###### Abstract We work towards a unifying paradigm for accelerating policy optimization methods in reinforcement learning (RL) by integrating foresight in the policy improvement step via optimistic and adaptive updates. Leveraging the connection between policy iteration and policy gradient methods, we view policy optimization algorithms as iteratively solving a sequence of surrogate objectives, local lower bounds on the original objective. We define optimism as predictive modelling of the future behavior of a policy, and adaptivity as taking immediate and anticipatory corrective actions to mitigate accumulating errors from overshooting predictions or delayed responses to change. We use this shared lens to jointly express other well-known algorithms, including model-based policy improvement based on forward search, and optimistic meta-learning algorithms. We analyze properties of this formulation, and show connections to other accelerated optimization algorithms. Then, we design an optimistic policy gradient algorithm, adaptive via meta-gradient learning, and empirically highlight several design choices pertaining to acceleration, in an illustrative task. ## 1 Introduction Policy gradient (PG) methods (Williams, 1992; Sutton et al., 1999) are one of the most effective reinforcement learning (RL) algorithms (Espeholt et al., 2018; Schulman et al., 2015, 2017; Abdolmaleki et al., 2018; Hessel et al., 2021; Zahavy et al., 2020; Flennerhag et al., 2021). These methods search for the optimal policy in a parametrized class of policies by using gradient ascent to maximize the cumulative expected reward that a policy collects when interacting with an environment. While effective, this objective poses challenges to the analysis and understanding of PG-based optimization algorithms due to its non-concavity in the policy parametrization (Agarwal et al., 2019; Mei et al., 2020, 2021). Nevertheless, PG methods globally converge sub-linearly for smoothly parametrized softmax policy classes. This analysis relies on local linearization of the objective function in parameter space and uses small step sizes and gradient domination to control the errors introduced from the linearization (Agarwal et al., 2019; Mei et al., 2020, 2021). In contrast, policy iteration (PI) linearizes the objective w.r.t. (with respect to) the functional representation of the policy (Agarwal et al., 2019; Bhandari and Russo, 2019; Bhandari and Russo, 2021), and converges linearly when the surrogate objective obtained from the linearization is known and can be solved in closed form. Relying on the illuminating connections between PI and several instances of PG algorithms (including (inexact) natural policy gradients (NPG) and mirror ascent (MA)), recent works (Bhandari and Russo, 2021; Chen et al., 2022; Mei et al., 2021; Yuan et al., 2023; Alfano and Rebeschini, 2023; Chen and Theja Maguluri, 2022) extended the above results and showed linear convergence of PG algorithms with large step sizes (adaptive or geometrically increasing). Other works showed that PG methods can achieve linear rates via entropy regularization. These guarantees cover some (approximately) closed policy classes, e.g., tabular, or log-linear--cf. Table 1 in Appendix A. 
More generally, in practice, each iteration of these PI-like algorithms is solved approximately, using a few gradient ascent update steps in the space of policy parameters, which lacks guarantees due to non-concavity induced by non-linear transformations in the deep neural networks used to represent the policy (Agarwal et al., 2019; Abdolmaleki et al., 2018; Tomar et al., 2020; Vaswani et al., 2021). This recent understanding about the convergence properties of policy gradient methods in RL leaves room to consider more advanced techniques. In this work, we focus on **acceleration** via _optimism_--a term we borrow from online convex optimization (Zinkevich, 2003), which is unrelated to the exploration strategy of _optimism in the face of uncertainty_. In this context, _optimism_ refers to predicting future gradient directions in order to accelerate convergence (for instance, as done in Nesterov's accelerated gradients (NAG) (Nesterov, 1983; Wang and Abernethy, 2018; Wang et al., 2021), extra-gradient (EG) methods (Korpelevich, 1976), mirror-prox (Nemirovski, 2004; Juditsky et al., 2011), optimistic MD (Rakhlin and Sridharan, 2013; Joulani et al., 2020), AO-FTRL (Rakhlin and Sridharan, 2014; Mohri and Yang, 2015), etc.). In RL, optimistic policy iteration (OPI) (Bertsekas and Tsitsiklis, 1996; Bertsekas, 2011; Tsitsiklis, 2002) considers policy updates performed based on incomplete evaluation, with a value function estimate that gradually tracks the solution of the most recent policy evaluation problem. Non-optimistic methods, on the other hand, regard the value estimation problem as a series of isolated evaluation problems and solve them by Monte Carlo or temporal difference (TD) estimation. By doing so, they ignore the potentially _predictable_ nature of the evaluation problems, and their solutions, along a policy's optimization path. In previous work, optimism has been studied in policy optimization to mitigate oscillations (Wagner, 2014, 2013; Moskovitz et al., 2023) as well as for accelerated optimization (Cheng et al., 2018; Hao et al., 2020), resulting in sub-linear, yet unbiased convergence, cf. Table 1 in Appendix A. In this paper, we introduce a general policy optimization framework that allows us to describe seemingly disparate algorithms as making specific choices in how they represent, or adapt, optimistic gradient predictions. Central to our exposition is the idea of _prospective learning_, i.e. making _predictions_ or projections of the future behavior, performance, or state of a system, based on existing historical data (_interpolation_), or extending those predictions into uncharted territory by predicting beyond data (_extrapolation_). This learning approach explicitly emphasizes the ability to anticipate the future when a recognizable pattern exists in the sequence. In particular, we show that two classes of well-known algorithms--_meta-learning algorithms_ and _model-based planning algorithms_--can be viewed as optimistic variants of vanilla policy optimization, and provide a theoretical argument for their empirical success. For example, STACX (Zahavy et al., 2020) represents an optimistic variant of Impala (Espeholt et al., 2018) and achieves a doubling of Impala's performance on the Atari-57 suite; similarly, adding further optimistic steps in BMG (Flennerhag et al., 2021) yields another doubling of the performance relative to that of STACX.
In model-based RL, algorithms with extra steps of planning, e.g., the AlphaZero family of algorithms (Silver et al., 2016, 2017) with perfect simulators, also enjoy huge success in challenging domains such as chess and Go; and MuZero (Schrittwieser et al., 2019), with an _adaptive_ model, achieves superhuman performance in challenging and visually complex domains.

**Contributions** After some background in Sec. 2, we define a simple template for accelerating policy optimization algorithms in Sec. 3. This formulation involves using _proximal policy improvement methods_ with _optimistic auto-regressive policy update rules_, which adapt to anticipate the future policy performance. We show this _acceleration_ template based on optimism & adaptivity is a generalization of the update rule of proximal policy optimization algorithms, where the inductive bias is _fixed_ and does not change with past experience. We use the introduced generalization to show that a _learned_ update rule can form other inductive biases that can accelerate convergence. We use the introduced formulation to highlight the commonalities among several algorithms, expressed in this formalism in Sec. 3, including model-based policy optimization algorithms relying on run-time forward search (e.g. Silver et al. (2016, 2017); Schrittwieser et al. (2019); Hessel et al. (2021)), and a general algorithm for _optimistic policy gradients_ via _meta-gradient optimization_ (common across the algorithmic implementations of Zahavy et al. (2020); Flennerhag et al. (2021)). Leveraging theoretical insights from Sec. 3, in Sec. 3.2 we introduce an optimistic policy gradient algorithm that is adaptive via meta-gradient learning. In Sec. 3.2.1, we use an illustrative task to test several theoretical predictions empirically. First, we tease apart the role of optimism in forward search algorithms. Second, we analyze the properties of the optimistic algorithm we introduced in Sec. 3.2. Using acceleration for functional policy gradients is under-explored, and we hope this unifying template can be used to design other accelerated policy optimization algorithms, or to guide the investigation into other collective properties of these methods.

## 2 Preliminaries & notation

**Notation** Throughout the manuscript, we use \(\doteq\) to distinguish a definition from standard equivalence, the shorthand notation \(\nabla_{x}f(x_{t})\doteq\nabla_{x}f(x)|_{x=x_{t}}\), and \(\langle\cdot,\cdot\rangle\) to denote a dot product between its arguments. The notation \(\lceil\cdot\rfloor\) indicates that gradients are not backpropagated through the argument.

### 2.1 Markov Decision Processes

We consider a standard reinforcement learning (RL) setting described by means of a discrete-time infinite-horizon discounted Markov decision process (MDP) (Puterman, 1994), \(\mathcal{M}\doteq\{\mathcal{S},\mathcal{A},r,P,\gamma,\rho\}\), with state space \(\mathcal{S}\) and action space \(\mathcal{A}\), discount factor \(\gamma\in[0,1)\), and initial states sampled under the initial distribution \(\rho\), assumed to be exploratory: \(\rho(s)>0,\forall s\in\mathcal{S}\). The agent follows an online learning protocol: at timestep \(t\geq 0\), the agent is in state \(S_{t}\in\mathcal{S}\) and takes action \(A_{t}\in\mathcal{A}\), given a policy \(\pi_{t}(\cdot|S_{t})\), the distribution over actions for each state, \(\pi:\mathcal{S}\rightarrow\Delta_{\mathcal{A}}\), with \(\Delta_{\mathcal{A}}\) the action simplex, i.e., the space of probability distributions defined on \(\mathcal{A}\).
It then receives a reward \(R_{t}\sim r(S_{t},A_{t})\), sampled from the reward function \(r:\mathcal{S}\times\mathcal{A}\rightarrow[0,R_{\max}]\), and transitions to a next state \(S_{t+1}\sim P(\cdot|S_{t},A_{t})\), sampled under the transition probabilities or dynamics \(P\). Let \(d_{\pi}(s)\) be a measure over states, representing the discounted visitation distribution (or discounted fraction of time the system spends in a state \(s\)), \(d_{\pi}(s)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}\Pr(S_{t}=s|S_{0}\sim\rho,A_{k}\sim\pi(\cdot|S_{k}),\forall k\leq t)\), with \(\Pr(S_{t}=s|S_{0}\dots)\) the probability of transitioning to a state at timestep \(t\) given policy \(\pi\). The RL problem consists in finding a policy \(\pi\) maximizing the discounted return \[J(\pi)\doteq\mathbb{E}_{S\sim\rho}[V_{\pi}(S)]=(1-\gamma)\mathbb{E}_{\pi,\rho}\left[\sum_{t\geq 0}\gamma^{t}R_{t+1}\right]\quad\text{(the policy performance objective)} \tag{1}\] where \(V_{\pi}\in\mathbb{R}^{|\mathcal{S}|}\) is the value function, and \(Q_{\pi}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\) the action-value function of a policy \(\pi\in\Pi=\{\pi\in\mathbb{R}_{+}^{|\mathcal{S}|\times|\mathcal{A}|}|\sum_{a\in\mathcal{A}}\pi(s,a)=1,\forall s\in\mathcal{S}\}\), s.t. (such that) \(Q_{\pi}(s,a)\doteq\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}R_{t}|S_{0}=s,A_{0}=a\right]\), and \(V_{\pi}(s)\doteq\mathbb{E}_{\pi}\left[Q_{\pi}(s,A)\right]\). Let \(\mathcal{T}_{\pi}:\mathbb{R}^{|\mathcal{S}|}\rightarrow\mathbb{R}^{|\mathcal{S}|}\) be the Bellman evaluation operator, and \(\mathcal{T}:\mathbb{R}^{|\mathcal{S}|}\rightarrow\mathbb{R}^{|\mathcal{S}|}\) the Bellman optimality operator, s.t. \((\mathcal{T}_{\pi}V)(s)\doteq r(s,\pi(s))+\gamma\sum_{s^{\prime}\in\mathcal{S}}P(s^{\prime}|s,\pi(s))V(s^{\prime})\), and \((\mathcal{T}V)(s)\doteq\max_{a\in\mathcal{A}}r(s,a)+\gamma\sum_{s^{\prime}\in\mathcal{S}}P(s^{\prime}|s,a)V(s^{\prime})=\max_{\pi\in\Pi}(\mathcal{T}_{\pi}V)(s)\), with Q-function (abbr. Q-fn) analogs.

### 2.2 Policy Optimization Algorithms

The classic _policy iteration (PI)_ algorithm repeats consecutive stages of (i) one-step greedy policy improvement w.r.t. a value function estimate, \(\pi_{t+1}\in\mathcal{G}(V_{\pi_{t}})\doteq\{\pi:\mathcal{T}_{\pi}V_{\pi_{t}}=\mathcal{T}V_{\pi_{t}}\}\), with \(\mathcal{G}\) the greedy set of \(V_{\pi_{t}}\), followed by (ii) evaluation of the value function w.r.t. the greedy policy, \(V_{\pi_{t+1}}=\lim_{m\rightarrow\infty}\mathcal{T}_{\pi_{t+1}}^{m}V_{\pi_{t}}\). Approximations of either step lead to _approximate PI (API)_ (Scherrer et al., 2015). Relaxing the greedification leads to _soft PI_ (Kakade and Langford, 2002), \(\pi_{t+1}\doteq(1-\alpha)\pi_{t}+\alpha\pi_{t+1}^{+}\), with \(\pi_{t+1}^{+}\doteq\arg\max_{\pi\in\Pi}\langle Q_{\pi_{t}},\pi\rangle\), for \(\alpha\in(0,1]\) a step size. _Optimistic PI (OPI)_ (Bertsekas and Tsitsiklis, 1996) relaxes the evaluation step instead, to \(Q_{t+1}\doteq Q_{t}-\lambda[Q_{t}-\mathcal{T}Q_{t}]\). Others (Smirnova and Dohmatob, 2020; Asadi et al., 2021) have extended these results to deep RL and/or alleviated assumptions.
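As a tiny worked instance of the soft PI update above, consider a single state with two actions, a step size \(\alpha=0.5\), a current policy \(\pi_{t}=(0.6,0.4)\), and a greedy improvement \(\pi_{t+1}^{+}=(1,0)\); these numbers are made up purely for illustration:

\[\pi_{t+1}=(1-\alpha)\pi_{t}+\alpha\pi_{t+1}^{+}=0.5\,(0.6,0.4)+0.5\,(1,0)=(0.8,0.2).\]

The relaxed update thus moves only part of the way toward the greedy policy, which is what a conservative step size \(\alpha\in(0,1]\) buys.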
More commonly used in practice are _policy gradient_ algorithms. These methods search over policies using surrogate objectives \(\ell_{t}(\pi)\) that are local linearizations of the performance, \(\ell_{t}(\pi)\doteq J(\pi_{t})+\langle\pi,\nabla_{\pi}J(\pi_{t})\rangle-\nicefrac{{1}}{{\alpha}}\|\pi-\pi_{t}\|_{\Omega}^{2}\), rely on the true gradient ascent direction of the previous policy in the sequence, \(\nabla_{\pi}J(\pi_{t})\), and lower bound the policy performance (Agarwal et al., 2019; Li et al., 2021; Vaswani et al., 2021) when \(J(\pi)\) is \(\frac{1}{\alpha}\Omega\)-relatively convex w.r.t. the policy \(\pi\) (Lu et al., 2017; Johnson and Zhang, 2020), which holds when \(\alpha\) is sufficiently conservative. As \(\alpha\rightarrow\infty\) (the regularization term tends to zero), \(\pi_{t+1}=\arg\max_{\pi\in\Pi}\ell_{t}(\pi)\) converges to the unregularized solution of \(\ell_{t}\), which is exactly the policy iteration update. For intermediate values of \(\alpha\), the _projected gradient ascent_ update decouples across states and takes the following form for a direct policy parametrization: \(\pi_{t+1}\doteq\mathcal{P}_{\Pi}(\pi_{t}+\alpha Q_{\pi_{t}})\), with \(\mathcal{P}_{\Pi}\) a projection operator. Generally, the methods employed in practice extend the policy search to parameterized policy classes with softmax transforms, \(\Pi_{\Theta}\doteq\{\pi_{\theta}\big{|}\pi_{\theta}(s,a)=\exp{z_{\theta}(s,a)}/\sum_{a}\exp{z_{\theta}(s,a)},\forall s\in\mathcal{S},a\in\mathcal{A},\theta\in\Theta\subset\mathbb{R}^{m}\}\), with \(z_{\theta}\) a differentiable function, either tabular, \(z_{\theta}(s,a)\doteq\theta_{s,a}\), log-linear, \(z_{\theta}(s,a)\doteq\phi(s,a)^{\top}\theta\), with \(\phi\) a feature representation, or neural (\(z_{\theta}\) a neural network) (Agarwal et al., 2019). These methods search over the parameter vector \(\theta\) of a policy \(\pi_{\theta}\in\Pi_{\Theta}\). _Actor-critic_ methods approximate the gradient direction with a parametrized critic \(Q_{w_{t}}\approx Q_{\pi_{t}}\), with parameters \(w\in\mathcal{W}\subset\mathbb{R}^{m}\), yielding \(\theta_{t+1}\doteq\arg\max_{\theta\in\Theta}\ell(\pi_{\theta},Q_{w_{t}})\), with the surrogate objective \(\ell(\pi_{\theta},Q_{w_{t}})\doteq\langle\pi_{\theta},\hat{g}_{t}\rangle-\nicefrac{{1}}{{\alpha}}\operatorname{KL}_{[d_{\pi_{\theta_{t}}}]}(\pi_{\theta},\pi_{\theta_{t}})\), where \(\hat{g}_{t}\doteq d_{\pi_{\theta_{t}}}^{\top}Q_{w_{t}}\), and we denoted by \(\operatorname{KL}_{[d]}(\pi,\pi^{\prime})\doteq\sum_{s}d(s)\sum_{a}\pi(a|s)(\log\pi(a|s)-\log\pi^{\prime}(a|s))\) the weighted KL-divergence. The projected gradient ascent version of this update uses the KL-divergence (the projection associated with the softmax transform), \(\pi_{t+1}\doteq\arg\min_{\pi\in\Pi}\operatorname{KL}(\pi,\exp{z_{t+\nicefrac{{1}}{{2}}}}/\sum_{a}\exp{z_{t+\nicefrac{{1}}{{2}}}})\), with \(z_{t+\nicefrac{{1}}{{2}}}\!=\!\log\pi_{t}+\alpha Q_{\pi_{t}}\) a target-based update.

**Acceleration** When the effective horizon is large, i.e., the discount \(\gamma\) is close to \(1\), the number of iterations before convergence of policy or value iteration scales on the order \(\mathcal{O}\left(\nicefrac{{1}}{{1-\gamma}}\right)\), and each iteration is expensive in the number of samples. One direction for acceleration is to design algorithms that converge in a smaller number of iterations, resulting in significant empirical speedups.

**Anderson acceleration** Anderson acceleration (Anderson, 1965) is an iterative algorithm that combines information from previous iterations to update the current guess, and allows speeding up the computation of fixed points. It has been described for value iteration in Geist and Scherrer (2018), extended to Momentum Value Iteration and Nesterov's Accelerated Gradient in Goyal and Grand-Clement (2021), and to action-value (Q) functions in Vieillard et al. (2019). In the following, we present a policy optimization algorithm with a similar interpretation.
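As a minimal illustration of the idea, the following sketch applies Anderson acceleration with memory \(m=1\) to a scalar fixed-point problem \(x=g(x)\); the map \(g\) is a toy contraction standing in for a Bellman operator, and the closed-form mixing weight is the scalar least-squares solution over the two most recent residuals.

```cpp
// Sketch: Anderson acceleration with memory m = 1 for a scalar fixed point
// x = g(x). The mixing weight alpha is the least-squares combination of the
// two most recent residuals. g is a toy contraction, not an MDP operator.
#include <cmath>
#include <cstdio>

double g(double x) { return std::cos(x); }  // fixed point near 0.739

int main() {
    double x_prev = 0.0, x = 1.0;
    for (int k = 0; k < 8; ++k) {
        const double f = g(x) - x;                 // current residual
        const double f_prev = g(x_prev) - x_prev;  // previous residual
        const double alpha = f / (f - f_prev);     // least-squares weight
        const double x_next = (1.0 - alpha) * g(x) + alpha * g(x_prev);
        x_prev = x;
        x = x_next;
        std::printf("k = %d  x = %.10f\n", k, x);
    }
}
```

After a handful of iterations, the accelerated sequence is already close to the fixed point, whereas the plain iteration \(x\leftarrow g(x)\) converges only at the rate of the contraction.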
**Model-based policy optimization (MBPO)** MBPO algorithms based on Tree Search (Coulom, 2006; Silver et al., 2016; Hallak et al., 2021; Rosenberg et al., 2022; Dalal et al., 2023) rely on approximate online versions of multi-step greedy improvement implemented via Monte Carlo Tree Search (MCTS) (Browne et al., 2012). These algorithms replace the one-step greedy policy in the improvement stage of PI with a multi-step greedy policy. Cf. Grill et al. (2020): relaxing the hard greedification and adding approximations over parametric policy classes, forward-search algorithms at scale can be written as the solution to a regularized optimal control problem, obtained by replacing the gradient estimate in the regularized policy improvement objectives \(\ell(\pi)\) of actor-critic algorithms with an update \(U_{t}\) resulting from approximate lookahead search using a model or simulator \((\hat{r},\hat{P})\) (Silver et al., 2016, 2017; Schrittwieser et al., 2019) up to some horizon \(h\), using a tree-search policy \(\pi_{b}\doteq\pi_{\theta_{t}}\): \(\theta_{t+1}\doteq\arg\max_{\theta\in\Theta}\langle\pi,U_{t}\rangle_{d_{\pi_{t}}}-\nicefrac{{1}}{{\alpha}}\operatorname{KL}_{[d_{\pi_{t}}]}(\pi,\pi_{t})\), where \(U_{t}\doteq\hat{r}^{h}(s,a)+\gamma\sum_{s^{\prime}}\hat{P}^{h}(s^{\prime}|s,a)\sum_{a^{\prime}}\pi_{b}(a^{\prime}|s^{\prime})Q_{w_{t}}(s^{\prime},a^{\prime})\).

**Meta-gradient policy optimization (MGPO)** In MGPO (Xu et al., 2018; Zahavy et al., 2020; Flennerhag et al., 2021) the policy improvement step uses a parametrized recursive algorithm \(\pi_{\theta_{t+1}}=\varphi(\eta_{t},\pi_{\theta_{t}})\), with \(\eta\in\mathbb{R}^{n}\) the algorithm's (meta-)parameters. For computational tractability, we generally apply inductive biases to limit the functional class of algorithms the meta-learner searches over, e.g., to gradient ascent (GA) parameter updates \(\theta_{t+1}=\theta_{t}+y_{\eta_{t}}\). The meta-parameters \(\eta\) can represent, e.g., initializations (Finn et al., 2017), losses (Sung et al., 2017; Wang et al., 2019; Kirsch et al., 2019; Houthooft et al., 2018; Chebotar et al., 2019; Xu et al., 2020), internal dynamics (Duan et al., 2016), exploration strategies (Gupta et al., 2018; Flennerhag et al., 2021), hyperparameters (Veeriah et al., 2019; Xu et al., 2018; Zahavy et al., 2020), and intrinsic rewards (Zheng et al., 2018). The meta-learner's objective is to adapt the parametrized optimization algorithm based on the learner's post-update performance \(J(\pi_{\theta_{t+1}})\)--unknown in RL, and replaced with a surrogate objective \(\ell(\pi_{\theta_{t+1}})\). Zahavy et al. (2020) use a linear model, whereas Flennerhag et al. (2021) use a quasi-Newton method (Nocedal and Wright, 2006; Martens, 2014) by means of a trust region with a hard cut-off after \(h\) parameter updates.

## 3 Acceleration in Policy Optimization

We introduce a simple template for accelerated policy optimization algorithms, and analyze its properties for finite state and action MDPs, tabular parametrization, and direct and softmax policy classes. Thereafter, we describe a practical and scalable algorithm, adaptive via meta-gradient learning.
### 3.1 A general template for acceleration

Consider finite state and action MDPs, and a tabular policy parametrization. The following policy classes will cause policy gradient updates to decouple across states, since \(\Pi\equiv\Delta_{\mathcal{A}}\times\ldots\times\Delta_{\mathcal{A}}\)--the \(|\mathcal{S}|\)-fold product of the probability simplex: (i) the _direct policy representation_, using a policy class consisting of all stochastic policies \(\Pi=\{\pi\in\mathbb{R}_{+}^{|\mathcal{S}|\times|\mathcal{A}|}\big{|}\sum_{a\in\mathcal{A}}\pi(s,a)=1,\forall s\in\mathcal{S}\}\), and (ii) the _softmax policy representation_, \(\Pi\doteq\big{\{}\pi\big{|}\pi(s,a)=\nicefrac{{\exp z(s,a)}}{{\sum_{a}\exp z(s,a)}},\forall s\in\mathcal{S},a\in\mathcal{A}\big{\}}\), with \(z\) a dual target, the logits of a policy before normalizing them to probability distributions. Let \(\Omega:\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\to\mathbb{R}^{|\mathcal{S}|}\) be a mapping function s.t. \(z=(\nabla\Omega)^{-1}(\pi)\). For (i) the direct parametrization, \((\nabla\Omega)^{-1}\) is the identity mapping, and \(z=\pi\). For (ii) the softmax transform, \(z=(\nabla\Omega)^{-1}(\pi)\) is the logarithm function, and \(\nabla\Omega(z)\) the exponential function. A new policy is obtained by projecting \(\nabla\Omega(z)\) onto the constraint set induced by the probability simplex \(\Pi\), using a projection operator \(\pi\doteq\mathcal{P}_{\Pi}\nabla\Omega(z)\). Let \(\{g_{t}\}_{t\geq 0}\) be (functional) policy gradients, \(g\doteq\nabla_{\pi}J(\pi)\), and \(\hat{g}_{t}\approx g_{t}\) approximations, e.g. stochastic gradients, or the outputs of models or simulators. Let \(\{u_{t}\}_{t\geq 0}\) be a sequence of (functional) policy updates, described momentarily.

**Base algorithm** Iterative methods decompose the original multi-iteration objective in Eq. 1 into single-iteration surrogate objectives \(\{\ell_{t}(\pi)\}_{t\geq 0}\), which correspond to finding a maximal policy improvement policy \(\pi_{t+1}\) for a single iteration, \(\pi_{t+1}=\arg\max_{\pi\in\Pi}\ell_{t}(\pi)\), and following \(\pi_{t}\) thereafter. We consider first-order surrogate objectives \[\pi_{t+1}\!\doteq\!\arg\max_{\pi\in\Pi}\ell_{t}(\pi,u_{t})\qquad\ell_{t}(\pi,u)\!\doteq\!\langle\pi,u\rangle\!-\!\nicefrac{{1}}{{\alpha}}\|\pi\!-\!\pi_{t}\|_{\Omega}^{2}\quad\text{(local surrogate objective)} \tag{2}\] with \(\alpha\) a step size set to guarantee relative convexity of \(\ell\) w.r.t. \(\pi\) (Lu et al., 2017; Johnson and Zhang, 2020), and \(\|\cdot\|_{\Omega}\) the policy distance measured in the norm induced by the policy transform \(\Omega\) (Euclidean norm for the direct parametrization, and KL-divergence \(\operatorname{KL}(\cdot,\cdot)\) for the softmax parametrization, cf. Agarwal et al. (2019)). At optimality, we obtain projected gradient ascent (cf. Bubeck (2015); Bhandari and Russo (2021); Vaswani et al. (2021)) \[\pi_{t+1}\doteq\mathcal{P}_{\Delta^{|\mathcal{A}|}}\left(\nabla\Omega(z_{t+\nicefrac{{1}}{{2}}})\right)\qquad z_{t+\nicefrac{{1}}{{2}}}=z_{t}+\alpha u_{t}\qquad\qquad\text{(policy improvement)} \tag{3}\] with \(z_{t}\doteq(\nabla\Omega)^{-1}(\pi_{t})\), and \(\mathcal{P}_{\Pi}\) the projection operator associated with the policy class (Euclidean for the direct parametrization, and the KL-divergence for the softmax parametrization, cf. Agarwal et al. (2019); Bhandari and Russo (2021)). It is known that for the softmax parametrization the closed-form update is the natural policy gradient update \(\pi_{t+1}\propto\pi_{t}\exp(\alpha u_{t})\).
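As a minimal numeric sketch of this closed-form update for a single state, with a hand-picked update direction \(u_{t}\) standing in for an actual gradient estimate:

```cpp
// Sketch: the closed-form softmax improvement step
// pi_{t+1}(a) proportional to pi_t(a) * exp(alpha * u_t(a)) for one state.
// The update direction u is hand-picked for illustration.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> pi = {0.25, 0.25, 0.50};     // current policy pi_t
    const std::vector<double> u = {1.0, 0.0, -1.0};  // update direction u_t
    const double alpha = 0.5;                        // step size
    double Z = 0.0;
    for (std::size_t a = 0; a < pi.size(); ++a) {
        pi[a] *= std::exp(alpha * u[a]);
        Z += pi[a];
    }
    for (double& p : pi) p /= Z;  // renormalize onto the simplex
    std::printf("pi_{t+1} = (%.3f, %.3f, %.3f)\n", pi[0], pi[1], pi[2]);
}
```

The exponential tilt followed by renormalization is exactly the KL projection onto the simplex, so no explicit projection step is needed for the softmax class.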
**Acceleration** If the update rule \(u_{t}\) returns just an estimate of the standard gradient, \(u_{t}\doteq\hat{g}_{t}\) with \(\hat{g}_{t}\approx g_{t}\), then the algorithm reduces to the inexact NPG (mirror ascent/proximal update) \(\pi_{t+1}\doteq\mathcal{P}_{\Delta^{|\mathcal{A}|}}(\nabla\Omega)^{-1}\left(z_{t}+\alpha\hat{g}_{t}\right)\). The inductive bias is fixed and does not change with past experience, and acceleration is not possible. If the update rule is auto-regressive, the inductive bias formed is similar to the canonical **momentum** algorithm--Polyak's Heavy-Ball method (Polyak, 1964), \[u_{t}\!\doteq\!\mu u_{t-1}\!+\!\beta\hat{g}_{t}\implies z_{t+\nicefrac{{1}}{{2}}}\!=\!z_{t}\!+\!\mu(z_{t-\nicefrac{{1}}{{2}}}\!-z_{t-1})\!+\!\alpha\beta\hat{g}_{t}\qquad\text{(momentum/Heavy-Ball)} \tag{4}\] with \(\beta\) a step size, and \(\mu\) the momentum decay value. Because Heavy-Ball carries momentum from past updates, it can encode a model of the learning dynamics that leads to faster convergence.

**Optimism** A typical form of optimism is to predict the next gradient in the sequence, \(\hat{g}_{t+1}\approx g_{t+1}\), while simultaneously subtracting the previous prediction \(\hat{g}_{t}\), thereby adding a tangent at each iteration \[u_{t}\!\doteq\!\beta\hat{g}_{t+1}\!+\!\mu[u_{t-1}\!-\!\beta\hat{g}_{t}]\implies z_{t+\nicefrac{{1}}{{2}}}\!=\!z_{t}\!+\!\mu(z_{t-\nicefrac{{1}}{{2}}}-z_{t-1})\!+\!\alpha\beta(\hat{g}_{t+1}\!-\!\hat{g}_{t})\ \ \text{(optimism)} \tag{5}\] If the gradient predictions \(\{\hat{g}_{t+1}\}_{t\geq 0}\) are accurate, \(\hat{g}_{t+1}=g_{t+1}\), the optimistic update rule can accelerate. The policy updates are **extrapolations** based on predictions of the next surrogate objective in the sequence. Using \(u_{t-1}=g_{t}\) we obtain the predictor-corrector approach (Cheng et al., 2018). But in RL, agents generally do not have \(g_{t}\), so the distance to the true gradient \(\|g_{t}-u_{t-1}\|_{*}\) will depend on how good the prediction \(\hat{g}_{t}\) was at the previous iteration, \(\|g_{t}-\hat{g}_{t}\|_{*}\), with \(\|\cdot\|_{*}\) the dual norm. Since we have not computed \(\pi_{t+1}\) at time \(t\), and we do not have the prediction \(\hat{g}_{t+1}\), existing methods resort to the following techniques.

**Lookahead** _Model-based_ policy optimization methods (Silver et al., 2017; Grill et al., 2020; Schrittwieser et al., 2019) use an update \(u_{t}\) that anticipates the future performance by looking one or multiple steps ahead (\(h\geq 1\)), using a model or simulator in place of the environment dynamics \((\hat{r},\hat{P})\) to compute \(U_{t}^{(h)}=\hat{\mathcal{T}}_{\pi_{b}}^{h}Q_{t}\), with \(\pi_{b}\) a tree-search policy, e.g. the greedy policy.
With \(Q_{t+1}\doteq\hat{\mathcal{T}}_{\pi_{b}}Q_{t}\), and \(\hat{\mathbb{E}}_{\pi_{b}}\) the expectation under the model's dynamics, \[U_{t}^{(h)}=\hat{\mathcal{T}}_{\pi_{b}}Q_{t}+\hat{\mathcal{T}}_{\pi_{b}}^{h}Q_{t}-\hat{\mathcal{T}}_{\pi_{b}}Q_{t}=Q_{t+1}+\gamma\hat{P}_{\pi_{b}}(\hat{\mathcal{T}}^{h-1}Q_{t}-Q_{t})=Q_{t+1}+\gamma\hat{\mathbb{E}}_{\pi_{b}}[U_{t}^{(h-1)}-Q_{t}]\]

**Extra-gradients** We interpret _optimistic meta-learning_ algorithms (Flennerhag et al., 2021, 2023) as extra-gradient methods, since they use the previous prediction \(u_{t-1}\) as a proxy to compute a _half-step proposal_ \(\pi_{t+\nicefrac{{1}}{{2}}}=\mathcal{P}_{\Pi}(\nabla\Omega)^{-1}(z_{t}+\alpha u_{t-1})\), which they use to obtain an estimate of the next gradient \(\hat{g}_{t+\nicefrac{{1}}{{2}}}\doteq\nabla_{\pi}J(\pi_{t+\nicefrac{{1}}{{2}}})\) (e.g. for sample efficiency, using samples from a non-parametric model, like a replay buffer). Retrospectively, they adapt the optimistic update \[u_{t}\doteq\beta\hat{g}_{t+\nicefrac{{1}}{{2}}}+\mu[u_{t-1}-\beta\hat{g}_{t}]\implies z_{t+\nicefrac{{1}}{{2}}}=z_{t}+\mu(z_{t-\nicefrac{{1}}{{2}}}-z_{t-1})+\alpha\beta(\hat{g}_{t+\nicefrac{{1}}{{2}}}-\hat{g}_{t})\ \ \ \textit{(extra-grad)} \tag{6}\] A new target policy should also be recomputed, \(\pi_{t+1}=\mathcal{P}_{\Pi}(\nabla\Omega)^{-1}(z_{t}+\alpha u_{t})\), but practical implementations (Zahavy et al., 2020; Flennerhag et al., 2021) omit this, and resort to starting the next iteration from the half-step proposal \(\pi_{t+1}\doteq\pi_{t+\nicefrac{{1}}{{2}}}\). Alg. 1 summarizes the procedure.

### 3.2 Towards a practical Accelerated Policy Gradient algorithm

More commonly used in practice are neural or log-linear parametrizations for actors, and equivalent parametrizations of gradient-critics (e.g., Espeholt et al. (2018); Schulman et al. (2015, 2017); Abdolmaleki et al. (2018); Tomar et al. (2020); Zahavy et al. (2020); Flennerhag et al. (2021); Hessel et al. (2021)). Consider a parameterized softmax policy class \(\Pi_{\Theta}\doteq\{\pi_{\theta}|\pi_{\theta}(s,a)\doteq\exp z_{\theta}(s,a)\big{/}\sum_{a}\exp z_{\theta}(s,a),\forall s\in\mathcal{S},a\in\mathcal{A},\theta\in\Theta\subset\mathbb{R}^{m}\}\), with \(\pi_{\theta}\in\Pi_{\Theta}\), and \(z_{\theta}\) a differentiable logit function. Let \(u_{\eta}\) represent a parametric class of policy updates, with parameters \(\eta\in\mathbb{R}^{m^{\prime}}\), which we discuss momentarily.

**Base algorithm** We recast the policy search in Eq. 2 over policy parameters \(\theta\): \[\theta_{t+1}\doteq\arg\max_{\theta\in\Theta}\ell_{t}(\pi_{\theta},u_{\eta_{t}})\qquad\ell_{t}(\pi_{\theta},u_{\eta})\doteq\langle\pi_{\theta},u_{\eta}\rangle-\nicefrac{{1}}{{\alpha}}\operatorname{KL}_{[d_{\pi_{\theta_{t}}}]}(\pi_{\theta},\pi_{\theta_{t}}) \tag{7}\] Using function composition, we write the policy improvement step using a parametrized recursive algorithm \(\pi_{\theta_{t+1}}=\varphi(\eta_{t-1},\pi_{\theta_{t}})\), with \(\eta\) the algorithm's (meta-)parameters. We assume \(\varphi(\eta,\cdot)\) is differentiable and smooth w.r.t. \(\eta\). If Eq. 7 can be solved in closed form, an alternative is to compute the non-parametric closed-form solution \(\pi_{t+1}\propto\pi_{\theta_{t}}\exp\alpha u_{\eta_{t}}\) and (approximately) solve the projection \(\theta_{t+1}\doteq\arg\min_{\theta\in\Theta}\operatorname{KL}(\pi_{\theta},\pi_{t+1})\). For both approaches, we may use \(h\geq 1\) gradient steps to approximately solve Eq. 7:
For both approaches, we may use \(h\geq 1\) gradient steps to solve Eq. (7) \[\theta_{t}^{k+1}=\theta_{t}^{k}+\xi y_{t+1}\qquad y_{t+1}\doteq\nabla_{\theta}\ell_{t}(\pi_{\theta_{t}^{k}},u_{\eta_{t}})\qquad\forall k\in[0..h),\theta_{t}^{0}\doteq\theta_{t}\qquad\theta_{t+1}\doteq\theta_{t}^{h} \tag{8}\] with \(\xi\) a parameter step size. By function compositionality, we have \(\nabla_{\theta}\ell(\pi,g)=\nabla_{\theta}\pi_{\theta}^{\top}\nabla_{\pi}\ell(\pi,g)\). This part of the gradient, \(\nabla_{\theta}\pi_{\theta}(s,a)=\pi_{\theta}(s,a)\nabla_{\theta}\log\pi_{\theta}(s,a)\), is generally not estimated: it is available to RL agents and computed by backpropagation. Depending on how the other component \(\nabla_{\pi}\ell(\pi,g)\) is defined, in particular \(u_{\eta_{t}}\), we may obtain different algorithms. Generally, this quantity is expensive to estimate accurately in the number of samples for RL algorithms. ``` input:\((\theta_{0},\eta_{0})\), predictions \(\{Q_{t}\}_{t\geq 0}\) for each iteration \(t=1,2\dots\)do Sample from \(\pi_{\theta_{t}}\) & store in buffer \(\mathcal{B}_{\hat{d},\pi_{t}}\) # policy improvement Update \(\theta_{t+1}\) with Eq. 9 # acceleration Compute \(\pi_{t+2}\propto\pi_{\theta_{t+1}}\exp\alpha Q_{t+1}\) or \(\pi_{\theta_{t+2}}=\arg\max_{\theta}\ell(\pi_{\theta},Q_{t+1})\) with \(\ell\) from Eq. 9 using samples from \(\mathcal{B}_{\hat{d},\pi_{t+1}}\) Update \(\eta_{t+1}\) with Eq. 12 and \(\mathcal{B}_{\hat{d},\pi_{t+1}}\) endfor ``` **Algorithm 2** Accelerated Policy Gradients (in practice)

_Example_ (A non-optimistic algorithm): Under this definition, the standard actor-critic algorithm uses \(U_{\eta}\doteq Q_{w}\) and updates \(w\) with semi-gradient temporal-difference (TD) algorithms toward a target \(Q_{t+1}\), typically bootstrapped from \(Q_{w_{t}}\). Let \(\zeta\) be a step size. The TD objective, denoted \(w_{t+1}\doteq\arg\min_{w\in\mathbb{R}^{m}}f_{t}(w,Q_{t+1})\), is a surrogate objective for value error (Patterson et al., 2021) \[f_{t}(w,Q_{t+1})\doteq\mathbb{E}_{\mathcal{B}_{\hat{d},\pi_{t}}}[\nicefrac{{1}}{{2}}\big{(}[Q_{t+1}](S,A)-Q_{w}(S,A)\big{)}^{2}]+\nicefrac{{1}}{{2\zeta}}\|w-w_{t}\|_{2}^{2} \tag{10}\]

**Acceleration** We replace the standard policy update with an optimistic decision-aware update, retrospectively updated using the objective \[f_{t}(\eta,Q_{t+1})\doteq\beta\ell_{t+1}(\pi_{t+2},Q_{t+1})+\mu[\ell_{t}(\varphi(\eta,\pi_{\theta_{t}}),U_{\eta})-\beta\ell_{t}(\pi_{\theta_{t+1}},Q_{t})] \tag{11}\] where we used the same notation as before, \(\pi_{\theta_{t+1}}\doteq\varphi(\eta_{t-1},\pi_{\theta_{t}})\), to denote that the policy improvement step uses a parametrized recursive algorithm with parameters \(\eta_{t-1}\), and \(\ell\) defined in Eq. 9. The optimal solution to \(\ell_{t}(\varphi(\eta_{t-1},\pi_{\theta_{t}}))\) is \(\pi_{\theta_{t+1}}=\mathcal{P}_{\Pi_{\Theta}}\pi_{t+1}\), with \(\pi_{t+1}\doteq\arg\max_{\pi}\ell_{t}(\pi,U_{\eta_{t-1}})\propto\pi_{\theta_{t}}\exp(\alpha U_{\eta_{t-1}})\), and the optimal next-iteration target is \(\pi_{t+2}\doteq\arg\max_{\pi}\ell_{t+1}(\pi,Q_{t+1})\propto\pi_{\theta_{t+1}}\exp(\alpha Q_{t+1})\), which we may also approximate using Eq. 8. Since \(\ell_{t}(\pi_{\theta_{t+1}},U_{\eta_{t-1}})=\operatorname{KL}(\pi_{\theta_{t}},\pi_{\theta_{t+1}})\) and \(\ell_{t+1}(\pi_{t+2},\beta Q_{t+1})-\ell_{t}(\pi_{\theta_{t+1}},\beta Q_{t})=\operatorname{KL}(\pi_{\theta_{t+1}},\pi_{t+2})\), the objective in Eq. 11 is captured in the left-hand side of the generalized Pythagorean theorem \[\operatorname{KL}(\pi_{\theta_{t}},\pi_{\theta_{t+1}})+\operatorname{KL}(\pi_{\theta_{t+1}},\pi_{t+2})=\operatorname{KL}(\pi_{\theta_{t}},\pi_{t+2})+\langle\nabla_{\pi}\operatorname{KL}(\pi,\pi_{t+2})|_{\pi=\pi_{\theta_{t+1}}},\pi_{\theta_{t}}-\pi_{\theta_{t+1}}\rangle\] By the cosine law, if \(\langle\nabla_{\pi}\operatorname{KL}(\pi,\pi_{t+2})|_{\pi=\pi_{\theta_{t+1}}},\pi_{\theta_{t}}-\pi_{\theta_{t+1}}\rangle\geq 0\), then \(\operatorname{KL}(\pi_{\theta_{t}},\pi_{t+2})\geq\operatorname{KL}(\pi_{\theta_{t}},\pi_{\theta_{t+1}})+\operatorname{KL}(\pi_{\theta_{t+1}},\pi_{t+2})\), and \(\pi_{\theta_{t+1}}\) was not the optimal projection of \(\pi_{t+2}\), i.e. \(\pi_{\theta_{t+1}}\neq\arg\min_{\pi\in\Pi}\operatorname{KL}(\pi,\pi_{t+2})\). Therefore, we can move \(\eta\) to reduce \(\operatorname{KL}(\pi_{\theta_{t}},\pi_{t+2})\) by minimizing \(\langle\nabla_{\pi}\operatorname{KL}(\pi,\pi_{t+2})|_{\pi=\pi_{\theta_{t+1}}},\pi_{\theta_{t}}-\varphi(\eta,\pi_{\theta_{t}})\rangle\). We use a first-order method, which linearizes this objective in the space of parameters \(\eta\) \[f_{t}(\eta,Q_{t+1})\doteq\langle\eta,\nabla_{\eta}\varphi(\eta_{t},\pi_{\theta_{t}})^{\top}\nabla_{\pi}\operatorname{KL}(\pi_{\theta_{t+1}},\pi_{t+2})\rangle+\nicefrac{{1}}{{2\zeta}}\|\eta-\eta_{t}\|_{2}^{2} \tag{12}\] where \(\pi_{t+2}\) depends on \(Q_{t+1}\). Alg. 2 summarizes the procedure. Next, we empirically study: (i) the effect of grounded meta-optimization targets \(Q_{t+1}\) based on true optimistic predictions \(Q_{t+1}\doteq Q_{\pi_{\theta_{t+1}}}\), and (ii) using self-supervised, inaccurate predictions--obtained with another estimator, \(Q_{t+1}=Q_{w_{t+1}}\), with \(w\) learned separately with TD. We leave to future work the exploration of other ways of adding partial feedback to ground the bootstrap targets.

#### 3.2.1 Illustrative empirical analysis

In this section, we investigate acceleration using optimism for online policy optimization, in an illustrative task. We mentioned that one option for computing optimistic predictions \(\{Q_{t}\}_{t\geq 0}\) is using a model or simulator. Consequently, in Sec. 3.2.2, we begin with a brief study on the effect of the lookahead horizon on the optimistic step, in order to understand the acceleration properties of multi-step Tree Search algorithms, and to distinguish between two notions of optimism. Thereafter, in Sec. 3.2.3, we consider the accelerated policy gradient algorithm we designed in Sec. 3.2 (summarized in Alg. 2), and investigate emerging properties for several choices of policy targets \(\pi_{t+2}\) obtained with optimistic predictions \(\{Q_{t+1}\}_{t\geq 0}\).

**Experimental setup** In both experiments, we use the discrete navigation task from Sutton and Barto (2018), illustrated in Fig. 3 (details in Appendix C).

#### 3.2.2 Optimism with multi-step forward search

For this illustration, we use exact simulators. The only source of inaccuracy stems from the depth truncation from using a fixed lookahead horizon \(h\). We use this experiment to show the difference between: (i) optimism within the local policy evaluation problem (\(U_{t}\doteq\mathcal{T}^{h}_{\pi_{t}}Q_{t}\)), and (ii) optimism within the global maximization problem (\(U_{t}\doteq\mathcal{T}^{h}Q_{t}\)).

**Algorithms** We consider an online AC algorithm, with forward planning up to horizon \(h\) for computing the search values \(U_{t}\doteq\mathcal{T}^{h}_{\pi_{t}}Q_{w_{t}}\), bootstrapping at the leaves on \(Q_{w_{t}}\), trained using Eq. 10, and \(\pi_{b}\), a tree-search policy.
We optimize the policy \(\pi_{\theta}\) online, using \(h=1\) gradient steps on Eq. 9: \(\theta_{t+1}=\theta_{t}+\beta\nabla_{\theta}\log\pi_{\theta_{t}}(A|S)\mathrm{A}_{t}(S,A)\), with actions sampled online and \(\mathrm{A}_{t}\doteq U_{t}-V_{t}\) the \(h\)-step advantage, where \(V_{t}(S)\doteq\mathbb{E}_{\pi_{b}(\cdot|S)}[U_{t}(S,A)]\).

**Relationship between acceleration and optimistic lookahead horizon** We use a multi-step operator in the optimistic step, which executes Tree-Search up to horizon \(h\). For the tree policy \(\pi_{b}\), we distinguish between: (i) extra policy evaluation steps with the previous policy, \(U_{t}\doteq\mathcal{T}^{h}_{\pi_{t}}Q_{w_{t}}\) (Fig. 1(a)), and (ii) extra greedification steps, \(U_{t}\doteq\mathcal{T}^{h}Q_{w_{t}}\) (Fig. 1(b)). The policy is trained online with \(\theta_{t+1}=\theta_{t}+\xi y_{t}\), s.t. \(\mathbb{E}_{\hat{d},\pi_{t}}[y_{t}]\doteq\mathbb{E}_{\mathcal{B}_{\hat{d},\pi_{t}}}[\mathrm{A}_{t}(S,A)\nabla_{\theta}\log\pi_{\theta_{t}}(A|S)]\), with \(\mathrm{A}_{t}=U_{t}-V_{t}\), where \(V_{t}(S)\doteq\mathbb{E}_{\pi_{t}(\cdot|S)}[U_{t}(S,A)]\). The advantage function \(\mathrm{A}_{t}\) uses search values \(U_{t}\), and critic parameters \(w\) trained online with Eq. 10 from targets based on the search values, \(r(S,A)+\gamma U_{t}(S^{\prime},A^{\prime})\).

**Results & observations** Fig. 1(c) shows the difference between optimistic improvement--the gradient prediction has foresight of future policies on the optimization landscape--and optimistic evaluation--the gradient prediction is a refinement of the previous gradient prediction toward the optimal solution of the local policy improvement sub-problem. As Fig. 1(a) depicts, more lookahead steps with optimistic evaluation can significantly improve inaccurate gradients, where accuracy is quantified by the choice of \(\zeta\), the Q-fn step size for \(w\). Thus, for \(\pi_{b}\doteq\pi_{t}\), increasing \(h\to\infty\) takes the optimistic step with the exact (functional) policy gradient of the previous policy, \(U_{t}=\mathcal{T}^{h}_{\pi_{b}}Q_{w_{t}}=\mathcal{T}^{h}_{\pi_{t}}Q_{w_{t}}\stackrel{{ h\to\infty}}{{\longrightarrow}}Q_{\pi_{t}}\). As Fig. 1(b) shows, the optimal horizon value for optimistic improvement is another: one that trades off the computational advantage of extra depth of search against accumulating errors, resulting from depth truncation and bootstrapping on inaccurate values at the leaves, further magnified by greedification.

#### 3.2.3 Accelerated policy optimization with optimistic policy gradients

We now empirically analyze some of the properties of the practical meta-gradient based adaptive optimistic policy gradient algorithm we designed in Sec. 3.2 (Alg. 2).

**(i) Acceleration with optimistic policy gradients** We first remove any confounding factors arising from tracking inaccurate target policies \(\pi_{t+2}\) in Eq. 12, and resort to using the true gradients of the post-update performance of \(\pi_{\theta_{t+1}}\), \(Q_{t+1}\doteq Q_{\pi_{\theta_{t+1}}}\), but distinguish between two kinds of lookahead steps: (a) _parametric_, or (b) _geometric_. This difference is indicative of the farsightedness of the optimistic prediction. In particular, this distinction is based on the policy class of the target, whether it be a (a) parametric policy target \(\pi_{\theta_{t+2}}\), obtained using \(h\) steps on Eq.
8, with \(y_{t+1}\doteq\nabla_{\theta}\ell_{t}(\pi_{\theta_{t+1}},Q_{t+1}),\forall k\geq h\), or (b) a non-parametric policy target, \(\pi_{t+2}\propto\pi_{\theta_{t+1}}\exp\alpha Q_{t+1}\). The results shown are for \(h=1\), and \(\alpha=1\).

**Results & observations** When the meta-optimization uses an adaptive optimizer (Adam (Kingma and Ba, 2015)), Fig. 2(a) shows there is acceleration when using targets \(\pi_{t+2}\) one step ahead of the learner, parametric or geometric. The large gap in performance between the two optimistic updates owes to the fact that target policies that are one geometric step ahead correspond to steepest directions of ascent, and consequently may be further ahead of the policy learner in the space of parameters, leading to acceleration. Additional results illustrating sensitivity curves to hyperparameters are added in Appendix C. When the meta-optimization uses SGD, the performance of the meta-learner algorithms is slower, lagging behind the PG baseline, but the ordering over the optimistic variants is maintained (Fig 5(a) in Appendix C), which indicates that the correlation between acceleration and how far ahead the targets are on the optimization landscape is independent of the choice of meta-optimizer.

Figure 1: **Optimism with extra steps of forward search with a simulator. x-axis: lookahead horizon \(h\); y-axis: total cumulative regret \(\sum_{k\leq t}J(\pi^{*})-J(\pi_{k})\). Lookahead targets \(r(S,A)+\gamma U_{t}(S^{\prime},A^{\prime})\) are used for: (a) Optimistic evaluation \(U_{t}\doteq\mathcal{T}^{h}_{\pi_{t}}Q_{w_{t}}\). Increasing the optimistic lookahead horizon \(h\) helps, and a horizon \(h=0\) is worst. Colored curves denote the step size \(\zeta\) used to learn the parameter vector \(w\) of \(Q_{w}\) with online TD(0). The step size controls the quality of the gradient via the accuracy of the search values \(U_{t}\) (more details in the main text). Shades denote confidence intervals over \(10\) runs. (b) Optimistic improvement \(U_{t}\doteq\mathcal{T}^{h}Q_{w_{t}}\). Intermediate values of the optimistic lookahead horizon \(h\) trade off accumulating errors for shorter horizons. (c) Comparison between the two notions of optimism: local—evaluation within the current prediction problem, and global—improvement within the optimization problem, for two step sizes.**

_(ii) How target accuracy impacts acceleration_ Next, we relax the setup from the previous experiment, and use inaccurate predictions \(Q_{t+1}\approx Q_{\pi_{\theta_{t+1}}}\), instead of the true post-update gradients. In particular, we resort to online sampling under the empirical on-policy distribution \(\hat{d}\), and use a standard Q-fn estimator to track the action-value of the most recent policy, \(Q_{w_{t+1}}\approx Q_{\pi_{\theta_{t+1}}}\), using Eq. 10, with TD(0): \(w_{t+1}=w_{t}+\zeta[r(S,A)+\gamma Q_{w_{t}}(S^{\prime},A^{\prime})-Q_{w_{t}}(S,A)]\nabla_{w}Q_{w_{t}}(S,A)\), with step size \(\zeta\). With respect to the policy class of the targets, we experiment with the same two choices: (a) _parametric_ \(\pi_{\theta_{t+2}}\), or (b) _non-parametric_ \(\pi_{t+2}\). Targets are ahead of the optimistic learner in _parameter_ steps for the former, and in geometric steps for the latter.

**Results & observations** Even when the target predictions are inaccurate, Fig. 2(b) shows that optimistic policy ascent directions distilled from lookahead targets that use these predictions can still be useful (meta-optimization uses Adam, although promising results for meta-optimization with SGD are also in Appendix C).
Non-parametric targets, ahead in the optimization, show similar potential as when using true optimistic predictions. Fig. 2(c) illustrates that the total cumulative regret (y-axis) stays consistent across different levels of accuracy of the optimistic predictions used by the targets, which is quantified via the Q-fn step sizes (\(\zeta\)), and indicated by different tones for each algorithm. As expected, we observe parametric targets to be less robust to step size choices, compared to non-parametric ones, analogous to the distinct effect of non-covariant gradients vs natural gradients.

## 4 Concluding remarks

We presented a simple template for accelerating policy optimization algorithms, and connected seemingly distinct choices of algorithms: model-based policy optimization algorithms, and optimistic meta-learning. We drew connections to well-known accelerated universal algorithms from convex optimization, and investigated some of the properties of acceleration in policy optimization. We used this interpretation to design an optimistic PG algorithm based on meta-gradient learning, highlighting its features empirically.

Figure 2: **Accelerated policy optimization with optimistic policy gradients. (a)** x-axis: number of steps, y-axis: regret \(J(\pi^{*})-J(\pi_{t})\). Different colored curves denote: standard PG, _optimistic policy gradients (OPG)_ - with parametric target policies, and non-parametric target policies, trained with meta-gradient learning from optimistic predictions using the true post-update gradients. (b) x-axis: number of episodes, y-axis: regret. _Optimistic policy gradients (OPG)_ are meta-learned from inaccurate optimistic predictions using Q-fn estimations. **(c)** Hyper-parameter sensitivity curves. x-axis: meta-learning rate for \(\eta\); y-axis: total cumulative regret \(\sum_{k\leq t}J(\pi^{*})-J(\pi_{k})\). The plot shows _optimistic policy gradients_ meta-learned from inaccurate optimistic predictions. Different tones depict different accuracies of the optimistic prediction, indirectly quantified via the optimistic Q-fn's step size. Straight lines show a baseline standard AC. Shades denote confidence intervals over \(10\) runs.

**Related work** We defer an extensive discussion of related work to the appendix. The closest in spirit to the acceleration paradigm we propose is the predictor-corrector framework, used also by Cheng et al. (2018). The most similar optimistic algorithm for policy optimization is AAPI (Hao et al., 2020). Both analyze optimism from a smooth optimization perspective, whereas we focus the analysis on Bellman operators, PI-like algorithms, and optimistic update rules, thus allowing the unification. We extend the empirical analysis of Flennerhag et al. (2021), who only focused on meta-learning hyperparameters of the policy gradient, and used an optimistic update rule in parameter space, which is less principled and lacks guarantees. Other meta-gradient algorithms (Sunge et al., 2017; Wang et al., 2019; Chebotar et al., 2019; Xu et al., 2020) pursue more empirical investigations. We focused on understanding the core principles common across methods, valuable in designing new algorithms in this space, optimistic in spirit.

**Future work** We left many questions unanswered: theoretical properties, and conditions for guaranteed accelerated convergence. The scope of our experiments stops before function approximation, or bootstrapping the meta-optimization on itself.
Conceptually, the idea of optimizing for future performance has applicability in lifelong learning, and adaptivity in non-stationary environments (Flennerhag et al., 2021; Luketina et al., 2022; Chandak et al., 2020).

## Acknowledgements

Veronica Chelu gratefully acknowledges support from FRQNT--Fonds de Recherche du Quebec, Nature et Technologies, and IVADO.

## Appendix A Convergence rates for policy gradient algorithms

## Appendix B Related work

### B.1 Optimism in policy optimization

**Problem formulation** The RL problem consists of finding a policy \(\pi\) maximizing the discounted return--the policy performance objective: \(J(\pi)\equiv\mathbb{E}_{S\sim\rho}[V_{\pi}(S)]=(1-\gamma)\mathbb{E}_{\pi,\rho}\big{[}\sum_{t\geq 0}\gamma^{t}R_{t}\big{]}\), where \(V_{\pi}\in\mathbb{R}^{|\mathcal{S}|}\) is the value function, and \(Q_{\pi}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\) the action-value function of a policy \(\pi\in\Pi=\{\pi\in\mathbb{R}_{+}^{|\mathcal{S}|\times|\mathcal{A}|}|\sum_{a\in\mathcal{A}}\pi(s,a)=1,\forall s\in\mathcal{S}\}\), s.t. \(Q_{\pi}(s,a)\equiv\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}R_{t}|S_{0}=s,A_{0}=a\right]\), and \(V_{\pi}(s)\equiv\mathbb{E}_{\pi}\left[Q_{\pi}(s,A)\right]\).

#### B.1.1 Policy iteration

**Policy iteration** The classic **policy iteration** algorithm repeats consecutive stages of (i) one-step greedy policy improvement w.r.t. a value function estimate \[\pi_{t+1}\in\mathcal{G}(V_{\pi_{t}})=\{\pi:\mathcal{T}_{\pi}V_{\pi_{t}}=\mathcal{T}V_{\pi_{t}}\}\iff\pi_{t+1}=\arg\max_{\pi\in\Pi}\langle\nabla J(\pi_{t}),\pi\rangle=\arg\max_{\pi\in\Pi}\langle Q_{\pi_{t}},\pi\rangle_{d_{\pi_{t}}} \tag{13}\] with \(\mathcal{G}\) the greedy set of \(V_{\pi_{t}}\), followed by (ii) evaluation of the value function w.r.t. the greedy policy \[V_{\pi_{t+1}}=\lim_{h\to\infty}\mathcal{T}_{\pi_{t+1}}^{h}V_{\pi_{t}}\text{ or }Q_{\pi_{t+1}}=\lim_{h\to\infty}\mathcal{T}_{\pi_{t+1}}^{h}Q_{\pi_{t}} \tag{14}\]

**Approximate policy iteration** Approximations of either step lead to approximate PI (API) (Scherrer et al., 2015), in which we replace the two steps above with \[\pi_{t+1}\in\mathcal{G}(V_{\pi_{t}})=\{\pi:\mathcal{T}_{\pi}V_{\pi_{t}}\geq\mathcal{T}V_{\pi_{t}}-\epsilon_{t+1}\} \tag{15}\] with \(\epsilon_{t+1}\) a greedification and/or value approximation error.

**Soft policy iteration** Relaxing the greedification leads to **soft policy iteration**, or conservative policy iteration (Kakade and Langford, 2002), called Frank-Wolfe by Bhandari and Russo (2021). The maximization problem decouples across states to optimize a linear objective over the probability simplex \[\pi_{t+1}=(1-\alpha)\pi_{t}+\alpha\pi_{t+1}^{+}\text{ with }\pi_{t+1}^{+}=\arg\max_{\pi\in\Pi}\langle Q_{\pi_{t}},\pi\rangle_{d_{\pi_{t}}} \tag{16}\] for \(\alpha\in[0,1]\), a (possibly time-dependent) step size, and \(\langle\cdot,\cdot\rangle_{d}\) a state weighting that places weight \(d(s)\) on any state-action pair \((s,a)\).

**Optimistic policy iteration (OPI)** (Bertsekas and Tsitsiklis, 1996) relaxes the evaluation step instead to \[Q_{t+1}=(1-\lambda)Q_{t}+\lambda Q_{t+1}^{+}\text{, with }Q_{t+1}^{+}=\mathcal{T}_{\pi_{t+1}}^{h}Q_{t},\forall h\geq 0 \tag{17}\] with \(\lambda\in[0,1]\).
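For reference, here is a compact tabular sketch of exact PI (Eqs. 13-14), assuming known rewards `r[s, a]` and dynamics `P[s, a, s']` (our illustration); the evaluation step solves the linear system directly instead of taking the limit of repeated backups.

```python
import jax.numpy as jnp

def policy_eval(pi, r, P, gamma):
    # Exact evaluation, Eq. (14): solve (I - gamma * P_pi) v = r_pi, then form Q_pi
    S, A = r.shape
    r_pi = jnp.sum(pi * r, axis=1)                   # r_pi(s) = E_pi[r(s, A)]
    P_pi = jnp.einsum('sa,sat->st', pi, P)           # P_pi(s, s')
    v = jnp.linalg.solve(jnp.eye(S) - gamma * P_pi, r_pi)
    return r + gamma * P @ v                         # Q_pi(s, a)

def policy_iteration(r, P, gamma=0.99, iters=50):
    S, A = r.shape
    pi = jnp.ones((S, A)) / A
    for _ in range(iters):
        q = policy_eval(pi, r, P, gamma)
        pi = jnp.eye(A)[jnp.argmax(q, axis=1)]       # greedy improvement, Eq. (13)
    return pi
```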
#### B.1.2 Policy gradients

**Projected Gradient Descent** Starting with some policy \(\pi\in\Pi\), an iteration of projected gradient ascent with step size \(\alpha\) updates to the solution of the regularized problem \[\pi_{t+1} =\arg\max_{\pi}\langle\nabla J(\pi_{t}),\pi\rangle-\frac{1}{\alpha}\sum_{s\in\mathcal{S}}d_{\pi_{t}}(s)\sum_{a\in\mathcal{A}}(\pi(a|s)-\pi_{t}(a|s))^{2} \tag{18}\] \[=\arg\max_{\pi}\langle Q_{\pi_{t}},\pi\rangle_{d_{\pi_{t}}}-\frac{1}{\alpha}\|\pi-\pi_{t}\|_{2,d_{\pi_{t}}}^{2} \tag{19}\] which is based on a first-order Taylor expansion of \(J\) w.r.t. the policy's functional representation \(\pi\) (see Bhandari and Russo (2021, 2019)) \[J(\pi^{\prime}) =J(\pi)+\langle\nabla J(\pi),\pi^{\prime}-\pi\rangle-\mathcal{O}(\|\pi^{\prime}-\pi\|^{2}) \tag{20}\] \[=J(\pi)+\langle Q_{\pi},\pi^{\prime}-\pi\rangle_{d_{\pi}}-\mathcal{O}(\|\pi^{\prime}-\pi\|^{2}) \tag{21}\] With per-state decoupling, for specific values of \(\alpha\) this yields a per-state projection on the decoupled probability simplex \[\pi_{t+1}=\mathcal{P}_{[d_{\pi_{t}}]}^{\Pi}\pi_{t+2}=\arg\min_{\pi\in\Pi}\|\pi-\pi_{t+2}\|_{2,d_{\pi_{t}}}^{2}\text{ with }\pi_{t+2}=\pi_{t}+\alpha Q_{\pi_{t}} \tag{22}\] with \(\|\cdot\|_{2,d_{\pi_{t}}}^{2}\) the weighted \(L_{2}\)-norm.

**Mirror descent (MD)** Mirror descent adapts to the geometry of the probability simplex by using a non-Euclidean regularizer. The specific regularizer used in RL is the entropy function \(H(\pi)\equiv\pi\log\pi\), such that the resulting mirror map is the \(\log\) function. The regularizer decouples across the state space and captures the curvature induced by the constraint of policies lying on the policy simplex via the softmax policy transform. Starting with some policy \(\pi_{t}\in\Pi_{\Theta}\), an iteration of mirror descent with step size \(\alpha\) updates to the solution of a regularized problem \[\pi_{t+1} =\arg\max_{\pi\in\Pi}\langle\nabla J(\pi_{t}),\pi\rangle-\frac{1}{\alpha}\sum_{s\in\mathcal{S}}d_{\pi_{t}}(s)\operatorname{KL}(\pi(s),\pi_{t}(s)) \tag{23}\] \[=\arg\max_{\pi\in\Pi}\langle Q_{\pi_{t}},\pi\rangle_{d_{\pi_{t}}}-\frac{1}{\alpha}\operatorname{KL}_{[d_{\pi_{t}}]}(\pi,\pi_{t}) \tag{24}\] which is known to be the exponentiated gradient ascent update \(\pi_{t+1}=\frac{\pi_{t}\exp\alpha Q_{\pi_{t}}}{\sum_{a}\pi_{t}(a|\cdot)\exp\alpha Q_{\pi_{t}}(\cdot,a)}\) (obtained using the Lagrange approach, see Bubeck (2015)). Using state decoupling, for specific values of \(\alpha\) we may also write MD as a projection using the corresponding Bregman divergence for the mirror map \(\nabla_{\pi}H(\pi)\) (cf. Bubeck (2015)) \[\pi_{t+1} =\mathcal{P}_{[d_{\pi_{t}}]}^{\Pi,H}\pi_{t+2}=\arg\min_{\pi\in\Pi}\operatorname{KL}_{[d_{\pi_{t}}]}(\pi,\pi_{t+2})\text{ with} \tag{25}\] \[\log\pi_{t+2} =\log\pi_{t}+\alpha Q_{\pi_{t}}-\log\sum_{a}\pi_{t}(a|\cdot)\exp\alpha Q_{\pi_{t}}(\cdot,a) \tag{26}\]

**Policy parametrization** For parametric policy classes, the search written over policies translates into similar versions of the linear objective, except over policy parameters. Since the class of softmax policies can approximate stochastic policies to arbitrary precision, this is nearly (we can only come infinitesimally close to an optimal policy) the same as optimizing over the class \(\Pi\).
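Before moving on, here is a minimal sketch (our illustration) of the exponentiated-gradient form of the MD update in Eqs. (23)-(26), applied per state to tabular arrays `pi` and `q` of shape `[S, A]`, computed in log space for numerical stability.

```python
import jax, jax.numpy as jnp

def md_step(pi, q, alpha=1.0):
    # pi_{t+1} proportional to pi_t * exp(alpha * Q_{pi_t}), per state (Eqs. 23-26)
    logits = jnp.log(pi) + alpha * q
    logits = logits - jax.scipy.special.logsumexp(logits, axis=-1, keepdims=True)
    return jnp.exp(logits)
```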
**Natural policy gradients (NPG)** The natural policy gradient (NPG) of Kakade (2001) applied to the softmax parameterization is actually an instance of mirror descent with the entropy-based regularizer \(H\). Natural policy gradient is usually described as steepest descent in a variable metric defined by the Fisher information matrix induced by the current policy (Kakade, 2001; Agarwal et al., 2019) \[\theta_{t+1} =\theta_{t}+\alpha\mathbf{F}_{\rho}(\theta_{t})^{\dagger}\nabla_{\theta_{t}}J(\pi_{\theta_{t}}) \tag{27}\] \[\mathbf{F}_{\rho}(\theta_{t}) =\mathbb{E}_{S\sim d_{\pi_{\theta_{t}}},A\sim\pi_{\theta_{t}}}\left[\nabla_{\theta_{t}}\log\pi_{\theta_{t}}\nabla_{\theta_{t}}\log\pi_{\theta_{t}}^{\top}\right] \tag{28}\] and is equivalent to mirror descent under some conditions (Raskutti and Mukherjee, 2014). Cf. Bhandari and Russo (2021); Li et al. (2021), the aforementioned base MD and NPG updates are closely related to the practical instantiations in TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), MPO (Abdolmaleki et al., 2018), and MDPO (Tomar et al., 2020). All these algorithmic instantiations use approximations for the gradient direction.

#### B.1.3 Actor-critic methods

Generally, in RL, an agent only has access to partial evaluations of the gradient \(\nabla_{\pi}J(\pi)\), and commonly these involve some sort of internal representation of the action-value function \(Q_{t}\approx Q_{\pi_{t}}\).

**Natural actor-critic: MD with an estimated critic** Consider a parameterized softmax policy class \(\pi_{\theta}\in\Pi_{\Theta}\), with parameter vector \(\theta\), and a critic \(Q_{\eta}\in\mathcal{F}_{\eta}\), with parameter vector \(\eta\). For the softmax policy class, \(\pi_{\theta}\propto\exp f_{\theta}\), with \(f_{\theta}\) a differentiable logit function: either tabular (\(f_{\theta}(s,a)=\theta_{s,a}\)), log-linear (\(f_{\theta}(s,a)=\phi(s,a)^{\top}\theta\), with \(\phi\) a feature representation), or neural (\(f_{\theta}\) a neural network) parametrizations (Agarwal et al., 2019). Written as a proximal policy improvement operator, at iteration \(t\), starting with some policy \(\pi_{t}\equiv\pi_{\theta_{t}}\), the next policy is the solution to the regularized optimization problem \[\pi_{\theta_{t+1}}=\arg\max_{\pi_{\theta}\in\Pi_{\Theta}}\langle Q_{w_{t}},\pi_{\theta}\rangle_{d_{\pi_{t}}}-\frac{1}{\alpha}\operatorname{KL}_{[d_{\pi_{t}}]}(\pi_{\theta},\pi_{t}) \tag{29}\] with \(\alpha\) a (possibly time-dependent) step size, and \(\langle\cdot,\cdot\rangle_{d}\), \(\operatorname{KL}_{[d]}(\cdot,\cdot)\) indicating an additional state-weighting per state-action pair.
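The sketch below illustrates a sample-based form of Eqs. (27)-(28), under assumptions of ours: `theta` is a flat parameter vector, `states` and `actions` are arrays of sampled indices, `logp_fn(theta, s)` is a hypothetical function returning the vector of action log-probabilities at state `s`, and the advantage estimates `adv` are taken as given.

```python
import jax, jax.numpy as jnp

def npg_step(theta, states, actions, adv, logp_fn, alpha=0.1, damping=1e-3):
    # theta <- theta + alpha * F^+ g, with F the empirical Fisher matrix (Eq. 28)
    def logp(th, s, a):
        return logp_fn(th, s)[a]
    score = jax.vmap(lambda s, a: jax.grad(logp)(theta, s, a))(states, actions)
    g = jnp.mean(adv[:, None] * score, axis=0)                   # gradient estimate
    F = jnp.mean(score[:, :, None] * score[:, None, :], axis=0)  # Fisher estimate
    # a damped linear solve stands in for the pseudo-inverse in Eq. (27)
    return theta + alpha * jnp.linalg.solve(F + damping * jnp.eye(F.shape[0]), g)
```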
Using the connection between the NPG update rule and the notion of compatible function approximation (Sutton et al., 1999), as formalized in (Kakade, 2001), we may try to approximate the functional gradient using \(w\) \[\mathbf{F}_{\rho}(\theta)^{\dagger}\nabla_{\theta}J(\pi_{\theta})=\frac{w}{1-\gamma} \tag{30}\] where \(w\) are the parameters of an advantage function \(A_{w}\)--the solution to the projection of \(A_{\pi_{\theta}}\) onto the dual gradient space of \(\pi\), the space spanned by the particular feature representation that uses \(\phi_{t}\equiv\nabla_{\theta}\log\pi_{\theta_{t}}\) as (centered) features \[w_{t}=\arg\min_{w}\mathbb{E}_{S\sim d_{\pi_{\theta_{t}}},A\sim\pi_{\theta_{t}}}[(w^{\top}\phi_{t}(S,A)-\mathrm{A}_{\pi_{\theta_{t}}}(S,A))^{2}] \tag{31}\] Similarly, there is an equivalent version for Q-NPG considering possibly (un-centered) features (\(\phi_{s,a}\), for \(f_{\theta}(s,a)=\phi_{s,a}^{\top}\theta\)) and projecting \[w_{t}=\arg\min_{w}\mathbb{E}_{S\sim d_{\pi_{\theta_{t}}},A\sim\pi_{\theta_{t}}}[(w^{\top}\phi_{t}(S,A)-Q_{\pi_{\theta_{t}}}(S,A))^{2}] \tag{32}\] For both of them we can now replace the NPG parameter update with \[\theta_{t+1}=\theta_{t}+\alpha w_{t} \tag{33}\]

#### B.1.4 Forward search

**Multi-step policy iteration** The single-step policy improvement used in the aforementioned algorithms, e.g., policy iteration, approximate PI, actor-critic methods, and their practical algorithmic implementations, is not necessarily the optimal choice. It has been empirically demonstrated in RL algorithms based on Monte-Carlo Tree Search (MCTS) (Browne et al., 2012) (e.g., Schrittwieser et al., 2019; Schmidhuber, 1987) or Model Predictive Control (MPC) that multiple-step greedy policies can perform conspicuously better. Generalizations of the single-step greedy policy improvement include (i) \(h\)-step greedy policies, and (ii) \(\kappa\)-greedy policies. The former output the first optimal action out of a sequence of actions, solving a non-stationary \(h\)-horizon control problem: \[\pi(s)\in\arg\max_{\pi_{0}}\max_{\pi_{1},\ldots\pi_{h-1}}\mathbb{E}^{\pi_{0},\ldots\pi_{h-1}}\left[\sum_{t=0}^{h-1}\gamma^{t}r(S_{t},\pi_{t}(S_{t}))+\gamma^{h}V(S_{h})|S_{0}=s\right] \tag{34}\] equivalently described in operator notation as \(\pi\in\mathcal{G}(\mathcal{T}^{h-1}V)\equiv\{\pi|\mathcal{T}_{\pi}\mathcal{T}^{h-1}V\geq\mathcal{T}^{h}V\}\). A \(\kappa\)-greedy policy interpolates over all geometrically \(\kappa\)-weighted \(h\)-greedy policies, \(\pi\in\mathcal{G}(\mathcal{T}^{\kappa}V)\equiv\{\pi|\mathcal{T}_{\pi}^{\kappa}V\geq\mathcal{T}^{\kappa}V,\ \mathcal{T}_{\pi}^{\kappa}\equiv(1-\kappa)\sum_{h=0}^{\infty}\kappa^{h}\mathcal{T}_{\pi}^{h+1}\}\).

**Multi-step soft policy iteration** Efroni et al. (2018) show that when using soft updates with \(h>1\) \[\pi_{t+1}=(1-\alpha)\pi_{t}+\alpha\pi_{t+1}^{+},\quad\pi_{t+1}^{+}\in\mathcal{G}(\mathcal{T}^{h-1}V)\equiv\{\pi|\mathcal{T}_{\pi}\mathcal{T}^{h-1}V\geq\mathcal{T}^{h}V\} \tag{35}\] policy improvement is guaranteed only for \(\alpha=1\), and when using \[\pi_{t+1}=(1-\alpha)\pi_{t}+\alpha\pi_{t+1}^{+},\quad\pi_{t+1}^{+}\in\mathcal{G}(\mathcal{T}^{\kappa}V)\equiv\{\pi|\mathcal{T}_{\pi}^{\kappa}V\geq\mathcal{T}^{\kappa}V,\ \mathcal{T}_{\pi}^{\kappa}\equiv(1-\kappa)\sum_{h=0}^{\infty}\kappa^{h}\mathcal{T}_{\pi}^{h+1}\} \tag{36}\] policy improvement is guaranteed only for \(\alpha\in[\kappa,1]\). This result appears in Efroni et al. (2018), and a more general version in Konda and Borkar (1999).
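A truncated-series sketch of the \(\kappa\)-weighted operator above (our illustration): `T_pi` is any single-step evaluation backup `q -> T_pi(q)`, e.g. a tabular model backup, and the infinite geometric sum is cut off at `h_max`.

```python
def kappa_backup(q, T_pi, kappa=0.5, h_max=30):
    # T_pi^kappa q = (1 - kappa) * sum_{h >= 0} kappa^h * T_pi^{h+1} q, cf. Eq. (36)
    out, q_h, w = 0.0, T_pi(q), 1.0 - kappa
    for _ in range(h_max):
        out = out + w * q_h    # add the (1 - kappa) * kappa^h weighted term
        q_h = T_pi(q_h)        # advance to T_pi^{h+2} q
        w = w * kappa
    return out
```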
**Tree search** Notable examples of practical algorithms with empirical success that perform multi-step greedy policy improvement are AlphaGo and AlphaGo Zero (Silver et al., 2016, 2017), and MuZero (Schrittwieser et al., 2019). There, an approximate online version of multi-step greedy improvement is implemented via Monte Carlo Tree Search (MCTS) (Browne et al., 2012). In particular, Grill et al. (2020) show that the tree search procedure implemented by AlphaZero is an approximation of the regularized optimization problem \[\pi_{\theta_{t+1}}=\arg\max_{\pi_{\theta}\in\Pi_{\Theta}}\langle U_{t},\pi\rangle_{d_{\pi_{t}}}-\frac{1}{\alpha_{t}}\operatorname{KL}_{[d_{\pi_{t}}]}(\pi_{\theta},\pi_{t}) \tag{37}\] with \(U_{t}\) the Tree Search values, i.e., those estimated by the search algorithm that approximates \(\mathcal{T}^{h}Q_{w_{t}}\), where \(w\) are the parameters of the critic, with stochastic sampling of trajectories in the tree up to a horizon \(h\), and bootstrapping on a Q-fn estimator at the leaves. For a full description of the algorithm, refer to Silver et al. (2017). The step size \(\alpha_{t}\) captures the exploration strategy, and decreases the regularization based on the number of simulations.

#### B.1.5 Meta-learning

**Optimistic meta-gradients** Meta-gradient algorithms further relax the optimistic policy improvement step to a parametric update rule \(\pi_{\theta_{t+1}}\equiv\varphi_{\pi_{\theta_{t}}}(\eta_{t})\), e.g., \(\theta_{t+1}=\theta_{t}+u_{\eta_{t}}\), when limited to a functional class of parametric GA update rules \(u_{\eta}\in\mathcal{F}_{\eta}\). These algorithms implement adaptivity in a practical way: they project policy targets \(\pi_{t+2}\) ahead of \(\pi_{\theta_{t+1}}\) \[u_{\eta_{t+1}}=\arg\min_{u_{\eta}\in\mathcal{F}_{\eta}}\operatorname{KL}_{[d_{\pi_{t+1}}]}(\pi_{\theta_{t+1}},\pi_{t+2})\qquad\text{(hindsight adaptation \& projection)} \tag{38}\] The targets can be parametric, \(\pi_{t+2}\equiv\pi_{\theta_{t+2}}\), initialized from \(\theta_{t+1}^{(0)}=\theta_{t+1}\), and evolving for \(h\) steps further ahead of \(\theta_{t+1}\), s.t. \(\theta_{t+1}^{(k+1)}=\theta_{t+1}^{(k)}+g_{t}^{k},\forall k\leq h\), with \(g_{t}^{k}\) representing predictions used by the bootstrapped targets. Alternatively, targets may be non-parametric, e.g., \(\pi_{t+2}\propto\pi_{\theta_{t+1}}\exp(Q_{t+1}-Q_{t})\); e.g., if \(Q_{t+1}=\mathcal{T}_{\pi_{\theta_{t+1}}}Q_{\eta_{t}}\) then \(\pi_{t+2}\propto\pi_{\theta_{t+1}}\exp(\mathcal{T}_{\pi_{\theta_{t+1}}}Q_{\eta_{t}}-Q_{\eta_{t}})\)--capturing the advantage of using the proposal \(\pi_{\theta_{t+1}}\).

#### B.1.6 Optimism in online convex optimization

One way to design and analyze iterative optimization methods is through online linear optimization (OLO) algorithms.

**Online learning** Policy optimization through the lens of online learning (Hazan, 2017) means treating the policy optimization algorithm as the learner in online learning and each intermediate policy that it produces as an online decision. The following steps recast the iterative process of policy optimization into a standard online learning setup: (i) at iteration \(t\) the learner plays a decision \(\pi_{t}\in\Pi\), (ii) the environment responds with feedback on the decision \(\pi_{t}\), and the process repeats. The iteration \(t\) might be different from the timestep of the environment.
Generally, it is assumed that the learner receives an unbiased stochastic approximation as a response, whereas that is not always the case for RL agents, which use bootstrapping in their policy gradient estimation with a learned value function. For an agent it is important to minimize the **regret** after \(T\) iterations \[\mathrm{Reg}_{T}\equiv\sum_{t=0}^{T-1}\left(J(\pi^{*})-J(\pi_{t})\right) \tag{39}\] The goal of **optimistic online learning** algorithms (Rakhlin and Sridharan, 2013, 2014) is to obtain better performance, and thus guaranteed lower regret, when playing against "easy" (i.e., predictable) sequences of online learning problems, where past information can be leveraged to improve on the decision at each iteration.

**Predictability** An important property of the above online learning problems is that they are not completely adversarial. In RL, the policy's true performance objective cannot be truly adversarial, as the same dynamics and cost functions are used across different iterations. In an idealized case where the true dynamics and cost functions are exactly known, using the policy returned from a model-based RL algorithm would incur zero regret, since only the interactions with the real MDP environment, not the model, are considered in the regret minimization problem formulation. The main idea is to use (imperfect) predictive models, such as off-policy gradients and simulated gradients, to improve policy learning.

#### B.1.7 Online learning algorithms

We now summarize two generalizations of the well-known core algorithms of online optimization for predictable sequences, cf. Joulani et al. (2020): (i) a couple of variants of optimistic mirror descent (Chiang et al., 2012; Rakhlin and Sridharan, 2013), including extragradient descent (Korpelevich, 1976) and mirror-prox (Nemirovski, 2004; Juditsky et al., 2011), and (ii) adaptive optimistic follow-the-regularized-leader (AO-FTRL) (Rakhlin and Sridharan, 2013, 2014; Mohri and Yang, 2016).

**Optimistic mirror descent (OMD) and extragradient methods** Starting with some previous iterate \(x_{t}\in\mathcal{X}\), an OMD learner (Joulani et al., 2020) uses a prediction \(\tilde{g}_{t+1}\in\mathcal{X}^{*}\) (\(\mathcal{X}^{*}\) the dual space of \(\mathcal{X}\)) to minimize the regret on its convex loss function \(f:\mathcal{X}\rightarrow\mathbb{R}\) against an optimal comparator \(x^{*}\in\mathcal{X}\) with \[x_{t+1}=\arg\min_{x\in\mathcal{X}}\langle g_{t}+\tilde{g}_{t+1}-\tilde{g}_{t},x\rangle+\mathcal{B}_{\Omega}(x,x_{t}) \tag{40}\] with \(\tilde{g}_{t+1}\approx\nabla f(x_{t+1})\) the optimistic gradient prediction, \(g_{t}\equiv\nabla f(x_{t})\) the true gradient feedback, and \(\mathcal{B}_{\Omega}\) a Bregman divergence with mirror map \(\Omega\). Extragradient methods consider two-step update rules for the same objective using an intermediary sequence \(\tilde{x}\) \[\tilde{x}_{t+1} =\arg\min_{x\in\mathcal{X}}\langle\tilde{g}_{t+1},x\rangle+\mathcal{B}_{\Omega}(x,x_{t}) \tag{41}\] \[x_{t+1} =\arg\min_{x\in\mathcal{X}}\langle g^{+}_{t+1},x\rangle+\mathcal{B}_{\Omega}(x,x_{t}) \tag{42}\] with \(\tilde{g}_{t+1}\approx\nabla f(x_{t+1})\) a gradient prediction, and \(g^{+}_{t+1}\equiv\nabla f(\tilde{x}_{t+1})\) the true gradient direction, but for the intermediary optimistic iterate \(\tilde{x}_{t+1}\).
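With a Euclidean mirror map (\(\Omega=\frac{1}{2}\|\cdot\|_{2}^{2}\), so the Bregman divergence reduces to a squared distance), these updates simplify to the following sketch (our illustration, written in the minimization convention):

```python
def omd_step(x, g_t, g_pred_next, g_pred_t, eta=0.1):
    # Eq. (40): step with the true gradient plus the new prediction,
    # while subtracting the previous prediction
    return x - eta * (g_t + g_pred_next - g_pred_t)

def extragradient_step(x, g_pred_next, grad_f, eta=0.1):
    # Eqs. (41)-(42): half-step with the prediction, then a full step with the
    # true gradient evaluated at the intermediary iterate
    x_half = x - eta * g_pred_next
    return x - eta * grad_f(x_half)
```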
**Adaptive optimistic follow-the-regularized-leader (AO-FTRL)** A learner using AO-FTRL updates \(x\) using \[x_{t+1}=\arg\min_{x\in\mathcal{X}}\langle g_{0:t}+\tilde{g}_{t+1},x\rangle+\omega_{0:t}(x) \tag{43}\] where \(g_{0:t}=\sum_{j=0}^{t}g_{j}\) are true gradients, \(\tilde{g}_{t+1}\) is the optimistic part of the update, a prediction of the gradient before it is received, and \(\omega_{0:t}(x)=\sum_{j=0}^{t}\omega_{j}(x)\) represents the "proximal" part of this adaptive regularization (cf. Joulani et al. (2020)), the counterpart of the Bregman divergence that regularizes iterates to maintain proximity in MD updates.

#### B.1.8 Policy optimization with online learning algorithms

Cheng et al. (2018) follow the extragradient approach for policy optimization \[\pi_{t+2} =\arg\max_{\pi\in\Pi}\langle Q_{t},\pi\rangle-\mathrm{KL}(\pi,\pi_{t}) \tag{44}\] \[\pi_{t+1} =\arg\max_{\pi\in\Pi}\langle Q_{\pi_{t}},\pi\rangle-\mathrm{KL}(\pi,\pi_{t}) \tag{45}\] but change the second sequence to start from the intermediary sequence and add just a correction \[\pi_{t+2} =\arg\max_{\pi\in\Pi}\langle Q_{t},\pi\rangle-\mathrm{KL}(\pi,\pi_{t}) \tag{46}\] \[\pi_{t+1} =\arg\max_{\pi\in\Pi}\langle Q_{\pi_{t}}-Q_{t},\pi\rangle-\mathrm{KL}(\pi,\pi_{t+2}) \tag{47}\] This approach uses \(\pi_{t+2}\) as the optimistic prediction, and \(\pi_{t+1}\) as the hindsight-corrected prediction--a policy optimal in hindsight w.r.t. the average of all previous Q-functions rather than just the most recent one. But it needs an additional model for the value functions \(Q_{t}\), and another learning algorithm to adapt \(Q_{t}\) to \(Q_{t+1}\). Additionally, an agent does not generally have access to \(Q_{\pi_{t}}\), but only partial evaluations. Hao et al. (2020) also design an adaptive optimistic algorithm based on AO-FTRL, which updates \[\pi_{t+1}=\arg\max_{\pi\in\Pi}\left\langle\left(\sum_{j=0}^{t}Q_{j}+\hat{Q}_{t+1}\right),\pi\right\rangle-\alpha_{t}\omega(\pi) \tag{48}\] with \(\omega\) a regularizer, \(Q_{j}\approx Q_{\pi_{j}},\forall j\leq t\) predictions for the true gradients, and \(\hat{Q}_{t+1}\approx Q_{\pi_{t+1}}\) also a prediction for the gradient of the next policy, which uses the previous predictions \(Q_{\pi_{j}},\forall j\leq t\) to compute it. The authors also propose an adaptive method for learning \(\alpha_{t}\) that uses the gradient errors of \(Q_{j},\forall j\leq t\). Averaging value functions has also been explored by Vieillard et al. (2020a;b).

## Appendix C Empirical analysis details

### Algorithms

``` 1:Init: params \(\theta_{0}\), buffer \(\mathcal{B}=[()]\) 2:for\(t\in 0..T\) iterations do 3: Every \(n\) steps using a rollout \(\mathcal{B}\leftarrow(S_{t},A_{t},R_{t},S_{t+1}\dots S_{t+n})\sim\pi_{\theta_{t}}\) 4: Update policy learner \(\pi_{\theta_{t+1}}\) cf. Eq.49 5:endfor ``` **Algorithm 3** Policy gradient

**Policy gradients** Algorithm 3 describes a standard PG algorithm (cf. Williams (1992)) with an expert oracle critic \(Q_{\pi_{\theta}}\), for the policy evaluation of \(\pi_{\theta}\).
The standard policy gradient update is \[\theta_{t+1}=\theta_{t}+\xi\frac{1}{n}\sum_{i=t}^{t+n}\nabla_{\theta_{t}}\log\pi_{\theta_{t}}(A_{i}|S_{i})\left(Q_{\pi_{\theta_{t}}}(S_{i},A_{i})-\mathbb{E}_{\pi_{\theta_{t}}}[Q_{\pi_{\theta_{t}}}(S_{i},\cdot)]\right) \tag{49}\] ``` 1:Init: params \((\theta_{0},w_{0})\), buffer \(\mathcal{B}=[()]\) 2:for\(t\in 0..T\) iterations do 3: Every \(n\) steps using a rollout \(\mathcal{B}\leftarrow(S_{t},A_{t},R_{t},S_{t+1}\dots S_{t+n})\sim\pi_{\theta_{t}}\) 4: Update critic \(Q_{w_{t+1}}\) cf. Eq.51 and policy learner \(\pi_{\theta_{t+1}}\) cf. Eq.50 5:endfor ``` **Algorithm 4** Actor-critic

**Actor-critic** Algorithm 4 describes a standard AC algorithm (cf. Sutton et al. (1999)) with an estimated critic \(Q_{w}\), for the policy evaluation of \(\pi_{\theta}\). The policy updates \[\theta_{t+1}=\theta_{t}+\xi\frac{1}{n}\sum_{i=t}^{t+n}\nabla_{\theta_{t}}\log\pi_{\theta_{t}}(A_{i}|S_{i})\left(Q_{w_{t}}(S_{i},A_{i})-\mathbb{E}_{\pi_{\theta_{t}}}[Q_{w_{t}}(S_{i},\cdot)]\right) \tag{50}\] and the critic's update, using TD(0) learning, writes \[w_{t+1}=w_{t}+\zeta\frac{1}{n}\sum_{i=t}^{t+n}\left(R_{i}+\gamma\mathbb{E}_{\pi_{t}}[Q_{w_{t}}(S_{i+1},\cdot)]-Q_{w_{t}}(S_{i},A_{i})\right)\nabla_{w_{t}}Q_{w_{t}}(S_{i},A_{i}) \tag{51}\] \begin{table} \begin{tabular}{l l} \hline \(t\) & iterations/timesteps \\ \(T\) & number of iterations \\ \(n\) & rollout length \\ \(\mathcal{B}\) & buffer \\ \(\mathcal{M}\) & meta-buffer \\ \(w\) & standard critic (Q-fn \(Q_{w}\)) parameters \\ \(\eta\) & meta parameters of meta-learner (\(u_{\eta}\) or \(U_{\eta}\)) \\ \(\nu\) & step size for meta-learner’s parameters \(\eta\) (\(U_{\eta}\)) \\ \(\zeta\) & step size for standard critic’s parameters \(w\) (Q-fn \(Q_{w}\)) \\ \(\xi\) & step size for the policy learner’s parameters \(\theta\) (\(\pi_{\theta}\)) \\ \(h\) & lookahead horizon \\ \(U\) & search values up to lookahead horizon \(h\) (tree depth) \\ \hline \end{tabular} \end{table} Table 2: Notation

**Forward search with a model** Algorithm 5 describes an AC algorithm with \(h\)-step lookahead search in the gradient critic \[\theta_{t+1}=\theta_{t}+\xi\frac{1}{n}\sum_{i=0}^{n}\nabla_{\theta_{t}}\log\pi_{\theta_{t}}(A_{i}|S_{i})\left(U_{t}(S_{i},A_{i})-\mathbb{E}_{\pi_{t}}[U_{t}(S_{i},\cdot)]\right) \tag{52}\] where \(U_{t}\) is either (i) \(U_{t}=\mathcal{T}_{\pi_{t}}^{h}Q_{w_{t}}\) or (ii) \(U_{t}=\mathcal{T}^{h}Q_{w_{t}}\), depending on the experimental setup, and the critic is updated toward the search Q-values \[w_{t+1}=w_{t}+\zeta\frac{1}{n}\sum_{i=0}^{n}\nabla_{w_{t}}Q_{w_{t}}(S_{i},A_{i})\left(R_{i}+\gamma\mathbb{E}_{\pi_{t}}[U_{t}(S_{i+1},\cdot)]-Q_{w_{t}}(S_{i},A_{i})\right) \tag{53}\] ``` 1:Input: params \((\theta_{0},w_{0})\), buffer \(\mathcal{B}=[()]\) 2:for\(t\in 0..T\) iterations do 3: Every \(n\) steps using a rollout \(\mathcal{B}\leftarrow(S_{t},A_{t},R_{t},S_{t+1}\ldots S_{t+n})\sim\pi_{\theta_{t}}\) 4: Generate search values \(U_{t}\) up to lookahead horizon \(h\) with 5: (i) \(U_{t}=\mathcal{T}_{\pi_{t}}^{h}Q_{w_{t}}\) 6: (ii) \(U_{t}=\mathcal{T}^{h}Q_{w_{t}}\) 7: Update critic \(Q_{w_{t+1}}\) cf. Eq.53 and policy learner \(\pi_{\theta_{t+1}}\) cf. Eq.52, using \(U_{t}\) 8:endfor ``` **Algorithm 6** Optimistic policy gradients with policy targets computed from expert targets
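As a concrete tabular reference for Algorithms 3-4 (our simplification, with one-hot features so that `theta` and `w` are `[S, A]` arrays), the sketch below implements the centered policy gradient step of Eqs. (49)-(50) and the TD(0) critic step of Eq. (51); `rollout` is a list of `(s, a, r, s_next)` transitions.

```python
import jax, jax.numpy as jnp

def pg_step(theta, rollout, q, xi=0.5):
    # Eq. (49): for tabular softmax, grad_{theta[s]} log pi(a|s) = e_a - pi(.|s)
    n = len(rollout)
    for (s, a, _, _) in rollout:
        pi_s = jax.nn.softmax(theta[s])
        adv = q[s, a] - jnp.dot(pi_s, q[s])              # centered critic value
        score = jnp.zeros_like(pi_s).at[a].set(1.0) - pi_s
        theta = theta.at[s].add(xi * adv * score / n)
    return theta

def td0_step(w, rollout, theta, zeta=0.1, gamma=0.99):
    # Eq. (51): expected-SARSA-style TD(0) toward r + gamma * E_pi[Q(s', .)],
    # averaged over the rollout
    n = len(rollout)
    for (s, a, r, s_next) in rollout:
        target = r + gamma * jnp.dot(jax.nn.softmax(theta[s_next]), w[s_next])
        w = w.at[s, a].add(zeta * (target - w[s, a]) / n)
    return w
```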
**Optimistic policy gradients with expert targets** Algorithm 6 describes a meta-gradient based algorithm for learning _optimistic policy gradients_ by supervised learning from policy targets computed with accurate optimistic predictions \(Q_{\pi_{t+1}}\). The meta-update used is \[\theta_{t+1}=\theta_{t}+\xi u_{\eta_{t-1}} \tag{54}\] where \(u_{\eta_{t}}=\frac{1}{n}\sum_{i=0}^{n}\nabla_{\theta_{t}}\log\pi_{\theta_{t}}(A_{i}|S_{i})\left(U_{\eta_{t}}(S_{i},A_{i})-\mathbb{E}_{\pi_{\theta_{t}}}[U_{\eta_{t}}(S_{i},\cdot)]\right)\). The policy targets are (i) _parametric policies_ obtained at iteration \(t\) by starting from the parameters \(\theta_{t+1}\) (\(\theta_{t+2}^{0}=\theta_{t+1}\)) and executing \(h\) parameter updates with data from successive batches of rollouts \(\mathcal{B}_{t+1:t+h}\) sampled from the meta-buffer \(\mathcal{M}\) \[\theta_{t+2}^{j+1}=\theta_{t+2}^{j}+\xi\hat{g}_{t+1}^{j} \tag{55}\] with \(\hat{g}_{t+1}^{j}=\frac{1}{n}\sum_{i=0}^{n}\nabla_{\theta}\log\pi_{\theta_{t+2}^{j}}(A_{i}|S_{i})\left(Q_{\pi_{t+1}}(S_{i},A_{i})-\mathbb{E}_{\pi_{\theta_{t}}}[Q_{\pi_{t+1}}(S_{i},\cdot)]\right)\). After \(h\) steps, the resulting target parameters \(\theta_{t+2}\equiv\theta_{t+2}^{h}\) yield the target policy \(\pi_{\theta_{t+2}}\). The other choice we experiment with is to use a target constructed with (ii) _geometric updates_ for one (or more) steps ahead, similarly to tree-search policy improvement procedures. The targets are initialized with \(\pi_{t+2}^{0}=\pi_{\theta_{t+1}}\) and execute one (or more) steps of policy improvement \[\pi_{t+2}^{j+1}\propto\pi_{t+2}^{j}\exp\alpha Q_{\pi_{t+1}} \tag{56}\] yielding the non-parametric policy target \(\pi_{t+2}\equiv\pi_{t+2}^{j+1}\). Setting \(\alpha\rightarrow\infty\) in Eq. 56, if the predictions are given, or can be computed with the help of the simulator model, we obtain an update similar to the multi-step greedy operator \(\mathcal{T}^{h}\) used in forward search. The next parameter vector \(\eta_{t+1}\) for the gradient \(u_{\eta}\) is distilled via meta-gradient learning by projecting the expert policy target \(\pi_{t+2}\) (or \(\pi_{\theta_{t+2}}\)), using the data samples \(\mathcal{B}_{t},\ldots,\mathcal{B}_{t+h}\) from \(\mathcal{M}\) and the surrogate objective \[\eta_{t+1}=\eta_{t}-\nu\frac{1}{h}\sum_{j=t}^{t+h}\nabla_{\eta_{t}}\operatorname{KL}(\pi_{\theta_{t+1}}(S_{j}),\pi_{t+2}(S_{j})) \tag{57}\] ``` 1:Init: params \((\theta_{0},w_{0},\eta_{0})\), buffer \(\mathcal{B}=[()]\), meta-buffer \(\mathcal{M}=[\mathcal{B},..]\) 2:for\(t\in 0..T\) iterations do 3: Every \(n\) steps using a rollout \(\mathcal{B}_{t}\leftarrow(S_{t},A_{t},R_{t},S_{t+1}\ldots S_{t+n})\sim\pi_{\theta_{t}}\) 4: Predict \(u_{\eta_{t}}\) 5: Update learner with optimistic prediction \(u_{\eta_{t}}\) using Eq. 54 6: Update \(Q_{w_{t+1}}\) cf. Eq.51 7: Every \(h\) steps using experience stored in the meta-buffer \(\mathcal{M}\leftarrow(\mathcal{B}_{t},\ldots\mathcal{B}_{t+h})\) 8: Compute policy targets \(\pi_{\theta_{t+2}}\) cf. Eq. 58 or \(\pi_{t+2}\) cf. Eq. 59 9: Update meta-learner \(u_{\eta_{t+1}}\) cf. Eq. 57 10:endfor ``` **Algorithm 7** Optimistic policy gradients with target predictions

**Optimistic policy gradients with target predictions** Algorithm 7 describes a meta-gradient based algorithm for learning _optimistic policy gradients_ by self-supervision from target predictions (learned estimators). The targets we use are (i) _parametric_, computed at iteration \(t\) similarly to the previous paragraph (Eq.
55), except that we now replace the true optimistic predictions \(Q_{\pi_{t+j}}\) with \(Q_{w_{t+j}},\forall j\geq 1\) \[\hat{g}_{t+1}^{j}=\frac{1}{n}\sum_{i=0}^{n}\nabla_{\theta}\log\pi_{\theta_{t+2}^{j}}(A_{i}|S_{i})\left(Q_{w_{t+1}}(S_{i},A_{i})-\mathbb{E}_{\pi_{\theta_{t}}}[Q_{w_{t+1}}(S_{i},\cdot)]\right) \tag{58}\] We also experiment with the (ii) _non-parametric target_ that takes _geometric_ steps, similarly to the tree-search policy improvement procedure, analogous to Eq. 56, except that the ground-truth prediction is replaced with the estimate \(Q_{w_{t+1}}\) \[\pi_{t+2}^{j+1}\propto\pi_{t+2}^{j}\exp\alpha Q_{w_{t+1}} \tag{59}\]

### Experimental setup

**Environment details** All empirical studies are performed on the same discrete navigation task from Sutton and Barto (2018), illustrated in Fig. 3. "G" marks the position of the goal and the end of an episode. "S" denotes the starting state, to which the agent is reset at the end of the episode. The state space size is \(48\), and \(\gamma=0.99\). There are \(4\) actions that can transition the agent to each one of the adjacent states. Reward is \(1\) at the goal, and zero everywhere else. Episodes terminate and restart from the initial state upon reaching the goal.

**Protocol** All empirical studies report the regret of policy performance every step, and at every episode, \(J(\pi^{*})-J(\pi_{t})\), for a maximum number of \(500\) episodes. Hyperparameter sensitivity plots show the cumulative regret per total number of steps of experience accumulated in \(500\) episodes, \(\sum_{t}J(\pi^{*})-J(\pi_{t})\). This quantity captures the sample efficiency in terms of the number of steps of interaction required.

**Algorithmic implementation details** Meta-gradient based algorithms keep parametric representations of the gradient fields via a parametric advantage \(\operatorname{A}_{\eta}(s,a)=U_{\eta}(s,a)-\mathbb{E}_{\pi}[U_{\eta}(s,A)]\), \(\forall s,a\), s.t. a learned gradient update consists of a parametric gradient step on the loss \[\theta_{t+1} =\theta_{t}+\nabla\mathcal{L}(\theta;\mathcal{B}_{t})\Big{|}_{\theta=\theta_{t}} \tag{60}\] \[\mathcal{L}(\theta;\mathcal{B}_{t}) =\frac{1}{n}\sum_{i=t}^{t+n}\log\pi_{\theta}(A_{i}|S_{i})\left(U_{\eta}(S_{i},A_{i})-\sum_{a}\pi_{\theta_{t}}(a|S_{i})U_{\eta}(S_{i},a)\right) \tag{61}\] Figure 3: **Maze Navigation: illustration of the MDP used in the empirical studies** Policies use the standard softmax transform \(\pi_{\theta}=\frac{\exp f_{\theta}(s,a)}{\sum_{b}\exp f_{\theta}(s,b)}\), with \(f_{\theta}\) the policy logits. In the experiments illustrated, we use a tabular, one-hot representation of the state space as features, so \(f_{\theta}\) is essentially \(\theta\). The same holds for the critic's parameter vector \(Q_{w}\), and the meta-learner's parameter vector \(U_{\eta}\). The experiments were written using JAX, Haiku, and Optax (Bradbury et al., 2018; Babuschkin et al., 2020; Hennigan et al., 2020).

**Experimental details for the forward search experiment** We used forward search with the environment's true dynamics model up to horizon \(h\), backing up the current value estimate \(Q_{w_{t}}\) at the leaves. We distinguish between two settings: (i) using the previous policy \(\pi_{t}\) for _bootstrapping_ in the tree-search back-up procedure, i.e. obtaining \(U_{t}=\mathcal{T}_{\pi_{t}}^{h}Q_{w_{t}}\) at the root of the tree; or (ii) using greedification inside the tree to obtain \(U_{t}=\mathcal{T}^{h}Q_{w_{t}}\) at the root. Table 3 specifies the hyperparameters used for both of the aforementioned experimental settings.
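To connect the hindsight meta-update of Eq. (57) with the tabular parametrization of Eqs. (60)-(61), here is a hedged single-state sketch (our toy construction, not the paper's code): the meta-parameters \(\eta\) play the role of the tabular values \(U_{\eta}\), and the update rule is the centered advantage step, differentiated through with `jax.grad`.

```python
import jax, jax.numpy as jnp

def meta_step(eta, theta_t, pi_target, nu=0.05, xi=0.5):
    # Descend KL(pi_{theta_{t+1}}, pi_target) w.r.t. eta, where theta_{t+1}
    # is produced by the (toy) learned update rule u_eta = eta - E_pi[eta]
    def kl_after_update(eta_):
        pi_t = jax.nn.softmax(theta_t)
        u = eta_ - jnp.dot(pi_t, eta_)          # centered, as in A_eta
        pi_next = jax.nn.softmax(theta_t + xi * u)
        return jnp.sum(pi_next * (jnp.log(pi_next) - jnp.log(pi_target)))
    return eta - nu * jax.grad(kl_after_update)(eta)
```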
Results shown in the main text are averaged over \(10\) seeds and show the standard error over runs.

**Experimental details for the meta-gradient experiments with expert targets/hints** For this experiment we used Algorithm 6 described in Sec. C.1, with the hyperparameters listed in Table 4. \begin{table} \begin{tabular}{l l} \hline \hline Hyperparameter & \\ \hline \(\xi\) (policy step size) & 0.1 (training plots, Fig. 2-d) \\ & \{0.1, 0.5\} (sensitivity plots, Fig. 2-c) \\ \(\zeta\) (Q-fn step size) & - \\ \(h\) (lookahead horizon) & 1 \\ \(\alpha\) (step size \(\pi\)) & 1 \\ \(n\) (rollout length) & 2 \\ policy optimiser & SGD \\ meta-learner optimiser & Adam \\ \hline \hline \end{tabular} \end{table} Table 4: Hyperparameters for optimism via meta-gradient learning with expert targets/hints on the Maze Gridworld in Fig. 3 \begin{table} \begin{tabular}{l l} \hline \hline Hyperparameter & \\ \hline \(\xi\) (policy step size) & 0.5 \\ \(\zeta\) (Q-fn step size) & \{0.01, 0.1, 0.5, 0.9\} \\ \(h\) (lookahead horizon) & \{0, 1, 2, 4, 8, 16\} \\ \(n\) (rollout length) & 2 \\ policy/Q-fn optimiser & SGD \\ \hline \hline \end{tabular} \end{table} Table 3: Hyperparameters for optimism via forward search on the Maze Gridworld in Fig. 3

### Additional results & observations

Fig. 4 shows learning curves, and Fig. 5 hyperparameter sensitivity, for experiments with expert targets, when using Adam for the meta-optimization. Fig. 6 (learning curves) and Fig. 7 (hyperparameter sensitivity) illustrate results for when the meta-optimization uses SGD. The next set of figures shows experiments with target predictions--for Adam, Fig. 8 (learning curves) and Fig. 9 (hyperparameter sensitivity), and for SGD, Fig. 10 (learning curves) and Fig. 11 (hyperparameter sensitivity).

Figure 4: Meta-learner uses Adam. Policy optimization with adaptive optimistic policy gradients. x-axis - (a) no of episodes, (b) no of steps. y-axis - regret \(J(\pi^{*})-J(\pi_{t})\). Learning curves denote: the baseline - standard PG algorithm, _adaptive optimistic policy gradient learning algorithms_ - with parametric target policies, functional non-parametric target policies, trained with meta-gradients from _expert targets_. Shades (wherever noticeable) denote standard error over different runs. Figure 5: Meta-learner uses Adam. Hyper-parameter sensitivity curves for the meta-learning rate \(\nu\) - x-axis, y-axis - total cumulative regret (a): \(\sum_{i\leq t}J(\pi^{*})-J(\pi_{i})\), final regret (b): \(J(\pi^{*})-J(\pi_{T})\); Learning curves show _adaptive optimistic policy gradient learning algorithms_ - with parametric target policies, functional non-parametric target policies, trained with meta-gradients from _expert targets_. Different tones show the evolution of the meta-hyperparameter \(\nu\) relative to those used in the inner learned optimization algorithm, i.e. the policy step size. Different straight lines denote the baseline—standard PG. Shades denote standard error over different runs. Figure 8: Meta-learner uses Adam. Policy optimization with adaptive optimistic policy gradients. x-axis - (a) no of episodes, (b) no of steps. y-axis - regret \(J(\pi^{*})-J(\pi_{t})\). Learning curves denote: the baseline - standard AC algorithm, _optimistic policy gradient learning algorithms_ - with parametric target policies, functional non-parametric target policies, trained with meta-gradients from _target predictions_. Shades (wherever noticeable) denote standard error over different runs. Figure 6: Meta-learner uses SGD.
Policy optimization with adaptive optimistic policy gradients. x-axis - (a) no of episodes, (b) no of steps. y-axis - regret \(J(\pi^{*})-J(\pi_{t})\). Learning curves denote: the baseline - standard PG algorithm, _optimistic policy gradient learning algorithms_ - with parametric target policies, functional non-parametric target policies, trained with meta-gradients from _expert targets_. Shades (wherever noticeable) denote standard error over different runs. Figure 7: Meta-learner uses SGD. Hyper-parameter sensitivity curves for the meta-learning rate \(\nu\) - x-axis. y-axis - total cumulative regret (a): \(\sum_{i\leq t}J(\pi^{*})-J(\pi_{i})\), final regret (b): \(J(\pi^{*})-J(\pi_{T})\). Learning curves show _adaptive optimistic policy gradient learning algorithms_ - with parametric target policies, functional non-parametric target policies, trained with meta-gradients from _expert targets_. Different tones show the evolution of the meta-hyperparameter \(\nu\) relative to those used in the inner learned optimization algorithm, i.e. the policy step size. Different straight lines denote the baseline—standard PG. Shades denote standard error over different runs. Figure 11: Meta-learner uses SGD. Hyper-parameter sensitivity curves for the meta-learning rate \(\nu\) - x-axis; y-axis - total cumulative regret (a): \(\sum_{i\leq t}J(\pi^{*})-J(\pi_{i})\), final regret (b): \(J(\pi^{*})-J(\pi_{T})\). Learning curves show _optimistic policy gradient learning algorithms_ - with parametric target policies, functional non-parametric target policies, trained with meta-gradients from _target predictions_. Different tones show the evolution of the meta-hyperparameter \(\nu\) relative to those used in the inner learned optimization algorithm, i.e. Q-fn step size. Different straight lines denote the baseline—standard AC. Shades denote standard error over different runs. Figure 10: Meta-learner uses SGD. Policy optimization with optimistic policy gradients. x-axis - (a) no of episodes, (b) no of steps. y-axis - regret \(J(\pi^{*})-J(\pi_{t})\). Learning curves denote: the baseline - standard AC algorithm, _optimistic policy gradient learning algorithms_ - with parametric target policies, functional non-parametric target policies, trained with meta-gradients from _target predictions_. Shades (wherever noticeable) denote standard error over different runs. Figure 9: Meta-learner uses Adam. Hyper-parameter sensitivity curves for the meta-learning rate \(\nu\) - x-axis. y-axis - total cumulative regret (a): \(\sum_{i\leq t}J(\pi^{*})-J(\pi_{i})\), final regret (b): \(J(\pi^{*})-J(\pi_{T})\). Learning curves show _optimistic policy gradient learning algorithms_ - with parametric target policies, functional non-parametric target policies, trained with meta-gradients from _target predictions_. Different tones show the evolution of the meta-hyperparameter \(\nu\) relative to those used in the inner learned optimization algorithm, i.e. Q-fn step size. Different straight lines denote the baseline—standard AC. Shades denote standard error over different runs.
2303.13888
Characters of prime power degree in principal blocks
We describe finite groups whose principal block contains only characters of prime power degree.
J. Miquel Martínez
2023-03-24T09:55:12Z
http://arxiv.org/abs/2303.13888v2
# Characters of prime power degree in principal blocks ###### Abstract. We describe finite groups whose principal block contains only characters of prime power degree. Key words and phrases: Character degrees, principal block, prime powers. 2010 Mathematics Subject Classification: 20C20, 20C15. This research is partially supported by Ministerio de Ciencia e Innovacion PID2019-103854GB-I00, Generalitat Valenciana CIAICO/2021-163, as well as a fellowship UV-INV-PREDOC20-1356056 from Universitat de Valencia and a travel grant associated with the same fellowship.

In this note we consider a principal block version of Manz's results. Of course, in this case \(p\)-solvability is no longer guaranteed (take \(G=\mathfrak{A}_{5}\) and \(p\) any prime dividing \(|G|\)), but it is possible to accurately describe the structure of \(G\), and this description is the purpose of this note. Since this property is inherited by factor groups and normal subgroups (see Lemma 3.1), we inevitably run into the problem of determining which finite simple groups satisfy our hypothesis. The following completely describes these groups.

**Theorem A**.: _Let \(S\) be a nonabelian finite simple group, and let \(p\) be a prime dividing \(|S|\). Then \(\operatorname{cd}(B_{0}(S))\) contains only prime powers if and only if \((S,p)\) is one of the following:_ (i) \(S=\operatorname{PSL}_{2}(q)\) _for_ \(q\) _a Fermat or Mersenne prime and_ \(p\not\in\{2,q\}\)_,_ (ii) \(S=\operatorname{SL}_{2}(2^{n})\) _where_ \(q=2^{n}\pm 1\) _is a prime,_ \(p\not\in\{2,q\}\)_,_ (iii) \(S=\operatorname{SL}_{2}(8)\) _and_ \(p\in\{2,3,5,7\}\)_,_ (iv) \(S=\operatorname{SL}_{2}(4)\cong\mathfrak{A}_{5}\) _and_ \(p\in\{2,3,5\}\)_,_ (v) \(S=\operatorname{PSL}_{2}(9)\cong\mathfrak{A}_{6}\) _and_ \(p=5\)_,_ _and in all cases there are exactly two primes dividing the degrees in \(\operatorname{cd}(B_{0}(S))\)._

Most of the work towards the proof of Theorem A follows from the results of [10] and [1], where the prime power degree characters of finite (quasi-)simple groups were determined. In fact, Theorem A is fairly simple to obtain using these results and the work done in [11] and [12] on simple groups of Lie type. For general finite groups we have the following description.

**Theorem B**.: _Let \(G\) be a finite group, \(p\) a prime dividing \(|G|\), and assume \(\operatorname{cd}(B_{0}(G))\) consists only of prime powers. Then one of the following happens:_ (i) \(G/\mathbf{O}_{p^{\prime}}(G)\) _is a solvable group described in_ [13]_,_ (ii) _there is a normal subgroup_ \(\mathbf{O}_{p^{\prime}}(G)\subseteq M\triangleleft G\) _such that_ \[M/\mathbf{O}_{p^{\prime}}(G)=H\times S\] _where_ \(H\) _is an abelian_ \(p\)_-group and_ \((S,p)\) _is one of the pairs from Theorem A. Further,_ \(G/M\) _is isomorphic to a subgroup of_ \(\operatorname{Out}(S)\)_._

We remark that the only case where \(G/M\) is not cyclic is when \(S\cong\mathfrak{A}_{6}\). Indeed, the characters in the principal \(5\)-block of \(\operatorname{Aut}(\mathfrak{A}_{6})\) have degrees \(1,9\) and \(16\). In all other cases, \(\operatorname{Out}(S)\) is cyclic. The following is an immediate corollary of Theorem B and the main result of [11].

**Corollary C**.: _Let \(G\) be a finite group, and \(p\) a prime such that \(\operatorname{cd}(B_{0}(G))\) consists only of prime powers.
Then there are at most \(3\) primes dividing the degrees in \(\operatorname{cd}(B_{0}(G))\)._

Problems on character degrees of finite groups have led to the study of the so-called character degree graph \(\Gamma(G)\), whose vertices are primes dividing the degree of some character of \(G\), and two vertices \(p,q\) are connected if there is \(\chi\in\operatorname{Irr}(G)\) with \(pq\mid\chi(1)\). Shortly after Manz's work, it was proved that \(\Gamma(G)\) has at most three connected components, and if \(G\) is solvable then it has at most two (see [14, Theorems 4.2 and 6.4]). In [15] an analogous graph \(\Gamma(B)\) was introduced for a \(p\)-block \(B\), where the degrees considered are only those of characters that lie in \(\operatorname{Irr}(B)\). In [15, Corollary C] the authors show that if \(G\) is \(p\)-solvable then \(\Gamma(B)\) has at most three connected components and if \(G\) is solvable then it has at most two, so it seems that \(\Gamma(B)\) behaves somewhat similarly to \(\Gamma(G)\). As pointed out by Moreto, it is interesting to speculate whether \(\Gamma(B_{0}(G))\) has at most \(3\) connected components in general, mimicking the situation in \(\Gamma(G)\). We prove Theorem A in Section 2 and we prove Theorem B and Corollary C in Section 3.

### Acknowledgements

The results in this note were obtained while the author visited the Department of Mathematics of the Rheinland-Pfalzische Technische Universitat (formerly TU Kaiserslautern). He thanks Gunter Malle for supervising his visit and for a thorough read of this manuscript, and the entire department for their warm hospitality. Furthermore, he would like to thank Alexander Moreto for his question and very useful conversations on the topic, and Annika Bartelt for clarifying some formulas for unipotent characters.

## 2. Simple groups

The aim of this section is to prove Theorem A. We start by recalling some classical results in number theory.

**Lemma 2.1** (Zsigmondy's theorem).: _Let \(q\) be a prime and \(n>1\) an integer. Then_ (i) _there is a prime dividing_ \(q^{n}-1\) _that does not divide_ \(q^{m}-1\) _for all_ \(m<n\)_, unless_ \(q=2\) _and_ \(n=6\)_, or_ \(n=2\) _and_ \(q+1\) _is a power of_ \(2\)_,_ (ii) _there is a prime dividing_ \(q^{n}+1\) _that does not divide_ \(q^{m}+1\) _for all_ \(m<n\)_, unless_ \(q=2\) _and_ \(n=3\)_._

**Lemma 2.2**.: _Suppose that \(q\) is an odd prime and \(q^{n}+1=2^{s}\) for some positive integers \(n\) and \(s\). Then \(n=1\)._

Proof.: See [10, Chapter IX, Lemma 2.7].

The following immediately follows from the previous lemmas.

**Lemma 2.3**.: _Assume \(q\) is a power of \(2\) such that \(q-1\) and \(q+1\) are prime powers. Then \(q\in\{4,8\}\)._

Proof.: Assume \(q>8\) and that both \(q-1\) and \(q+1\) are prime powers. By Lemma 2.2, \(q-1\) is a prime, and therefore \(q+1\) is a power of \(3\). By Lemma 2.1 there is a prime dividing \(q+1\) that does not divide \(2+1=3\), a contradiction.

The next result is one of our main tools for discarding simple groups for Theorem A.

**Lemma 2.4**.: _If \(G\) is not a \(p\)-solvable group then \(|\mathrm{cd}(B_{0}(G))|\geq 3\), and if \(p\geq 5\) there are at least \(3\) character degrees in \(\mathrm{cd}(B_{0}(G))\) not divisible by \(p\)._

Proof.: This follows from the main results of [11] and [12].

Next, we exclude most families of finite simple groups as candidates for Theorem A.

**Proposition 2.5**.: _Assume \(S\) is not one of \(\mathrm{PSL}_{n}(q),\mathrm{PSU}_{n}(q),\mathrm{PSp}_{2n}(q)\).
Then there is some \(\chi\in\mathrm{Irr}(B_{0}(S))\) whose degree is not a prime power._

Proof.: Assume first that \(S\) is an alternating group \(\mathfrak{A}_{n}\) for \(n\geq 7\). If \(n\leq 9\) this is easily checked in [GAP]. If \(n\geq 10\) then by the main result of [1] there is at most one nonlinear representation of \(\mathfrak{A}_{n}\) of prime power degree \(q\). By Lemma 2.4 there is a character \(\chi\in\mathrm{Irr}(B_{0}(S))\) of degree \(\chi(1)\neq q\), so we are done in this case. If \(S\) is a simple group appearing in cases (7)-(27) of [11, Theorem 1.1] then this is also easily checked in [GAP]. Finally, if \(S\) is a simple group not appearing in any of the cases (2)-(27) of [11, Theorem 1.1] then this implies that \(S\) is a simple group of Lie type and the nonlinear character of \(S\) with prime power degree is the Steinberg character \(\mathsf{St}_{S}\). Then by Lemma 2.4 there is a nonlinear character \(\chi\in\mathrm{Irr}(B_{0}(S))\) with \(\chi(1)\neq\mathsf{St}_{S}(1)\), so we are done.

Thus we are left to deal with the groups \(\mathrm{PSL}_{n}(q),\mathrm{PSU}_{n}(q)\) and \(\mathrm{PSp}_{2n}(q)\). We begin with an easy observation.

**Proposition 2.6**.: _Let \(S\) be a simple group of Lie type in characteristic \(p\). Then Theorem A holds for \((S,p)\)._

Proof.: Assume \(S\) is not one of the groups in Proposition 2.5. The group \(S=\operatorname{PSp}_{4}(2)^{\prime}\cong\operatorname{PSL}_{2}(9)\) for \(p=2\) can be checked in [GAP]. Otherwise, by [1, Theorem 3.3], \(\operatorname{Irr}(B_{0}(S))=\operatorname{Irr}(S)\setminus\{\mathsf{St}_{S}\}\) where \(\mathsf{St}_{S}\) denotes the Steinberg character of \(S\). Since \(\mathsf{St}_{S}(1)\) is a prime power, \(\operatorname{cd}(B_{0}(S))\) consists only of prime powers if and only if every character degree of \(S\) is a prime power. By the main result of [11] we have that \(S\in\{\operatorname{SL}_{2}(4),\operatorname{SL}_{2}(8)\}\).

To deal with the remaining groups we will need to use the so-called unipotent characters, introduced by G. Lusztig. In the case of \(S=\operatorname{PSL}_{n}(q)\) or \(\operatorname{PSU}_{n}(q)\), these are characters of \(\operatorname{SL}_{n}(q)\) or respectively \(\operatorname{SU}_{n}(q)\), but as argued in, for example, the first paragraph of [12, Proposition 4.4], these characters contain \(\mathbf{Z}(\operatorname{SL}_{n}(q))\) or resp. \(\mathbf{Z}(\operatorname{SU}_{n}(q))\) in their kernels, and so they can be seen as characters of \(S\), and furthermore they are contained in the principal block of \(G\) if and only if they are in the principal block of \(S\) (using [1, Lemma 17.2]). In this case, they are parametrized by partitions of \(n\) (see [1, Section 4.3]). Let \(p\) be a prime dividing \(|S|\) and let \(e\) denote the order of \(q\) modulo \(p\) if \(S=\operatorname{PSL}_{n}(q)\) and the order of \(-q\) modulo \(p\) if \(S=\operatorname{PSU}_{n}(q)\). Further, let \(r\) denote the remainder of \(n\) divided by \(e\). Following [10] we have that a unipotent character of \(S\) parametrized by the partition \(\alpha\) of \(n\) belongs to the principal \(p\)-block if its \(e\)-core is \((r)\). If \(S=\operatorname{PSp}_{2n}(q)\) then the unipotent characters are characters of \(\operatorname{Sp}_{2n}(q)\), parametrized by certain symbols (see [1, Section 4.4]). Exactly as before, they can be seen as characters of \(S\), and a unipotent character of \(\operatorname{Sp}_{2n}(q)\) belongs to \(B_{0}(\operatorname{Sp}_{2n}(q))\) if and only if it belongs to \(B_{0}(S)\).
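The \(e\)-core criterion just described is purely combinatorial and easy to experiment with. Below is a minimal Python sketch (the function names are ours, not from [10]) computing \(e\)-cores via beta-numbers on an \(e\)-runner abacus, and listing which partitions of \(n\) label unipotent characters of the principal block:

```python
def beta_set(partition):
    """Beta-numbers of a partition given as a weakly decreasing list of parts."""
    m = len(partition)
    return sorted((partition[i] + (m - 1 - i) for i in range(m)), reverse=True)

def e_core(partition, e):
    """e-core of a partition: push all beads to the top of their abacus runner."""
    m = len(partition)
    if m == 0:
        return []
    counts = [0] * e
    for b in beta_set(partition):
        counts[b % e] += 1
    new_beta = sorted((r + e * j for r in range(e) for j in range(counts[r])),
                      reverse=True)
    core = [new_beta[i] - (m - 1 - i) for i in range(m)]
    return [part for part in core if part > 0]

# Toy check for n = 4 and e = 3, so r = n mod e = 1: a unipotent character
# labelled by alpha lies in the principal block iff e_core(alpha) == [1].
for alpha in ([4], [3, 1], [2, 2], [2, 1, 1], [1, 1, 1, 1]):
    print(alpha, e_core(alpha, 3), e_core(alpha, 3) == [1])
# -> (4), (2,2) and (1,1,1,1) have 3-core (1); (3,1) and (2,1,1) do not.
```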
**Lemma 2.7**.: _Let \(G=\operatorname{SL}_{n}(q)\) or \(\operatorname{SU}_{n}(q)\) and let \(\chi\) be a unipotent character of \(G\). Then \(\chi(1)\) is not a prime power unless \(\chi=1_{G}\) or \(\chi=\mathsf{St}_{G}\)._

Proof.: Let \(\chi\) be the unipotent character parametrized by the partition \(\alpha\) of \(n\). It is easy to see in [1, Propositions 4.3.1 and 4.3.5] that \(\chi(1)\) has nontrivial \(q\)-part and nontrivial \(q^{\prime}\)-part unless \(\alpha=(n)\) or \(\alpha=(1^{n})\), which correspond to \(1_{G}\) and \(\mathsf{St}_{G}\) respectively.

**Proposition 2.8**.: _Let \(S=\operatorname{PSL}_{n}(q)\) or \(\operatorname{PSU}_{n}(q)\) with \(n\geq 3\) and let \(p\nmid q\) be a prime dividing \(|S|\). Then there is \(\chi\in\operatorname{Irr}(B_{0}(S))\) with \(\chi(1)\) not a prime power._

Proof.: Let \(\epsilon=1\) if \(S=\operatorname{PSL}_{n}(q)\) and \(\epsilon=-1\) if \(S=\operatorname{PSU}_{n}(q)\). By Lemma 2.4, \(S\) has to be one of the groups in cases (3) or (4) of [11, Theorem 1.1], so \(n\) is an odd prime and the prime power degrees are \(\mathsf{St}_{S}(1)=|S|_{q}\) and \((q^{n}-\epsilon)/(q-\epsilon)\). By Lemma 2.1, \((q^{n}-\epsilon)/(q-\epsilon)\) can not be a power of \(2\). If \(q\) is odd then both prime power degrees are odd. If \(p=2\) then the order of \(\epsilon q\) modulo \(p\) is necessarily \(1\), and therefore all unipotent characters belong to \(B_{0}(S)\), and we are done by Lemma 2.7. If \(p\) is odd then [1, Theorem B] guarantees the existence of a character of even degree in \(B_{0}(S)\). Thus we assume \(q\) is a power of \(2\). Assume first \(n\geq 5\). If \(p\geq 5\) then [12, Table 1] produces a unipotent character in \(B_{0}(S)\) different from \(1_{G}\) and \(\mathsf{St}_{S}\), which can not have prime power degree by Lemma 2.7. If \(p=3\) then we argue identically with [13, Table 3]. We are left with the groups \(\mathrm{PSL}_{3}(q)\) and \(\mathrm{PSU}_{3}(q)\) with \(q\) a power of \(2\). If \(p=3\) or \(p\mid(q+\epsilon)\) then the last paragraph of the proof of [13, Proposition 3.10] and the first paragraph of the proof of [12, Proposition 4.5] produce a character in \(B_{0}(S)\) of degree \(q^{3}-\epsilon\), which is not a prime power. If \(p\geq 5\) and \(p\nmid(q+\epsilon)\) then by the order formula for \(\mathrm{PSL}_{3}(q)\) and \(\mathrm{PSU}_{3}(q)\) we have that either \(p\mid(q-\epsilon)\) or \(p\mid(q^{2}+\epsilon q+1)=(q^{3}-\epsilon)/(q-\epsilon)\). In the first case, the order of \(\epsilon q\) modulo \(p\) is \(1\), and it follows that the unipotent character defined by the partition \((1,2)\) belongs to \(B_{0}(S)\) and has degree \(q(q+\epsilon)\) by [1, Propositions 4.3.1 and 4.3.5]. In the second case, we have that the prime power degrees of \(S\) are \(\mathsf{St}_{S}(1)=|S|_{q}\) and \((q^{3}-\epsilon)/(q-\epsilon)\), which must be a power of \(p\). Since \(p\geq 5\), Lemma 2.4 guarantees the existence of a character \(\chi\in\mathrm{Irr}(B_{0}(S))\) of \(p^{\prime}\)-degree different from \(\mathsf{St}_{S}(1)\), so \(\chi(1)\) is not a prime power.

**Proposition 2.9**.: _Let \(S=\mathrm{PSL}_{2}(q)\), let \(p\nmid q\) be a prime dividing \(|S|\), and assume that every character degree in \(\operatorname{cd}(B_{0}(S))\) is a prime power. Then \((S,p)\) is one of the pairs in Theorem A._

Proof.: If \(q\) is odd then it is well known that \(\mathrm{cd}(S)=\{1,q,q+1,q-1,(q\pm 1)/2\}\) where the last sign depends on the congruence of \(q\) modulo \(4\).
By Lemma 2.4 the set \(\mathrm{cd}(B_{0}(S))\) has size at least \(3\), which forces at least one of \(q+1\) or \(q-1\) to be a power of \(2\). If \(p=2\) then let \(\chi\in\mathrm{Irr}(S)\) be a character of such degree. Notice that by the order formula for \(\mathrm{PSL}_{2}(q)\), \(\chi\) has \(2\)-defect zero and so it can not belong to the principal \(2\)-block of \(S\). This forces \(p\neq 2\). If \(q+1\) is a power of \(2\) the same argument works. If \(q\) is a power of \(2\) then \(\mathrm{cd}(S)=\{1,q,q+1,q-1\}\), which again forces \(q+1\) or \(q-1\) to be a prime power (and they both are prime powers only if \(q\in\{4,8\}\), by Lemma 2.3). Assume that \(q>8\). If \(r=q-1\) is a prime power then by Lemma 2.2 it is in fact a prime. A character of degree \(r\) has \(r\)-defect zero and so it can not belong to the principal \(r\)-block, forcing \(p\neq r\). If \(r=q+1\) is a prime power then again by Lemma 2.1 we have that \(r\) is a prime, and we can mimic the previous argument to reach the desired conclusion.

**Proposition 2.10**.: _Let \(S=\operatorname{PSp}_{2n}(q)\) and let \(p\nmid q\) be a prime dividing \(|S|\). Then there is \(\chi\in\operatorname{Irr}(B_{0}(S))\) with \(\chi(1)\) not a prime power._

Proof.: By Lemma 2.4, we may assume that \(S\) is one of the groups in cases (5) or (6) of [13, Theorem 1.1]. Assume first that we are in case (5), so that the irreducible characters of \(S\) whose degree is a prime power have degrees \(\mathsf{St}_{S}(1)=|S|_{q}\) and \((q^{n}+1)/2\). Notice that \(q^{n}+1\) can not be a power of \(2\) by Lemma 2.1. If \(p\) is odd then there must exist a character \(\chi\in\operatorname{Irr}(B_{0}(S))\) of even degree by [1, Theorem B]. If \(p=2\) then by [1, Theorem 21.14], all unipotent characters belong to \(B_{0}(S)\). Consider the unipotent character \(\chi\) parametrized by the symbol \(\binom{0}{1}\). By [1, Proposition 4.4.15] and using the order formula for \(\operatorname{Sp}_{2n}(q)\), it is easy to see that \(\chi(1)_{q}>1\) and \(\chi(1)_{q^{\prime}}>1\), and so \(\chi(1)\) is not a prime power. If we are in case (6) of [13, Theorem 1.1], the characters of prime power degree have degrees \(3^{l}\) and \((3^{n}-1)/2\), which is only a power of \(2\) if \(n=2\), so \(S=\operatorname{PSp}_{4}(3)\), which is checked in [GAP]. If \(n>2\) then the same argument as before applies.

Proof of Theorem A.: The only if direction is done in Propositions 2.5, 2.6, 2.8, 2.9 and 2.10. For the if direction, the cases (iii)-(v) can be checked in [GAP]. Assume first that \(S=\operatorname{PSL}_{2}(q)\) where \(q+1\) is a power of \(2\) and let \(p\not\in\{2,q\}\). Then the characters of degree \((q-1)\) and \((q-1)/2\) (if they exist) have \(p\)-defect zero by the order formula for \(\operatorname{PSL}_{2}(q)\), so they can not belong to the principal \(p\)-block. This forces \[\operatorname{cd}(B_{0}(S))\subseteq\{1,q,q+1,(q+1)/2\}.\] The case where \(q\) or \(q-1\) is a power of \(2\) is done analogously.

## 3. Theorem B and Corollary C

Our notation for this section follows [20].

**Lemma 3.1**.: _Assume \(\operatorname{cd}(B_{0}(G))\) consists only of prime powers. If \(N\triangleleft G\) then \(\operatorname{cd}(B_{0}(N))\) and \(\operatorname{cd}(B_{0}(G/N))\) consist only of prime powers._

Proof.: For all \(\theta\in\operatorname{Irr}(B_{0}(N))\) there is some \(\chi\in\operatorname{Irr}(B_{0}(G))\) such that \(\chi_{N}\) contains \(\theta\) by [20, Theorem 9.4], and \(\theta(1)\) divides \(\chi(1)\) by standard Clifford theory, and the first claim follows.
For the second claim recall that \(\operatorname{Irr}(B_{0}(G/N))\subseteq\operatorname{Irr}(B_{0}(G))\).

**Theorem 3.2**.: _Assume that \(G\) is not \(p\)-solvable and that \(\operatorname{cd}(B_{0}(G))\) consists only of prime powers. Then there is some \(\mathbf{O}_{p^{\prime}}(G)\subseteq M\triangleleft G\) with \(M/\mathbf{O}_{p^{\prime}}(G)=H\times S\) where \(H\) is an abelian \(p\)-group and \((S,p)\) is one of the pairs in Theorem A. Also, \(G/M\) is isomorphic to a subgroup of \(\operatorname{Out}(S)\)._

Proof.: Arguing by induction on \(|G|\), we may assume \(\mathbf{O}_{p^{\prime}}(G)=1\). Let \(E\) be the layer of \(G\). We claim that \(E\) is quasisimple. Indeed, write \(E=K_{1}\cdots K_{t}\) for components \(K_{1},\ldots,K_{t}\), assume \(t>1\) and let \(Z=\mathbf{Z}(E)\). Then \(Z\triangleleft G\) and by [14, 6.5.6], \(E/Z=E_{1}\times\cdots\times E_{t}\) where \(E_{i}=K_{i}Z/Z\). By Lemma 3.1, \(\operatorname{cd}(B_{0}(E/Z))\) consists only of prime powers, and so does \(\operatorname{cd}(B_{0}(E_{i}))\). Now since the \(E_{i}\)'s are not \(p\)-solvable, by Lemma 2.4 and [13, Lemma 5.2], for each \(E_{i}\) there are characters \(\theta,\eta\in\operatorname{Irr}(B_{0}(E_{i}))\) of coprime degree. Let \(\theta\in\operatorname{Irr}(B_{0}(E_{1}))\) and \(\eta\in\operatorname{Irr}(B_{0}(E_{2}))\) be nonlinear and such that \(\theta(1)\) and \(\eta(1)\) are coprime, and consider \(\psi=\theta\times\eta\times 1_{E_{3}}\times\cdots\times 1_{E_{t}}\in\operatorname{Irr}(B_{0}(E/Z))\). Then \(\psi(1)\) is not a prime power, a contradiction. This forces \(E\) to be quasisimple and again by Lemma 3.1, \((E/\mathbf{Z}(E),p)\) is one of the pairs from Theorem A. Furthermore, the \(p\)-complement of \(\mathbf{Z}(E)\) is a normal \(p^{\prime}\)-subgroup of \(G\), and since \(\mathbf{O}_{p^{\prime}}(G)=1\) we have that \(\mathbf{Z}(E)\) is a \(p\)-group. Now let \(C=\mathbf{C}_{G}(E)\), so that \(G/C\) is almost simple with socle \(S=E/\mathbf{Z}(E)\) (arguing as in Step 7 of [10, Theorem 2.10]). Now \(C\triangleleft G\) and \(M=CE\triangleleft G\) is a central product because \([C,E]=1\). Arguing as before, it is easy to see that \(B_{0}(C)\) can not contain nonlinear characters, so \(C/\mathbf{O}_{p^{\prime}}(C)\) is an abelian \(p\)-group by [20, Theorem 6.10]. Now since \(\mathbf{O}_{p^{\prime}}(C)\triangleleft G\) we get that \(C\) is an abelian \(p\)-group and \(\mathbf{Z}(E)=C\cap E\). Now if the Schur multiplier of \(S\) has order not divisible by \(p\) then \(\mathbf{Z}(E)=1\) and \(M=C\times S\). By [12, Tables 24.2 and 24.3], the only pair \((S,p)\) for which the Schur multiplier of \(S\) has order divisible by \(p\) is \((\mathfrak{A}_{5},2)\), and it is easily checked in [GAP] that the universal central extension \(2.\mathfrak{A}_{5}\) has a character of degree \(6\) in its principal \(2\)-block, so in all cases \(M=C\times S\). Finally, since \(G/C\) is almost simple with socle \(S\) we have that \(G/M\) is isomorphic to a subgroup of \(\operatorname{Out}(S)\).

Proof of Theorem B.: If \(G\) is \(p\)-solvable then by [20, Theorem 10.20] we have that \(\operatorname{Irr}(B_{0}(G))=\operatorname{Irr}(G/\mathbf{O}_{p^{\prime}}(G))\) and therefore \(G/\mathbf{O}_{p^{\prime}}(G)\) is one of the groups from [13]. Otherwise, apply Theorem 3.2.

If \(2^{n}+1\) is a prime, then it is well known that \(n\) is a power of \(2\). It is also well known that if \(2^{n}-1\) is a prime then \(n\) is a prime.
Since the automorphism group of \(S=\operatorname{SL}_{2}(2^{n})\) is a cyclic group of order \(n\), if we are in case (ii) of Theorem A then \(\operatorname{Out}(S)\) has prime power order. Proof of Corollary C.: We may assume \(G\) is as in case (ii) of Theorem B. If \(\chi\in\operatorname{Irr}(B_{0}(G))\) has degree \(\chi(1)=r^{t}\) for some prime \(r\) and \(\chi_{S}\neq 1_{S}\) then \(\chi_{S}\) contains characters of degree a power of \(r\) in \(B_{0}(S)\) by [20, Theorem 9.4]. Otherwise \(\chi_{S}=1_{S}\) and \(\chi_{M}\) contains only linear characters so \(\chi(1)\) divides \(|G/M|\) by [12, Theorem 5.12]. Now, \(G/M\) is isomorphic to a subgroup of \(\operatorname{Out}(S)\) which has prime power order in all cases from Theorem A, so we are done.
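The number-theoretic facts used above are easy to confirm by brute force in a small range. A minimal Python sketch (the helper name is ours) checking Lemma 2.3 for \(q=2^{k}\) with \(2\leq k\leq 30\):

```python
def is_prime_power(n):
    """True iff n = r^t for some prime r and integer t >= 1."""
    if n < 2:
        return False
    for r in range(2, int(n ** 0.5) + 1):
        if n % r == 0:          # r is the smallest (hence prime) divisor of n
            while n % r == 0:
                n //= r
            return n == 1       # n was a pure power of r
    return True                 # n itself is prime

# Lemma 2.3: if q = 2^k and both q - 1 and q + 1 are prime powers, then q in {4, 8}
hits = [2 ** k for k in range(2, 31)
        if is_prime_power(2 ** k - 1) and is_prime_power(2 ** k + 1)]
print(hits)  # -> [4, 8]
```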
2308.00238
Gamma-Bazilevic functions related with generalized telephone numbers
The purpose of this paper is to consider coefficient estimates in a class of functions $\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)$ consisting of analytic functions $f$ normalized by $f(0)=f'(0)-1=0$ in the open unit disk $\Delta=\{ z:z\in \mathbb{C}\quad \text{and}\quad \left\vert z\right\vert <1\}$ subordinating generalized telephone numbers, and to derive the coefficient estimates for $a_2,a_3$ and the Fekete-Szeg\"{o} inequality for $f\in\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)$. Similar results are obtained for the function $f^{-1}$ and for $\log\dfrac{f(z)}{z}$. An application of our results to certain functions defined by using convolution products with a normalized analytic function is also given, and in particular we state Fekete-Szeg\"{o} inequalities for subclasses described through Poisson, Borel and Pascal distribution series.
Gangadharan Murugusundaramoorthy, Kaliappan Vijaya, Hijaz Ahmad
2023-08-01T02:36:05Z
http://arxiv.org/abs/2308.00238v1
# Gamma-Bazilevic functions related with generalized telephone numbers ###### Abstract. The purpose of this paper is to consider coefficient estimates in a class of functions \(\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)\) consisting of analytic functions \(f\) normalized by \(f(0)=f^{\prime}(0)-1=0\) in the open unit disk \(\Delta=\{z:z\in\mathbb{C}\quad\text{and}\quad|z|<1\}\) subordinating generalized telephone numbers, and to derive the coefficient estimates for \(a_{2},a_{3}\) and the Fekete-Szego inequality for \(f\in\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)\). Similar results are obtained for the function \(f^{-1}\) and for \(\log\dfrac{f(z)}{z}\). An application of our results to certain functions defined by using convolution products with a normalized analytic function is also given, and in particular we state Fekete-Szego inequalities for subclasses described through Poisson, Borel and Pascal distribution series.

**Keywords:** Analytic functions, starlike functions, convex functions, subordination, Fekete-Szego inequality, Poisson distribution series, Borel distribution series, Hadamard product. **MSC(2010):** 30C80, 30C45

## 1. Introduction, Definitions and Preliminaries

### Generalized telephone numbers (GTN) \(\mathfrak{T}_{\varkappa}(n)\)

The classical telephone numbers (TN), also known as involution numbers, are defined by the recurrence relation \[\mathfrak{T}(n)=\mathfrak{T}(n-1)+(n-1)\mathfrak{T}(n-2)\quad\text{for}\quad n\geq 2\] with \[\mathfrak{T}(0)=\mathfrak{T}(1)=1.\] Connections of these numbers with symmetric groups were first observed in 1800 by Heinrich August Rothe, who pointed out that \(\mathfrak{T}(n)\) is the number of involutions (self-inverse permutations) in the symmetric group (see, for instance, [6, 17]). Since involutions correspond to standard Young tableaux, the \(n^{th}\) involution number is also the number of Young tableaux on the set \(\{1,2,\ldots,n\}\) (for details, see [3]). It is worth mentioning that, according to John Riordan, the above recurrence relation in fact yields the number of connection patterns in a telephone system with \(n\) subscribers (see [34]). In 2017, Wloch and Wolowiec-Musial [43] introduced generalized telephone numbers (GTN) via the recursion \[\mathfrak{T}(\varkappa,n)=\varkappa\mathfrak{T}(\varkappa,n-1)+(n-1)\mathfrak{T}(\varkappa,n-2)\quad n\geq 2\quad\text{and}\quad\varkappa\geq 1\] with \[\mathfrak{T}(\varkappa,0)=1,\quad\mathfrak{T}(\varkappa,1)=\varkappa,\] and discussed some of their properties. In 2019, Bednarz and Wolowiec-Musial [2] presented a new generalization of TN by \[\mathfrak{T}_{\varkappa}(n)=\mathfrak{T}_{\varkappa}(n-1)+\varkappa(n-1)\mathfrak{T}_{\varkappa}(n-2),\quad n\geq 2\quad\text{and}\quad\varkappa\geq 1\] with \[\mathfrak{T}_{\varkappa}(0)=\mathfrak{T}_{\varkappa}(1)=1.\] They provided the generating function, a direct formula and matrix generators for these numbers. Moreover, they obtained interpretations and proved several properties of these numbers related to congruences. Recently, they derived the exponential generating function of \(\mathfrak{T}_{\varkappa}(n)\) as follows: \[e^{x+\varkappa\frac{x^{2}}{2}}=\sum_{n=0}^{\infty}\mathfrak{T}_{\varkappa}(n)\frac{x^{n}}{n!}\quad(\varkappa\geq 1).\] As we can observe, if \(\varkappa=1\) then we recover the classical telephone numbers \(\mathfrak{T}(n)\). Clearly, for the first few values of \(n\) we have:
1. \(\mathfrak{T}_{\varkappa}(0)=\mathfrak{T}_{\varkappa}(1)=1,\) 2. \(\mathfrak{T}_{\varkappa}(2)=1+\varkappa,\) 3. \(\mathfrak{T}_{\varkappa}(3)=1+3\varkappa,\) 4. \(\mathfrak{T}_{\varkappa}(4)=1+6\varkappa+3\varkappa^{2},\) 5. \(\mathfrak{T}_{\varkappa}(5)=1+10\varkappa+15\varkappa^{2},\) 6. \(\mathfrak{T}_{\varkappa}(6)=1+15\varkappa+45\varkappa^{2}+15\varkappa^{3}.\)

In [7], for \(z\in\Delta:=\{z:z\in\mathbb{C}\quad\text{and}\quad|z|<1\}\), the open unit disc, Deniz considered \[\mathcal{X}(z):=e^{z+\varkappa\frac{z^{2}}{2}}=1+z+\frac{1+\varkappa}{2}z^{2}+\frac{1+3\varkappa}{6}z^{3}+\frac{3\varkappa^{2}+6\varkappa+1}{24}z^{4}+\frac{1+10\varkappa+15\varkappa^{2}}{120}z^{5}+\cdots.\]

### Subclasses of analytic functions \(\mathfrak{A}\)

Denote by \(\mathfrak{A}\) the class of analytic functions given by \[f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n},z\in\Delta. \tag{1.1}\] Also, denote by \(\mathfrak{S}\) the subclass of \(\mathfrak{A}\) consisting of univalent functions in \(\Delta\) with \(f(0)=0=f^{\prime}(0)-1\). In [31], Robertson introduced the following classes: \[\mathfrak{S}^{*}=\{f\in\mathfrak{S}:\Re\Big{(}\frac{zf^{\prime}(z)}{f(z)}\Big{)}>0,\quad\ (z\in\Delta)\} \tag{1.2}\] and \[\mathfrak{C}=\{f\in\mathfrak{S}:\Re\Big{(}\frac{(zf^{\prime}(z))^{\prime}}{f^{\prime}(z)}\Big{)}>0,\quad\ (z\in\Delta)\}. \tag{1.3}\] The classes of functions fulfilling the analytic criteria given by (1.2) and (1.3) are referred to as _starlike_ and _convex_ functions in \(\Delta\), respectively. For \(f_{1},f_{2}\in\mathfrak{A}\), we say that the function \(f_{1}\) is subordinate to \(f_{2}\) if there exists a Schwarz function \(\bar{\omega}(z)\), analytic in \(\Delta\) with \(\bar{\omega}(0)=0\) and \(|\bar{\omega}(z)|<1\) \((z\in\Delta)\), such that \(f_{1}(z)=f_{2}(\bar{\omega}(z))\) \((z\in\Delta)\). We denote this subordination by \[f_{1}\prec f_{2}\quad\text{or}\quad f_{1}(z)\prec f_{2}(z)\quad(z\in\Delta).\] In particular, if \(f_{2}\) is univalent in \(\Delta\), the above subordination is equivalent to \[f_{1}(0)=f_{2}(0)\quad\text{and}\quad f_{1}(\Delta)\subset f_{2}(\Delta).\] Let \[\mathfrak{S}^{*}(\psi)=\{f\in\mathfrak{S}:\frac{zf^{\prime}(z)}{f(z)}\prec\psi(z)\} \tag{1.4}\] where \(\psi(z)=1+m_{1}z+m_{2}z^{2}+m_{3}z^{3}+\cdots,m_{1}>0.\) By varying the function \(\psi\), several familiar classes can be obtained as illustrated below: 1. For \(\psi=\frac{1+Az}{1+Bz}\) (\(-1\leq B<A\leq 1\)), we get the class \(\mathfrak{S}^{*}(A,B)\), see [15]. Also by fixing \(A=1-2\alpha\) and \(B=-1\), we have \(\mathfrak{S}^{*}(\alpha)=\mathfrak{S}^{*}(1-2\alpha,-1)\) [31]. 2. In [32], by taking \(\psi=1+\frac{2}{\pi^{2}}\left(\log\frac{1+\sqrt{z}}{1-\sqrt{z}}\right)^{2}\), a new class was defined and studied. 3. Assuming \(\psi(z):=z+\sqrt{1+z^{2}},\ z\in\Delta\), Raina and Sokol [30] and Sokol and Thomas [33] extensively discussed the geometric properties of \(f\in\mathfrak{S}^{*}_{S}(\psi)\). 4. In [35], the class \(\mathfrak{S}^{*}_{L}(\psi)=\{f\in\mathfrak{S}:\frac{zf^{\prime}(z)}{f(z)}\prec\sqrt{1+z}\}\) was studied, and further studied in [22]. 5. The class \(\mathfrak{S}_{C}=\{f\in\mathfrak{S}:\frac{zf^{\prime}(z)}{f(z)}\prec 1+\frac{4}{3}z+\frac{2}{3}z^{2}\}\) was introduced and investigated in [36, 38]. 6. In [21, 37] the authors defined and discussed the class \(\mathfrak{S}_{e}^{*}(\psi)=\{f\in\mathfrak{S}:\frac{zf^{\prime}(z)}{f(z)}\prec e^{z}\}\). 7. For \(\psi=1+\sin{(z)}\), the class is denoted by \(\mathfrak{S}_{\sin}^{*}\), see [5].
8. For \(\psi=\cosh{(z)}\), the class is denoted by \(\mathfrak{S}_{\cosh}^{*}\), see [1].

Lately, for \(\vartheta\geqq 0,\ \kappa\geqq 0\) and \(f\in\mathfrak{A}\), Fitri and Thomas [14] studied a new class \(G(\vartheta,\kappa)\) which satisfies the following: \[\mathfrak{R}\left\{\left[\frac{zf^{\prime}(z)}{(f(z))^{1-\kappa}z^{\kappa}}+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}+(\kappa-1)\left(\frac{zf^{\prime}(z)}{f(z)}-1\right)\right]^{\vartheta}\left[\frac{zf^{\prime}(z)}{(f(z))^{1-\kappa}z^{\kappa}}\right]^{1-\vartheta}\right\}>0 \tag{1.5}\] and discussed its characterization results.

Inspired fundamentally by the aforesaid works (see [9, 12, 23, 27, 30]), and by the recent work of Murugusundaramoorthy and Vijaya [26], in this paper we describe for the first time a new class \(\mathfrak{G}^{\kappa}_{\vartheta}(\mathcal{X},\varkappa)\), as given in Definition 1.1, which unifies many new subclasses of \(\mathfrak{S}^{*}\) and \(\mathfrak{C}\) in association with GTN. First, we shall find estimates of \(a_{2}\) and \(a_{3}\) for \(f\in\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)\) of the form (1.1), and likewise for \(f^{-1}\in\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)\) and for \(\log\frac{f(z)}{z}\). Further we prove the Fekete-Szego inequality for a general class. Additionally we give certain applications of our results, by way of convolution, to certain classes defined through Poisson, Borel and Pascal distributions.

Now, we define the class \(\mathfrak{G}^{\kappa}_{\vartheta}(\mathcal{X},\varkappa)\):

**Definition 1.1**.: For \(\vartheta\geqq 0,\ \kappa\geqq 0\), a function \(f\in\mathfrak{A}\) is in the class \(\mathfrak{G}^{\kappa}_{\vartheta}(\mathcal{X},\varkappa)\) if \[\left[\frac{zf^{\prime}(z)}{(f(z))^{1-\kappa}z^{\kappa}}+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}+(\kappa-1)\left(\frac{zf^{\prime}(z)}{f(z)}-1\right)\right]^{\vartheta}\left[\frac{zf^{\prime}(z)}{(f(z))^{1-\kappa}z^{\kappa}}\right]^{1-\vartheta}\prec e^{(z+\varkappa\frac{z^{2}}{2})}=:\mathcal{X}(z);\ z=re^{i\theta}\in\Delta. \tag{1.6}\]

By specializing the parameters \(\vartheta\) and \(\kappa\) we obtain the following new subclasses of \(\mathfrak{S}\), which have not been discussed so far for functions associated with GTN:

_Remark 1.2_.: 1. \(\mathfrak{G}^{0}_{0}(\mathcal{X},\varkappa)\equiv\mathfrak{S}^{*}(\mathcal{X},\varkappa)=\left\{f\in\mathfrak{A}:\frac{zf^{\prime}(z)}{f(z)}\prec e^{(z+\varkappa\frac{z^{2}}{2})};z=re^{i\theta}\in\Delta\right\}\) 2. \(\mathfrak{G}^{0}_{\vartheta}(\mathcal{X},\varkappa)\equiv\mathfrak{G}_{\kappa}(\mathcal{X},\varkappa)=\left\{f\in\mathfrak{A}:\left[\frac{zf^{\prime}(z)}{f(z)}\right]^{1-\vartheta}\left[\frac{(zf^{\prime}(z))^{\prime}}{f^{\prime}(z)}\right]^{\vartheta}\prec e^{(z+\varkappa\frac{z^{2}}{2})};z=re^{i\theta}\in\Delta\right\}\) 3. \(\mathfrak{G}^{1}_{0}(\mathcal{X},\varkappa)\equiv\mathfrak{C}(\mathcal{X},\varkappa)=\left\{f\in\mathfrak{A}:\frac{(zf^{\prime}(z))^{\prime}}{f^{\prime}(z)}\prec e^{(z+\varkappa\frac{z^{2}}{2})};z=re^{i\theta}\in\Delta\right\}\)
4. \(\mathfrak{G}^{1}_{1}(\mathcal{X},\varkappa)\equiv\mathfrak{R}(\mathcal{X},\varkappa)=\left\{f\in\mathfrak{A}:f^{\prime}(z)+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\prec e^{(z+\varkappa\frac{z^{2}}{2})};z=re^{i\theta}\in\Delta\right\}\)

### A set of Lemmas

In recent years, Fekete-Szego results for starlike, convex and various other subclasses of analytic functions have been studied; the interested reader may refer to [26, 27, 39, 40, 41]. In this article we also aim to discuss the Fekete-Szego problem for \(f\in\mathfrak{G}^{\kappa}_{\vartheta}(\mathcal{X},\varkappa)\) in association with GTN. To prove our main results, we need the following. Let \(\mathbf{P}\) denote the class of functions with positive real part in \(\Delta\), assumed to be of the form \[p(z)=1+c_{1}z+c_{2}z^{2}+\cdots,z\in\Delta.\]

**Lemma 1.3**.: _[_20_]_ _If \(p\in\mathbf{P}\), then_ \[|c_{2}-vc_{1}^{2}|\leqq\left\{\begin{array}{ll}-4v+2,&\mbox{ if }\quad v\leqq 0,\\ 2,&\mbox{ if }\quad 0\leqq v\leqq 1,\\ 4v-2,&\mbox{ if }\quad v\geqq 1.\end{array}\right.\] _When \(v<0\) or \(v>1\), the equality holds if and only if \(p_{1}(z)\) is \(\dfrac{1+z}{1-z}\) or one of its rotations. If \(0<v<1\), then equality holds if and only if \(p_{2}(z)=\dfrac{1+z^{2}}{1-z^{2}}\) or one of its rotations. If \(v=0\), the equality holds if and only if_ \[p_{3}(z)=\left(\dfrac{1}{2}+\dfrac{1}{2}\eta\right)\dfrac{1+z}{1-z}+\left(\dfrac{1}{2}-\dfrac{1}{2}\eta\right)\dfrac{1-z}{1+z}\quad(0\leqq\eta\leqq 1)\] _or one of its rotations. If \(v=1\), the equality holds if and only if \(p_{1}\) is the reciprocal of one of the functions such that the equality holds when \(v=0\)._

We also need the following:

**Lemma 1.4**.: _[_13_]_ _If \(p\in\mathbf{P}\), then_ \[|\ c_{n}\ |\leq 2\ \ \forall n\geq 1\qquad\mbox{and}\qquad|c_{2}-\dfrac{c_{1}^{2}}{2}|\leq 2-\dfrac{|c_{1}|^{2}}{2}.\]

**Lemma 1.5**.: _[_19_]_ _If \(p\in\mathbf{P}\) and \(v\in\mathbb{C}\), then_ \[|c_{2}-vc_{1}^{2}|\leqq 2\max(1,|2v-1|).\] _The result is sharp for the functions_ \[p_{1}(z)=\dfrac{1+z^{2}}{1-z^{2}},\quad p_{2}(z)=\dfrac{1+z}{1-z}.\]

**Lemma 1.6**.: _[_16_]_ _If \(p\in\mathbf{P}\) and \(\hbar\in\mathbb{C}\), then_ \[\left|c_{2}-\hbar\dfrac{c_{1}^{2}}{2}\right|\leq\ \max\{2,2|\hbar-1|\}=\left\{\begin{array}{ll}2,&0\leq\hbar\leq 2;\\ 2|\hbar-1|,&\mbox{ elsewhere.}\end{array}\right.\] _The result is sharp for the functions defined by \(p_{1}(z)=\frac{1+z^{2}}{1-z^{2}}\) or \(p_{2}(z)=\frac{1+z}{1-z}\)._

## 2. Coefficient Estimate

By making use of Lemma 1.3, we prove the following:

**Theorem 2.1**.: _Let \(\vartheta\geq 0\) and \(\kappa\geq 0\). If \(f\in\mathfrak{G}^{\kappa}_{\vartheta}(\mathcal{X},\varkappa)\) is as in (1.1), then_ \[|a_{2}| \leq \frac{1}{(1+\vartheta)(1+\kappa)},\] \[|a_{3}| \leq \frac{1}{(1+2\vartheta)(1+2\kappa)}\max\{1,\Big{|}\frac{\big{(}M\kappa^{2}+S\kappa+Q\big{)}}{((1+\vartheta)(1+\kappa))^{2}}+\frac{1+\varkappa}{2}\Big{|}\}\] _where_ \[M=\vartheta^{2}-\vartheta+1;\quad S=2\vartheta^{2}-4\vartheta+1;\quad Q=\vartheta^{2}-7\vartheta-2.\] These results are sharp.
Proof.: Define \(P(z)\in\mathbf{P}\) by \[P(z):=\frac{1+w(z)}{1-w(z)}=1+c_{1}z+c_{2}z^{2}+\cdots.\] It is easy to see that \[w(z)=\frac{P(z)-1}{P(z)+1}=\frac{1}{2}\left[c_{1}z+\left(c_{2}-\frac{c_{1}^{2}}{2}\right)z^{2}+\left(c_{3}-c_{1}c_{2}+\frac{c_{1}^{3}}{4}\right)z^{3}+\cdots\right]. \tag{2.1}\] Since \(w(z)\) is a Schwarz function, we see that \(\mathfrak{R}(P(z))>0\) and \(P(0)=1\). Thus \[\mathcal{X}(w(z))=e^{\frac{P(z)-1}{P(z)+1}+\frac{\varkappa}{2}\left(\frac{P(z)-1}{P(z)+1}\right)^{2}}=1+\frac{c_{1}}{2}z+\Big{(}\frac{c_{2}}{2}+\frac{(\varkappa-1)c_{1}^{2}}{8}\Big{)}z^{2}+\Big{(}\frac{c_{3}}{2}+(\varkappa-1)\frac{c_{1}c_{2}}{4}+\frac{(1-3\varkappa)}{48}c_{1}^{3}\Big{)}z^{3}+\cdots. \tag{2.2}\] If \(f\in\mathfrak{G}^{\kappa}_{\vartheta}(\mathcal{X},\varkappa)\), then there is a Schwarz function \(w(z)\), analytic in \(\Delta\) with \(w(0)=0\) and \(|w(z)|<1\) in \(\Delta\), such that \[\left[\frac{zf^{\prime}(z)}{(f(z))^{1-\kappa}z^{\kappa}}+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}+(\kappa-1)\left(\frac{zf^{\prime}(z)}{f(z)}-1\right)\right]^{\vartheta}\left[\frac{zf^{\prime}(z)}{(f(z))^{1-\kappa}z^{\kappa}}\right]^{1-\vartheta}=\mathcal{X}(w(z))=e^{(w(z)+\varkappa\frac{[w(z)]^{2}}{2})}. \tag{2.3}\] For given \(f(z)\) of the form (1.1), a computation shows that \[\frac{zf^{\prime}(z)}{f(z)}=1+a_{2}z+(2a_{3}-a_{2}^{2})z^{2}+(3a_{4}+a_{2}^{3}-3a_{3}a_{2})z^{3}+\cdots.\] Similarly we have \[1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}=1+2a_{2}z+(6a_{3}-4a_{2}^{2})z^{2}+\cdots.\] Let us define \(W(z)\) by \[W(z):=\left[\frac{zf^{\prime}(z)}{(f(z))^{1-\kappa}z^{\kappa}}+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}+(\kappa-1)\left(\frac{zf^{\prime}(z)}{f(z)}-1\right)\right]^{\vartheta}\left[\frac{zf^{\prime}(z)}{(f(z))^{1-\kappa}z^{\kappa}}\right]^{1-\vartheta}.\] An easy computation shows that \[W(z)=1+(1+\vartheta)(1+\kappa)a_{2}z+(1+2\vartheta)(2+\kappa)a_{3}z^{2}+\left(\kappa^{2}(\vartheta^{2}-\vartheta+1)+\kappa(2\vartheta^{2}-4\vartheta+1)+(\vartheta^{2}-7\vartheta-2)\right)a_{2}^{2}z^{2}+\cdots=1+b_{1}z+b_{2}z^{2}+\cdots. \tag{2.5}\] Now by (2.2) and (2.5), \[b_{1}=\frac{c_{1}}{2}\qquad\text{and}\qquad b_{2}=\frac{c_{2}}{2}+\frac{(\varkappa-1)c_{1}^{2}}{8}. \tag{2.6}\]
In view of the equations (2.5) and (2.6), we see that \[b_{1}=(1+\vartheta)(1+\kappa)a_{2}, \tag{2.7}\] \[b_{2}=(1+2\vartheta)(2+\kappa)a_{3}+\left(\kappa^{2}(\vartheta^{2}-\vartheta+1)+\kappa(2\vartheta^{2}-4\vartheta+1)+(\vartheta^{2}-7\vartheta-2)\right)a_{2}^{2}, \tag{2.8}\] or equivalently, \[a_{2}=\frac{c_{1}}{2(1+\vartheta)(1+\kappa)}, \tag{2.9}\] \[a_{3}=\frac{1}{(1+2\vartheta)(1+2\kappa)}\left(\frac{c_{2}}{2}-\frac{c_{1}^{2}}{8}\left[1-\varkappa-\frac{2\left(\kappa^{2}(\vartheta^{2}-\vartheta+1)+\kappa(2\vartheta^{2}-4\vartheta+1)+(\vartheta^{2}-7\vartheta-2)\right)}{\left((1+\vartheta)(1+\kappa)\right)^{2}}\right]\right)=\frac{1}{2(1+2\vartheta)(1+2\kappa)}\left(c_{2}-\frac{c_{1}^{2}}{4}\left[1-\varkappa-\frac{2\left(\kappa^{2}(\vartheta^{2}-\vartheta+1)+\kappa(2\vartheta^{2}-4\vartheta+1)+(\vartheta^{2}-7\vartheta-2)\right)}{\left((1+\vartheta)(1+\kappa)\right)^{2}}\right]\right). \tag{2.10}\] For brevity we let \[M=\vartheta^{2}-\vartheta+1;\quad S=2\vartheta^{2}-4\vartheta+1;\quad Q=\vartheta^{2}-7\vartheta-2, \tag{2.11}\] so that \[a_{3}=\frac{1}{2(1+2\vartheta)(1+2\kappa)}\left(c_{2}-\frac{c_{1}^{2}}{4}\left[1-\varkappa-\frac{2\left(M\kappa^{2}+S\kappa+Q\right)}{\left((1+\vartheta)(1+\kappa)\right)^{2}}\right]\right). \tag{2.12}\] Now by taking absolute values in (2.9) and applying Lemma 1.4, we get \[|a_{2}|\leqq\frac{1}{(1+\vartheta)(1+\kappa)},\] and by taking absolute values in (2.12) and applying Lemma 1.5 we have \[|a_{3}| \leqq\frac{1}{(1+2\vartheta)(1+2\kappa)}\max\{1,\big{|}2\times\frac{1}{4}\left[1-\varkappa-\frac{2\left(M\kappa^{2}+S\kappa+Q\right)}{\left((1+\vartheta)(1+\kappa)\right)^{2}}\right]-1\big{|}\} = \frac{1}{(1+2\vartheta)(1+2\kappa)}\max\{1,\frac{1}{2}\big{|}-\left(\frac{2\left(M\kappa^{2}+S\kappa+Q\right)}{\left((1+\vartheta)(1+\kappa)\right)^{2}}\right)-1-\varkappa\big{|}\} = \frac{1}{(1+2\vartheta)(1+2\kappa)}\max\{1,\Big{|}\frac{\left(M\kappa^{2}+S\kappa+Q\right)}{\left((1+\vartheta)(1+\kappa)\right)^{2}}+\frac{1+\varkappa}{2}\Big{|}\}.\] The first two bounds are sharp for the function \(f:\Delta\longrightarrow\mathbb{C}\) given by \[f(z)=\int_{0}^{z}\mathcal{X}(t)dt=\int_{0}^{z}e^{t+\varkappa\frac{t^{2}}{2}}dt=z+\frac{z^{2}}{2}+\frac{1+\varkappa}{6}z^{3}+\frac{1+3\varkappa}{24}z^{4}+\frac{3\varkappa^{2}+6\varkappa+1}{120}z^{5}+\cdots.\] Here we have \(b_{1}=1\) and \(b_{2}=1/2\).
By using (2.7) and (2.6), we get \[|a_{2}|=\frac{1}{(1+\vartheta)(1+\kappa)}\] and again by using (2.6) and (2.8) we have \[c_{2}+\frac{(\varkappa-1)c_{1}^{2}}{4}=(1+2\vartheta)(1+2\kappa)a_{3}+\left(\kappa^{2}(\vartheta^{2}-\vartheta+1)+\kappa(2\vartheta^{2}-4\vartheta+1)+(\vartheta^{2}-7\vartheta-2)\right)a_{2}^{2}.\] Substituting \(a_{2}=\frac{1}{(1+\vartheta)(1+\kappa)}\), a simple calculation and taking absolute values gives \[|a_{3}|=\frac{1}{2(\vartheta+2)(1+2\kappa)}\left|\frac{\vartheta^{2}+\vartheta-2(\vartheta+3)\kappa-2}{\left((1+\vartheta)(1+\kappa)\right)^{2}}-\varkappa-1\right|.\]

By assuming \(\vartheta=0\) and \(\kappa\geqq 0\) we state the following:

_Remark 2.2_.: If \(f\in\mathfrak{G}_{\kappa}(\mathcal{X},\varkappa)\) is as in (1.1) then \[|a_{2}| \leq \frac{1}{1+\kappa},\] \[|a_{3}| \leq \frac{1}{2(1+2\kappa)}\max\{1,\big{|}\frac{\kappa^{2}+8\kappa+3}{2(1+\kappa)^{2}}+\varkappa\big{|}\}=\frac{1}{2(1+2\kappa)}\left(\frac{\kappa^{2}+8\kappa+3}{2(1+\kappa)^{2}}+\varkappa\right).\]

By fixing \(\vartheta=0=\kappa\) we state the following:

_Remark 2.3_.: If \(f\in\mathfrak{S}^{*}(\mathcal{X},\varkappa)\) is as assumed in (1.1) then \[|a_{2}|\leq 1,\qquad\text{and}\qquad|a_{3}|\leq\frac{1}{2}\max\{1,\big{|}\frac{3}{2}+\varkappa\big{|}\}=\frac{1}{2}\Big{(}\frac{3}{2}+\varkappa\Big{)}.\]

By assuming \(\vartheta=0\) and \(\kappa=1\) we state the following:

_Remark 2.4_.: If \(f\in\mathfrak{C}(\mathcal{X},\varkappa)\) is as in (1.1), then \[|a_{2}|\leq\frac{1}{2},\qquad\text{and}\qquad|a_{3}|\leq\frac{1}{6}\max\{1,\big{|}\frac{1}{2}+\varkappa\big{|}\}=\frac{1}{6}\Big{(}\frac{1}{2}+\varkappa\Big{)}.\]

By letting \(\kappa=0\) we state the following:

_Remark 2.5_.: If \(f\in\mathfrak{G}^{0}_{\vartheta}(\mathcal{X},\varkappa)=\mathfrak{B}_{\vartheta}(\mathcal{X},\varkappa)\) is as in (1.1), then \[|a_{2}| \leq \frac{1}{1+\vartheta},\] \[|a_{3}| \leq \frac{1}{\vartheta+2}\max\{1,\frac{1}{2}\Big{|}\left(\frac{\vartheta^{2}+\vartheta-2}{(1+\vartheta)^{2}}\right)-1-\varkappa\Big{|}\}=\frac{1}{2(1+\vartheta)}\left(\frac{\vartheta+3}{(1+\vartheta)^{2}}+\varkappa\right).\]

By fixing \(\vartheta=1\) and \(\kappa=0\) we state the following:

_Remark 2.6_.: If \(f\in\mathfrak{R}(\mathcal{X},\varkappa)\) is given by (1.1) then \[|a_{2}| \leq \frac{1}{2},\] \[|a_{3}| \leq \frac{1}{3}\max\{1,\frac{1}{2}\big{|}1+\varkappa\big{|}\}=\frac{1}{6}\left(1+\varkappa\right).\]
## 3. Fekete-Szego type problems

**Theorem 3.1**.: _Let \(0\leqq\mu\leqq 1\), \(\vartheta\geqq 0\) and \(\kappa\geqq 0\). If \(f\in\mathfrak{G}^{\kappa}_{\vartheta}(\mathcal{X},\varkappa)\) is assumed as in (1.1) then_ \[|a_{3}-\mu a_{2}^{2}| \leq \left\{\begin{array}{ll}\frac{1}{2\mathbf{L}}\left(1+\varkappa+\frac{\mathbf{\aleph}}{\mathbf{W}^{2}}\right),&\text{ if }\quad\mu\leqq\sigma_{1},\\ \frac{1}{\mathbf{L}},&\text{ if }\quad\sigma_{1}\leqq\mu\leqq\sigma_{2},\\ \frac{-1}{2\mathbf{L}}\left(1+\varkappa+\frac{\mathbf{\aleph}}{\mathbf{W}^{2}}\right),&\text{ if }\quad\mu\geqq\sigma_{2},\end{array}\right.\] _where, for convenience,_ \[\sigma_{1}=\frac{(\varkappa-1)\mathbf{W}^{2}+2(M\kappa^{2}+S\kappa+Q)}{2\mathbf{L}};\qquad\sigma_{2}=\frac{\varkappa\mathbf{W}^{2}+2(M\kappa^{2}+S\kappa+Q)}{2\mathbf{L}};\] \[\mathbf{\aleph}:=2(M\kappa^{2}+S\kappa+Q)-2\mu\mathbf{L}, \tag{3.1}\] \[\mathbf{L}:=(1+2\vartheta)(1+2\kappa), \tag{3.2}\] _and_ \[\mathbf{W}:=(1+\vartheta)(1+\kappa) \tag{3.3}\] _and \(M,S,Q\) are as in (2.11)._

Proof.: By using (2.9) and (2.10), we get \[a_{3}-\mu a_{2}^{2}=\frac{1}{2(1+2\vartheta)(1+2\kappa)}\left(c_{2}-\frac{c_{1}^{2}}{4}\times\left[1-\varkappa-\frac{2(M\kappa^{2}+S\kappa+Q)-2\mu(1+2\vartheta)(1+2\kappa)}{\left((1+\vartheta)(1+\kappa)\right)^{2}}\right]\right)=\frac{1}{2\mathbf{L}}\left(c_{2}-\upsilon c_{1}^{2}\right)\] where \[\upsilon:=\frac{1}{4}\left[1-\varkappa-\frac{2(M\kappa^{2}+S\kappa+Q)-2\mu(1+2\vartheta)(1+2\kappa)}{\left((1+\vartheta)(1+\kappa)\right)^{2}}\right]=\frac{1}{4}\left[1-\varkappa-\frac{2(M\kappa^{2}+S\kappa+Q)-2\mu\mathbf{L}}{\mathbf{W}^{2}}\right].\] The statement of Theorem 3.1 now follows by applying Lemma 1.3.

Using Lemma 1.5, we directly find the following:

**Theorem 3.2**.: _Let \(0\leq\vartheta\leqq 1\) and \(0\leqq\kappa\leqq 1\). If \(f\in\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)\), then for \(\mu\in\mathbb{C}\) we have_ \[|a_{3}-\mu a_{2}^{2}| \leq \frac{1}{(\vartheta+2)(1+2\kappa)}\max\left\{1,\frac{1}{2}\left|-1-\varkappa-\frac{2(M\kappa^{2}+S\kappa+Q)-2\mu(1+2\vartheta)(1+2\kappa)}{\left((1+\vartheta)(1+\kappa)\right)^{2}}\right|\right\} \leq \frac{1}{\mathbf{L}}\max\left\{1,\frac{1}{2}\left|1+\varkappa+\frac{2(M\kappa^{2}+S\kappa+Q)-2\mu\mathbf{L}}{\mathbf{W}^{2}}\right|\right\}.\]

## 4. Coefficient inequalities for \(f^{-1}\)

**Theorem 4.1**.: _If \(f\in\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)\) and \(f^{-1}(w)=w+\sum\limits_{n=2}^{\infty}d_{n}w^{n}\) is the inverse function of \(f\) with \(|w|<r_{0}\), where \(r_{0}\) is greater than the radius of the Koebe domain of the class \(\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)\), we have_ \[|d_{2}| \leq\frac{1}{(1+\vartheta)(1+\kappa)},\] \[|d_{3}| \leq\frac{1}{2\mathbf{L}}\max\left\{1,\Big{|}\ \frac{-(1+\varkappa)\mathbf{W}^{2}-2\left(M\kappa^{2}+S\kappa+Q\right)+4\mathbf{L}}{2\mathbf{W}^{2}}\ \Big{|}\ \right\}.\] _For any \(\hbar\in\mathbb{C}\), we have_ \[|\ d_{3}-\hbar d_{2}^{2}\ |\leq\frac{1}{\mathbf{L}}\max\left\{1,\Big{|}\ \frac{(1+\varkappa)\mathbf{W}^{2}+2\left(M\kappa^{2}+S\kappa+Q\right)+2\mathbf{L}(\hbar-2)}{2\mathbf{W}^{2}}\ \Big{|}\ \right\} \tag{4.1}\] _where \(M,S,Q\) are as in (2.11) and \(\mathbf{L},\mathbf{W}\) are as in (3.2) and (3.3)._

Proof.: As \[f^{-1}(w)=w+\sum\limits_{n=2}^{\infty}d_{n}w^{n}, \tag{4.2}\] it can be seen that \[f^{-1}(f(z))=f\{f^{-1}(z)\}=z. \tag{4.3}\] From (1.1) and (4.3), we get \[f^{-1}\Big{(}z+\sum\limits_{n=2}^{\infty}a_{n}z^{n}\Big{)}=z. \tag{4.4}\]
From (4.3) and (4.4), one can obtain \[z+(a_{2}+d_{2})z^{2}+(a_{3}+2a_{2}d_{2}+d_{3})z^{3}+\cdots=z. \tag{4.5}\] By comparing the coefficients of \(z^{2}\) and \(z^{3}\) in (4.5), it can be seen that \[d_{2}=-a_{2}, \tag{4.6}\] \[d_{3}=2a_{2}^{2}-a_{3}. \tag{4.7}\] From relations (2.9), (2.10), (4.6) and (4.7), \[d_{2}=-\frac{c_{1}}{2(1+\vartheta)(1+\kappa)}=-\frac{c_{1}}{2\mathbf{W}}. \tag{4.8}\] The estimate for \(|d_{3}|\) follows at once by fixing \(\mu=2\) in the Fekete-Szego Theorem 3.2. For any \(\hbar\in\mathbb{C}\), consider \[d_{3}-\hbar d_{2}^{2}=-\frac{1}{2\mathbf{L}}\Big{(}c_{2}-\frac{(1-\varkappa)\mathbf{W}^{2}-2\left(M\kappa^{2}+S\kappa+Q\right)+2\mathbf{L}(2-\hbar)}{4\mathbf{W}^{2}}c_{1}^{2}\Big{)}. \tag{4.9}\] Taking absolute values in (4.9) and applying Lemma 1.5 to the right-hand side of (4.9), one can derive the result in (4.1).

_Remark 4.2_.: For the function classes given in Remark 1.2, one can easily state results analogous to Theorem 4.1 by fixing the parameters suitably in Theorem 4.1. It is worth noting that these results are new and have not been studied so far in association with telephone numbers.

## 5. Logarithmic coefficients of \(f\)

The _logarithmic coefficients_ \(\gamma_{n}\) of \(f\in\mathfrak{S}\) are defined with the help of the following series expansion: \[\log\frac{f(z)}{z}=2\sum_{n=1}^{\infty}\gamma_{n}(f)z^{n},\ z\in\Delta. \tag{5.1}\] Recall that we can rewrite (5.1) in series form as follows: \[2\sum_{n=1}^{\infty}\gamma_{n}z^{n}= a_{2}z+a_{3}z^{2}+a_{4}z^{3}+\cdots-\frac{1}{2}[a_{2}z+a_{3}z^{2}+a_{4}z^{3}+\ldots]^{2}+\frac{1}{3}[a_{2}z+a_{3}z^{2}+a_{4}z^{3}+\cdots]^{3}+\cdots,\ z\in\Delta,\] and comparing the coefficients of \(z^{n}\) for \(n=1,2\), it follows that \[\left\{\begin{array}{l}2\gamma_{1}=a_{2},\\ 2\gamma_{2}=a_{3}-\frac{1}{2}a_{2}^{2}.\end{array}\right. \tag{5.2}\]

**Theorem 5.1**.: _Let \(\kappa\geq 0\) and \(\vartheta\geq 0\). If \(f\in\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)\) is as assumed in (1.1), then_ \[|\gamma_{1}| \leq \frac{1}{2(1+\vartheta)(1+\kappa)},\] \[|\gamma_{2}| \leq \frac{1}{\mathbf{L}}\max\left\{1,\frac{1}{2}\left|1+\varkappa+\frac{2(M\kappa^{2}+S\kappa+Q)-\mathbf{L}}{\mathbf{W}^{2}}\right|\right\}\] _where \(M,S,Q\) are as in (2.11) and \(\mathbf{L},\mathbf{W}\) are as in (3.2) and (3.3)._

Proof.: We first note that since \(a_{2}=\frac{c_{1}}{2(1+\vartheta)(1+\kappa)}\) by (2.9) and \(|c_{1}|\leq 2\), the inequality for \(|\gamma_{1}|\) is immediate. The result for \(|\gamma_{2}|\) follows by taking \(\mu=\frac{1}{2}\) in the Fekete-Szego Theorem 3.2.

## 6. Application to functions based on convolution

Classes of univalent functions have been intensively studied by numerous researchers from various perspectives involving certain distributions, namely the Borel, binomial, Poisson, logarithm, Pascal and hypergeometric distributions. In this section, based on convolution, we define a new generalized class and discuss Fekete-Szego type problems. In addition we discuss these results based on certain probability distribution series.
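The convolution used in this section is the Hadamard product, which multiplies Taylor coefficients termwise, and the distribution-based operators simply fix the factor coefficients \(\wp_{n}\). A minimal Python sketch of both ingredients (function names are ours; the weight formulas are those appearing in (6.6), (6.9) and (6.10) below):

```python
import math

def hadamard(a, b):
    """Coefficients of (f * g)(z) = z + sum_{n>=2} a_n b_n z^n, with the
    coefficients passed as dicts {n: coefficient} for n >= 2."""
    return {n: a[n] * b[n] for n in a if n in b}

def poisson_w(n, m):    # psi_n = m^(n-1) e^(-m) / (n-1)!
    return m ** (n - 1) * math.exp(-m) / math.factorial(n - 1)

def borel_w(n, c):      # Lambda_n = (c(n-1))^(n-2) e^(-c(n-1)) / (n-1)!
    return (c * (n - 1)) ** (n - 2) * math.exp(-c * (n - 1)) / math.factorial(n - 1)

def pascal_w(n, s, q):  # Phi_n = C(n+s-2, s-1) q^(n-1) (1-q)^s
    return math.comb(n + s - 2, s - 1) * q ** (n - 1) * (1 - q) ** s

# e.g. the weights entering Theorems 6.1 and 6.2 in the Poisson case:
m = 1.5
w2, w3 = poisson_w(2, m), poisson_w(3, m)   # m e^{-m} and (m^2/2) e^{-m}
```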
Let \(\wp(z)=z+\sum_{n=2}^{\infty}\wp_{n}z^{n}\quad(\wp_{n}>0)\) and \(f\in\mathfrak{A}\); then \[\mathcal{F}(z)=(f*\wp)(z)=z+\sum_{n=2}^{\infty}\wp_{n}a_{n}z^{n}=z+\wp_{2}a_{2}z^{2}+\wp_{3}a_{3}z^{3}+\cdots. \tag{6.1}\] We define the class \(\mathfrak{G}^{\wp}_{\vartheta,\kappa}(\mathcal{X},\varkappa)\) in the following way: \[\mathfrak{G}^{\wp}_{\vartheta,\kappa}(\mathcal{X},\varkappa):=\{f\in\mathfrak{A}:\mathcal{F}(z)\in\mathfrak{G}^{\kappa}_{\vartheta}(\mathcal{X},\varkappa)\},\] where \(\mathfrak{G}^{\kappa}_{\vartheta}(\mathcal{X},\varkappa)\) is given by Definition 1.1, that is, \[\left[\frac{z\mathcal{F}^{\prime}(z)}{(\mathcal{F}(z))^{1-\kappa}z^{\kappa}}+\frac{z\mathcal{F}^{\prime\prime}(z)}{\mathcal{F}^{\prime}(z)}+(\kappa-1)\left(\frac{z\mathcal{F}^{\prime}(z)}{\mathcal{F}(z)}-1\right)\right]^{\vartheta}\left[\frac{z\mathcal{F}^{\prime}(z)}{(\mathcal{F}(z))^{1-\kappa}z^{\kappa}}\right]^{1-\vartheta}\prec\mathcal{X}(z).\] Now, we obtain the coefficient estimates for \(f\in\mathfrak{G}^{\wp}_{\vartheta,\kappa}(\mathcal{X},\varkappa)\) from the corresponding estimates for \(f\in\mathfrak{G}^{\kappa}_{\vartheta}(\mathcal{X},\varkappa)\). Applying Theorem 3.1 to the function (6.1), we get the following Theorems 6.1 and 6.2 after an obvious change of the parameter \(\mu\). Our main result is the following:

**Theorem 6.1**.: _Let \(0\leqq\kappa\leqq 1\) and \(0\leqq\vartheta\leqq 1\). If \(f\in\mathfrak{G}^{\wp}_{\vartheta,\kappa}(\mathcal{X},\varkappa)\), then for \(\mu\in\mathbb{C}\) we have_ \[|a_{3}-\mu a_{2}^{2}| \leq\frac{2}{(1+2\vartheta)(1+2\kappa)\wp_{3}}\max\left\{1,\frac{1}{2}\left|-1-\varkappa+\frac{2\left(M\kappa^{2}+S\kappa+Q\right)}{\left((1+\vartheta)(1+\kappa)\wp_{2}\right)^{2}}+\frac{2\mu(\vartheta+2)(1+2\kappa)\wp_{3}}{((1+\vartheta)(1+\kappa)\wp_{2})^{2}}\right|\right\},\] _where \(M,S,Q\) are as in (2.11)._

Proof.: For \(f(z)\in\mathfrak{G}^{\wp}_{\vartheta,\kappa}(\mathcal{X},\varkappa)\) and \((f*\wp)(z)=\mathcal{F}(z)\) given by (6.1) we have \[P(z):=\left[\frac{z\mathcal{F}^{\prime}(z)}{(\mathcal{F}(z))^{1-\kappa}z^{\kappa}}+\frac{z\mathcal{F}^{\prime\prime}(z)}{\mathcal{F}^{\prime}(z)}+(\kappa-1)\left(\frac{z\mathcal{F}^{\prime}(z)}{\mathcal{F}(z)}-1\right)\right]^{\vartheta}\left[\frac{z\mathcal{F}^{\prime}(z)}{(\mathcal{F}(z))^{1-\kappa}z^{\kappa}}\right]^{1-\vartheta}=1+b_{1}z+b_{2}z^{2}+\cdots. \tag{6.2}\] Continuing as in Theorem 2.1, we get \[P(z)=1+(1+\vartheta)(1+\kappa)\wp_{2}a_{2}z+(1+2\vartheta)(2+\kappa)\wp_{3}a_{3}z^{2}+\left(\kappa^{2}(\vartheta^{2}-\vartheta+1)+\kappa(2\vartheta^{2}-4\vartheta+1)+(\vartheta^{2}-7\vartheta-2)\right)\wp_{2}^{2}a_{2}^{2}z^{2}+\cdots. \tag{6.3}\] From (2.7)-(2.10) and from equation (6.3), we obtain \[a_{2}=\frac{c_{1}}{2(1+\vartheta)(1+\kappa)\wp_{2}}, \tag{6.4}\] \[a_{3}=\frac{1}{2(1+2\vartheta)(1+2\kappa)\wp_{3}}\left(c_{2}-\frac{c_{1}^{2}}{4}\left[1-\varkappa-\frac{2\left(M\kappa^{2}+S\kappa+Q\right)}{\left((1+\vartheta)(1+\kappa)\wp_{2}\right)^{2}}\right]\right), \tag{6.5}\] and \[a_{3}-\mu a_{2}^{2}=\frac{1}{2(1+2\vartheta)(1+2\kappa)\wp_{3}}\left(c_{2}-\frac{c_{1}^{2}}{4}\left[1-\varkappa-\frac{2\left(M\kappa^{2}+S\kappa+Q\right)}{\left((1+\vartheta)(1+\kappa)\wp_{2}\right)^{2}}\right]\right)-\mu\frac{c_{1}^{2}}{4(1+\vartheta)^{2}(1+\kappa)^{2}\wp_{2}^{2}}=\frac{1}{2(1+2\vartheta)(1+2\kappa)\wp_{3}}\left[c_{2}-\frac{c_{1}^{2}}{4}\left(1-\varkappa-\frac{2\left(M\kappa^{2}+S\kappa+Q\right)}{\left((1+\vartheta)(1+\kappa)\wp_{2}\right)^{2}}+\mu\frac{2(1+2\vartheta)(1+2\kappa)\wp_{3}}{(1+\vartheta)^{2}(1+\kappa)^{2}\wp_{2}^{2}}\right)\right].\] Consequently, by applying Lemma 1.5 we get the desired result. The result is sharp on assuming \[\frac{z\mathcal{F}^{\prime}(z)}{\mathcal{F}(z)}\left(\frac{\mathcal{F}(z)}{z}\right)^{\vartheta}+\kappa\left[1+\frac{z\mathcal{F}^{\prime\prime}(z)}{\mathcal{F}^{\prime}(z)}-\frac{z\mathcal{F}^{\prime}(z)}{\mathcal{F}(z)}+\vartheta\left(\frac{z\mathcal{F}^{\prime}(z)}{\mathcal{F}(z)}-1\right)\right]=\mathcal{X}(z)\] and \[\frac{z\mathcal{F}^{\prime}(z)}{\mathcal{F}(z)}\left(\frac{\mathcal{F}(z)}{z}\right)^{\vartheta}+\kappa\left[1+\frac{z\mathcal{F}^{\prime\prime}(z)}{\mathcal{F}^{\prime}(z)}-\frac{z\mathcal{F}^{\prime}(z)}{\mathcal{F}(z)}+\vartheta\left(\frac{z\mathcal{F}^{\prime}(z)}{\mathcal{F}(z)}-1\right)\right]=\mathcal{X}(z^{2}).\]

**Theorem 6.2**.: _Let \(0\leqq\mu\leqq 1\), \(\vartheta\geqq 0\), \(\kappa\geqq 0\) and \(\wp_{n}>0\). If \(f\in\mathfrak{G}_{\vartheta,\kappa}^{\wp}(\mathcal{X},\varkappa)\) is given by (1.1) then_ \[|a_{3}-\mu a_{2}^{2}| \leqq\left\{\begin{array}{ll}\frac{1}{2\mathbf{L}\wp_{3}}\left(1+\varkappa+\frac{\mathbf{\aleph}_{2}}{\mathbf{W}^{2}}\right),&\text{ if }\ \mu\leqq\sigma_{1},\\ \frac{1}{\mathbf{L}\wp_{3}},&\text{ if }\ \sigma_{1}\leqq\mu\leqq\sigma_{2},\\ \frac{-1}{2\mathbf{L}\wp_{3}}\left(1+\varkappa+\frac{\mathbf{\aleph}_{2}}{\mathbf{W}^{2}}\right),&\text{ if }\ \mu\geqq\sigma_{2},\end{array}\right.\] _where, for convenience,_ \[\sigma_{1}:=\frac{\wp_{2}^{2}}{\wp_{3}}\left[\frac{(\varkappa-1)\mathbf{W}^{2}+2(M\kappa^{2}+S\kappa+Q)}{2\mathbf{L}}\right],\qquad\sigma_{2}:=\frac{\wp_{2}^{2}}{\wp_{3}}\left[\frac{\varkappa\mathbf{W}^{2}+2(M\kappa^{2}+S\kappa+Q)}{2\mathbf{L}}\right],\] \[\mathbf{\aleph}_{2}:=2(M\kappa^{2}+S\kappa+Q)-2\mu\frac{\wp_{3}}{\wp_{2}^{2}},\] _\(M,S,Q\) are as in (2.11) and \(\mathbf{L},\mathbf{W}\) are as in (3.2) and (3.3)._

Proof.: By (6.4), (6.5) and proceeding as in Theorems 3.1 and 6.1 we get the required result.

### Application to functions based on certain distributions

A variable \(x\) is said to be Poisson distributed if \[P(x=r)=\frac{m^{r}e^{-m}}{r!},\ r=0,1,2,3,\cdots,\] where \(m\) is called the parameter. In [28], Porwal introduced a power series whose coefficients are probabilities of the Poisson distribution: \[P(m,z)=z+\sum_{n=2}^{\infty}\frac{m^{n-1}}{(n-1)!}e^{-m}z^{n},\qquad z\in\Delta,\quad m>0.\] By the ratio test, the radius of convergence of the above series is infinity. Using the convolution, he defined a linear operator \(\mathcal{J}^{m}:\mathfrak{A}\to\mathfrak{A}\) (see also [28, 29, 9, 24]) by \[\mathcal{J}^{m}f=f(z)*P(m,z)=z+\sum_{n=2}^{\infty}\psi_{n}a_{n}z^{n}=z+\psi_{2}a_{2}z^{2}+\psi_{3}a_{3}z^{3}+\cdots,\qquad z\in\Delta,\] where \(\psi_{n}=\frac{m^{n-1}}{(n-1)!}e^{-m}\). In particular \[\psi_{2}=me^{-m}\qquad\text{and}\qquad\psi_{3}=\frac{m^{2}}{2}e^{-m}. \tag{6.6}\] From (6.6), by taking \(\wp_{2}=me^{-m}=\psi_{2}\) and \(\wp_{3}=\frac{m^{2}}{2}e^{-m}=\psi_{3}\), one can easily state the results (as in Theorems 6.1 and 6.2) associated with the Poisson distribution. Recently, various subclasses of analytic, univalent and bi-univalent functions have been discussed based on the Borel distribution [10, 11, 40] of a discrete random variable \(X\) with parameter \(\varsigma\), whose probability mass function is given by \[p(X=r)=\frac{(\varsigma r)^{r-1}e^{-\varsigma r}}{r!},\quad r=1,2,3,\cdots. \tag{6.7}\]
Recently, Wanas and Khuttar [42] gave a power series representation \[\mathfrak{B}(\varsigma,z)=z+\sum_{n=2}^{\infty}\frac{(\varsigma(n-1))^{n-2}e^{-\varsigma(n-1)}}{(n-1)!}z^{n}\quad(z\in\Delta,0\leq\varsigma\leq 1) \tag{6.8}\] whose coefficients are probabilities of the Borel distribution. By the ratio test, it can be shown that the radius of convergence of the above series is infinity. Let us introduce a linear operator \[L_{\varsigma}:\mathfrak{A}\longrightarrow\mathfrak{A}\] defined by \[L_{\varsigma}f(z)=\mathfrak{B}(\varsigma,z)*f(z)=z+\sum_{n=2}^{\infty}\Lambda_{n}a_{n}z^{n}=z+\Lambda_{2}a_{2}z^{2}+\Lambda_{3}a_{3}z^{3}+\cdots, \tag{6.9}\] where \(\Lambda_{n}=\Lambda_{n}(\varsigma)=\frac{(\varsigma(n-1))^{n-2}e^{-\varsigma(n-1)}}{(n-1)!}\). By fixing \(n=2,3\) we have \(\Lambda_{2}=e^{-\varsigma}\) and \(\Lambda_{3}=\varsigma e^{-2\varsigma}\). Now by taking \(\wp_{2}=\Lambda_{2}=e^{-\varsigma}\) and \(\wp_{3}=\Lambda_{3}=\varsigma e^{-2\varsigma}\), one can state the results of Theorems 6.1 and 6.2 in association with the Borel distribution. Lately, El-Deeb et al. [8, 4] introduced a power series whose coefficients are \[(1-q)^{s},\frac{qs(1-q)^{s}}{1!},\frac{q^{2}s(s+1)(1-q)^{s}}{2!},\frac{q^{3}s(s+1)(s+2)(1-q)^{s}}{3!},\cdots,\] respectively, probabilities of the Pascal distribution: \[\Theta_{q}^{s}(z)=z+\sum_{n=2}^{\infty}\binom{n+s-2}{s-1}q^{n-1}(1-q)^{s}z^{n},\qquad z\in\Delta;\ s\geq 1;\ 0\leq q\leq 1,\] whose radius of convergence is infinity by the ratio test. Now, we define the linear operator \(\Lambda_{q}^{s}:\mathfrak{A}\rightarrow\mathfrak{A}\) by \[\Lambda_{q}^{s}f(z)=\Theta_{q}^{s}(z)*f(z)=z+\sum_{n=2}^{\infty}\Phi_{n}a_{n}z^{n},\qquad z\in\Delta, \tag{6.10}\] where \(\Phi_{n}=\binom{n+s-2}{s-1}q^{n-1}(1-q)^{s}\). Now by taking \[\wp_{2}=\Phi_{2}=\binom{s}{s-1}q(1-q)^{s}\qquad\text{and}\qquad\wp_{3}=\Phi_{3}=\binom{s+1}{s-1}q^{2}(1-q)^{s},\] one can easily state the results (as in Theorems 6.1 and 6.2) associated with the Pascal distribution.

## Conclusion

We investigated coefficient estimates for the class \(\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)\) of analytic functions subordinating generalized telephone numbers, and derived initial coefficient estimates for \(a_{2},a_{3}\) and the Fekete-Szego inequality for \(f\in\mathfrak{G}_{\vartheta}^{\kappa}(\mathcal{X},\varkappa)\). Similar results were obtained for the function \(f^{-1}\) and for \(\log\frac{f(z)}{z}\), together with applications of our results to certain functions defined by convolution products. By appropriately specializing the parameters in Theorems 2.1 to 4.1, one can straightforwardly state the results for the numerous new subclasses listed in Remark 1.2, which are new and have not been discussed so far by way of subordination with telephone numbers. In addition, we can state results, as in Theorems 6.1 and 6.2, for the function classes connected with the Poisson, Borel and Pascal distributions, which also have not been discussed so far. Further, keeping with the latest trends of research, the study can be extended using quantum calculus; see [44, 45].

## Declaration Statements

**Data availability**: No data were used to support this study. **Competing interests**: We declare that we do not have any commercial or associative interests that represent conflicts of interest in connection with this manuscript. There are no professional or other personal interests that can inappropriately influence our submitted work.
**Authors' Contributions**: All authors contributed equally to the writing of this article, and all read and approved the final manuscript for publication. **Funding**: Not applicable.
2306.00794
SlothSpeech: Denial-of-service Attack Against Speech Recognition Models
Deep Learning (DL) models have become popular for executing different speech-related tasks, including automatic speech recognition (ASR). As ASR is being used in different real-time scenarios, it is important that the ASR model remains efficient against minor perturbations to the input. Hence, evaluating the efficiency robustness of the ASR model is the need of the hour. We show that popular ASR models like the Speech2Text model and the Whisper model have dynamic computation based on different inputs, causing dynamic efficiency. In this work, we propose SlothSpeech, a denial-of-service attack against ASR models, which exploits the dynamic behaviour of the model. SlothSpeech uses the probability distribution of the output text tokens to generate perturbations to the audio such that the efficiency of the ASR model is decreased. We find that SlothSpeech-generated inputs can increase the latency up to 40X the latency induced by benign input.
Mirazul Haque, Rutvij Shah, Simin Chen, Berrak Şişman, Cong Liu, Wei Yang
2023-06-01T15:25:14Z
http://arxiv.org/abs/2306.00794v1
# SlothSpeech: Denial-of-service Attack Against Speech Recognition Models ###### Abstract Deep Learning (DL) models have become popular for executing different speech-related tasks, including automatic speech recognition (ASR). As ASR is being used in different real-time scenarios, it is important that the ASR model remains efficient against minor perturbations to the input. Hence, evaluating the efficiency robustness of the ASR model is the need of the hour. We show that popular ASR models like the Speech2Text model and the Whisper model have dynamic computation based on different inputs, causing dynamic efficiency. In this work, we propose SlothSpeech, a denial-of-service attack against ASR models, which exploits the dynamic behavior of the model. SlothSpeech uses the probability distribution of the output text tokens to generate perturbations to the audio such that the efficiency of the ASR model is decreased. We find that SlothSpeech-generated inputs can increase the latency up to 40X the latency induced by benign input. Mirazul Haque*\({}^{1}\), Rutvij Shah*\({}^{1}\), Simin Chen\({}^{1}\), Berrak Sisman\({}^{1}\), Cong Liu\({}^{2}\), Wei Yang\({}^{1}\)\({}^{1}\)University of Texas at Dallas, USA \({}^{2}\)University of California, Riverside, USA [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] ## 1 Introduction Deep Learning (DL) models have become popular for executing different tasks like object recognition, machine translation, and sentence classification with high accuracy. With this increasing popularity, DL models are also being used in speech-related tasks like Automatic Speech Recognition (ASR). ASR is a task in which a given audio signal is transcribed to text. ASR has been instrumental in tasks like caption generation for audio and virtual speech assistants. Because of their usage in real-time scenarios, ASR models need to have high efficiency robustness. Efficiency robustness evaluates whether a minor perturbation to the input would significantly decrease the efficiency of a model. However, unlike accuracy robustness [1, 2, 3, 4], efficiency robustness has not been evaluated on ASR models. To evaluate efficiency robustness, we first need to investigate whether any computation in the model is dynamic with respect to the input, as such computation causes efficiency to vary across inputs. First, we investigated the architectures of popular ASR models in Huggingface and found that there are mainly two types of decoders used in ASR models. We refer to the first type as the static decoder, where the decoder generates a static number of word or character tokens and then removes unessential tokens. Popular ASR models like CTC models [5] use the static decoder. The second type is referred to as the dynamic decoder, where the number of generated tokens is dynamic based on the input. Popular ASR models like Speech2Text [6, 7] and Whisper [8] use dynamic decoder-based mechanisms. As the dynamic decoder shows dynamic efficiency, the efficiency robustness of these models needs to be evaluated. In this work, we focus on evaluating the efficiency robustness of these dynamic decoder-based ASR models. The efficiency robustness of a model ensures that the efficiency of the model is not impacted significantly by adding perturbation to the input. If a model lacks efficiency robustness, this could lead to fatalities. For example, suppose an ASR system is used in an autonomous vehicle to recognize the driver's instructions.
If the model is not efficiency-robust, the model could take a significant amount of time to respond, denying the service and leading to an accident. Hence, the efficiency robustness of such a system needs to be evaluated. To evaluate the efficiency robustness of ASR models, we first need to establish the relation between the input and the dynamic efficiency. As mentioned earlier, for the dynamic decoder, the number of generated tokens is dynamic with respect to the input, and it depends on a specific end-of-sentence (\(<EOS>\)) token. If the occurrence of \(<EOS>\) is delayed, the number of generated tokens is increased. The occurrence of any token is dependent on the output probability distribution of the decoder. Hence, we need to modify the input in a way that modifies the output probability distribution of the decoder. Based on the aforementioned relation between input and efficiency, we propose SlothSpeech, a denial-of-service attack on ASR models. A denial-of-service attack increases the latency or the response time of the model significantly, ultimately denying the model service to the users. SlothSpeech is an iterative white-box attack that uses the output probability distribution of different tokens to increase latency. We evaluate SlothSpeech1 on three popular models: Speech2Text, Whisper, and Speech2Text2 [6, 7, 8], and three popular datasets: LibriSpeech, OpenSLR and VCTK [9, 10, 11]. We evaluate SlothSpeech on two criteria: effectiveness and quality. Effectiveness measures the effect of SlothSpeech on the latency of the model, while quality measures the distance between adversarial input and benign input. We find that SlothSpeech-generated inputs can increase the latency up to 4000% of the latency induced by benign input. Footnote 1: Both authors contributed equally to this research. Our work makes the following contributions: * **Problem Characterization.** Our work is the first work to characterize the latency surge vulnerability in ASR models. * **Approach.** We propose a novel loss function to synthesize an iterative white-box attack. * **Experimentation.** We evaluate SlothSpeech on two criteria with three popular datasets and three popular models. The rest of the paper is organized as follows: In Section 2, we introduce the background of dynamic-decoder-based ASR models and adversarial attacks. In Section 3, we formulate the optimization problem of the denial-of-service attack on ASR models and explain the SlothSpeech approach. In Section 4, we discuss the evaluation results. ## 2 Background ### Automatic Speech Recognition Systems Given the input speech \(\mathbf{x}\), ASR systems compute the output probability for a sequence of tokens \(\Pr(\mathbf{y}|\mathbf{x})\) through Bayes' theorem: \[\Pr(\mathbf{y}|\mathbf{x})=\prod_{u=1}^{U}\Pr(\mathbf{y}_{u}|\mathbf{x},\mathbf{y}_{1},\cdots,\mathbf{y}_{u-1}) \tag{1}\] The computation process is shown in Equation 1, where \(\mathbf{y}_{u}\) is the \(u^{th}\) output token. In this paper, our focus is on ASR systems that are based on the dynamic decoder architecture. Such systems comprise two key components, an encoder and a decoder. As illustrated in Figure 1, the encoder neural network is responsible for encoding the input speech into a hidden representation, while the decoder begins with a special token \(SOS\) and generates subsequent output tokens iteratively by leveraging the decoder neural networks. The decoding process continues until it reaches the special token \(EOS\), as sketched below.
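To make this dynamic computation concrete, the following is a minimal sketch of greedy autoregressive decoding. The `encoder` and `decoder` callables and the token-id arguments are illustrative assumptions, not the API of any specific library; the point is that per-input cost grows with the number of decoder invocations, i.e., with the output length.

```python
# Minimal sketch of dynamic-decoder inference; `encoder`/`decoder` are
# assumed callables, not a specific library's API.
import torch

def greedy_decode(encoder, decoder, speech, sos_id, eos_id, max_len=1001):
    hidden = encoder(speech)                 # encode the audio once
    tokens = [sos_id]
    for _ in range(max_len):                 # one decoder call per token
        logits = decoder(hidden, torch.tensor(tokens))
        next_id = int(logits[-1].argmax())   # greedy choice of next token
        tokens.append(next_id)
        if next_id == eos_id:                # dynamic stopping condition
            break
    return tokens
```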
A notable observation regarding the working mechanism of dynamic-decoder-based ASR systems is that the decoder is invoked once per generated token until the end-of-sequence (EOS) token appears, so inputs with longer output sequences trigger more decoder invocations. This implies that the ASR system tends to allocate more computational resources to inputs that have longer output sequences. Therefore, longer outputs lead to wastage of computational resources on the part of the victim ASR system. ### Adversarial Attacks against DNNs Recently, several adversarial attacks [1, 2, 3, 4, 12] have been developed for targeting DNN-based systems. These attacks can create human-imperceptible adversarial perturbations, which, when applied to benign inputs, generate adversarial examples that can easily evade even the most advanced DNNs. Based on the availability of the DNN parameters, adversarial attacks can be categorized as white-box attacks and black-box attacks. Apart from correctness-based adversarial attacks, researchers have recently proposed denial-of-service attacks [13, 14, 15, 16, 17, 18, 19] for targeting neural networks with dynamic decision routes. The objective of these attacks is to maximize the computational cost of the victim model, thereby decreasing its availability. ## 3 SlothSpeech In this section, we discuss the proposed approach, SlothSpeech. First, we formulate the problem, then we focus on how we create the objective function, and finally, we explain the iterative optimization approach. ### Problem Formulation Our objective is to produce audio that is imperceptible to humans and can decrease the efficiency of the victim's ASR model during inference. By reducing the efficiency of the ASR model, the adversary can deplete its computational resources, such as battery, and make it unavailable, ultimately achieving the goal of denial-of-service. Our objective consists of two main factors: (i) reducing the efficiency of the victim's ASR model, and (ii) ensuring imperceptibility to humans. We formulate our objective as an optimization problem, as shown in Equation 2, \[\Delta=\operatorname{argmin}_{\delta}\operatorname{Efficiency}_{f}(x+\delta)\qquad s.t.\ \|\delta\|\leq\epsilon \tag{2}\] where \(x\) is an audio input fed to the ASR system \(f(\cdot)\). Our aim is to generate an audio perturbation \(\delta\) that minimizes the efficiency of the ASR system while also satisfying the imperceptibility constraint. ### Differentiable Proxy of Latency As \(\operatorname{Efficiency}_{f}\) is not differentiable with respect to the input, we need to find a proxy of \(\operatorname{Efficiency}_{f}\) that is differentiable. As discussed in Section 2, \(\operatorname{Efficiency}_{f}\) depends on the length of the output text. Also, the length of the output text depends on the occurrence of the end-of-sentence (\(<EOS>\)) token. Our objective is to delay the occurrence of the \(<EOS>\) token. To achieve this, we first discuss how a token is selected for the output. Formally, the ASR model's output is a sequence of probability distributions, \(f(x)=[p_{1}(x),p_{2}(x),\cdots,p_{n}(x)]\), and the output token sequence is \([t_{1}(x),t_{2}(x),\cdots,t_{n}(x)]\). Here, \(t_{i}(x)=\operatorname{argmax}(p_{i}(x))\). Also, the likelihoods of the output tokens and the EOS tokens are represented as \([p_{1}(x)^{t_{1}},p_{2}(x)^{t_{2}},\cdots,p_{n}(x)^{t_{n}}]\) and \([p_{1}(x)^{EOS},p_{2}(x)^{EOS},\cdots,p_{n}(x)^{EOS}]\), respectively.
To delay the occurrence of the EOS tokens, one approach would be to minimize the likelihood of the EOS tokens directly. However, this approach would be resource-consuming, because cross-entropy would have to be computed over the large vocabulary. Instead, we convert the probability distribution \(p_{i}\) of the multi-class task into a binary classification task, _i.e._, whether or not the token is the EOS token. The new probability distribution is represented as \(q_{i}=[p_{i}(x)^{EOS},\ \sum_{j}p_{i}(x)^{j}-p_{i}(x)^{EOS}]\), i.e., the probability of the EOS token versus that of all other tokens \(j\neq EOS\). Our first objective is to decrease the likelihood of all EOS tokens in the output; hence, \(\operatorname{Efficiency}_{f}\) can be replaced by \(\mathcal{L}_{EOS}=\frac{1}{n}\sum_{i=1}^{n}p_{i}(x+\delta)^{EOS}\). However, considering only EOS tokens in the loss function can reduce the effectiveness of the attack. For the last token of the output sequence, the EOS token's likelihood is the highest, and we need to decrease it. If we additionally increase the likelihood of another specific token (the token with the second highest likelihood), the effectiveness of the attack increases, because that token's likelihood rises while the EOS token's likelihood falls. Hence, the proxy of \(\operatorname{Efficiency}_{f}\) will be \[\mathcal{L}_{EOS}=\Big{(}\frac{1}{n}\sum_{i=1}^{n}p_{i}(x+\delta)^{EOS}+q_{n}(x+\delta)\Big{)}\] where \(q_{n}\) represents the probability of the token with the second highest likelihood at the last (\(n^{th}\)) step. If \(P\) is the distance norm and \(c\) is a weight value defined by the attacker, then the final optimization loss function can be represented by \[\mathcal{L}=\|\delta\|_{P}+c\cdot\mathcal{L}_{EOS} \tag{3}\] Figure 1: Working mechanism of dynamic-decoder-based ASR ### Approach Algorithm 1 and Figure 3 show the details of SlothSpeech. The SlothSpeech approach can be divided into three parts. (i) _Initialize_. First, we initialize different variables that are needed for synthesizing the perturbation. (ii) _Calculating the loss function_. Next, we calculate the loss function based on Equation 3. (iii) _Update the adversarial input_. Based on the optimization of the loss function, we update the adversarial input. Below, we explain each step. **Initialize**. First, we initialize \(\delta\) (perturbation), \(x^{*}\) (final adversarial input), \(max_{N}\) (maximum number of tokens) and \(iter\) (iteration index) in Lines 3-4. These values are updated iteratively based on the optimization procedure. **Calculating the loss function.** In this step, we calculate the loss function that will be optimized. Based on Equation 3, the loss function has two components. The first loss component, \(\mathcal{L}_{EOS}\), is calculated based on the likelihood of the different tokens (Lines 7-8), whereas the second loss component \(\mathcal{L}_{d}\) (Line 9) signifies the distance between the adversarial and benign examples (_e.g._, \(L_{2}\) and \(L_{\infty}\)). Both components are then combined using the weight \(c\) (Line 10). **Update the adversarial input.** Finally, we update the adversarial input \(\hat{x}\) (Line 11) and check whether the output length generated by \(\hat{x}\) is better than the saved \(max_{N}\) (Lines 12-13). If yes, the final adversarial input \(x^{*}\) and \(max_{N}\) are updated. ``` 1: Input: benign input \(x\), victim ASR \(f(\cdot)\), maximum iteration number \(T\), weight value \(c\) 2: Output: latency-based adversarial example \(x^{*}\) 3: Initialize \(\delta\), \(x^{*}\), \(max_{N}\) 4: \(iter\leftarrow 0\) 5: while \(iter<T\) do 6:     \(\hat{x}=x+\delta\) 7:     \(H=f(\hat{x})\) 8:     \(\mathcal{L}_{EOS}\leftarrow GetEOSLoss(H)\) 9:     \(\mathcal{L}_{d}\leftarrow Distance(\hat{x},x)\) 10:    \(\mathcal{L}_{adv}\leftarrow\mathcal{L}_{EOS}+c\cdot\mathcal{L}_{d}\) 11:    \(\hat{x}\leftarrow\hat{x}+\frac{\partial\mathcal{L}_{adv}}{\partial\hat{x}}\) 12:    \(N\leftarrow\text{GetLength}(\hat{x})\) 13:    \(max_{N}\), \(x^{*}\leftarrow\text{Update}(max_{N},N,\hat{x})\) 14:    \(iter\leftarrow iter+1\) 15: end while 16: Return \(x^{*}\) ``` **Algorithm 1** SlothSpeech
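To make the objective concrete, the following is a minimal, self-contained PyTorch sketch of the loss in Equation (3); it is not the authors' implementation. Since the sign conventions in the paper are ambiguous, the sketch adopts one consistent reading: the mean EOS probability and the perturbation norm are minimized while the runner-up token probability at the final step is maximized. The tensor `probs` stands in for the decoder's per-step output distributions, and the toy usage fakes the dependence on `delta` that a real ASR model would provide.

```python
# A minimal sketch of the SlothSpeech objective (one consistent reading of
# Equation (3)); `probs` stands in for decoder output distributions.
import torch

def sloth_loss(probs, eos_id, delta, c=1.0):
    l_eos = probs[:, eos_id].mean()       # mean EOS likelihood over steps
    q_n = probs[-1].topk(2).values[1]     # runner-up token at the last step
    l_d = delta.norm(p=2)                 # imperceptibility term
    return l_eos - q_n + c * l_d          # smaller loss => longer outputs

# Toy usage with a random "decoder output"; a real attack would recompute
# probs = asr(x + delta) inside the loop and backpropagate through the model.
steps, vocab, eos_id = 8, 32, 0
delta = torch.zeros(100, requires_grad=True)
logits = torch.randn(steps, vocab) + 0.01 * delta.sum()  # fake dependence
probs = logits.softmax(dim=-1)
loss = sloth_loss(probs, eos_id, delta)
loss.backward()
delta.data -= 0.01 * delta.grad           # one descent step on delta
```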
## 4 Evaluation We evaluate SlothSpeech based on two criteria: effectiveness and quality. ### Experimental Setup **Datasets and Models.** For evaluation, the LibriSpeech dataset [9], OpenSLR [10], and the VCTK dataset [11] have been used for synthesizing the adversarial examples. We use three popular ASR models: Speech2Text [6], Whisper [8] and Speech2Text2 [5]. All the pre-trained weights are gathered from Huggingface. 

Figure 3: Design Overview of SlothSpeech 

Figure 2: Comparison of Distance between SlothSpeech and Gaussian Noise 

**Baseline and Metric.** As this is the first denial-of-service attack on ASR models, we use Gaussian noise as the baseline. We examine two metrics to reflect the ASR models' computational costs in order to measure the effectiveness of SlothSpeech in increasing the victim ASR models' computational costs. The first is the number of tokens generated by the decoder (a hardware-independent metric), and the second is the latency of the ASR models (a hardware-dependent metric) in handling an input. We first measure the absolute computational costs (Abs.) of the benign inputs and then generate adversarial examples. Afterwards, we compute the computational cost increments (Inc.). For quality evaluation, we use the distance between adversarial and benign input as the metric, while for transferability evaluation, we measure the percentage increase in latency as the metric. **Implementation Details.** For this work, we set the \(c\) value to 1 and the \(T\) value to 100 to generate adversarial inputs. We set the \(max\_length\) of the tokens to 1001, and as distance norms, we use the \(L_{2}\) and \(L_{\infty}\) norms. Only for the Speech2Text2 model tested with the OpenSLR and VCTK datasets, we use 500 as \(max\_length\), because extending that limit caused significant load on the GPU. ### Effectiveness As mentioned earlier, we measure the effectiveness of SlothSpeech by measuring computational latency and the number of tokens in the output. Table 1 shows the results on the effectiveness of SlothSpeech and the Gaussian noise. We show both the mean absolute values of each metric and the mean percentage increase in each metric due to perturbation (the seed being the original input). We also show the maximum absolute values for the number of tokens and latency achieved by seed inputs and by inputs generated with the different techniques. It can be noticed that all three models are vulnerable to SlothSpeech, and SlothSpeech-generated inputs perform significantly better than the baseline with respect to increasing computation in ASR models. For the Speech2Text model, all the adversarial examples induce the maximum output token length. The results reflect that the Speech2Text model is the least efficiency-robust against SlothSpeech. The Speech2Text2 model also shows low robustness against SlothSpeech; however, the percentage increase in latency for Speech2Text is higher than for Speech2Text2. The Whisper model has shown higher robustness against SlothSpeech than the other two models.
Inputs generated using both distance norms have similar effectiveness; however, for the S2T-Libri pair, inputs generated through the \(L_{\infty}\) norm have significantly higher effectiveness than those generated through the \(L_{2}\) norm. **Summary.** SlothSpeech-generated input outperforms the baseline by significantly increasing the latency of the ASR models (up to 4000%). ### Quality We evaluate the quality of adversarial examples with respect to the magnitude of the perturbation added to the audio signal by SlothSpeech and the baseline. Figure 2 shows the results. We use bar graphs to show the mean of the different perturbations added to the input by SlothSpeech and the baseline. It can be observed that the mean value of the \(L_{2}\) perturbation for the baseline and SlothSpeech is similar for all case scenarios; however, for the \(L_{\infty}\) norm, the SlothSpeech perturbation is slightly higher than the baseline. Except for the S2T2 model, the mean perturbation of SlothSpeech is always similar to or lower than the baseline for both norms. Hence, it can be noted that with similar mean perturbation, SlothSpeech has significantly higher effectiveness than the baseline. **Summary.** The mean perturbation magnitude added to SlothSpeech-generated input is similar to the magnitude of the Gaussian noise. ## 5 Conclusion In this work, we propose SlothSpeech, a white-box denial-of-service attack that can decrease the efficiency of ASR models significantly. SlothSpeech uses the likelihood of output tokens to generate adversarial inputs. We evaluate SlothSpeech on three popular datasets and three popular models. We find that SlothSpeech-generated inputs can increase the model latency up to 40 times compared to benign input. However, in this work, we do not focus on improving the efficiency robustness of the ASR models.
2302.12667
Deep active learning for nonlinear system identification
The exploding research interest in neural networks for modeling nonlinear dynamical systems is largely explained by the networks' capacity to model complex input-output relations directly from data. However, they typically need vast training data before they can be put to any good use. The data generation process for dynamical systems can be an expensive endeavor both in terms of time and resources. Active learning addresses this shortcoming by acquiring the most informative data, thereby reducing the need to collect enormous datasets. What makes the current work unique is the integration of the deep active learning framework into nonlinear system identification. We formulate a general static deep active learning acquisition problem for nonlinear system identification. This is enabled by exploring system dynamics locally in different regions of the input space to obtain a simulated dataset covering the broader input space. This simulated dataset can be used in a static deep active learning acquisition scheme referred to as global exploration. The global exploration acquires a batch of initial states corresponding to the most informative state-action trajectories according to a batch acquisition function. The local exploration solves an optimal control problem, finding the control trajectory that maximizes some measure of information. After a batch of informative initial states is acquired, a new round of local explorations from the initial states in the batch is conducted to obtain a set of corresponding control trajectories that are applied to the system dynamics to get data from the system. Information measures used in the acquisition scheme are derived from the predictive variance of an ensemble of neural networks. The novel method outperforms standard data acquisition methods used for system identification of nonlinear dynamical systems in the case study performed on simulated data.
Erlend Torje Berg Lundby, Adil Rasheed, Ivar Johan Halvorsen, Dirk Reinhardt, Sebastien Gros, Jan Tommy Gravdahl
2023-02-24T14:46:36Z
http://arxiv.org/abs/2302.12667v1
# Deep active learning for nonlinear system identification ###### Abstract The exploding research interest in neural networks for modeling nonlinear dynamical systems is largely explained by the networks' capacity to model complex input-output relations directly from data. However, they typically need vast training data before they can be put to any good use. The data generation process for dynamical systems can be an expensive endeavor both in terms of time and resources. Active learning addresses this shortcoming by acquiring the most informative data, thereby reducing the need to collect enormous datasets. What makes the current work unique is the integration of the deep active learning framework into nonlinear system identification. We formulate a general static deep active learning acquisition problem for nonlinear system identification. This is enabled by exploring system dynamics locally in different regions of the input space to obtain a simulated dataset covering the broader input space. This simulated dataset can be used in a static deep active learning acquisition scheme referred to as global exploration. The global exploration acquires a batch of initial states corresponding to the most informative state-action trajectories according to a batch acquisition function. The local exploration solves an optimal control problem, finding the control trajectory that maximizes some measure of information. After a batch of informative initial states is acquired, a new round of local explorations from the initial states in the batch is conducted to obtain a set of corresponding control trajectories that are applied to the system dynamics to get data from the system. Information measures used in the acquisition scheme are derived from the predictive variance of an ensemble of neural networks. The novel method outperforms standard data acquisition methods used for system identification of nonlinear dynamical systems in the case study performed on simulated data. keywords: Deep active learning, Nonlinear system identification, Neural networks ensembles, static acquisition problem, dynamic acquisition problem ## 1 Introduction Modeling dynamical systems is a cornerstone of most engineering applications where the system states of the processes change over time. Prediction models can be utilized in different ways. Models can be used in a control system setting to design safe and optimal control systems. Furthermore, if the prediction models exhibit a high degree of accuracy, they can be utilized to forecast system states over extended periods of time. Accurate simulations of the physical behavior across longer horizons can enhance our understanding of the underlying physical process and support stakeholders in informed decision making. Physics-Based Models (PBMs) are widely utilized in a range of engineering and scientific applications. PBMs are mathematical models derived from fundamental physical principles, that is, fundamental laws that govern particular aspects of the natural world and describe the behavior of observable phenomena. Our understanding of these physical phenomena is primarily attained through the examination and analysis of observed occurrences of the phenomena in question. On one side, this means that PBMs possess a high degree of interpretability and demonstrate generalizability when the modeling assumptions are sound.
On the other side, the deductive nature of this modeling approach is highly biased, and potentially ignorant of unobserved or unknown physical phenomena. Increased access to abundant data, cheap computational resources, and many achievements and improvements in the Machine Learning (ML) community have created an enormous interest in a variety of Data-Driven Models (DDMs) in many engineering fields, including material science [1], biomechanics [2], production of biofuels [3], reservoir modeling in the oil and gas industry [4], aluminum extraction [5], bioengineering [6], drug discovery [7] and more [8]. DDMs can model an underlying process directly from input-output data. In the scientific community of systems and control, the art of building mathematical models from observed input-output data is called system identification [9]. The research on identifying linear dynamics started in the late 1950s [10]. Since then, a pool of theories covering linear system identification has been developed. Linear models typically require less data and are structured and well-behaved. Thus, a linear model is preferred if it can approximate a system with satisfactory accuracy. However, more complex model structures are typically required to obtain highly accurate process models of complex nonlinear systems. The inescapably nonlinear nature of a wide range of real-world problems and processes is a core motivation for the broad and ever-evolving field of nonlinear system identification. Determining the model structure for nonlinear problems is a fundamental problem in nonlinear system identification. Neural Network (NN) structures stand out as highly interesting due to their remarkable ability to model complex nonlinear phenomena. This has motivated researchers to investigate NNs as process models in various dynamical systems. In [11], NNs were used to identify the dynamics of a pressurized water nuclear reactor, the authors of [12] used NNs to identify the dynamics of a purification process for bioethanol, in [13] a Deep Learning (DL) model was used to predict chemical reactions, and in [14], NNs were used to identify the dynamics of a quadcopter. Unfortunately, NNs typically require large amounts of diverse data, which are expensive to generate and usually unavailable from dynamical systems. In [15], this issue is addressed by inducing sparsity in the NN models, showing that this reduces the data requirements for obtaining NNs with desirable accuracy. In addition to sparse NNs, [16] show that skip-connections can also contribute to increasing model accuracy when the data size is limited. However, large amounts of training data are still needed to achieve satisfactory model accuracy. This may preclude the use of NNs in applications where data acquisition is expensive. Active Learning (AL) aims to maximize model accuracy with a minimal amount of data by acquiring the most informative training data [17]. AL has been utilized in many fields, including image recognition [18], text classification [19], and object detection [20], to mention a few. In these scenarios, large unlabeled datasets are available, and the AL algorithm aims to choose the most informative samples among the unlabeled data. The goal is then to reduce the costly labeling process, which is performed by human domain experts. In the context of dynamical systems, labeling output data may not incur significant expenses.
However, data obtained from a process under closed-loop feedback control often lacks the information necessary to identify NN models with acceptable performance, including accuracy and generalizability to operational regions of the input space, known as the state-action space. Obtaining informative datasets from a dynamical process is costly due to, for example, interruption of production or operation, and the expense of measuring certain states. Moreover, exciting dynamical systems into regions with high model uncertainty can induce unforeseen and potentially severe incidents to the process and its surroundings. While the safety-critical nature of using NNs in controlled processes is a major research challenge in itself, approached by, for example, techniques within reachability analysis [21, 22], we limit the scope of this work to the informativeness of the sampled data, addressing challenges related to the costs of sampling large datasets in dynamical systems. Introducing AL methods to experimental design for system identification introduces additional challenges to the AL problem. That is, most AL methods address static acquisition problems where, in principle, any location in the input space is directly accessible, or a dataset is sampled in advance. For dynamical systems, on the other hand, reaching a desired location in the state-action space requires system excitation through control inputs. The topic of optimal excitation has been addressed in the research field known as optimal experiment design [23]. In light of this, optimal experiment design can be seen as a subfield of AL, or the two can be seen as related research topics. In any case, AL, which originates from the computer science community, provides a wide range of information-theoretic approaches as well as a well-defined learning framework, providing great inspiration to researchers working with system identification. The authors of [24] propose AL strategies for the identification of a Gaussian Process (GP) model inspired by information-theoretic measures. The most promising method they propose optimizes a sequence of control inputs that maximizes the predictive differential entropy along the state trajectory; this method outperforms state-of-the-art experimental design methods. The work on identifying GP models was extended in [25] to also include global explorations. The global search for initial states is done by exploring the informativeness of short trajectories from candidate initial states. When an informative initial state is acquired, the local exploration maximizes the predictive entropy along the state trajectory as in [24]. AL has also been applied to acquire data that efficiently identifies linear models by solving an Optimal Control Problem (OCP) that maximizes the minimal eigenvalue of the covariates of states [26]. An active learning approach to identify a restricted class of nonlinear dynamical models whose state transitions depend linearly on a known feature embedding of state-action pairs was investigated in [27]. However, research on active learning for system identification of NN models is, to the best of the authors' knowledge, highly limited. To that end, we extend the work on AL used to acquire the most informative data for system identification to NNs. That is, * In equation (18), we formalize a general Batch Mode Deep Active Learning (BMDAL) acquisition scheme for dynamical system identification, referred to in this work as global exploration.
The scheme is of the static deep active learning batch acquisition form presented in equation (14) from [17]. The static acquisition scheme for dynamical systems is enabled through local explorations, obtaining a set of simulated informative candidate state-action trajectories distributed around the state-action space. The novel DeepAL scheme builds upon the AL scheme presented in [25], which iteratively searches for the single most informative state-action trajectory for identifying a GP model through local and global explorations. * The novel formulation of the static BMDAL acquisition for dynamical systems is utilized in a novel framework presented in Fig. 3. Ensembles of NNs are used to produce uncertainty estimates that assess the informativeness of single state-action trajectories. * The general nature of the proposed BMDAL formulation allows for a wide range of query strategies to be applied. This is demonstrated by using two different query strategies in the case study. The AL algorithm is showcased on the simulated dynamics of a 3 Degree Of Freedom (DOF) surface vessel with three states and three control inputs, yielding an input space of six dimensions. The simulator represents the dynamics of the MilliAmpere ferry [28], which is an experimental platform owned by NTNU. The simulation model is presented in Section 2.1. ## 2 Theory ### Physics based simulator We use the standard 3-Degrees of Freedom (3-DOF) model of a marine craft, which is a simplified model of a real vessel. The state of the vessel is described by the pose vector \(\mathbf{\eta}=[x\,y\,\psi]^{\top}\in\mathbb{R}^{3}\) and the velocity vector \(\mathbf{v}=[v_{1}\,v_{2}\,r]^{\top}\in\mathbb{R}^{3}\). The pose vector describes the position and orientation of the vessel in the North-East-Down (NED) frame, with \(x\) and \(y\) being the position in the North and East directions, respectively, and \(\psi\) being the heading angle. The velocity vector describes the velocity of the vessel in the body frame, with \(v_{1}\) and \(v_{2}\) being the velocities in the surge and sway directions, respectively, and \(r\) being the yaw rate. The model is formulated as a nonlinear system of ordinary differential equations (ODEs) as follows (see [29] for details): \[\dot{\mathbf{\eta}} =\mathbf{R}(\psi)\mathbf{\nu} \tag{1}\] \[\mathbf{M}\dot{\mathbf{\nu}}+\mathbf{C}(\mathbf{\nu})\mathbf{\nu}+\mathbf{D}(\mathbf{\nu})\mathbf{\nu} =\mathbf{\tau},\] where \(\mathbf{R}(\psi)\in\mathrm{SO}(3)\) is the rotation matrix from the body frame to the NED frame.
The mass matrix \(\mathbf{M}\in\mathbb{R}^{3\times 3}\), the Coriolis matrix \(\mathbf{C}(\mathbf{\nu})\in\mathbb{R}^{3\times 3}\), and the damping matrix \(\mathbf{D}(\mathbf{\nu})\in\mathbb{R}^{3\times 3}\) are taken from [28] and given by: \[\mathbf{M}=\begin{bmatrix}m_{11}&0&0\\ 0&m_{22}&m_{23}\\ 0&m_{32}&m_{33}\end{bmatrix},\quad\mathbf{C}(\mathbf{\nu})=\begin{bmatrix}0&0&c_{13}(\mathbf{\nu})\\ 0&0&c_{23}(\mathbf{\nu})\\ c_{31}(\mathbf{\nu})&c_{32}(\mathbf{\nu})&0\end{bmatrix},\quad\mathbf{D}(\mathbf{\nu})=\begin{bmatrix}d_{11}(\mathbf{\nu})&0&0\\ 0&d_{22}(\mathbf{\nu})&d_{23}(\mathbf{\nu})\\ 0&d_{32}(\mathbf{\nu})&d_{33}(\mathbf{\nu})\end{bmatrix} \tag{2}\] where the elements \(c_{ij}(\mathbf{\nu})\) and \(d_{ij}(\mathbf{\nu})\) are given by \[\begin{array}{ll}c_{13}(\mathbf{\nu})=-m_{22}v_{2}-m_{23}r&d_{11}(\mathbf{\nu})=-X_{v_{1}}-X_{|v_{1}|v_{1}}|v_{1}|-X_{v_{1}v_{1}v_{1}}v_{1}^{2}\\ c_{23}(\mathbf{\nu})=m_{11}v_{1}&d_{22}(\mathbf{\nu})=-Y_{v_{2}}-Y_{|v_{2}|v_{2}}|v_{2}|-Y_{|r|v_{2}}|r|-Y_{v_{2}v_{2}v_{2}}v_{2}^{2}\\ c_{31}(\mathbf{\nu})=-c_{13}(\mathbf{\nu})&d_{23}(\mathbf{\nu})=-Y_{r}-Y_{|v_{2}|r}|v_{2}|-Y_{|r|r}|r|\\ c_{32}(\mathbf{\nu})=-c_{23}(\mathbf{\nu})&d_{32}(\mathbf{\nu})=-N_{v_{2}}-N_{|v_{2}|v_{2}}|v_{2}|-N_{|r|v_{2}}|r|\\ &d_{33}(\mathbf{\nu})=-N_{r}-N_{|v_{2}|r}|v_{2}|-N_{|r|r}|r|-N_{rrr}r^{2}\end{array}. \tag{3}\] The constant coefficients in Equation (3) are summarized in Table 1. The kinematics of the vessel is given by \[\dot{\mathbf{\eta}}=\mathbf{R}(\psi)\mathbf{v}=\begin{bmatrix}\cos\psi&-\sin\psi&0\\ \sin\psi&\cos\psi&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}v_{1}\\ v_{2}\\ r\end{bmatrix}, \tag{4}\] and its dynamics are governed by \[\dot{\mathbf{\nu}}=\mathbf{M}^{-1}\left(\mathbf{\tau}-\mathbf{C}(\mathbf{\nu})\mathbf{\nu}-\mathbf{D}(\mathbf{\nu})\mathbf{\nu}\right). \tag{5}\] We are interested in learning the dynamics of the vessel in response to given forces and moments, which we consider as the control input. In the remainder of the paper we use the following notation: \[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x},\mathbf{u})\quad\text{with}\quad\mathbf{x}=\mathbf{\nu}\quad\text{and}\quad\mathbf{u}=\mathbf{\tau}, \tag{6}\] i.e. \(\mathbf{x}\in\mathbb{R}^{3}\) is the state vector and \(\mathbf{u}\in\mathbb{R}^{3}\) is the control input, with dynamics described by Equation (5).
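For concreteness, the following is a minimal NumPy sketch of the dynamics in Equations (2)-(6). The mass and Coriolis terms follow Equations (2)-(3) with values from Table 1; the damping matrix \(\mathbf{D}(\mathbf{\nu})\) is passed in as a callable, since the exact damping coefficient convention depends on the reconstruction of Equation (3).

```python
# Minimal sketch of the 3-DOF vessel dynamics; D(nu) is a user-supplied
# callable returning the 3x3 damping matrix.
import numpy as np

m11, m22, m23, m32, m33 = 2389.657, 2533.911, 62.386, 28.141, 5068.910
M = np.array([[m11, 0.0, 0.0],
              [0.0, m22, m23],
              [0.0, m32, m33]])

def C(nu):
    """Coriolis matrix from Equation (3)."""
    v1, v2, r = nu
    c13 = -m22 * v2 - m23 * r
    c23 = m11 * v1
    return np.array([[0.0, 0.0, c13],
                     [0.0, 0.0, c23],
                     [-c13, -c23, 0.0]])

def f(x, u, D):
    """Equation (6) with x = nu, u = tau: nu_dot = M^-1 (tau - C nu - D nu)."""
    return np.linalg.solve(M, u - C(x) @ x - D(x) @ x)
```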
\begin{table} \begin{tabular}{l l l|l l l} \hline \hline **Constant** & **Value** & **Unit** & **Constant** & **Value** & **Unit** \\ \hline \(m_{11}\) & 2389.657 & kg & \(m_{22}\) & 2533.911 & kg \\ \(m_{23}\) & 62.386 & kg & \(m_{32}\) & 28.141 & kg \\ \(m_{33}\) & 5068.910 & kg \(\cdot\) m\({}^{2}\) & \(X_{v_{1}}\) & -27.632 & kg \(\cdot\) s\({}^{-1}\) \\ \(X_{|v_{1}|v_{1}}\) & -110.064 & kg \(\cdot\) s\({}^{-1}\) & \(X_{v_{1}v_{1}v_{1}}\) & -13.965 & kg \(\cdot\) s\({}^{-1}\) \\ \(Y_{v_{2}}\) & -52.947 & kg \(\cdot\) s\({}^{-1}\) & \(Y_{|v_{2}|v_{2}}\) & -116.486 & kg \(\cdot\) s\({}^{-1}\) \\ \(Y_{v_{2}v_{2}v_{2}}\) & -24.313 & kg \(\cdot\) s\({}^{-1}\) & \(Y_{|r|v_{2}}\) & -1540.383 & kg \(\cdot\) s\({}^{-1}\) \\ \(Y_{r}\) & 24.732 & kg \(\cdot\) s\({}^{-1}\) & \(Y_{|v_{2}|r}\) & 572.141 & kg \(\cdot\) s\({}^{-1}\) \\ \(Y_{|r|r}\) & -115.457 & kg \(\cdot\) s\({}^{-1}\) & \(N_{v_{2}}\) & 3.5241 & kg \(\cdot\) s\({}^{-1}\) \\ \(N_{|v_{2}|v_{2}}\) & -0.832 & kg \(\cdot\) s\({}^{-1}\) & \(N_{|r|v_{2}}\) & 336.827 & kg \(\cdot\) s\({}^{-1}\) \\ \(N_{r}\) & -122.860 & kg \(\cdot\) s\({}^{-1}\) & \(N_{|r|r}\) & -874.428 & kg \(\cdot\) s\({}^{-1}\) \\ \(N_{rrr}\) & 0.000 & kg \(\cdot\) s\({}^{-1}\) & \(N_{|v_{2}|r}\) & -121.957 & kg \(\cdot\) s\({}^{-1}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Values and units of the parameters of the vessel. Data-Driven Models (DDMs) can approximate an underlying process with data from the process. The underlying process can be represented by a nonlinear mapping from the input to the output space. Deep feed-forward neural networks, known as Multi-Layer Perceptrons (MLPs), have high flexibility and thereby the ability to model nonlinear mappings. In this work, an MLP is used to model Equation (6). The trainable parameters of the NN are denoted by \(\mathbf{\theta}\in\mathbb{R}^{p}\), where \(p\) is the number of parameters, and the network is denoted as \[\hat{\mathbf{x}}=\hat{\mathbf{f}}(\mathbf{x},\mathbf{u};\ \mathbf{\theta}), \tag{7}\] where \(\hat{\mathbf{x}}\) is the vector of time derivatives of the states. Optimal model parameters \(\mathbf{\theta}^{*}\) are found by optimizing a cost function \(C(\cdot)\): \[\mathbf{\theta}^{*}=\operatorname*{argmin}_{\mathbf{\theta}}\left\{\frac{1}{N}\sum_{i=1}^{N}C(\mathbf{x}_{i},\mathbf{u}_{i},\hat{\mathbf{x}}_{i},\mathbf{\theta})\right\}. \tag{8}\] \(N\) is the size of the training set \(\mathcal{D}=\{([\mathbf{x}_{i},\mathbf{u}_{i}],\hat{\mathbf{x}}_{i})\}_{i=1}^{N}\), and the cost function \(C(\cdot)\) is given by: \[C(\mathbf{x}_{i},\mathbf{u}_{i},\hat{\mathbf{x}}_{i},\mathbf{\theta})=J(\hat{\mathbf{x}}_{i},\hat{\mathbf{f}}(\mathbf{x}_{i},\ \mathbf{u}_{i};\mathbf{\theta}))+\lambda R(\mathbf{\theta}). \tag{9}\] \(J(\cdot)\) is a proper loss function. For regression tasks, the loss function is typically chosen to be the Mean Squared Error (MSE): \[J(\hat{\mathbf{x}}_{i},\hat{\mathbf{f}}(\mathbf{x}_{i},\ \mathbf{u}_{i};\mathbf{\theta}))=(\hat{\mathbf{x}}_{i}-\hat{\mathbf{f}}(\mathbf{x}_{i},\ \mathbf{u}_{i};\mathbf{\theta}))^{2}. \tag{10}\] 

Figure 1: Neural Network architecture used 

\(R(\mathbf{\theta})\) is a regularization term on the parameters. In this work, \(\ell_{1}\) regularization is used due to its sparsity-promoting nature. \(\lambda\) is a tunable scalar multiplying the regularization term \(R(\mathbf{\theta})\). The error of the cost function is propagated backward from the output layer, through the hidden layers, and to the input layer in the so-called back-propagation scheme.
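The following is a minimal PyTorch sketch of the model and training objective in Equations (7)-(10); the layer sizes and \(\lambda\) value are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a derivative-predicting MLP trained with MSE + l1
# regularization (Equations (7)-(10)); hyperparameters are assumptions.
import torch
import torch.nn as nn

class DerivativeMLP(nn.Module):
    def __init__(self, n_states=3, n_inputs=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states + n_inputs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_states))

    def forward(self, x, u):
        # The network maps (x, u) to an estimate of the state derivative.
        return self.net(torch.cat([x, u], dim=-1))

def loss_fn(model, x, u, xdot, lam=1e-4):
    mse = ((xdot - model(x, u)) ** 2).mean()             # Equation (10)
    l1 = sum(p.abs().sum() for p in model.parameters())  # R(theta)
    return mse + lam * l1                                # Equation (9)
```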
### Ensembles of neural networks Ensemble learning includes methods that combine multiple models in making predictions. The main premise of ensemble learning is that errors made by individual models are likely to be compensated by other models, such that the overall ensemble prediction on average improves the prediction accuracy over individual models [30]. Deep ensembles, consisting of multiple Deep Neural Networks (DNNs), have gained significant attention in recent years due to their improved accuracy, uncertainty estimation, and robustness to out-of-distribution data. There are two well-known methods of training ensembles of NNs: by bootstrapping, where the ensemble members are trained on different bootstrap samples of the dataset, and by multiple random seeds, where the model parameters of the members are initialized with different random values and then trained on the entire dataset. While the bootstrapping method has been found to hurt the performance of the NNs, using random initializations turns out to be a promising approach [31, 32]. In [33], the success of random initialization in deep ensembles is explained by the fact that this method explores diverse modes of the function space. That is, Bayesian neural networks, which do not perform as well as deep ensembles, only explore the proximity of a single mode of the function space. Ensembles can provide both point estimates and uncertainty estimates. The point estimate can be calculated as an average of the predictions. Consider an ensemble of \(M\) NNs with different parameter initializations. When forecasting several timesteps ahead without feedback from measurements, we let each ensemble member estimate an individual state forecast trajectory. The state forecast \({}^{j}\hat{\mathbf{x}}_{t+1}\) at time \(t+1\) provided by an ensemble member \(\hat{\mathbf{f}}_{j}\) is given by forward Euler integration: \[{}^{j}\hat{\mathbf{x}}_{t+1}={}^{j}\hat{\mathbf{x}}_{t}+\hat{\mathbf{f}}_{j}({}^{j}\hat{\mathbf{x}}_{t},\mathbf{u}_{t};\mathbf{\theta})\Delta T, \tag{11}\] where \(\Delta T\) is the time step. Then, the average of the \(M\) predictions calculated by the individual NN ensemble members at timestep \(t+1\) is given by: \[\mathbf{\mu}_{t+1}=\frac{1}{M}\sum_{j=1}^{M}\ {}^{j}\hat{\mathbf{x}}_{t+1}, \tag{12}\] _Uncertainty-based_ strategies are utilized in the proposed AL method. Therefore, the main motivation for using deep ensembles in this work is to obtain an uncertainty estimate. NN predictions have two sources of uncertainty, namely model uncertainty (also known as epistemic uncertainty) and data uncertainty (also known as aleatoric uncertainty). Usually, these types of uncertainties are modeled separately. In this study, the data is sampled from a fully observed process without process disturbances or measurement noise. Thus, only model uncertainty is considered here. The model uncertainty is caused by shortcomings in the model. This includes errors in the _training procedure_ such as bad training hyperparameters (learning rate, batch size, regularization, etc.), an _insufficient model structure_, or a _lack of information in the data_ [34]. In general, there are four different types of methods for estimating the uncertainty of a NN, based on whether the NNs are deterministic or stochastic, and whether a single NN or multiple NNs are used to estimate the uncertainty. These are _single deterministic methods_, _Bayesian methods_, _ensemble methods_ and _test-time augmentation methods_ [34]. Ensemble methods have proven to be attractive for quantifying the uncertainty of NNs. Ensemble methods have in several works been compared to Bayesian methods. In [35], it is argued that ensemble-based methods perform better than Bayesian Monte-Carlo Dropout approximations in DeepAL due to more calibrated predictive uncertainties. Both [36] and [37] came to the same conclusions, particularly under dataset shift. A simple way to quantify the uncertainty of these predictions is to calculate the empirical variance of the NN predictions for each output of the network: \[\mathbf{\sigma}_{t+1}^{2}=\frac{1}{M}\sum_{j=1}^{M}({}^{j}\hat{\mathbf{x}}_{t+1}-\mathbf{\mu}_{t+1})^{2}. \tag{13}\] Here, \({}^{j}\hat{\mathbf{x}}_{t+1}\) is the prediction of \(\mathbf{x}_{t+1}\) made by ensemble member \(j\), and \(\mathbf{\mu}_{t+1}\) is the mean ensemble prediction at time step \(t+1\).
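The following is a minimal PyTorch sketch of the ensemble statistics in Equations (11)-(13); the `models` list (ensemble members with the interface of the MLP sketch above) is an assumption for illustration.

```python
# Minimal sketch of member-wise forward-Euler rollouts and the empirical
# ensemble mean/variance (Equations (11)-(13)).
import torch

def ensemble_rollout(models, x0, u_seq, dt=0.1):
    trajs = []
    for f_j in models:                      # one rollout per member, Eq. (11)
        x = x0.clone()
        traj = []
        for u in u_seq:
            x = x + f_j(x, u) * dt          # forward Euler step
            traj.append(x)
        trajs.append(torch.stack(traj))
    trajs = torch.stack(trajs)              # shape [M, T, n_states]
    mu = trajs.mean(dim=0)                  # Equation (12)
    var = trajs.var(dim=0, unbiased=False)  # Equation (13)
    return mu, var
```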
### Deep active learning Deep Active Learning (DeepAL) has emerged as a combined approach between DL and AL, addressing DL-specific challenges within AL. This mainly includes dealing with over-confident uncertainty estimates of NN predictions, efficient acquisition of data batches rather than the traditional AL one-by-one query method, and the joint optimization of the NN model and the AL algorithm [17]. The majority of research on DeepAL focuses on static acquisition problems. Static acquisition problems refer to scenarios where data is already available, and any point in the input space can be acquired directly. Examples of such problems are visual data processing, such as image classification [38] and object detection [39], and Natural Language Processing (NLP), such as machine translation [40], text classification [41] and semantic analysis [42]. The static acquisition problem implies that there exists an unlabeled dataset \(\mathcal{U}=\{\mathcal{Z}\}\) with \(c\) input samples \(\mathcal{Z}=\{\mathbf{z}_{1},\ \mathbf{z}_{2},\ ...,\ \mathbf{z}_{c}\}\). The goal of DeepAL in the static acquisition problem is to acquire as few as possible of the unlabeled data in \(\mathcal{U}\) for labeling by choosing the most informative samples. That includes designing a query strategy \(Q\), \(\mathcal{U}\xrightarrow{Q}\mathcal{L}\), where \(\mathcal{L}=\{\mathcal{Z},\mathcal{Y}\}\) is a labeled dataset, and \(\mathcal{Y}\) are labels corresponding to inputs \(\mathcal{Z}\). The query strategy can be expressed in terms of an acquisition function \(a_{batch}\), which acquires a batch \(\mathcal{B}^{*}=\{z_{1}^{*},\ z_{2}^{*},\ ...,\ z_{b}^{*}\}\) of samples to be labeled. The batch-based query called BMDAL is the foundation of DeepAL. The DeepAL scheme is an iterative acquisition scheme, and one acquisition step is generally defined by: \[\mathcal{B}^{*}=\underset{\mathcal{B}\in\mathcal{U}}{\operatorname{argmax}}\quad a_{batch}(\mathcal{B},\hat{\mathbf{f}}(\mathcal{L})), \tag{14}\] Here \(\hat{\mathbf{f}}(\mathcal{L})\) is the NN, \(\mathcal{L}\) is the labeled data up until the given acquisition step, and the notation \(\hat{\mathbf{f}}(\mathcal{L})\) indicates that the NN is trained on this data. The acquisition function \(a_{batch}\) is in general a function of the NN trained on the currently acquired data, since the informativeness of new samples can be evaluated using this NN. The acquisition function \(a_{batch}\) defines the query strategy of the AL scheme. There exists a range of different query strategies in AL. Here we briefly describe the strategies relevant to the case study. The _uncertainty-based_ strategy is one of the most popular strategies in AL. The strategy aims to select samples for which the model predictions are most uncertain. Uncertainty-based AL methods are typically computationally efficient and easy to implement. Moreover, these methods typically provide highly informative samples. One of the most utilized uncertainty-based methods calculates the predictive entropy \(H[\mathbf{y}|\mathbf{x},\mathcal{L}]\) for a given sample \(\mathbf{x}\). However, there are some concerns about applying uncertainty-based sampling strategies in BMDAL. Acquiring a batch of the most informative samples using an uncertainty measure can lead to a batch of very similar samples. Moreover, strategies of this type are often focused on examples close to a decision boundary, making them vulnerable to adversarial attacks [17]. Hence, a _hybrid strategy_ is often preferred, accounting for diversity in the sampled data. A method called Diverse Mini-Batch Active Learning (DMBAL) [43] adds informativeness to the optimization of a K-means algorithm through the weights of the candidate samples. In the DMBAL algorithm, informativeness estimates obtained by some informativeness measure are assigned as weights to the corresponding candidate samples. In each acquisition step, a batch \(\mathcal{B}\) of the \(b\) samples closest to the centroids of the weighted K-means algorithm is added to the training set. ### Deep Active learning in Dynamical systems Although data from a dynamical system may be readily available from production or operation, it often provides limited information and is not well-suited for the purpose of system identification. Due to the physical nature of dynamical systems, arbitrary points in the state-action space cannot be directly accessed. In order to sample given data points from the state-action space, the dynamics must be excited by control inputs. This is a dynamic acquisition
_Uncertainty-based_ strategy is one of the most popular strategies in AL. The strategy aims to select samples in which the model predictions are most uncertain about. Uncertainty-based AL methods are typically computationally efficient and easy to implement. Moreover, these methods typically provides highly informative samples. One of the most utilized uncertainty-based methods calculates the predictive entropy \(H[\mathbf{y}|\mathbf{x},\mathcal{L}]\) for a given sample \(\mathbf{x}\). However, there are some concerns about applying uncertainty-based sampling strategies in BMDAL. Acquiring a batch of the most informative samples using an uncertainty measure can lead to a batch of very similar samples. Moreover, strategies of this type are often focused on examples close to a decision boundary, making it vulnerable to adverserial attacks [17]. Hence, a _Hybrid strategy_ is often preferred, accounting for diversity in the sampled data. A method called Diverse Mini-Batch Active Learning (DMBAL) [43] adds informativeness to the optimization of a K-means algorithm in the weights of each candidate sample. In the DMBAL algorithm, informative estimates obtained by some informative measures are assigned as weights to the corresponding candidate samples. In each acquisition step, a batch \(\mathcal{B}\) of \(b\) samples closest to the centroids of the weighted K-means algorithm is added to the training set. ### Deep Active learning in Dynamical systems Although data from a dynamical system may be readily available from production or operation, it often provides limited information and is not well-suited for the purpose of system identification. Due to the physical nature of dynamical systems, arbitrary points in the state-action space cannot be directly accessed. In order to sample given data points from the state-action space, the dynamics must be excited by control inputs. This is a dynamic acquisition Figure 2: Ensemble problem. In an attempt to maximize the information contained in this sampling process, an OCP can be defined over a finite horizon, maximizing some measure of information. This is here referred to as the _local exploration_. As the name indicates, this optimization is only efficient for shorter horizons, limiting the method to explore the dynamics in the proximity of the initial state. However, when identifying the input-output mapping of a nonlinear dynamical system, the entire operational window of the system must be explored. Assuming a set of input data is already available, AL offers a robust approach for selecting the most informative data points from the input space. In the current acquisition problem for dynamical systems, we do not have access to a pre-sampled dataset. However, by locally exploring different parts of the input space, a set of simulated state-action trajectories can be obtained. With an available simulated dataset, a static AL acquisition problem for dynamical systems can be formulated. This is referred to as _global exploration_. The global exploration acquires the batch of initial states corresponding to the batch of state-action trajectories that maximizes a global batch acquisition function. Following the acquisition of a set of initial states through global exploration, a subsequent round of local exploration is conducted for each state in the batch. This local exploration entails a longer optimization horizon compared to the initial search conducted for all candidates during the global exploration. 
This is because the computational complexity of the OCP increases significantly with the horizon, making it necessary to restrict the horizon to a relatively short length when optimizing for all candidates prior to the global exploration. When control trajectories are obtained from the final local explorations, these trajectories are applied to the real system from the corresponding acquired initial states. This is done under the assumption that the system is driven to each initial state using a specific control law. As the system evolves under the applied control sequences, data on the system states is collected. #### 2.5.1 Local exploration Data sampled from a dynamical system should be properly excited by a control signal to obtain informative data that can be used for system identification. Local exploration refers to the dynamic AL acquisition problem of finding a control trajectory that informatively excites the system. Given an initial state \(\mathbf{x}_{0}\) from which the dynamical system is excited, the local exploration can be formulated as an open-loop finite-horizon OCP, which yields a sequence of control inputs \(\{\mathbf{u}_{t}\}_{t=0}^{T-1,\ *}=\{\mathbf{u}_{0},\ \mathbf{u}_{1},\ ...,\ \mathbf{u}_{T-1}\}^{*}\). In the context of active learning, the objective function is an acquisition function \(a_{local}\) that measures the informativeness of the sequence of forecasted states \(\{\hat{\mathbf{x}}_{t}\}_{t=1}^{T}=\{\hat{\mathbf{x}}_{1},\ \hat{\mathbf{x}}_{2},\ ...,\ \hat{\mathbf{x}}_{T}\}\) given the candidate sequence of control inputs \(\{\mathbf{u}_{t}\}_{t=0}^{T-1}=\{\mathbf{u}_{0},\ \mathbf{u}_{1},\ ...,\ \mathbf{u}_{T-1}\}\) and an initial state \(\mathbf{x}_{0}\): \[\begin{split}\{\mathbf{u}_{t}\}_{t=0}^{T-1,\ *}&=\underset{\{\mathbf{u}_{t}\}_{t=0}^{T-1}}{\mathrm{argmax}}\ a_{local}\left(\mathbf{x}_{0},\ \{\mathbf{u}_{t}\}_{t=0}^{T-1},\{\hat{\mathbf{x}}_{t}\}_{t=1}^{T}\right)\\ s.t.&\ \hat{\mathbf{x}}_{t+1}=\hat{\mathbf{f}}(\hat{\mathbf{x}}_{t},\mathbf{u}_{t};\mathbf{\theta})\end{split} \tag{15}\] where \(\hat{\mathbf{x}}_{0}=\mathbf{x}_{0}\). The standard strategy in a Model Predictive Control (MPC) formulation is to apply only the first control input in the sequence and then solve the OCP again for each consecutive timestep until the end of the horizon. This scheme requires \(T-1\) optimizations to obtain one sequence of control inputs, and is therefore computationally expensive. An alternative that is computationally feasible is to optimize for the entire control sequence once and apply the control sequence obtained from that one solution of the OCP. The authors of [24] developed an active learning scheme for a GP model. They suggested maximizing the sum of the differential entropies of the GP model predictions over the control horizon of \(T\) steps, such that \(a_{local}=\sum_{i=0}^{T-1}H[\hat{\mathbf{f}}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i};\mathbf{\theta})]\). The differential entropy of a variable \(y\) is defined by [44] \[H(y)=-\int_{y}p(y)\log(p(y))dy, \tag{16}\] where \(p(y)\) is the probability density function. In this case, the probability density function represents the distribution over the predictions. If the variable \(y\) is Gaussian distributed, the differential entropy is given by \[H_{Gaussian}(y)=\frac{1}{2}\log\left(2\pi e\,\sigma^{2}(y)\right), \tag{17}\] where \(\sigma^{2}(y)\) is the variance of the given prediction.
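The following is a minimal PyTorch sketch of the local exploration OCP in Equation (15), using the summed ensemble variance as the information measure (the entropy proxy of Equations (17) and (24)). A gradient-based solver over the full control sequence is one possible way to approach the OCP; the paper does not prescribe this particular solver, and the step counts and learning rate are assumptions.

```python
# Minimal sketch of local exploration: optimize a control sequence by
# gradient ascent on the summed ensemble variance along the rollout.
import torch

def local_exploration(models, x0, T=15, dt=0.1, n_inputs=3,
                      steps=50, lr=0.05):
    u_seq = torch.zeros(T, n_inputs, requires_grad=True)
    opt = torch.optim.Adam([u_seq], lr=lr)
    for _ in range(steps):
        trajs = []
        for f_j in models:                 # member-wise differentiable rollout
            x = x0.clone()
            traj = []
            for t in range(T):
                x = x + f_j(x, u_seq[t]) * dt
                traj.append(x)
            trajs.append(torch.stack(traj))
        var = torch.stack(trajs).var(dim=0, unbiased=False)
        loss = -var.sum()                  # ascend the information measure
        opt.zero_grad()
        loss.backward()
        opt.step()
    return u_seq.detach()
```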
#### 2.5.2 Global exploration Exciting the system dynamics is essential to obtain informative data from a dynamical system. The local exploration formulated as an OCP in equation (15) provides a sound basis for exciting the system locally. However, when the goal is to obtain the most informative data from the entire input space, solely depending on the optimization in equation (15) is inefficient. That is, the computational complexity increases drastically with the optimization horizon. This puts restrictions on how long the optimization horizon can be, and therefore also on the area that an optimized state-action trajectory can span. Moreover, the uncertainty of state forecasts typically increases with each time step. This is highly relevant in the local exploration formulation, since the corresponding OCP typically aims to maximize some uncertainty measure. With high levels of uncertainty, the actual states are likely to deviate from the predicted states after longer horizons. Thus, the efficacy of local explorations as defined above is typically limited to exploring dynamics locally. The authors of [25] suggest partitioning the search problem into global and local explorations for actively learning a GP model. Building upon the work in [25], equation (18) provides a general formulation of the DeepAL optimization problem for dynamical systems, acquiring an optimal batch rather than single initial states. The global exploration considers a set \(\mathcal{X}=\{\mathbf{x}_{0,\ 1},\ \mathbf{x}_{0,\ 2},\ ...,\ \mathbf{x}_{0,\ c}\}\) of \(c\) candidate initial states. For each of the candidate initial states \(\mathbf{x}_{0,\ i}\), an optimal control trajectory \(\left(\{\mathbf{u}_{t}\}_{t=0}^{T-1,\ *}\right)_{i}\) is obtained by solving the OCP in equation (15). With an initial condition \(\mathbf{x}_{0,\ i}\) and the obtained control trajectory \(\left(\{\mathbf{u}_{t}\}_{t=0}^{T-1,\ *}\right)_{i}\), the corresponding forecasted state trajectory \(\left(\{\hat{\mathbf{x}}_{t}\}_{t=1}^{T}\right)_{i}\) is estimated by the model. One acquisition step of the global exploration is generally described by the following DeepAL optimization formulation: \[\mathcal{B}^{*}=\underset{\mathcal{B}\in\mathcal{X}}{\operatorname{argmax}}\ a_{global}\left(\mathcal{B},\ \left\{\{\mathbf{u}_{t}\}_{t=0}^{T-1,\ *},\{\hat{\mathbf{x}}_{t}\}_{t=1}^{T}\right\}_{i=1}^{b}\right) \tag{18}\] where \(\mathcal{B}=\{\mathbf{x}_{0,\ 1},\ \mathbf{x}_{0,\ 2},\ ...,\ \mathbf{x}_{0,\ b}\}\) is a candidate batch of initial conditions, and \(\left\{\{\mathbf{u}_{t}\}_{t=0}^{T-1,\ *},\{\hat{\mathbf{x}}_{t}\}_{t=1}^{T}\right\}_{i=1}^{b}\) is the corresponding batch of simulated state-action trajectories. \(\mathcal{XU}=\left\{\left(\{\mathbf{u}_{t}\}_{t=0}^{T-1,\ *},\{\hat{\mathbf{x}}_{t}\}_{t=1}^{T}\right)_{1},\ ...,\ \left(\{\mathbf{u}_{t}\}_{t=0}^{T-1,\ *},\{\hat{\mathbf{x}}_{t}\}_{t=1}^{T}\right)_{c}\right\}\) is the set of simulated candidate state-action trajectories. The acquired batch \(\mathcal{B}^{*}=\{\mathbf{x}_{0,\ 1}^{*},\ \mathbf{x}_{0,\ 2}^{*},\ ...,\ \mathbf{x}_{0,\ b}^{*}\}\) of initial conditions corresponds to the batch of simulated state-action trajectories \(\left(\{\mathbf{u}_{t}\}_{t=0}^{T-1,\ *},\{\hat{\mathbf{x}}_{t}\}_{t=1}^{T}\right)_{i=1}^{b}\) that maximizes some global batch acquisition function \(a_{global}\). Since simulated state-action trajectories are already sampled in the local exploration scheme, the global exploration becomes a static acquisition problem of the form of the standard DeepAL scheme presented in equation (14). ## 3 Method and setup In this section, the experimental setup, as well as the methods used in the case study, is presented.
## 3 Method and setup

In this section, the experimental setup, as well as the methods used in the case study, is presented. The data is generated by integrating the nonlinear ODEs in equation (6) with a set of initial values for the states \(\mathbf{x}_{0}\) using the fourth-order Runge-Kutta (RK4) numerical integration algorithm. In the DeepAL method presented in this work, a batch of initial states is chosen from a set of candidate states according to the optimization in equation (18). The query strategies defined by the global acquisition function \(a_{global}\) are described in Section 3.1. The control trajectories that excite the system dynamics are acquired in the local exploration scheme defined in equation (15). The local acquisition function \(a_{local}\) in this scheme is an _uncertainty-based strategy_, also described in detail in Section 3.1. The benchmark method chooses the set of initial conditions randomly. Moreover, the control input trajectories from each initial state are chosen according to the Amplitude modulated Pseudo Random Binary Signal (APRBS) used to identify nonlinear dynamics with NNs in works like [45], [16] and [46]. In each loop of the AL scheme, a batch of \(b=10\) time series \(\{\mathbf{X}_{1},\;\mathbf{X}_{2},\;...,\;\mathbf{X}_{i},\;...,\;\mathbf{X}_{b}\}\) is obtained. The \(i\)'th time series is obtained by simulating the dynamics from an initial condition \(\mathbf{x}_{0}=[x_{1}(0),\;x_{2}(0),\;x_{3}(0)]\) over a horizon of \(T=15\) with timesteps \(\Delta T=0.1\) sec. This yields a time series \(\mathbf{X}_{i}\):

\[\mathbf{X}_{i}=\left[\begin{array}{cccccc}x_{1}(0)&x_{2}(0)&x_{3}(0)&u_{1}(0)&u_{2}(0)&u_{3}(0)\\ x_{1}(1)&x_{2}(1)&x_{3}(1)&u_{1}(1)&u_{2}(1)&u_{3}(1)\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ x_{1}(t)&x_{2}(t)&x_{3}(t)&u_{1}(t)&u_{2}(t)&u_{3}(t)\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ x_{1}(T-1)&x_{2}(T-1)&x_{3}(T-1)&u_{1}(T-1)&u_{2}(T-1)&u_{3}(T-1)\\ x_{1}(T)&x_{2}(T)&x_{3}(T)&nan&nan&nan\end{array}\right] \tag{19}\]

Hence, the control inputs \(\mathbf{u}(t)\) are defined until timestep \(t=T-1\). That is, at the last step there is no need for a control input since there is no next state to be calculated. For each time series \(i\) in the batch, the output labels \(\mathbf{Y}_{i}\) for training are calculated by the forward Euler formula:

\[\mathbf{Y}_{i}=\left[\begin{array}{ccc}\frac{x_{1}(1)-x_{1}(0)}{\Delta T}&\frac{x_{2}(1)-x_{2}(0)}{\Delta T}&\frac{x_{3}(1)-x_{3}(0)}{\Delta T}\\ \frac{x_{1}(2)-x_{1}(1)}{\Delta T}&\frac{x_{2}(2)-x_{2}(1)}{\Delta T}&\frac{x_{3}(2)-x_{3}(1)}{\Delta T}\\ \vdots&\vdots&\vdots\\ \frac{x_{1}(T)-x_{1}(T-1)}{\Delta T}&\frac{x_{2}(T)-x_{2}(T-1)}{\Delta T}&\frac{x_{3}(T)-x_{3}(T-1)}{\Delta T}\end{array}\right] \tag{20}\]

We define a new matrix \(\mathbf{X}_{i}^{{}^{\prime}}\) that contains all but the last row of the \(i\)'th time series \(\mathbf{X}_{i}\). Then \(\mathbf{X}_{i}^{{}^{\prime}}\) and \(\mathbf{Y}_{i}\) are paired as inputs and outputs:

\[\mathcal{S}_{i}=[\mathbf{X}_{i}^{{}^{\prime}},\ \mathbf{Y}_{i}]. \tag{21}\]

This is done for all simulations in the batch. Then the input-output pairs are stacked:

\[\mathcal{S}_{batch}=[\mathcal{S}_{1}^{T},\ \mathcal{S}_{2}^{T},\ ...,\ \mathcal{S}_{b}^{T}]^{T}, \tag{22}\]

before being added to the training data \(\mathcal{D}_{train}\). Before the training is conducted, the inputs and outputs in the training set \(\mathcal{D}_{train}\) are normalized, shuffled, and put in mini-batches for training. In each loop of the learning scheme, an ensemble of \(M=10\) NNs is trained on all the training data sampled up until that time.
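The data-generation step can be summarized in a short sketch, shown below with a stable linear system standing in for the ODEs of equation (6), which are not restated here; `rk4_step` and `simulate_series` are our names.

```python
import numpy as np

def rk4_step(f, x, u, dt):
    # One fourth-order Runge-Kutta step with zero-order-hold control input.
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate_series(f, x0, U, dt=0.1):
    """Simulate one time series: states via RK4 as in eq. (19), labels via the
    forward Euler difference of eq. (20), paired as in eq. (21)."""
    X = [np.asarray(x0, dtype=float)]
    for u in U:                        # U has shape (T, n_u)
        X.append(rk4_step(f, X[-1], u, dt))
    X = np.asarray(X)                  # shape (T + 1, n_x)
    Y = (X[1:] - X[:-1]) / dt          # labels Y_i, eq. (20)
    X_in = np.hstack([X[:-1], U])      # inputs X_i' (last state row dropped)
    return X_in, Y

# Toy usage with a stable linear system standing in for eq. (6)
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [-1.0, -2.0, -3.0]])
f = lambda x, u: A @ x + u
U = np.random.default_rng(0).uniform(-1.0, 1.0, size=(15, 3))
X_in, Y = simulate_series(f, np.zeros(3), U)
```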
### Novel DeepAL scheme for dynamical systems

The DeepAL acquisition scheme comprises a global exploration scheme and a local exploration scheme. The global exploration scheme will, for each acquisition step in the AL loop, choose a batch of \(b\) initial states \(\mathcal{B}^{*}=\{\mathbf{x}_{0,\ 1}^{*},\ \mathbf{x}_{0,\ 2}^{*},\ ...,\ \mathbf{x}_{0,\ b}^{*}\}\) among a set of \(c\) candidates \(\mathcal{X}=\{\mathbf{x}_{0,\ 1},\ \mathbf{x}_{0,\ 2},\ ...,\ \mathbf{x}_{0,\ c}\}\) according to the AL optimization problem defined in equation (18). For each of the candidate initial states \(\mathbf{x}_{0,\ i}\), a state-action trajectory is obtained according to the local exploration in equation (15). The query strategy is defined by the global batch acquisition function \(a_{global}\), which quantifies the informativeness of batches of state-action trajectories corresponding to initial state candidates. A simple _uncertainty_-based acquisition function that sums the predictive entropies along all candidate trajectories is given by:

\[a_{global}\left(\mathcal{B},\left(\{\mathbf{u}_{t}\}_{t=0}^{T-1,\ *},\{\hat{ \mathbf{x}}_{t}\}_{t=1}^{T}\right)_{i=1}^{b}\right)=\sum_{i=1}^{b}\sum_{t=1}^{ T}H([^{j}\hat{\mathbf{x}}_{t}]_{j=1}^{M}), \tag{23}\]

where \([^{j}\hat{\mathbf{x}}_{t}]_{j=1}^{M}\) is the set of ensemble forecasts at timestep \(t\). Assuming that the ensemble predictions are approximately normally distributed around the mean prediction \(\mathbf{\mu}_{t}\) given in equation (12), and that the predicted states are uncorrelated, maximizing the entropy becomes approximately the same as maximizing the empirical variance given in equation (13):

\[H([^{j}\hat{\mathbf{x}}_{t}]_{j=1}^{M})\approx\mathbf{\sigma}_{t}^{2}. \tag{24}\]

In order to scale the optimization problem according to the magnitudes of the states in the state vector, the empirical variance of state \(k\), \(\sigma_{t,\ k}\), \(k\in\{1,\ 2,\ 3\}\), can be divided by the standard deviation of the \(k\)'th state based on the currently sampled dataset. Defining the vector \(\mathbf{s}_{inv}=[\frac{1}{std_{1}},\ \frac{1}{std_{2}},\ \frac{1}{std_{3}}]^{T}\), where \(std_{k}\) is the standard deviation of the \(k\)'th state \(x_{k}\), the resulting acquisition function can be defined as:

\[a_{global}\left(\mathcal{B},\left(\{\mathbf{u}_{t}\}_{t=0}^{T-1,\;*},\{\mathbf{ \hat{x}}_{t}\}_{t=1}^{T}\right)_{i=1}^{b}\right)=\sum_{i=1}^{b}\sum_{t=1}^{T} \boldsymbol{\sigma}_{t}^{2}\odot\mathbf{s}_{inv}, \tag{25}\]

where \(\odot\) is the Hadamard product operator that takes the element-wise multiplication of the two vectors. The resulting acquisition function is purely uncertainty based and does not take into account the similarity between samples.
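A sketch of the scaled uncertainty acquisition of eq. (25) is given below, assuming the ensemble forecasts are available as a single array; the function name and array layout are our choices.

```python
import numpy as np

def a_global_uncertainty(ensemble_trajs, state_std):
    """Scaled uncertainty acquisition of eq. (25): ensemble variance per state,
    summed over all trajectories and timesteps after elementwise scaling by
    s_inv = 1/std of each state (the Hadamard product in the text).

    ensemble_trajs: array (b, M, T, n_x) -- b trajectories, M ensemble members.
    state_std:      array (n_x,)         -- per-state training-set stds.
    """
    var = ensemble_trajs.var(axis=1)   # empirical variance, shape (b, T, n_x)
    return float((var / state_std).sum())

# Minimal usage with random stand-ins for ensemble forecasts
rng = np.random.default_rng(0)
score = a_global_uncertainty(rng.normal(size=(10, 10, 15, 3)),
                             np.array([1.0, 0.2, 0.2]))
```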
Hybrid acquisition strategies take into account both the uncertainty of individual samples and the similarities between samples in a candidate batch \(\mathcal{B}\). The intuitive hybrid acquisition method DMBAL adds informativeness to the optimization of a weighted K-means algorithm. That is, the algorithm acquires the closest sample to each of the \(b\) centroids found by a weighted K-means, where the weight is some informativeness measure. A simple adaptation of the algorithm to the problem at hand is given in Algorithm 1:

```
Input: Candidate initial conditions \(\mathcal{X}\), acquired dataset \(\mathcal{D}_{train}\),
       pre-filter factor \(\beta\), batch size/number of clusters \(b\),
       required level of model accuracy \(\alpha\)
Train model on \(\mathcal{D}_{train}\)
while required level of model accuracy \(\alpha\) is not reached do
    Get informativeness \(\sum_{t=1}^{T}\boldsymbol{\sigma}_{t}^{2}\odot\mathbf{s}_{inv}\) for simulated state-action
        trajectories corresponding to initial states in \(\mathcal{X}\)
    Prefilter top \(\beta\cdot b\) informative samples
    Cluster the \(\beta\cdot b\) initial states into \(b\) clusters with weighted K-means
    Select batch \(\mathcal{B}^{*}\) of \(b\) different initial states closest to the cluster centers
    Perform local exploration to obtain control input trajectories for each initial state in \(\mathcal{B}^{*}\)
    From initial conditions in \(\mathcal{B}^{*}\), apply obtained control trajectories on system dynamics
    Add sampled data to training set \(\mathcal{D}_{train}\)
    Train model on all samples in \(\mathcal{D}_{train}\)
```
**Algorithm 1** Diverse Mini-Batch Active Learning (DMBAL) in Global exploration

In addition, the method aims to add diversity to the samples by comparing the similarities of the candidate initial conditions in the modified DMBAL method. The \(c\) candidates in \(\mathcal{X}=\{\mathbf{x}_{0,\;1},\mathbf{x}_{0,\;2},\;...,\mathbf{x}_{0,c}\}\) are at each acquisition step uniformly sampled from the intervals given in Table 2:

\begin{table} \begin{tabular}{l|l} \hline \hline Variable & Initial condition interval \\ \hline \(x_{1}\) & \([-0.2,\;1.4]\) \\ \(x_{2}\) & \([-0.2,\;0.2]\) \\ \(x_{3}\) & \([-0.2,\;0.2]\) \\ \hline \hline \end{tabular} \end{table} Table 2: Initial condition intervals for the states \(\mathbf{x}\)

The local exploration scheme obtains a sequence of control inputs by optimizing the dynamic acquisition problem formulated as an OCP in equation (15) from a given initial state. The local acquisition function \(a_{local}\) used in the optimization defined in equation (15) is the same as the uncertainty-based global acquisition function defined in equation (25), but only for a single initial state and the corresponding trajectory. That is, the local acquisition function is:

\[a_{local}=\sum_{t=1}^{T}\boldsymbol{\sigma}_{t}^{2}\odot\mathbf{s}_{inv}. \tag{26}\]

A schematic illustration of the novel DeepAL scheme is presented in Fig. 3. The OCP that generates the control sequences is solved in the optimization framework CasADi [47], while the design and training of the DNNs used in the optimization is done using the DL framework PyTorch [48]. The ML-CasADi package developed by the authors of [49] is used to integrate the two frameworks.
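To make the acquisition step of Algorithm 1 concrete, the following sketch assumes scikit-learn's weighted K-means as the clustering routine; the informativeness scores stand in for the simulated-trajectory uncertainties of eq. (25), and all names are ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def dmbal_select(candidates, informativeness, b=10, beta=3, seed=0):
    """One acquisition step of Algorithm 1: prefilter the beta*b most
    informative candidate initial states, run informativeness-weighted
    K-means with b clusters, and pick the candidate closest to each center."""
    top = np.argsort(informativeness)[-beta * b:]
    X, w = candidates[top], informativeness[top]
    km = KMeans(n_clusters=b, n_init=10, random_state=seed).fit(X, sample_weight=w)
    chosen = []
    for center in km.cluster_centers_:
        d = np.linalg.norm(X - center, axis=1)
        d[chosen] = np.inf             # enforce b *different* initial states
        chosen.append(int(np.argmin(d)))
    return top[np.array(chosen)]       # indices into the candidate set

# Usage: 100 candidate initial states in R^3 with placeholder scores
rng = np.random.default_rng(0)
batch = dmbal_select(rng.uniform(-0.2, 1.4, size=(100, 3)), rng.random(100))
```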
### Performance metrics

We focus on the forecast error several steps ahead, in a so-called _rolling forecast_, to measure the generalization error of the NNs. That is, given initial conditions \(\mathbf{x}_{0}=\mathbf{x}(t_{0})\) and a sequence of \(n\) control inputs \(\{\mathbf{u}(t_{0}),\ \mathbf{u}(t_{1}),\ ...,\ \mathbf{u}(t_{n-1})\}\), a NN forecasts the consecutive \(n\) timesteps \(\{\mathbf{\hat{x}}(t_{1}),\ \mathbf{\hat{x}}(t_{2}),\ ...,\ \mathbf{\hat{x}}(t_{n})\}\). The NN \(\mathbf{\hat{f}}\) predicts the time derivatives of the states at timestep \(i\), that is \(d\mathbf{x}(t_{i})/dt\), based on the current state prediction \(\mathbf{\hat{x}}(t_{i})\) and control input \(\mathbf{u}(t_{i})\):

\[\frac{d\mathbf{\hat{x}}(t_{i})}{dt}=\begin{cases}\mathbf{\hat{f}}(\mathbf{ \hat{x}}(t_{i}),\mathbf{u}(t_{i})),&\text{if $\mathbf{x}(t_{i})$ is not measured,}\\ \mathbf{\hat{f}}(\mathbf{x}(t_{i}),\mathbf{u}(t_{i})),&\text{if $\mathbf{x}(t_{i})$ is measured.} \end{cases} \tag{27}\]

Hence, the model either uses the previously calculated forecast of the state \(\mathbf{\hat{x}}(t_{i})\) or the true state \(\mathbf{x}(t_{i})\), depending on whether the state is measured or not. Here, we assume no measurement noise, making the measured and true states the same. In the case study, the states are measured at every 50th timestep, meaning that the model forecasts over this horizon before it is corrected by measurements. After estimating the time derivative at timestep \(i\), the forecast at the next timestep is calculated by the forward Euler formula:

\[\mathbf{\hat{x}}(t_{i+1})=\mathbf{\hat{x}}(t_{i})+\frac{d\mathbf{\hat{x}}(t_{i})}{ dt}\cdot\Delta T. \tag{28}\]

Figure 3: Schematic presentation of the DeepAL scheme. Given an initial dataset \(\mathcal{D}_{init}\), NNs in an ensemble are trained. If the ensemble yields the required level of model accuracy, the AL scheme is terminated. If not, the ensemble is used in global and local explorations. First, the local exploration and simulation procedure generates control trajectories for each initial state candidate in \(\mathcal{X}\). The global exploration scheme then acquires a batch \(\mathcal{B}^{*}\) of \(b\) initial states in which excitation of the dynamics would be most informative. Then, local explorations are conducted for each of the initial states in \(\mathcal{B}^{*}\), yielding a set of \(b\) control trajectories. Given that the dynamics are driven to each of the initial conditions in \(\mathcal{B}^{*}\), the corresponding control trajectories are applied to excite the dynamics. From this excitation process, time series are obtained, preprocessed and added to the training set \(\mathcal{D}_{train}\). The ensemble is then trained on this training set, and the procedure is repeated until the required model accuracy on the test set is achieved, or a sampling budget is exhausted.

The rolling forecast accuracy measure used in, for example, [46], [15] and [16], called Average Normalized Rolling Forecast Mean Squared Error (AN-RFMSE), is a scalar defined by

\[\text{AN-RFMSE}=\frac{1}{p}\sum_{i=1}^{p}\frac{1}{n}\sum_{j=1}^{n}\left(\frac{ \hat{x}_{i}(t_{j})-x_{i}(t_{j})}{std(x_{i})}\right)^{2}, \tag{29}\]

where \(\hat{x}_{i}(t_{j})\) is the model estimate of the simulated state variable \(x_{i}\) at time step \(t_{j}\), \(std(x_{i})\) is the standard deviation of variable \(x_{i}\) in the training set \(\mathcal{D}_{train}\), \(p=3\) is the number of state variables and \(n\) is the number of time steps the normalized rolling forecast MSE is averaged over.
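A minimal sketch of the rolling forecast and the AN-RFMSE metric, using forward Euler as in eq. (28); the toy models below are hypothetical stand-ins for the trained NN ensemble.

```python
import numpy as np

def rolling_forecast(f_hat, x0, U, dt=0.1):
    """Rolling forecast per eqs. (27)-(28): feed each forecast back into the
    model and integrate with forward Euler; x0 is the last measured state."""
    x_hat = [np.asarray(x0, dtype=float)]
    for u in U:
        x_hat.append(x_hat[-1] + f_hat(x_hat[-1], u) * dt)
    return np.asarray(x_hat[1:])       # forecasts for timesteps 1..n

def an_rfmse(x_hat, x_true, state_std):
    # Eq. (29): squared error normalized per state by its training-set std,
    # averaged over the n timesteps and the p state variables.
    err = (x_hat - x_true) / state_std
    return float(np.mean(err ** 2))

# Toy usage: a model with a slightly misestimated decay rate
f_true = lambda x, u: -1.0 * x + u
f_hat = lambda x, u: -0.9 * x + u
U = np.zeros((50, 3))
truth = rolling_forecast(f_true, np.ones(3), U)   # reuse integrator as truth
pred = rolling_forecast(f_hat, np.ones(3), U)
print(an_rfmse(pred, truth, truth.std(axis=0)))
```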
### Test set generation

The utilization of DNNs in modeling dynamical systems is driven by their capability to represent intricate relationships with a high degree of accuracy. When proper measures are taken to address safety considerations, they have the potential to enhance the optimality of MPC. As a result, evaluating the sampling strategies on a test set generated using an optimal control policy is a subject of significant interest. The MPC used when generating the test set solves an OCP that minimizes a quadratic cost function:

\[\begin{split}\{\mathbf{u}_{0},\ ...,\ \mathbf{u}_{n-1}\}& =\underset{\mathbf{u}}{\mathrm{argmin}}\ \sum_{k=0}^{n-1}(\mathbf{x}_{k}-\mathbf{x}_{ ref,\ k})^{T}\mathbf{Q}(\mathbf{x}_{k}-\mathbf{x}_{ref,\ k})+\mathbf{u}_{k}^{T}\mathbf{R}\mathbf{u}_{k},\\ \text{s.t.}&\quad\mathbf{\dot{x}}_{k}=\mathbf{f}( \mathbf{x}_{k},\mathbf{u}_{k}).\end{split} \tag{30}\]

\(\mathbf{x}_{ref,\ k}\) is the desired reference signal, and \(\mathbf{Q}\) and \(\mathbf{R}\) are weighting matrices. The subscript \(k\) indicates the value of the variable at timestep \(k\). The function \(\mathbf{f}(\cdot,\cdot)\) is the simulation model itself. Given an initial condition, the optimization problem in equation (30) is solved for \(n\) steps. Both the sequence of states \(\{\mathbf{x}_{k}\}_{k=0}^{n-1}\) and control inputs \(\{\mathbf{u}_{k}\}_{k=0}^{n-1}\) are decision variables in the optimization and can be extracted from its solution. The test set consists of 100 time series with initial conditions uniformly sampled from the intervals in Table 2. Each time series is generated by ten optimizations of equation (30), each with a horizon of 50 timesteps. The final state \(\mathbf{x}_{n}\) of one optimization is then the initial state of the next, such that the \(i\)'th time series in the test set can be written as:

\[\mathbf{X}_{test,\ i}=\begin{bmatrix}x_{1}(0)&\dots&u_{3}(0)\\ \vdots&\ddots&\vdots\\ x_{1}(n-1)&\dots&u_{3}(n-1)\\ x_{1}(n)&\dots&u_{3}(n)\\ \vdots&\ddots&\vdots\\ x_{1}(10n-1)&\dots&u_{3}(10n-1)\end{bmatrix}, \tag{31}\]

\begin{table} \begin{tabular}{l|l} \hline Variable & Reference values interval \\ \hline \(x_{1}\) & \([-0.3,\ 1.3]\) \\ \(x_{2}\) & \([-0.3,\ 0.3]\) \\ \(x_{3}\) & \([-0.3,\ 0.3]\) \\ \hline \end{tabular} \end{table} Table 3: Reference intervals for the states \(\mathbf{x}_{ref}\)

where \(n=50\). The reference values \(\mathbf{x}_{ref}\) are uniformly drawn from the intervals in Table 3 and are held constant over each optimization horizon. Hence, each time series has ten different references over 500 timesteps, and the test set can be written as:

\[\mathcal{D}_{test}=\{X_{test,\;1},\;X_{test,\;2},\;...,\;X_{test,\;100}\} \tag{32}\]
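A minimal sketch of one segment of the test-set OCP in eq. (30) is given below, using forward-Euler single shooting, scalar weights q and r in place of the matrices Q and R, and SciPy in place of the paper's solver; this is an illustrative simplification, not the actual test-set generator.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_segment(f, x0, x_ref, n=10, dt=0.1, n_u=3, q=1.0, r=0.01):
    """Single-shooting sketch of one segment of the OCP in eq. (30), with
    forward-Euler dynamics and scalar weights q, r standing in for Q and R."""
    def cost(u_flat):
        U = u_flat.reshape(n, n_u)
        x, J = np.asarray(x0, dtype=float), 0.0
        for u in U:
            J += q * np.sum((x - x_ref) ** 2) + r * np.sum(u ** 2)
            x = x + f(x, u) * dt       # forward-Euler discretization
        return J
    res = minimize(cost, np.zeros(n * n_u), method="L-BFGS-B")
    return res.x.reshape(n, n_u)

# Toy usage on a stable linear system (stand-in for the simulation model f)
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [-1.0, -2.0, -3.0]])
f = lambda x, u: A @ x + u
U_opt = mpc_segment(f, np.zeros(3), np.array([1.0, 0.0, 0.0]))
```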
## 4 Results and discussion

The case study presented in this section investigates the efficacy of global and local explorations compared to benchmark random sampling methods. Moreover, the study presents the effect of the global hybrid strategy DMBAL for different values of the prefilter hyperparameter \(\beta\), where the special case of \(\beta=1\) can be considered an uncertainty-based acquisition strategy.

### Information based and random sampling

In order to quantify the efficacy of global and local explorations in the proposed DeepAL sampling scheme compared to benchmark random sampling methods, we define three fundamental data acquisition schemes. All schemes use both a global and a local sampling method. The three fundamental methods are based on either an information-theoretic approach or a random sampling strategy for both local and global exploration. Fig. 4 shows a schematic presentation of how the different schemes are derived from the two sampling strategies. The fundamental sampling schemes are Global Informative Local Informative (GI-LI), Global Random Local Informative (GR-LI), and Global Random Local Random (GR-LR). The GI-LI method is described in Section 3.1 and uses information-theoretic sampling strategies both locally and globally. The global exploration in GI-LI acquires initial conditions from which to conduct the local explorations, and uses the local exploration method to assign measures of informativeness to the candidate initial conditions. The GR-LI method is globally random: a batch of initial states is acquired by uniformly sampling within the intervals of states presented in Table 2. The local exploration method of GR-LI is exactly the same as the local sampling method used in GI-LI. The GR-LR method uses the same random global strategy as GR-LI and uses the APRBS method to excite the dynamics locally.

Figure 4: Schematic of how the fundamental sampling schemes are derived from the global and local sampling strategies. Global Informative Local Informative (GI-LI), Global Random Local Informative (GR-LI), and Global Random Local Random (GR-LR) are the derived methods that are investigated in the case study.

Fig. 5 presents the performance of the three sampling schemes after all data acquisition steps. The error bounds show the 25th to 75th percentile of AN-RFMSE values. The upper bound is particularly interesting since it gives an intuition about the model's ability to generalize to a broader set of the test set trajectories. The results show that the GI-LI method outperforms the GR-LI and GR-LR methods, both in terms of higher mean accuracy and significantly lower values for the 75th percentile of AN-RFMSE values. Moreover, the globally random, locally informative GR-LI method shows better performance, in terms of higher mean accuracy and lower 75th percentile AN-RFMSE values, than the purely random GR-LR method. However, the superiority of the GR-LI method over the GR-LR method is not as significant as the superiority of GI-LI over the two others, indicating that the globally informative step is of major importance.

Figure 5: AN-RFMSE values for NN models trained on each batch of sampled data. The rolling forecast has a prediction horizon of 50 timesteps. The drawn line shows the mean AN-RFMSE values, and the error bounds show the 25th and 75th percentiles of AN-RFMSE values for models trained on data sampled up until the AL loop specified on the x-axis. The GI-LI method yields significantly lower mean and 75th percentile AN-RFMSE values than the two competing methods. The GR-LI method slightly outperforms the GR-LR method, but this result is not as significant.

Fig. 6 shows the mean and uncertainty bounds of ensemble forecasts trained on data sampled with the GI-LI and GR-LR methods. The plots illustrate how GI-LI might provide data that gives improved mean predictions as well as narrower and better calibrated uncertainty bounds.

Figure 6: Rolling forecast of 50 timesteps. The black drawn lines in Fig. 6a-6c are the simulated states. The dotted lines show mean forecast values, and the uncertainty bounds show 99.7% confidence intervals. The black lines in Fig. 6d-6f are control inputs. The ensembles that forecast the states are trained on data based on the GI-LI and GR-LR methods. The plot illustrates how the GI-LI method gives better mean predictions as well as narrower and better calibrated uncertainty bounds.

### Uncertainty based and hybrid global strategy
Figure 7: Violin plots of AN-RFMSE values corresponding to models trained on data sampled with the GI-LI method. The rolling forecast has a prediction horizon of 50 timesteps. All methods use the global DMBAL method for choosing initial conditions, but with different prefilter hyperparameters \(\beta\). The two innermost horizontal lines of each violin plot show the 5th and 95th percentiles, while the outermost horizontal lines show the extreme values. Each subplot includes AN-RFMSE values for models trained on data from a range of the AL loops. That is, Fig. 7a and 7b summarize the performance of the sampling methods in early stages, while Fig. 7c-7f summarize the performance later in the AL scheme.

Fig. 7 shows the performance of GI-LI methods using the global DMBAL method with different prefilter parameters \(\beta\) at different stages of the AL scheme. The algorithm only considers the \(\beta\cdot b\) samples with the highest informativeness score according to a given informativeness measure, where a batch \(\mathcal{B}^{*}\) of \(b=10\) initial conditions is chosen. Hence, \(\beta=1\) corresponds to simply picking the \(b\) samples with the highest informativeness score. Since the chosen informativeness measure is uncertainty based, using \(\beta=1\) means that the global method is uncertainty based. Using \(\beta>1\) means that the global algorithm takes diversity into account by choosing the samples closest to the centroids of a K-means algorithm, which means that the overall method is a hybrid AL strategy, combining uncertainty measures and diversity measures. Fig. 7 shows that the average performance in all plots is approximately the same for all choices of \(\beta\). In Fig. 7a and 7b, showing performance for models trained on data acquired after respectively 10 to 30 and 30 to 40 AL loops, it is difficult to conclude any significant differences, other than that the method using \(\beta=10\) has higher extremum AN-RFMSE values, as well as a distribution more stretched towards higher AN-RFMSE values, indicating that lower choices of \(\beta\) give better results. Fig. 7c-7f show the performance of the method for different \(\beta\) values at later stages of the AL scheme. The most evident result is the tendency that the upper extreme values of AN-RFMSE have a minimum for \(\beta=3\), and that both lower and higher values of \(\beta\) give higher extreme AN-RFMSE values. The 95th percentile also seems to be lowest for \(\beta=3\). Again, using high values of \(\beta\) turns out to give worse results, as \(\beta=10\) shows the worst performance. The overall results from the conducted case study indicate that using the DMBAL algorithm in the global acquisition scheme is of minor importance compared to simply acquiring a batch of samples with the highest uncertainty scores.
However, by choosing the right value of the prefilter hyperparameter \(\beta\), the modeling errors in extreme cases can potentially be reduced. The limited value of the hybrid method used in this study might be explained by the fact that it only compares the similarity of initial states rather than similarities between candidate trajectories. Moreover, the shortcomings of uncertainty-based methods are known to be more pronounced for larger batch sizes. With the current choice of batch size, \(b=10\) trajectories being sampled at each acquisition step, the power of hybrid strategies might not be apparent.

## 5 Conclusions and future work

The main conclusions from the work can be itemized as follows:

* The globally random, locally informative GR-LI strategy shows slightly better results than a globally random, locally random strategy in terms of higher mean accuracy and lower values of the 75th percentile of AN-RFMSE values, indicating better generalization. However, the novel GI-LI DeepAL scheme significantly outperforms the GR-LI and GR-LR schemes both in terms of mean accuracy and 75th percentile values of AN-RFMSE. This indicates that global explorations are of major importance with respect to achieving higher accuracy and generalization.
* The DMBAL approach, which emphasizes diversity in the selection of samples, might reduce the upper bound of extreme error values, provided that the prefilter hyperparameter is carefully chosen. This method is compared to simply selecting the top \(b\) samples based on some uncertainty measure when sampling a batch of initial conditions globally. However, the DMBAL approach does not exhibit significant improvements over the global uncertainty-based method beyond this in the given case study. The method only compares the similarity between initial states of the candidate trajectories, rather than similarity between the whole candidate trajectories. This leaves out potentially important information.

The novel DeepAL framework is flexible and allows for a range of AL acquisition strategies. Conducting a comparative study including different AL acquisition functions in the framework would be highly interesting, as it could increase our knowledge about efficient sampling of dynamical systems. Global hybrid strategies that can consider the similarity of entire state trajectories are of particular interest, since the currently tested hybrid strategy that only compares initial conditions seems to be of limited value.

## 6 Acknowledgments

This work was supported by the industry partners Borregaard, Elkem, Hydro, Yara and the Research Council of Norway through the project TAPI: Towards Autonomy in Process Industries, project number 294544.
2308.07557
Character-Oriented Design for Visual Data Storytelling
When telling a data story, an author has an intention they seek to convey to an audience. This intention can be of many forms such as to persuade, to educate, to inform, or even to entertain. In addition to expressing their intention, the story plot must balance being consumable and enjoyable while preserving scientific integrity. In data stories, numerous methods have been identified for constructing and presenting a plot. However, there is an opportunity to expand how we think and create the visual elements that present the story. Stories are brought to life by characters; often they are what make a story captivating, enjoyable, memorable, and facilitate following the plot until the end. Through the analysis of 160 existing data stories, we systematically investigate and identify distinguishable features of characters in data stories, and we illustrate how they feed into the broader concept of "character-oriented design". We identify the roles and visual representations data characters assume as well as the types of relationships these roles have with one another. We identify characteristics of antagonists as well as define conflict in data stories. We find the need for an identifiable central character that the audience latches on to in order to follow the narrative and identify their visual representations. We then illustrate "character-oriented design" by showing how to develop data characters with common data story plots. With this work, we present a framework for data characters derived from our analysis; we then offer our extension to the data storytelling process using character-oriented design. To access our supplemental materials please visit https://chaorientdesignds.github.io/
Keshav Dasu, Yun-Hsin Kuo, Kwan-Liu Ma
2023-08-15T03:50:43Z
http://arxiv.org/abs/2308.07557v1
# Character-Oriented Design for Visual Data Storytelling

###### Abstract

When telling a data story, an author has an intention they seek to convey to an audience. This intention can be of many forms such as to persuade, to educate, to inform, or even to entertain. In addition to expressing their intention, the story plot must balance being consumable and enjoyable while preserving scientific integrity. In data stories, numerous methods have been identified for constructing and presenting a plot. However, there is an opportunity to expand how we think and create the visual elements that present the story. Stories are brought to life by characters; often they are what make a story captivating, enjoyable, memorable, and facilitate following the plot until the end. Through the analysis of 160 existing data stories, we systematically investigate and identify distinguishable features of characters in data stories, and we illustrate how they feed into the broader concept of "character-oriented design". We identify the roles and visual representations data characters assume as well as the types of relationships these roles have with one another. We identify characteristics of antagonists as well as define conflict in data stories. We find the need for an identifiable central character that the audience latches on to in order to follow the narrative and identify their visual representations. We then illustrate "character-oriented design" by showing how to develop data characters with common data story plots. With this work, we present a framework for data characters derived from our analysis; we then offer our extension to the data storytelling process using character-oriented design. To access our supplemental materials please visit https://chaorientdesignds.github.io/.

Storytelling, Explanatory, Narrative visualization, Visual metaphor

## 1 Introduction

Information, at times, can be abstract and intangible, which may lead to difficulties in communication. The beauty of visualization is captured in its ability to make the intangible tangible, the invisible visible, and the inaccessible accessible. Through visualization, we can utilize visual representations to embody complex and often large datasets, reveal hidden insights about both known and unknown phenomena, and afford a means to showcase findings as well as share insights with broader audiences. We, as data storytellers, are concerned with presenting these findings to large audiences. Stories and visual storytelling have been shared and consumed by our earliest ancestors. Some of the earliest forms of visual storytelling [29] played a role in communicating where rich sources of food could be located or where to avoid dangerous beasts. In visualization, we have utilized storytelling for a variety of communicative needs since it is effective for engagement [41], memorability [25, 56], and showing causality [28]. As data storytellers, we play a role in capturing and sharing the wonder we see in data with others. In our stories, we are challenged to emphasize the scientific insights of our content and simultaneously engross [40] the audience with our narrative. The challenge of ensuring our content is both consumable and enjoyable while preserving scientific integrity constrains our story design. These constraints can result in the audience having a difficult time understanding [7, 23] insights, topic relevancy, or where in the story to focus. In data stories, numerous methods have been identified for constructing and presenting a plot.
A story plot [28] is a narrative of events, with the emphasis falling on causality. The data storytelling process [44] can be viewed as three stages: identification, organization, and presentation. Typically, the first step results in the accumulation of a set of events (i.e., "story pieces"). These pieces are often the insights derived from either the collaborative efforts of data analysts and domain experts or the automation leveraged by statistics [62, 70]. The collection of events is guided by the shared intent of the author and analysts, which is the intention they seek to convey to the audience. This intention can take on many forms [50] (e.g., to inform, to educate, to entertain, or to explore) and centers the story. Next, in the organization stage, several narrative frameworks [72, 63, 44, 58] can assist us in sequencing these events into a cohesive story plot. During this stage, we need to ascertain several properties of these events, namely their relationship to one another and their ordering. We should end up with a structured outline of what we want to convey and the sequence in which to present it. Lastly, we have the presentation stage, where we give the look and feel to the story. There are many methodologies [34, 36, 38] at our disposal for tailoring our story for the target audience. However, it is within the presentation stage that there is an opportunity to expand how we view and design the visual elements that act out our story plots. In our work, we are interested in data-driven, visual storytelling, particularly the characters that bring them to life. Data storytellers want to create rich experiences that evoke an emotional response, draw the audience in, and leave them with something to remember. Stories can
2306.11841
Integrative analysis of ATAC-seq and RNA-seq for cells infected by human T-cell leukemia virus type 1
Human T-cell leukemia virus type 1 (HTLV-1) causes adult T-cell leukemia (ATL) and HTLV-1-associated myelopathy (HAM) after a long latent period in a fraction of infected individuals. These HTLV-1-infected cells typically have phenotypes similar to that of CD4${^+}$ T cells, but the cell status is not well understood. To extract the inherent information of HTLV-1-infected CD4$^+$ cells, we integratively analyzed the ATAC-seq and RNA-seq data of infected cells. Compared to CD4${^+}$ T cells from healthy donors, we found anomalous chromatin accessibility in HTLV-1-infected CD4${^+}$ cells derived from ATL cases in terms of location and sample-to-sample fluctuations in open chromatin regions. Further, by focusing on systematically selected genes near the open chromatin regions, all the gene expressions in ATL cases were found to be distinct from those of healthy CD4$^+$ T cells. Based on a further analysis of chromatin accessibility, we detected TLL1 (Tolloid Like 1) as one of the key genes that exhibit unique gene expressions in ATL cases. A luciferase assay indicated that TLL1 has a strong regulatory effect on TGF-$\beta$. Overall, this study provides results about the status of HTLV-1 infected cells, which are qualitatively consistent across the different scales of chromatin accessibility, transcription, and immunophenotype.
Azusa Tanaka, Yasuhiro Ishitsuka, Hiroki Ohta, Norihiro Takenouchi, Masanori Nakagawa, Ki-Ryang Koh, Chiho Onishi, Hiromitsu Tanaka, Akihiro Fujimoto, Jun-ichirou Yasunaga, Masao Matsuoka
2023-06-20T18:49:07Z
http://arxiv.org/abs/2306.11841v1
# Integrative analysis of ATAC-seq and RNA-seq for cells infected by human T-cell leukemia virus type 1

###### Abstract

Human T-cell leukemia virus type 1 (HTLV-1) causes adult T-cell leukemia (ATL) and HTLV-1-associated myelopathy (HAM) after a long latent period in a fraction of infected individuals. These HTLV-1-infected cells typically have phenotypes similar to that of CD4\({}^{+}\) T cells, but the cell status is not well understood. To extract the inherent information of HTLV-1-infected CD4\({}^{+}\) cells, we integratively analyzed the ATAC-seq and RNA-seq data of infected cells. Compared to CD4\({}^{+}\) T cells from healthy donors, we found anomalous chromatin accessibility in HTLV-1-infected CD4\({}^{+}\) cells derived from ATL cases in terms of location and sample-to-sample fluctuations in open chromatin regions. Further, by focusing on systematically selected genes near the open chromatin regions, all the gene expressions in ATL cases were found to be distinct from those of healthy CD4\({}^{+}\) T cells. Based on a further analysis of chromatin accessibility, we detected TLL1 (Tolloid Like 1) as one of the key genes that exhibit unique gene expressions in ATL cases. A luciferase assay indicated that TLL1 has a strong regulatory effect on TGF-\(\beta\). Overall, this study provides results about the status of HTLV-1 infected cells, which are qualitatively consistent across the different scales of chromatin accessibility, transcription, and immunophenotype.

## I Introduction

It has been statistically estimated that there are more than \(300,000\) types of mammalian host viruses [1]. Among the many viruses that have been discovered, only a few have been reported to cause cancers, such as the DNA virus human papillomavirus (HPV) and the RNA virus hepatitis C virus (HCV) [2]. One of them, human T-cell leukemia virus (HTLV-1), is an oncogenic retrovirus estimated to infect approximately 10 million people worldwide [3]. Adult T-cell leukemia (ATL) and HTLV-1-associated myelopathy (HAM) are both associated with prior infection with HTLV-1. However, these two diseases have different clinical and pathological presentations [4; 5]. The genes encoded by HTLV-1, such as HBZ (HTLV-1 basic leucine zipper factor) and _Tax_, have been reported to affect important signaling pathways involved in cell proliferation, apoptosis, and infectivity [6]. In particular, HBZ is maintained in all ATL cases and functions as both a protein and RNA [7; 8; 9; 10]. Recent studies have elucidated that in ATL cells, genomic mutations are highly enriched in T cell-related pathways, such as NF-\(\kappa\)B, and typically activate the pathways [11; 12]. Furthermore, it has been frequently observed in ATL cases that the aberrant expression of programmed cell death 1-ligand 1 (PD-L1) is caused by disruption of the PD-L1 3'-untranslated region (UTR) [13]. Several questions about these diseases at the genomic scale remain, including how the chromatin structure of ATL cells differs from that of CD4\({}^{+}\) T cells derived from healthy donors and how this difference influences transcription and translation to finally cause symptoms. In general, cellular phenotypes are largely affected by gene expressions that are strongly correlated with the epigenetic mechanisms occurring in chromatin. To understand the epigenetic mechanisms, it is important to understand how human DNA is packed and chemically modified in the nucleus, which can be quantified by measuring chromatin accessibility.
In this paper, we study the relationship between chromatin accessibility and transcription in HTLV-1-infected cells at the whole genome level using Assay for Transposase-Accessible Chromatin using sequencing (ATAC-seq) [14] and RNA sequencing (RNA-seq) data. We performed a comparative analysis of HTLV-1-infected CD4\({}^{+}\) cells from ATL cases, HAM cases, and CD4\({}^{+}\) T cells from healthy donors (healthy CD4\({}^{+}\) T cells) based mainly on our previously developed algorithm of systematic clustering [15]. Our analysis shows that CD4\({}^{+}\) cells derived from ATL cases have anomalous properties in terms of the locations and sample-to-sample fluctuations of open chromatin regions compared with healthy CD4\({}^{+}\) T cells. Additionally, genes selected by our systematic clustering algorithm based on the immunophenotype had distinct expressions between ATL cells and healthy CD4\({}^{+}\) T cells. Using our systematic clustering algorithm, we also found a relationship between chromatin accessibility and immunophenotype that suggests some ATL cases approach several types of myeloid cells. Finally, we detected TLL1 (Tolloid Like 1) as one of a few genes having anomalous expressions in ATL cases. A luciferase assay found that TLL1 isoforms, depending on the types of the isoforms, differently regulate the maturation of TGF-\(\beta\) (transforming growth factor \(\beta\)), which is known to play important roles in cancer progression.

Figure 1: The genetic (intergenic/intronic/exonic) annotation of ATAC-seq peaks, which quantify open chromatin regions. TSS(2kb +/\(-\)) corresponds to \(-2000\) to \(2000\) base pairs from a transcription start site.

## II Results

### Chromatin accessibility: whole view of the genome

The landscape of chromatin accessibility provides useful information for understanding the mechanisms that govern cell-type-specific gene expressions. As a preliminary step, we give an overview of the chromatin accessibility characterized by the ATAC-seq of healthy CD4\({}^{+}\) T cells, ATL cells, and HAM cells. To obtain the chromatin accessibility landscape, we performed ATAC-seq on HTLV-1-infected CD4\({}^{+}\) cells obtained from the peripheral blood of 29 ATL and 6 HAM cases. All samples selected for the ATAC-seq library preparation were at least 98% HTLV-1-infected cells. The ATAC-seq libraries were sequenced with an average of 44 million reads, resulting in a dataset comprising 1.3 billion and 556 million sequenced reads for ATL and HAM, respectively. The data quality was high in all cases, with mitochondrial read rates of 7.5% for ATL and 7.3% for HAM. For a comparison, we used ATAC-seq datasets of CD4\({}^{+}\) T cell samples from 5 healthy donors, downloaded from GEO accession GSE74912 [16].

Figure 2: (a) The number of ATAC-seq peaks (vertical axis) vs. indices (horizontal axis) classified by each functional annotation from 1 to 15, as shown in (b). (b) The number of ATAC-seq peaks classified into each functional annotation for healthy CD4\({}^{+}\) T cells, ATL cells, and HAM cells. \(N_{\text{CD4}^{+}\text{T}}\), \(N_{\text{ATL}}\), and \(N_{\text{HAM}}\): peak number of healthy CD4\({}^{+}\) T, ATL, and HAM, respectively. Note that each peak quantifies an open chromatin region.

To identify genome-wide accessible chromatin regions, for each of the three groups, we concatenated ATAC-seq reads for the different samples, where the sample number was 29 for ATL, 6 for HAM, and 5 for healthy CD4\({}^{+}\) T cells.
As explained in the Materials and Methods, we randomly selected 100M reads from the concatenated data of each group. We used the MACS2 algorithm to select the locations of peaks to quantify the open chromatin regions from the ATAC-seq datasets [17], finding a total of 178811, 89972, and 131609 peaks in ATL, HAM, and healthy CD4\({}^{+}\) T cells, respectively. The ENCODE consortium shows that 10% of peaks are localized near transcription start sites (TSSs), whereas the remaining 90% of peaks are mapped nearly equally to intronic and intergenic regions [18]. Consistent with these data, as shown in Fig. 1, about 10% of the ATAC-seq peaks overlap with the TSSs and their surrounding regions, whereas the majority of ATAC-seq peaks (about 85%) of healthy CD4\({}^{+}\) T cells, ATL cells, and HAM cells reside in intergenic or intronic regions. To determine the functional roles of the peaks in HTLV-1-infected cells and healthy CD4\({}^{+}\) T cells, we computed the overlapping ratio of these regions with specific genomic features, such as active TSS, enhancers, heterochromatin, etc. To assign a genomic feature to all genomic positions, we assumed a chromHMM 15-state model obtained from ([https://egg2.wustl.edu/roadmap/web_portal/chr_state_learning.html](https://egg2.wustl.edu/roadmap/web_portal/chr_state_learning.html)). We used the data of E043 for healthy CD4\({}^{+}\) T cells and E037 for HTLV-1-infected CD4\({}^{+}\) cells. Note that E037 is the model of CD4\({}^{+}\) memory T cells because a majority of ATL cases has been reported to show CD45RO\({}^{+}\), which is consistent with CD4\({}^{+}\) memory T cells [19]. As shown in Fig. 2, for HTLV-1-infected CD4\({}^{+}\) cells, the number of peaks compared with healthy CD4\({}^{+}\) T cells was proportionally lower in categories related to Enhancer \((6,7,11,12)\) and Heterochromatin \((9)\), but it was higher for the category of Quiescent/Low \((15)\) only for ATL cases. This observation suggests that, compared with healthy CD4\({}^{+}\) T cells, distinct enhancer mechanisms in HTLV-1-infected cases are correlated with the distinct chromatin structures.

### Increased chromatin accessibility around transcriptional start sites (TSSs) in ATL

To determine how the chromatin structures observed in the HTLV-1-infected cases are statistically characterized depending on the positions in the genome, we examined the positions of the reads from the ATAC-seq data. We plotted a histogram \(\widetilde{\rho}_{\nu}(z)\) of the reads as a function of their positions \(z\) relative to TSSs for cell type \(\nu\), where \(\nu\) is an ATL, HAM, or healthy CD4\({}^{+}\) T cell type. Note that the positions of the TSSs and coding regions of all genes were obtained from the human genome (hg19). For technical details of the histogram, see Materials and Methods. As shown in Fig. 3a, the tail parts of the histogram for ATL cases take higher values compared with HAM and healthy CD4\({}^{+}\) T cells, whereas the latter two cases showed more similar forms as a whole.
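A rough sketch of how such a TSS-centered read histogram can be computed is shown below; the binning, the nearest-TSS convention, and all names are our assumptions, since the paper defers the details to Materials and Methods.

```python
import numpy as np

def tss_histogram(read_pos, tss_pos, window=2000, bin_size=50):
    """Histogram of read positions relative to the nearest TSS, a rough analog
    of rho(z); positions are assumed to lie on one chromosome, and strand is
    ignored."""
    tss = np.sort(np.asarray(tss_pos))
    idx = np.clip(np.searchsorted(tss, read_pos), 1, len(tss) - 1)
    left = read_pos - tss[idx - 1]     # signed distance to TSS on the left
    right = read_pos - tss[idx]        # signed distance to TSS on the right
    z = np.where(np.abs(left) < np.abs(right), left, right)
    z = z[np.abs(z) <= window]
    bins = np.arange(-window, window + bin_size, bin_size)
    counts, edges = np.histogram(z, bins=bins)
    return counts / max(len(z), 1), edges

# Toy usage: synthetic reads clustered around synthetic TSSs
rng = np.random.default_rng(0)
tss = np.sort(rng.integers(0, 10_000_000, size=200))
reads = (tss[rng.integers(0, 200, size=5000)]
         + rng.normal(0, 500, 5000)).astype(int)
hist, edges = tss_histogram(reads, tss)
```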
To elucidate the statistics of the chromatin structures around the TSSs, we also focused on the fragments, which can be reconstructed from the reads data; both ends of a fragment correspond to a pair of reads. Specifically, we investigated the position-dependent accumulation of the fragments, which can be an estimate of nucleosome positioning. We plotted a heat map \(F_{\nu}^{\Delta,\ell}(z,\ell)\) in which the mid-point of each fragment relative to TSSs is placed on the horizontal axis as \(z\) and the length of each fragment is placed on the vertical axis as \(\ell\) [20]. For technical details of the heat map, see Materials and Methods. As shown in Fig. 3b, healthy CD4\({}^{+}\) T cells and HTLV-1-infected CD4\({}^{+}\) cells from HAM samples show a pattern of enriched nucleosome-free fragments (\(\ell<100\) bp) and mono-nucleosome fragments (\(\ell=180\sim 247\) bp) surrounding the TSSs, where the thresholds for the length of a fragment are 100 bp, 180 bp, and 247 bp based on a previous study [21]. ATL samples showed less enrichment of nucleosome-free and mono-nucleosome fragments. These observations suggest that the statistics of open chromatin regions and nucleosome positioning around the TSSs in ATL cases are distinct from HAM cases and healthy CD4\({}^{+}\) cells, both of which again showed more similar forms. We continue to elaborate on characteristic behaviors of ATL cells distinct from the other two cases.

### Giant sample-to-sample fluctuations of chromatin accessibility in ATL

To examine the distribution of open chromatin regions, we applied a systematic clustering algorithm that robustly detects open chromatin regions relevant to classifying the immunophenotype [15]. This algorithm characterizes open/closed chromatin regions in the following way. First, MACS2 uses the ATAC-seq reads data for a given sample \(s\) as the input and outputs a collection of peak data including the locations of peaks with their \(p\)-values. These peaks are considered as open regions of chromatin for sample \(s\). The peaks are ordered in ascending \(p\)-values, and the first \(M\) peaks are taken. The set of \(M\) peaks is written as \(\hat{g}_{s}^{M}=((\gamma_{k},\alpha_{k},\beta_{k}),p_{k})_{k=1}^{M}\), where the location of the \(k\)-th peak is the region \((\alpha_{k},\beta_{k})\) in the \(\gamma_{k}\)-th chromosome, and \(p_{k}\) is the \(p\)-value of the \(k\)-th peak. Then, the set of the top \(M=64000\) peaks is determined as the optimal set of open chromatin regions, which effectively classifies the immunophenotypes of the samples [15]. In this procedure, peaks with high \(p\)-values (unreliable peaks) can be treated as noise and ignored in later analysis. First, we tried to capture the genomic positions where chromatin tends to be open for at least one of the ATL, HAM, and healthy CD4\({}^{+}\) T types. To quantify such chromatin regions, we constructed a new reference set \(g_{0}\) of peaks as follows: we concatenated all the reads from all the samples with the same cell type. Then, we used MACS2 to obtain a set of peaks for a cell type of the concatenated reads. Finally, we merged all the peaks obtained from the three cell types. For the explicit construction of the reference set \(g_{0}\), see Materials and Methods. Next, we classified the reference set \(g_{0}\) into the set of all peaks overlapping gene-coding regions, which we denoted as \(\mathbb{G}\), and the set of all peaks overlapping non-coding regions, which we denoted as \(\mathbb{G}^{c}\).
To quantify the open chromatin regions characterized by the peaks \(\hat{g}_{s}^{M}\) of sample \(s\) in each \(k\)-th peak from the reference set \(g_{0}\), we computed the length of overlapped peaks \(O_{k}^{\mathrm{L}}(g_{0},\hat{g}_{s}^{M})\) between \(g_{0}\) and \(\hat{g}_{s}^{M}\) in the peak location \((\alpha_{k},\beta_{k})\) picked up from the reference region \(\mathbb{L}\in\{\mathbb{G},\mathbb{G}^{c},\mathbb{G}\cup\mathbb{G}^{c}\}\). We set \(M=64000\) as the provisionally optimal number for the immunophenotype classification [15]. Then, we focused on the average and variance of \(O_{k}^{\mathrm{L}}(g_{0},\hat{g}_{s}^{M})\). For details of the calculations, see Materials and Methods. As shown in Fig. 4, healthy CD4\({}^{+}\) T, HAM, and ATL cells showed similar behaviors at large average lengths of the overlapped peaks. For small average lengths, CD4\({}^{+}\) T cells showed fewer peaks compared with ATL cases. Additionally, as shown in Fig. 5a, we found that healthy CD4\({}^{+}\) T and HAM cases had similar sample-to-sample fluctuations, and ATL cases had a higher frequency at variances larger than \(10^{5}\). As shown in Fig. 5b and Fig. 5c, the larger sample-to-sample fluctuations in the ATL cases were found in both non-coding regions and coding regions. In contrast, Fig. 5b shows apparent gaps between ATL cases and the other two cases at intermediate variances only in coding regions; ATL cases had a higher frequency at intermediate variances around \(10^{3}\) compared with the other two cases. The above analysis does not clearly distinguish healthy CD4\({}^{+}\) T and HAM cells, in particular with respect to the sample-to-sample fluctuations of the open chromatin regions. Therefore, we mixed the datasets of 5 cases of HAM cells and 5 cases of healthy CD4\({}^{+}\) T cells to give a set \(\mathbb{S}\). As shown in Fig. 5d, the variance in the mixed dataset was larger than that of HAM cells or healthy CD4\({}^{+}\) T cells alone. This finding indicates the distributions of open chromatin regions are different between healthy CD4\({}^{+}\) T and HAM cells, although the two distributions showed similar sample-dependence. As a comparison, Fig. 5d and Fig. 5a show that the sample-to-sample fluctuations of ATL cells are larger than the fluctuations of the mixed data except for the tail, where the samples are scarce. Thus, ATL cases have a higher frequency at larger sample-to-sample fluctuations at the whole genome level and a higher frequency at intermediate sample-to-sample fluctuations only in coding regions. On the other hand, the chromatin structures in HAM cases show less sample-dependence, which is similar to CD4\({}^{+}\) T cells, implying the existence of a certain trend common to all the HAM samples.
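The overlap computation behind these statistics can be sketched as follows, assuming peaks are (chromosome, start, end) triples; this is our minimal reading of \(O_{k}\), with the actual details again deferred to Materials and Methods.

```python
import numpy as np

def overlap_length(ref_peak, sample_peaks):
    """Total overlap length between one reference peak (chrom, start, end)
    and a sample's peak set, assuming the sample's peaks are disjoint."""
    chrom, a, b = ref_peak
    return sum(max(0, min(b, e) - max(a, s))
               for c, s, e in sample_peaks if c == chrom)

def overlap_stats(ref_peaks, samples):
    """Per reference peak, mean and variance across samples of the overlap
    length, i.e. the averages and sample-to-sample fluctuations discussed
    around Fig. 4 and Fig. 5."""
    O = np.array([[overlap_length(rp, sp) for rp in ref_peaks]
                  for sp in samples])   # shape (n_samples, n_ref_peaks)
    return O.mean(axis=0), O.var(axis=0)

# Toy usage: one reference peak against three samples
ref = [("chr1", 100, 300)]
samples = [[("chr1", 150, 250)], [("chr1", 90, 120)], [("chr1", 400, 500)]]
mean_len, var_len = overlap_stats(ref, samples)
```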
### mRNA: distinctively expressed histone modifications in ATL

To analyze gene expressions, we examined the RNA-seq data of ATL cases. Unless noted otherwise, the samples used were 8, 10, 21, 24 in Table 1, which had common properties in terms of immunophenotypes and symptoms, as explained below. We analyzed the gene expression pattern of HTLV-1-infected CD4\({}^{+}\) cells obtained from the peripheral blood of 4 ATL cases and of 4 healthy CD4\({}^{+}\) T cells. The reads count of the RNA-seq data was normalized by TMM normalization [22] and used as the input data. For technical details, see Materials and Methods. As shown in Fig. 6a, a principal component analysis (PCA) shows that the gene expression patterns differ significantly between ATL and healthy CD4\({}^{+}\) T cells. In ATL cells, there were 1289 genes up-regulated based on the condition of \(\log_{2}\mathrm{FC}>3\) and \(p\)-value \(<0.01\) and 944 genes down-regulated based on the condition of \(\log_{2}\mathrm{FC}<-3\) and \(p\)-value \(<0.01\), where FC is the fold change of the gene expression of ATL cells relative to healthy CD4\({}^{+}\) T cells. In addition, as shown in Fig. 6b, a Gene Ontology (GO) analysis using enrichR [23] revealed that in ATL cases, the up-regulated genes are enriched in histone modifications. Note that the combined scores of the genes with down-regulated expression are lower than those of up-regulated genes enriched for histone modification. Further, as shown in Fig. 6c, many histone-related genes, such as _HIST1H2AH_, _HIST1H3C_, and _HIST1H4C_, are significantly up-regulated in ATL cases, where all genes beginning with HIST in the first 4 letters are regarded as histone-related genes. For details of the analysis, see equation (1). This observation led us to consider a correlation between the anomalous chromatin properties of ATL cells shown above and the gene expression levels found here.

### Correlation between chromatin accessibility and mRNA: exclusive mRNA expressions in ATL

To gain a direct quantification of how chromatin structures are correlated with gene expressions, we performed an integrated analysis of the ATAC-seq and RNA-seq data for ATL cells. We used our algorithm to classify the top \(M\) peaks into open chromatin regions, where \(M\) is originally determined as \(M=64000\), such that the clustering of the ATAC-seq samples is closest to the appropriate immunophenotype [15]. For a comparison, we also used all peaks outputted from MACS2 as \(M=\infty\). We tried to find genes to which at least one peak from a given set of all peaks is assigned. Note that each peak can be assigned to zero or more genes. Here, the RNA-seq data analyzed by edgeR [24, 25] was used to examine the expression of each gene in ATL cells and healthy CD4\({}^{+}\) T cells. For details of the calculations, see Materials and Methods. We considered the set of cell types \(\mathbb{T}\) as

\[\mathbb{T}=\{\text{HSC},\text{CD4}^{+}\text{T},\text{CD8}^{+}\text{T},\text{ NK},\text{Mono},\text{ATL}\},\]

where HSC, NK, and Mono are hematopoietic stem cell, natural killer cell, and monocyte, respectively. We computed the fold change \(\text{FC}_{i}(t,t_{0})\) of the expression of gene \(i\) between types \(t,t_{0}\in\mathbb{T}\) as

\[\text{FC}_{i}(t,t_{0}):=\frac{\overline{R}_{i}(t)}{\overline{R}_{i}(t_{0})}, \tag{1}\]

Figure 6: RNA-seq statistics: (a) A PCA of RNA-seq for healthy CD4\({}^{+}\) T cells and ATL cells, where the percentages are the first and second contribution ratios. (b) A GO analysis of RNA-seq detecting the top 5 up-regulated gene expressions (top) and the top 5 down-regulated expressions (bottom) in terms of cell function for ATL. (c) A histogram of the log-fold change calculated in (16) and (17) between the gene expressions (RNA-seq) of ATL cases and of healthy CD4\({}^{+}\) T cells for all genes (blue line) vs. 61 histone-related genes (orange line). To select relevant genes, first we chose 69 genes whose names start with HIST. Then, we removed 8 genes, including 7 genes for which healthy CD4\({}^{+}\) T cells had no peak and 1 gene for which ATL cells had no peak.
Figure 7: Histograms \(F_{t,t_{0}}^{M}(P;\Delta)\) of the expression of the selected genes defined in (18), where bin width \(\Delta\) is equal to 0.3. The histograms characterize the correlation between gene expressions quantified by RNA-seq and ATAC-seq peaks selected systematically. (a) \(M=\infty\) (orange line), 64000 (blue line) for cell types with \((t,t_{0})=(\mathrm{CD4^{+}T},\mathrm{Mono})\). (b) \(M=\infty\), 64000 for cell types with \((t,t_{0})=(\mathrm{CD4^{+}T},\mathrm{ATL})\). (c) \(M=\infty\), 64000 for cell types with \((t,t_{0})=(\mathrm{Mono},\mathrm{ATL})\). (d) \(M=\infty\), 64000 for cell types with \((t,t_{0})=(\mathrm{CD4^{+}T},\mathrm{CD8^{+}T})\). (e) \(M=\infty\), 64000 for cell types with \((t,t_{0})=(\mathrm{HSC},\mathrm{ATL})\). (f) \(M=\infty\), 64000 for cell types with \((t,t_{0})=(\mathrm{HSC},\mathrm{CD4^{+}T})\).

In equation (1), \(\overline{R}_{i}(t)\) denotes the average of the normalized reads count of the RNA-seq data in gene \(i\) over all samples with type \(t\). We then focused on the log fold change of the gene expression \(P=\log_{2}\mathrm{FC}_{i}(t,t_{0})\), where we only take into account the expression of a gene \(i\) in which at least one peak within the top \(M\) peaks from \(\hat{g}_{s}^{M}\) was located for both types \(t,t_{0}\) of the samples. For details of the calculations, see Materials and Methods. As shown in Fig. 7, when \(M=\infty\), the distribution is close to unimodal for almost all pairs of cell types. When \(M=64000\), the distributions significantly depend on the pairs of cell types. For example, as shown in Fig. 7b, 7c, and 7e, the distributions related to ATL cases are sharply bimodal-like. In particular, the pair of ATL and CD4\({}^{+}\) T cells was found to be distinct (Fig. 7b); the events around \(P=0\) are completely undetected. Thus, the above observations suggest that ATL cells have exceptionally distinct structures in terms of the correlation between chromatin accessibility and mRNA compared with CD4\({}^{+}\) T cells.
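The quantity histogrammed in Fig. 7 can be sketched as follows; the pseudocount `eps` is our assumption to guard against zero denominators and is not specified in the text.

```python
import numpy as np

def log2_fc_histogram(expr_t, expr_t0, has_peak, bin_width=0.3, eps=1.0):
    """Histogram of P = log2 FC_i(t, t0) (eq. (1)) restricted to genes with at
    least one assigned top-M peak; expr arrays are (samples, genes) tables of
    normalized counts, and eps is a pseudocount guarding against division by
    zero (an assumption, not specified in the text)."""
    mean_t = expr_t.mean(axis=0) + eps     # R_bar_i(t)
    mean_t0 = expr_t0.mean(axis=0) + eps   # R_bar_i(t0)
    P = np.log2(mean_t / mean_t0)[has_peak]
    bins = np.arange(P.min(), P.max() + bin_width, bin_width)
    return np.histogram(P, bins=bins)

# Toy usage with random TMM-normalized-like counts for 1000 genes
rng = np.random.default_rng(0)
counts, edges = log2_fc_histogram(rng.gamma(2.0, 50.0, size=(4, 1000)),
                                  rng.gamma(2.0, 50.0, size=(5, 1000)),
                                  rng.random(1000) < 0.3)
```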
Using this algorithm, we calculated the Hamming distances between the peak-based binarized genome of ATL cells and that of hematopoietic cells from healthy donors. For this purpose, we used ATL samples and 77 ATAC-seq datasets from 13 human primary blood cell types. As summarized in Table 1, the majority of ATL samples are close to CD4\({}^{+}\) T cells, as expected from the above analysis of the past cell status. We also found that the ATAC-seq patterns of some ATL cases are close to myeloid cells such as erythroid cells and monocytes. To ascertain whether the mRNA expression in the ATL cells reflects the characteristics of myeloid cells, we analyzed the RNA-seq data from healthy CD4\({}^{+}\) T cells and HTLV-1-infected CD4\({}^{+}\) cells from 4 ATL cases (samples 8, 10, 21, 24 in Table 1). We used the condition \(\log_{2}\mathrm{FC}>1\) and \(p\)-value \(<0.01\) to identify up-regulated genes and found two candidates: CD71 (TFRC), which is ubiquitously expressed by erythroid precursors [27, 28], and KLF4, which is highly expressed in myeloid cells and essential for monocyte differentiation [29].

Figure 8: Read counts overlapping with the TREC region divided by read counts overlapping with the TRA gene region.

### Chromatin-based systematic selection of key genes in ATL cases

To identify genes that are specifically expressed in ATL cells, we investigated whether there are characteristic open chromatin regions in gene coding regions that are common to all ATL cells but absent from hematopoietic cells derived from healthy individuals. We therefore compared the chromatin accessibility between 29 ATL samples and 77 ATAC-seq datasets from 13 human primary blood cell types. We applied our algorithm to detect such key genes, using \(M\) as a parameter for clustering the ATAC-seq data corresponding to immunophenotypes [15]. First, we defined a subset of regions of the top \(M\) peaks where all ATL samples had peaks but none of the 13 human primary blood cell types from healthy donors did. We call this subset the ATL-specific open regions. Second, we detected the genes in which at least one ATL-specific open region was located for a given \(M\). As shown in Fig. 9, the number of key genes was 0 at \(M=2000\), 1 at \(M=4000\), 3 to 4 between \(M=8000\) and 56000, and then dropped back to 1 from \(M=64000\) onward. Among these genes, we picked those that persisted over intervals of \(M\geq 8000\). Concretely, the TLL1 (Tolloid-like 1) gene appeared from \(M=8000\)-16000, the EVC (Ellis-van Creveld) gene and the CRMP1 (Collapsin response mediator protein 1) gene appeared from \(M=16000\)-48000, the TNRC6A (Trinucleotide Repeat Containing Adaptor 6A) gene appeared from \(M=32000\)-48000, and the UST (Uronyl-2-sulfotransferase) gene appeared from \(M=64000\). As a reference, Fig. 10a shows the locations of the ATAC-seq reads around the TLL1 gene. Note that the detected genomic region for EVC and CRMP1 is the same because the two genes overlap. Thus, our selection leaves only a few candidate key genes. Consistently, EVC has been reported to be overexpressed in ATL, where it plays an important role in cellular Hedgehog activation [30]. UST was also highly expressed in ATL cases, though the relationship between the function of UST and ATL has not been explicitly clarified [31]. On the other hand, to the best of our knowledge, TLL1 and TNRC6A have not been studied in this context.
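The selection of ATL-specific open regions described above amounts to a set intersection over samples; the following minimal Python sketch illustrates it under the assumption that peak calls have already been binarized over a shared list of candidate regions (all inputs and the region-to-gene map are hypothetical placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical placeholder data, as above: binary peak matrices restricted to
# each sample's top-M peaks, over a shared list of candidate regions.
atl = rng.integers(0, 2, (29, 1000))
healthy = rng.integers(0, 2, (77, 1000))
region_genes = {12: ["TLL1"], 40: ["EVC", "CRMP1"]}   # assumed region -> gene map

# ATL-specific open regions: open in every ATL sample, closed in every healthy one.
specific = np.flatnonzero(atl.all(axis=0) & ~healthy.any(axis=0))

# Key genes: genes in which at least one ATL-specific open region is located.
key_genes = sorted({g for r in specific for g in region_genes.get(r, [])})
print(len(specific), key_genes)
```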
\begin{table} \begin{tabular}{|r|c|c|c|c|} \hline Sample & first label & second label & third label & Clinical subtype \\ \hline 1 & Ery & Mono & CLP & Acute \\ 2 & Mono & CD4\({}^{+}\) T & B & Chronic \\ 3 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & NK & Chronic \\ 4 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & B & Acute \\ 5 & Mono & Ery & CD4\({}^{+}\) T & Chronic \\ 6 & Ery & Mono & B & Chronic \\ \(\vdots\) & & & & \\ 28 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & B & Acute \\ 29 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & B & Acute \\ \hline \end{tabular} \end{table} Table 1: Clustering results of ATAC-seq from ATL cases in terms of immunophenotypes obtained using the method in [15]. ATAC-seq information for ATL samples is DRR250714 for ATL8, DRR250710 for ATL10, DRR250711 for ATL21, DRR250712 for ATL24, DRR250713 for ATL4, DRR250715 for ATL2, and DRR250716 for ATL5. Other information about the sample labels will be added in the next version.

While TLL1 is known to be necessary for normal septation of the heart [32], a more recent report found that it is associated with the development of hepatocellular carcinoma after the eradication of HCV [33]. Further, TLL1 is a member of the BMP1/TLD (bone morphogenetic protein 1/tolloid)-like proteinase family; BMP1 controls latent TGF-\(\beta\) activation via the cleavage of LTBP1 (latent TGF-\(\beta\) binding protein-1) [34], and TGF-\(\beta\) plays important roles in cancer progression. Thus, we picked TLL1 as a promising candidate among the genes expressing ATL-specific functions.

### TLL1 can strongly regulate TGF-\(\beta\)

We next considered the gene expression of TLL1 in ATL cells and the effect of TLL1 on the maturation process of TGF-\(\beta\) in HepG2, a TGF-\(\beta\)-responsive cell line [35]. First, as shown in Fig. 10c, we performed real-time PCR, which showed that TLL1 is expressed in ATL cases but not in the peripheral blood mononuclear cells of healthy donors. This result is consistent with the Human Protein Atlas, which shows that TLL1 mRNA is not detected in most adult tissues, including immune cells and any hematopoietic lineage. By analyzing the RNA-seq data, we also confirmed that TLL1 mRNA is not expressed in normal hematopoietic cells, whereas it was expressed in all examined ATL cases. The same was not true for the TNRC6A gene, which, according to the RNA-seq data, did not show any systematic expression change between ATL cases and normal hematopoietic cells. Next, we considered the relationship between TLL1 and the maturation process of TGF-\(\beta\). TLL1 has two mRNA isoforms: TLL1 isoform 2 lacks many exons from the 3' end of TLL1 isoform 1. Thus, we asked if both isoforms regulate TGF-\(\beta\) in a manner similar to BMP1. To investigate this possibility, as shown in Fig. 11a, we performed a luciferase assay using the pre-mature form of TGF-\(\beta\) co-expressed with either TLL1 isoform 1 or 2 in the HepG2 cell line. As shown in Fig. 11b, we found that, compared to the sample without TLL1 (Case 2), TLL1 isoform 1 (Case 3) activates the pre-mature form of TGF-\(\beta\) for maturation, whereas TLL1 isoform 2 (Case 4) represses the maturation. It should be noted that the difference in luciferase activity between isoform 1 (Case 3) and isoform 2 (Case 4) with pre-mature TGF-\(\beta\) approximated the difference between the TLL1-less condition with pre-mature TGF-\(\beta\) (Case 2) and without pre-mature TGF-\(\beta\) (Case 1). Thus, the results suggest that TLL1 is able to strongly regulate TGF-\(\beta\) depending on the isoform expression ratio.
## III Discussion

In this paper, we statistically characterized the anomalous chromatin accessibility and gene expression of HTLV-1-infected cells and healthy CD4\({}^{+}\) T cells at the whole genome level. Our analysis suggests that, compared to healthy CD4\({}^{+}\) T cells, ATL cells have the following properties: the chromatin accessibility increases near TSSs, a higher frequency of larger sample-to-sample fluctuations at the whole genome level, and a higher frequency of intermediate sample-to-sample fluctuations in gene coding regions. Consistently, histone-related genes were up-regulated. The expression of the genes systematically selected by the chromatin accessibility was found to be distinct from healthy CD4\({}^{+}\) T cells but not other hematopoietic cell types that we have studied. Further, whereas the immunophenotype determined by the systematically selected open chromatin regions was classified to be near CD4\({}^{+}\) T cells for most samples, some samples were classified as myeloid cells.

Figure 9: (a) The number of selected genes over increments of 8000 for \(M\) from 8000 to 80000. (b) The width of the selected genes.

Based on the above integrative analysis of chromatin accessibility and gene expression, we found that there are chromatin regions which are open in all the ATL cases but closed in all the analyzed samples of the 13 hematopoietic cell types derived from healthy donors. One of the genes overlapping with the chromatin regions that satisfy such conditions is TLL1, which was experimentally shown to have a large potential to regulate TGF-\(\beta\). Contrary to the ATL cases, the statistics of the chromatin in HAM cells resembled those of healthy CD4\({}^{+}\) T cells, including the sample-to-sample fluctuations. This observation implies that there is a certain sample-independent trend in the chromatin structure of HAM cases. It should be noted that we were unable to analyze a large number of samples due to the difficulty in obtaining samples for given clinical conditions. Thus, some quantities, such as the frequencies shown in Fig. 6 and Fig. 7, were not estimated with full statistical validity. To validate the hypothesis about the uniqueness of ATL cells across different scales from chromatin and transcription to immunophenotypes, more samples should be used in future studies. Our finding about ATL samples might motivate us to consider a rather general relationship between increased chromatin accessibility and the onset of leukemia. Indeed, in a previous study of Acute Myeloid Leukemia (AML), it was reported that mutations in cohesin genes increase chromatin accessibility, which controls the activity of transcription factors leading to leukemogenesis [21]. It was also reported that HMGN1 amplification is associated with increased chromatin accessibility; it confers a transcriptional and chromatin phenotype associated with stem cells and leukemia [36]. It remains for future studies to check how the relationship between increased chromatin accessibility and the onset of leukemia can be generalized beyond the cases discussed above, such as ATL and AML.

Figure 10: (a,b) Histogram of ATAC-seq reads around the TLL1 region. (c) Expression of TLL1 relative to GAPDH (Glyceraldehyde-3-phosphate dehydrogenase) for ATL sample \(s\). The relative expression of sample \(s\) is defined as \(2^{\Delta C_{s}^{0}}\), where \(\Delta C_{s}^{0}\) is computed using the \(\Delta\Delta\)Ct method; for details, see (19). N.D. stands for no detection of TLL1.
In the following, let us discuss our additional findings, though they are preliminary results. First, to compare ATL cases with another type of leukemia in terms of immunophenotypes, we analyzed the chromatin structure of cutaneous T-cell lymphoma (CTCL) using the ATAC-seq data of CTCL (GSE85853) [15], which is reported to have a clinical and histopathological phenotype similar to that of ATL [37]. As shown in Table 2, we found that the chromatin structures in some CTCL cases are closer to myeloid cells than to CD4\({}^{+}\) T cells, though CTCL is conventionally classified as a T-cell leukemia. Especially in the cases of Patients 11 and 60, both of whom are romidepsin responders, the chromatin structure changed from myeloid cell-like to CD4\({}^{+}\) T-like after the drug treatment started on day 0. Although the molecular mechanisms underlying this process are not understood, these results suggest that the change in immunophenotype reflects a molecular response to the treatment. This observation could shed light on finding better therapeutic targets and predicting drug response. Second, looking at the ATAC-seq data further, a footprint analysis for the identification of differential motifs revealed that ETS1, IRF2, and RUNX2 had deeper footprints and higher DNA accessibility at the flanking locations of their motifs in healthy CD4\({}^{+}\) T cells, while NRF1, KLF4, and KLF9 had deeper footprints in ATL cells (Fig. 12) [38]. These observations suggest that transcription factors such as NRF1, KLF4, and KLF9 play an important role in ATL. Third, to understand the function of TLL1, we conducted experimental studies on MT-2, an ATL cell line that does not express TLL1. We prepared three samples of MT-2: one transduced with an empty vector, another with TLL1 isoform 1, and the third with TLL1 isoform 2. We analyzed the RNA-seq data from the three samples by computing the gene expression ratios between them. Among the 22963 genes, we picked the genes which have a nonzero read count in MT-2 cells transduced with an empty vector. Some of these genes showed significantly altered expression depending on the type of isoform. As shown in Table 3, four genes were significantly up-regulated when TLL1 isoform 1 was transduced: CCR6 is related to the regulation of Treg migration [39], microRNA-155 (MIR155) modulates Treg cell differentiation and its expression is up-regulated in HTLV-1 transformed cells [40; 41; 42], the chemokine CCL3 regulates myeloid differentiation [43], and POSTN has been reported to be involved in TGF-\(\beta\) activation [44]. As for the four genes significantly up-regulated when TLL1 isoform 2 was transduced, all were globin genes. These findings reiterate the dependence of the function of TLL1 on its isoforms in ATL cells. As an additional note, Fig. 10b suggests that the chromatin regions around the TLL1 locus tend to be open also in the HAM cases. This observation indicates that open chromatin regions around TLL1 are not the only cause of leukemia onset. Rather, it suggests that open chromatin regions are potentially caused by the infection itself and related to the latent period or expansion of the virus. This study is the first comprehensive analysis of open chromatin structures in ATL samples.

Figure 11: (a) Schematic picture of pre-pro TGF-\(\beta\) and mature TGF-\(\beta\). (b) TGF-\(\beta\) activation was measured by 3TP-Lux protein activation, which depended on the TLL1 isoform type. The \(p\)-value is 0.0028 (\(**\)) for Case 2 vs. Case 3 and 0.019 (\(*\)) for Case 2 vs. Case 4 (t-test).
The findings will deepen our understanding of ATL pathogenesis.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline SRR Number & first & second & third & Patient tag \& Time tag & HDAC responder \\ \hline 4044872 & Ery & Mono & CLP & Patient-11 on 0th day & \\ 4044873 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & Ery & on 0th day & \\ 4044874 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & NK & on 7th day & \(+\) \\ 4044875 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & NK & on 7th day & \\ 4044876 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & B & on 14th day & \\ 4044877 & Ery & CD4\({}^{+}\) T & Mono & on 14th day & \\ \hline 4044878 & Ery & Mono & CLP & Patient-20 on 7th day & \(-\) \\ \hline 4044879 & Ery & Mono & CLP & Patient-39 on 0th day & \\ 4044880 & Ery & Mono & CLP & on 0th day & \(-\) \\ 4044881 & Ery & Mono & CLP & on 7th day & \\ 4044882 & Ery & Mono & CLP & on 7th day & \\ \hline 4044885 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & B & Patient-59 on 7th day & \(+\) \\ 4044886 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & B & on 7th day & \\ \hline 4044887 & Ery & Mono & CD4\({}^{+}\) T & Patient-60 on 0th day & \(+\) \\ 4044888 & CD4\({}^{+}\) T & B & CD8\({}^{+}\) T & on 7th day & \\ \hline 4044889 & Ery & Mono & CLP & Patient-61 on 0th day & \(+\) \\ \hline 4044890 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & B & Patient-62 on 0th day & \(+\) \\ 4044891 & CD4\({}^{+}\) T & Ery & CD8\({}^{+}\) T & on 0th day & \\ \hline 4044892 & CD4\({}^{+}\) T & CD8\({}^{+}\) T & NK & Patient-1366 on 0th day & \(+\) \\ \hline \end{tabular} \end{table} Table 2: Clustering results of ATAC-seq from 9 CTCL cases in terms of immunophenotype as a function of time, using the method in [15]. The histone deacetylase inhibitor (HDACi) was romidepsin. \(+\) and \(-\) denote positive and negative response, respectively [37].

\begin{table} \begin{tabular}{|c|c|c|} \hline Gene symbol & RE of type 1 & RE of type 2 \\ \hline CCL3 & 14.53 & 2.94 \\ \hline CCR6 & 11.70 & 2.85 \\ \hline MIR155 & 11.30 & 2.44 \\ \hline POSTN & 11.06 & 2.49 \\ \hline HBG2 & 0.058 & 15.00 \\ \hline HBB & 0.15 & 12.29 \\ \hline HBA2 & 0.14 & 5.79 \\ \hline HBA1 & 0.09 & 5.04 \\ \hline \end{tabular} \end{table} Table 3: Relative expression (RE) of genes in TLL1-transduced MT-2 cells over MT-2 cells transduced with an empty vector. Type 1 (2) corresponds to MT-2 cells transduced with TLL1 isoform 1 (isoform 2). Genes were selected if the RE of type 1 was larger than 11 and the RE of type 2 was smaller than 3, or if the RE of type 1 was smaller than 1 and the RE of type 2 was larger than 5.

Figure 12: A footprint analysis of transcription factors. (a)-(f) Distances from motifs (horizontal axis) vs. the averaged number of reads over all parts of a given motif (vertical axis) outputted from HINT-ATAC [38].

## IV Methods and materials

### Sequencing sample preparation

Peripheral blood mononuclear cells from ATL patients, HAM patients, and HTLV-1 carriers were thawed and washed with PBS containing 0.1% BSA. To discriminate dead cells, we used a LIVE/DEAD Fixable Dead Cell Stain Kit (Invitrogen). For cell surface staining, cells were stained with APC anti-human CD4 (clone: RPA-T4) (BioLegend) and anti-SynCAM (TSLC1/CADM1) mAb-FITC (MBL) antibodies for 30 minutes at 4 °C, followed by washing with PBS. HTLV-1-infected cells (CADM1\({}^{+}\) and CD4\({}^{+}\)) were purified by sorting with a FACS Aria (Beckman Coulter) to reach 98-99% purity. Data were analyzed using FlowJo software (Treestar).
Soon after the sorting, 10000-50000 HTLV-1-infected cells were centrifuged and used for ATAC-seq. Total RNA was isolated from the remaining cells using an RNeasy Mini Kit (Qiagen). Library preparation and high-throughput sequencing were performed at Macrogen Inc. (Seoul, Korea). The diagnostic criteria and classification of the clinical subtypes of ATL were performed as previously described [45]. 77 ATAC-seq datasets from 13 human primary blood cell types were obtained from the Gene Expression Omnibus (GEO) with accession number GSE74912 [46], and RNA-seq datasets of CD4\({}^{+}\) T and CD8\({}^{+}\) T cells from healthy donors were obtained from GSE74246 [46]. The RNA-seq data for ATL samples can be downloaded from DDBJ (DNA Data Bank of Japan) with accession numbers DRR250721 for ATL8, DRR250717 for ATL10, DRR250718 for ATL21, and DRR250719 for ATL24. The ATAC-seq data for ATL samples can be downloaded from DDBJ with accession numbers DRR250714 for ATL8, DRR250710 for ATL10, DRR250711 for ATL21, and DRR250712 for ATL24.

### Pre-processing of ATAC-seq

High-throughput sequencing provides a set of reads as output. ATAC-seq reads were aligned using BWA version 0.7.16a with default parameters. SAMtools was used to convert SAM files into compressed BAM files and sort the BAM files by chromosome coordinates. PICARD software (v1.119) ([http://broadinstitute.github.io/picard/](http://broadinstitute.github.io/picard/)) was used to remove PCR duplicates using the MarkDuplicates option. Reads with mapping quality scores less than 30 were removed from the BAM files. For peak calling, MACS2 (v2.1.2) software was used with the options --nomodel --nolambda --keep-dup all -p 0.01. ATAC-seq tracks were visualized using the Integrative Genomics Viewer (IGV), and the footprinting analysis was performed using HINT-ATAC [38]. Note that the paired-end output of the sequencing was used to reconstruct the fragments, where the two paired reads correspond to the two ends of a fragment.

Figure 13: Pipeline of the data processing for ATAC-seq.

### Pre-processing of RNA-seq

RNA-seq data were aligned to the human reference genome hg19 using STAR 2.6.0c with the --quantMode GeneCounts function [47]. The RNA-seq data analysis was performed using edgeR, where the read counts of the RNA-seq data were normalized using TMM normalization [22] to be converted into pseudo read counts. Let \(n_{i}^{0}(s)\) be the read count of the RNA-seq data for each gene \(i\) for a given cell sample \(s\) and \(N(s):=\sum_{i}n_{i}^{0}(s)\) be the total read count over all genes. Using the TMM normalization with \(n_{i}^{0}(s)\) for gene \(i\) and sample \(s\) as the input data, one can obtain the normalization factor \(r(s)\) for a given sample \(s\) using the command calcNormFactors of edgeR. After acquiring \(r(s)\), the pseudo read count \(n_{i}(s)\) is calculated using the command estimateCommonDisp of edgeR, which we used as the starting point of the RNA-seq data analysis in the main text. An additional analysis was done to evaluate robustness. We computed the geometric mean \(N_{0}:=\left(\prod_{s\in\mathbb{S}}r(s)N(s)\right)^{1/|\mathbb{S}|}\), where \(\mathbb{S}\) is the set of samples and \(|\mathbb{S}|\) is the number of elements of \(\mathbb{S}\). We then checked whether the pseudo read count \(n_{i}(s)\) is close to the normalized read count \(n_{i}^{\prime}(s):=n_{i}^{0}(s)\frac{N_{0}}{r(s)N(s)}\) for sample \(s\).
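This robustness check can be written compactly; the following minimal Python sketch assumes the raw counts, the normalization factors \(r(s)\), and the pseudo counts \(n_{i}(s)\) are given (in practice, \(r(s)\) comes from calcNormFactors and \(n_{i}(s)\) from estimateCommonDisp; the placeholder values below are random):

```python
import numpy as np

rng = np.random.default_rng(0)
n0 = rng.poisson(50.0, (20000, 8)).astype(float)  # raw counts n0_i(s), placeholder
r = rng.uniform(0.8, 1.2, 8)                      # normalization factors r(s), placeholder
n = n0 * 1.0                                      # pseudo counts n_i(s), placeholder

N = n0.sum(axis=0)                                # total counts N(s) per sample
N0 = np.exp(np.log(r * N).mean())                 # geometric mean over all samples
n_prime = n0 * N0 / (r * N)                       # normalized counts n'_i(s)

print(np.abs(n_prime - n).max())                  # maximum deviation, cf. the text
```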
We found that the maximum deviation between \(n_{i}^{\prime}(s)\) and \(n_{i}(s)\) is smaller than 5 over all genes in the case of CD4\({}^{+}\) T vs. Mono in Fig. 7a. In this case, the effects of the differences between \(n_{i}^{\prime}(s)\) and \(n_{i}(s)\) are quite small, except for quantities related to genes with almost zero reads. Therefore, even if we use \(n_{i}^{\prime}(s)\) as the starting point of the analysis instead of the pseudo read count \(n_{i}(s)\), qualitatively the same conclusion as that with \(n_{i}(s)\) is expected to be obtained. A PCA was done using the covariance matrix of \(\log_{10}(n_{i}(s)+1)\), where the first and second principal components were calculated using the prcomp command with the option scale=FALSE in R.

### Cell lines and clinical samples

All ATL cell lines were cultured in RPMI 1640 medium supplemented with 10% FBS and antibiotics. HepG2 was cultured in DMEM. To construct MT-2 cells stably expressing TLL1 isoform 1 or 2, the coding sequence of each isoform was transduced using a lentivirus vector constructed as described in subsection IV.6.

### Real-time PCR

cDNA products were analyzed by real-time PCR using PowerUp SYBR Green Master Mix and a StepOnePlus Real-Time PCR System (Applied Biosystems) according to the manufacturer's instructions. Primer sequences for the GAPDH gene have been described previously [48], and the primer sequences for the TLL1 gene were 5'-TTGTTTTCTACGGGGAGCTATGG-3' and 5'-ATATCGCCCCAAAATACAGCG-3'. The relative quantification was calculated according to the method described in the Applied Biosystems ABI Prism 7700 SDS User Bulletin #2. Note that the ATL sample used for this experiment is not listed in Table 1.

### Lentiviral vector construction and transfection of recombinant lentivirus

The coding region of TLL1 isoform 2 was synthesized using a gBlocks Gene Fragment (Integrated DNA Technologies), which was used as the template for synthesizing TLL1 isoform 1 by PCR amplification. The TLL1 isoform 1 and 2 fragments were subcloned into pCS2-EF-MCS (gift from H. Miyoshi, RIKEN BioResource Center). An empty vector that expresses only hrGFP was used as the control for the lentiviral transduction. 293T cells at 80% confluence in a 10-cm dish were co-transfected with 10 \(\mu\)g lentivirus vector, 10 \(\mu\)g psPAX2, 5 \(\mu\)g pMD2.G, and PEI (polyethylenimine). 48 hours after the transfection, the supernatant containing the virus was collected and concentrated by ultracentrifugation. MT-2 cells were transfected with the lentivirus, and two weeks after the transduction, GFP-positive cells were purified by sorting with a FACS Canto. RNA was isolated using the Qiagen RNeasy Mini Kit and then used for the RNA-seq analysis.

### Luciferase assay

The coding region of human TGF-\(\beta\), whose length is 1173 bp, was generated by PCR amplification and subcloned into a pFUSE-hIgG1-Fc2 vector. HepG2 cells were plated on 12-well plates at \(1\times 10^{5}\) cells per well. After 24 hours, the cells were transfected with 50 ng/well of the luciferase reporter plasmid (p3TP-Lux) [49] and 5 ng/well of the Renilla luciferase control vector (phRL-TK), together with 35 ng/well of the TLL1 expression plasmid and 35 ng/well of the TGF-\(\beta\)-expressing plasmid or empty vector. Plasmids were transfected using TransIT-LT1 (Mirus) according to the manufacturer's instructions. After 48 hours, the cells were collected, and luciferase activities were measured using the Dual-Luciferase Reporter Assay Kit (Promega).
Relative luciferase activity was calculated as the ratio of firefly to Renilla luciferase activity. Three independent experiments, each with triplicate transfections, were performed, and typical results are shown.

### Explicit definitions of the computed quantities

We explicitly define the quantities discussed in the main text. First, we assume that the set of reads from the DNA of an ATAC-seq sample and the read count for each gene from an RNA-seq sample are given. A set of fragments for the ATAC-seq sample is also given by pairing reads; the two ends of a fragment correspond to a pair of reads. The positions of TSSs and the coding regions of all genes were obtained from the human genome (hg19) as a set of intervals on the genome. Therefore, a read from the ATAC-seq data is an interval \([x_{1},x_{2}]\) on the genome, which corresponds to a region including one edge of a fragment. A fragment has a length \(\ell\) and a location \(x\) defined as the mid-point of its two edges on the genome [15]. The reads from the RNA-seq data provide the read count \(n_{i}(s)\) for each gene \(i\), where \(s\in\mathbb{S}\) is the sample index. We denote by \(\mathbb{S}_{\nu}\) the set of all analyzed samples with type \(\nu\).

#### iv.2.1 The normalized number of reads in Fig. 3a:

Let us consider the number \(\rho_{s}(z)\) of ATAC-seq reads from a sample \(s\in\mathbb{S}\) located at position \(z\) from the nearest TSS. Then, we take the sample average among type \(\nu\) as \[\overline{\rho}_{\nu}(z):=\frac{1}{|\mathbb{S}_{\nu}|}\sum_{s\in\mathbb{S}_{\nu}}\rho_{s}(z), \tag{2}\] where \(\nu\in\{\text{CD4}^{+}\text{T},\text{HAM},\text{ATL}\}\). In Fig. 3a, we plot the normalized quantity \(\widetilde{\rho}_{\nu}(z)\) obtained after dividing \(\overline{\rho}_{\nu}(z)\) by the value at the TSS (\(z=0\)) such that \[\widetilde{\rho}_{\nu}(z):=\frac{\overline{\rho}_{\nu}(z)}{\overline{\rho}_{\nu}(0)}. \tag{3}\]

#### iv.2.2 The averaged number of fragments in Fig. 3b:

Let us consider the number of fragments \(\phi_{s}(z,\ell)\) from sample \(s\in\mathbb{S}\) satisfying the following two conditions: (1) their centers are located at \(z\), and (2) they have length \(\ell\). \(\overline{\phi}_{\nu}(z,\ell)\) denotes the sample average among type \(\nu\), \[\overline{\phi}_{\nu}(z,\ell):=\frac{1}{|\mathbb{S}_{\nu}|}\sum_{s\in\mathbb{S}_{\nu}}\phi_{s}(z,\ell), \tag{4}\] where \(\nu\in\{\text{CD4}^{+}\text{T},\text{HAM},\text{ATL}\}\). In Fig. 3b, we plot the histogram \(F_{\nu}^{\Delta,\xi}(z,\ell)\) with bin widths \(\Delta\) and \(\xi\) for \(z\) and \(\ell\), respectively, \[F_{\nu}^{\Delta,\xi}(z,\ell):=\sum_{z-\Delta/2\leq z^{\prime}<z+\Delta/2}\sum_{\ell\leq\ell^{\prime}<\ell+\xi}\overline{\phi}_{\nu}(z^{\prime},\ell^{\prime}). \tag{5}\]

#### iv.2.3 The reference set of peaks:

To analyze open chromatin regions, we used MACS2 with the reads from the ATAC-seq data as input. Concretely, we used MACS2 with the options --nomodel --nolambda --keep-dup all -p 0.01, which corresponds to \(p_{G}=10^{-2}\) [15]. This algorithm outputs the collection of peaks \(\hat{g}_{s}\) for a given sample \(s\) as candidates of open chromatin regions, which can be described as \[\hat{g}_{s}:=((\gamma_{k},\alpha_{k},\beta_{k}),p_{k})_{k\geq 1}, \tag{6}\] where \(\gamma_{k}\) is the chromosome number, \(\alpha_{k}\) is the starting point, and \(\beta_{k}\) is the ending point in terms of genome position, with \(p_{k}\) the \(p\)-value of the \(k\)-th peak. As in [15], \(p_{k}\leq p_{k^{\prime}}\) for \(k<k^{\prime}\).
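As an illustration of the quantities defined in the two subsections above, the normalized TSS profile of Eqs. (2) and (3) can be computed as in the following minimal Python sketch; the window size and the data layout (one array of integer, TSS-relative read positions per sample) are our own assumptions:

```python
import numpy as np

def normalized_tss_profile(positions_per_sample, zmin=-2000, zmax=2000):
    """Eqs. (2)-(3): sample-averaged read counts around TSSs, normalized at z = 0."""
    nbins = zmax - zmin + 1
    rho = np.zeros(nbins)
    for pos in positions_per_sample:                 # one integer position array per sample
        pos = pos[(pos >= zmin) & (pos <= zmax)]     # keep reads inside the window
        rho += np.bincount(pos - zmin, minlength=nbins)
    rho /= len(positions_per_sample)                 # Eq. (2): average over the samples of a type
    return rho / rho[-zmin]                          # Eq. (3): divide by the value at the TSS

# placeholder usage with random positions for 7 samples of one cell type
rng = np.random.default_rng(0)
profile = normalized_tss_profile([rng.integers(-3000, 3000, 50000) for _ in range(7)])
```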
In particular, we consider the set of the top \(M\) peaks and denote it by \[\hat{g}_{s}^{M}:=\begin{cases}((\gamma_{k},\alpha_{k},\beta_{k}),p_{k})_{k=1}^{M}&(\text{if }|\hat{g}_{s}|\geq M),\\ \hat{g}_{s}&(\text{otherwise}).\end{cases} \tag{7}\] Next, we concatenate the data of all reads from all ATAC-seq samples with cell type \(\nu\in\{\text{CD4}^{+}\text{T},\text{HAM},\text{ATL}\}\). Then, we randomly extract 100 million reads from the concatenated data for type \(\nu\) as the input for the MACS2 algorithm to obtain the collection of peaks \[g_{\nu}:=((\gamma_{k},\alpha_{k},\beta_{k}),p_{k})_{k\geq 1}. \tag{8}\] Using a coalescing process of \(g_{ATL},g_{HAM},g_{CD4^{+}T}\), we construct a new reference set of peaks \(g_{0}\) as follows. Operationally, the coalescing of two peaks is done as follows: if two peaks \((\gamma,\alpha,\beta)\) and \((\gamma^{\prime},\alpha^{\prime},\beta^{\prime})\) with \(\gamma=\gamma^{\prime}\) satisfy \(\alpha^{\prime}\leq\alpha\leq\beta^{\prime}\), the two peaks become one peak \((\gamma,\alpha^{\prime},\max\{\beta,\beta^{\prime}\})\). This operation is repeated for the newly obtained set of peaks until no more coalescing occurs.

#### iv.2.4 The length of overlapped peaks in Figs. 4 and 5:

To quantify the similarity between two collections of peaks, \(g_{0}\) and \(\hat{g}_{s}^{M}\), we first fix a set of peaks \(\mathbb{L}\) as any of (1) the set \(\mathbb{G}\) of all peaks in \(g_{0}\) overlapping gene coding regions, (2) the set \(\mathbb{G}^{c}\) of all peaks in \(g_{0}\) corresponding to non-coding regions, and (3) the union \(\mathbb{G}\cup\mathbb{G}^{c}\) of the two sets \(\mathbb{G},\mathbb{G}^{c}\). Note that, for a given peak, its center was calculated by the command annotatePeaks.pl of the HOMER algorithm and used to judge whether the peak belongs to \(\mathbb{G}\) or \(\mathbb{G}^{c}\) [50]. Then, we focus on the length of the overlapped peaks \(O_{k}^{\mathrm{L}}(g_{0},\hat{g}_{s}^{M})\), which is the number of base pairs in a peak of \(\hat{g}_{s}^{M}\) inside the \(k\)-th peak \((\alpha_{k},\beta_{k})\) of \(\mathbb{L}\subset g_{0}\). We compute the average and variance of \(O_{k}^{\mathrm{L}}(g_{0},\hat{g}_{s}^{M})\) as follows: \[\overline{O}_{k}(\mathbb{L},\mathbb{S}):=\frac{1}{|\mathbb{S}|}\sum_{s\in\mathbb{S}}O_{k}^{\mathrm{L}}(g_{0},\hat{g}_{s}^{M}), \tag{9}\] \[V_{k}(\mathbb{L},\mathbb{S}):=\frac{1}{|\mathbb{S}|}\sum_{s\in\mathbb{S}}\big{(}O_{k}^{\mathrm{L}}(g_{0},\hat{g}_{s}^{M})-\overline{O}_{k}(\mathbb{L},\mathbb{S})\big{)}^{2}. \tag{10}\] We set \(M=64000\) as the provisionally optimal number for immunophenotype classification [15]. The following functions describe the frequency of the average and variance of \(O_{k}^{\mathrm{L}}(g_{0},\hat{g}_{s}^{M})\): \[\rho_{\mathbb{L},\mathbb{S}}^{(1)}(O):=\sum_{k\in\mathbb{L}}\delta(O,\overline{O}_{k}(\mathbb{L},\mathbb{S})), \tag{11}\] \[\rho_{\mathbb{L},\mathbb{S}}^{(2)}(V):=\sum_{k\in\mathbb{L}}\delta(V,V_{k}(\mathbb{L},\mathbb{S})), \tag{12}\] where \(\delta(a,b)=1\) for \(a=b\) and \(0\) otherwise. Lastly, in Fig. 4 and Fig. 5, the histograms are defined as \[F_{\mathbb{L},\mathbb{S}}^{(1)}(O;\Delta):=\sum_{O-\Delta/2\leq O^{\prime}<O+\Delta/2}\rho_{\mathbb{L},\mathbb{S}}^{(1)}(O^{\prime}), \tag{13}\] \[F_{\mathbb{L},\mathbb{S}}^{(2)}(V;\Delta):=\sum_{V-\Delta/2\leq V^{\prime}<V+\Delta/2}\rho_{\mathbb{L},\mathbb{S}}^{(2)}(V^{\prime}). \tag{14}\]
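A minimal Python sketch of the statistics in Eqs. (9)-(14), assuming the overlap lengths \(O_{k}^{\mathrm{L}}\) have already been computed as a matrix over peaks and samples (all sizes and the bin width below are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
O = rng.integers(0, 500, (3000, 36)).astype(float)  # O_k^L for 3000 peaks x 36 samples

O_bar = O.mean(axis=1)                              # Eq. (9): average over samples
V = ((O - O_bar[:, None]) ** 2).mean(axis=1)        # Eq. (10): variance over samples

D = 50.0                                            # bin width Delta
F1, _ = np.histogram(O_bar, bins=np.arange(0.0, O_bar.max() + D, D))  # Eq. (13)
F2, _ = np.histogram(V, bins=np.arange(0.0, V.max() + D, D))          # Eq. (14)
```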
#### iv.2.5 Fold change of selected gene expression in Fig. 7:

For the RNA-seq data, we consider the set of cell types \(\mathbb{T}\) as \[\mathbb{T}=\{\mathrm{HSC},\mathrm{CD4^{+}T},\mathrm{CD8^{+}T},\mathrm{NK},\mathrm{Mono},\mathrm{ATL}\}. \tag{15}\] We compute the fold change \(\mathrm{FC}_{i}(t,t_{0})\) in gene \(i\) between types \(t,t_{0}\in\mathbb{T}\) as \[\mathrm{FC}_{i}(t,t_{0}):=\frac{\overline{R}_{i}(t)}{\overline{R}_{i}(t_{0})}, \tag{16}\] where \(\overline{R}_{i}(t)\) is the average normalized expression \(n_{i}(s)\) of the RNA-seq data for gene \(i\) over all samples with type \(t\). Here, we consider only the genes for which \(\log_{2}\mathrm{FC}_{i}(t,t_{0})\) is well-defined; in other words, we consider the genes \(i\) satisfying the following conditions: (1) there is a peak of sample type \(t\) that intersects the coding region of the gene, and (2) the same holds for type \(t_{0}\), where the corresponding \(k\)-th peak satisfies \(k\leq M\). We denote by \(\mathbb{G}_{t,t_{0}}(M)\) the set of all the genes satisfying these conditions. Then, we focus on the following function quantifying the frequency of the log-fold change, \[\rho_{t,t_{0}}^{M}(P):=\frac{1}{|\mathbb{G}_{t,t_{0}}(M)|}\sum_{i\in\mathbb{G}_{t,t_{0}}(M)}\delta(P,\log_{2}\mathrm{FC}_{i}(t,t_{0})). \tag{17}\] In Fig. 7, we plot the histogram of the frequency of the log-fold change, \[F_{t,t_{0}}^{M}(P;\Delta):=\sum_{P-\Delta/2\leq P^{\prime}<P+\Delta/2}\rho_{t,t_{0}}^{M}(P^{\prime}). \tag{18}\]

#### iv.2.6 \(\Delta\Delta C_{t}\) method in Fig. 10c:

For the threshold cycles obtained by real-time PCR of an mRNA sample (see "Real-time PCR" for details), we denote by \(C_{s}^{TLL1}\) the threshold cycle for the gene TLL1 and by \(C_{s}^{GAPDH}\) that for the gene GAPDH for sample \(s\in\mathbb{S}\). Then, we define the difference \(\Delta C_{s}:=C_{s}^{TLL1}-C_{s}^{GAPDH}\) and consider the normalized difference \[\Delta C_{s}^{0}:=\Delta C_{s}-\min_{s\in\mathbb{S}}\Delta C_{s}. \tag{19}\]

### Data availability

All ATAC-seq and RNA-seq data needed to reproduce this study will be deposited at the DNA Data Bank of Japan (DDBJ) under the accession number XXXXX.

### Ethics approval and consent to participate

Experiments using clinical samples were conducted according to the principles expressed in the Declaration of Helsinki and approved by the Institutional Review Board of Kyoto University (permit numbers G310 and G204). ATL patients provided written informed consent for the collection of samples and subsequent analysis.

### Author's contributions

A. Tanaka: Conceptualization, NGS sample preparation, NGS data analysis, investigation, performing experiments, project administration, generating figures and tables, funding acquisition, and writing the original draft. J.I. Yasunaga: Collecting clinical samples, data investigation, funding acquisition, and experimental advice. H. Ohta and Y. Ishitsuka: Data investigation, methodology, generating figures, and writing the original draft. C. Onishi and H. Tanaka: Assisting plasmid preparation, experiments, and analyses. N. Takenouchi, M. Nakagawa and K. Koh: Collecting clinical samples. A. Fujimoto: Assisting NGS data analysis. M. Matsuoka: Collecting clinical samples, supervision, funding acquisition, project administration, and writing the original draft. All authors participated in discussions and interpretation of the data and results.

###### Acknowledgements.

We thank P. Karagiannis for proofreading the manuscript and many valuable comments.
This research was supported by JSPS KAKENHI Grant Numbers JP19K16740 (AT), JP18J40119 (AT), XXXX (MM), and XXXX (JiY) and by a grant from the Naito Foundation (AT).
2303.11047
Practical Realization of Bessel's Correction for a Bias-Free Estimation of the Auto-Covariance and the Cross-Covariance Functions
To derive the auto-covariance function from a sampled and time-limited signal or the cross-covariance function from two such signals, the mean values must be estimated and removed from the signals. If no a priori information about the correct mean values is available and the mean values must be derived from the time series themselves, the estimates will be biased. For the estimation of the variance from independent data the appropriate correction is widely known as Bessel's correction. Similar corrections for the auto-covariance and for the cross-covariance functions are shown here, including individual weighting of the samples. The corrected estimates then can be used to correct also the variance estimate in the case of correlated data. The programs used here are available online at http://sigproc.nambis.de/programs.
Holger Nobach
2023-03-20T11:58:40Z
http://arxiv.org/abs/2303.11047v1
Practical Realization of Bessel's Correction for a Bias-Free Estimation of the Auto-Covariance and the Cross-Covariance Functions ###### Abstract To derive the auto-covariance function from a sampled and time-limited signal or the cross-covariance function from two such signals, the mean values must be estimated and removed from the signals. If no _a priori_ information about the correct mean values is available and the mean values must be derived from the time series themselves, the estimates will be biased. For the estimation of the variance from independent data the appropriate correction is widely known as Bessel's correction. Similar corrections for the auto-covariance and for the cross-covariance functions are shown here, including individual weighting of the samples. The corrected estimates then can be used to correct also the variance estimate in the case of correlated data. The programs used here are available online at [http://sigproc.nambis.de/programs](http://sigproc.nambis.de/programs).

## 1 Introduction

The processing of measured data often requires mean-free data sets to emphasize the dynamic characteristics of the observed process. Since the mean value often is unknown beforehand, the standard procedure is to estimate the mean value from the measured data set and then remove this estimated mean value from the measured values before further data processing. For the following investigations, a set of \(N\) measured data samples \(x_{i},i=0\ldots N-1\), taken at measurement times \(t_{i}=i\Delta t\) with the regular sampling interval \(\Delta t\), is assumed. The samples can have individual weights \(w_{i}\), which can be used to correct systematic errors due to a skewed distribution of the data values or to mask invalid data samples. The estimate of the mean value from the available data samples then reads \[\bar{x}=\frac{\sum\limits_{i=0}^{N-1}w_{i}x_{i}}{\sum\limits_{i=0}^{N-1}w_{i}}, \tag{1}\] which is then subtracted from all samples, yielding the new, mean-free samples \(\tilde{x}_{i}=x_{i}-\bar{x}\) used for the following data analysis. Higher-order trend removal, outliers, and superimposed noise are not investigated here. Let the mean estimator have the estimation variance \(\sigma_{\bar{x}}^{2}\). Since the variance of a sum of correlated variables is the sum of all pair-wise covariances, the variance of the mean estimator is1 Footnote 1: For all weights being constant, the expression reduces to \[\sigma_{\bar{x}}^{2}=\frac{1}{N^{2}}\sum\limits_{k=-(N-1)}^{N-1}\left(N-|k|\right)C_{k}.\] \[\sigma_{\bar{x}}^{2}=\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}C_{j-i}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}}, \tag{2}\] involving the unknown true auto-covariance function \(C\). If the variance of the data set is obtained from the mean-subtracted values \(\tilde{x}_{i}\) as \[s^{2}=\frac{\sum\limits_{i=0}^{N-1}w_{i}\tilde{x}_{i}^{2}}{\sum\limits_{i=0}^{N-1}w_{i}}, \tag{3}\] then this estimate will have a systematic error, because the preceding estimation of the mean value, with its estimation variance \(\sigma_{\bar{x}}^{2}\), reduces the remaining power in the investigated data sequence once the estimated mean has been removed.
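Equation (2) is straightforward to evaluate numerically once a model of the true auto-covariance function is assumed; a minimal Python sketch (the exponentially decaying covariance model is only an illustrative assumption):

```python
import numpy as np

def mean_estimator_variance(w, cov):
    """Variance of the weighted mean estimator, Eq. (2).

    w   : array of sample weights w_i
    cov : callable returning the true auto-covariance C_k for integer lags k
    """
    i = np.arange(len(w))
    C = cov(i[None, :] - i[:, None])        # matrix of C_{j-i} over all pairs
    return (np.outer(w, w) * C).sum() / w.sum() ** 2

w = np.random.default_rng(0).uniform(0.0, 1.0, 50)
print(mean_estimator_variance(w, lambda k: 4.0 * 0.5 ** np.abs(k)))
```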
The expectation of the variance estimation with the estimated mean subtracted from the data samples is \[\mathrm{E}\{s^{2}\}=\sigma_{x}^{2}-\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}C_{j-i}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}} \tag{4}\] with the true variance \(\sigma_{x}^{2}\) of the data and again with the true auto-covariance function \(C\). The deviation from the correct variance is exactly the variance of the mean estimator \(\sigma_{\bar{x}}^{2}\). If the variance of the mean estimation is known beforehand, then a bias-free estimate of the data variance is \[\hat{s}^{2}=s^{2}+\sigma_{\bar{x}}^{2}. \tag{5}\] For \(N\) independent data samples \(x_{i}\) with their weights \(w_{i}\), the variance of the mean estimation can be predicted as \[\sigma_{\bar{x}}^{2}=\frac{\sum\limits_{i=0}^{N-1}w_{i}^{2}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}}\cdot\sigma_{x}^{2}. \tag{6}\] Requiring that the variance estimate \(\hat{s}^{2}\) be bias-free without knowing the true variance \(\sigma_{x}^{2}\) beforehand leads to the estimate \[\hat{s}^{2}=\frac{\sum\limits_{i=0}^{N-1}w_{i}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}-\sum\limits_{i=0}^{N-1}w_{i}^{2}}\cdot\sum\limits_{i=0}^{N-1}w_{i}\tilde{x}_{i}^{2}. \tag{7}\] For all weights being constant (including the case that the samples are independent), this reduces to the expression \[\hat{s}^{2}=\frac{1}{N-1}\sum\limits_{i=0}^{N-1}\tilde{x}_{i}^{2}, \tag{8}\] where the division by \(N-1\) instead of \(N\) is widely known as Bessel's correction for the variance estimate for independent data samples, even if it is more likely attributed to Gauss (Kenney and Keeping, 1951, p. 125). Similar corrections can be made to estimates of the auto-covariance function or of the cross-covariance function derived from two different data sets. Unfortunately, this requires taking into account that the data samples are correlated; why else would one calculate the covariance function? It seems that little research has been done in the past to investigate or solve this particular problem, even though it appears to be a logical step. A literature search reflects this low interest, yielding no appropriate articles from the past decades. It was all the more surprising that very recently a paper was published by Vogelsang and Yang (2016) using exactly the idea proposed here: deriving a prediction matrix that maps the true covariance function onto the expectation of the estimated one and using the inverse of this matrix to obtain a corrected covariance function from the estimated one. Considering this coincidence, the notation of the matrix has been adjusted accordingly, and the title takes this into account by introducing a "practical realization" of the method. Otherwise, the present article uses its own derivations. In contrast to Vogelsang and Yang (2016), weighted averages are used here in the estimation of the statistical properties. Furthermore, the investigations have been extended to the case of estimating the cross-covariance function between two data sets. Note that, in the present derivations, the primary covariance estimates are based on the normalization considering the decreasing overlap of the observed signals for increasing lag time instead of a constant normalization factor. Furthermore, the two-sided (symmetric) auto-covariance function is used instead of the one-sided one, because this better corresponds to the cross-covariance function and it may accelerate the computation by allowing the usage of the fast Fourier transform.
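A minimal Python sketch of the weighted correction in Eq. (7); for constant weights it reduces to the familiar \(1/(N-1)\) estimator of Eq. (8), which the assertion below checks:

```python
import numpy as np

def bessel_corrected_variance(x, w):
    """Bias-free variance estimate for independent, weighted samples, Eq. (7)."""
    xt = x - np.sum(w * x) / np.sum(w)       # subtract the weighted mean, Eq. (1)
    return np.sum(w) * np.sum(w * xt**2) / (np.sum(w)**2 - np.sum(w**2))

x = np.random.default_rng(0).normal(8.0, 2.0, 50)
assert np.isclose(bessel_corrected_variance(x, np.ones(50)), np.var(x, ddof=1))
```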
Finally, the bias-corrected estimation of the covariance function can be used to obtain an appropriate correction of the variance estimate under the condition of correlated data samples. The following sections introduce the procedures to derive bias-free estimates of the auto- and the cross-covariance function from equidistantly sampled, time-limited data sets, where the mean values are derived and subtracted from the data as described above. All required quantities are derived directly from the observed data. No further _a priori_ information is needed. The programs used here are available online at [http://sigproc.nambis.de/programs](http://sigproc.nambis.de/programs).

## 2 Auto-covariance case

The auto-covariance \(C_{k}\) of a data sequence, at the time instance \(\tau_{k}=k\Delta t\), is defined as \[C_{k}=\langle(x_{i}-\mu)(x_{i+k}-\mu)\rangle \tag{9}\] with the true mean value \(\mu\) and the expectation \(\langle\cdot\rangle\). Assuming a data set of \(N\) samples \(\tilde{x}_{i},i=0\ldots N-1\) after removing the estimated mean value \(\bar{x}\), measured at time instances \(t_{i}=i\Delta t\) and with appropriate individual weights \(w_{i}\), an estimator of the auto-covariance function of an aperiodic signal could look like \[c_{k}=\frac{\sum\limits_{i=\max(0,-k)}^{\min(N,N-k)-1}w_{i}w_{i+k}\tilde{x}_{i}\tilde{x}_{i+k}}{\sum\limits_{i=\max(0,-k)}^{\min(N,N-k)-1}w_{i}w_{i+k}}=\frac{X_{k}}{Y_{k}}. \tag{10}\] Assuming a zero padding of \(N\) concatenated zeros, the appropriate sums in the numerator (\(X\)) and in the denominator (\(Y\)) can also be calculated by means of the (fast) discrete Fourier transform (FFT) and its inverse (IFFT) as \[X = \mathrm{IFFT}\left\{\left|\mathrm{FFT}\left\{w_{i}^{\prime}\tilde{x}_{i}^{\prime}\right\}\right|^{2}\right\} \tag{11}\] \[Y = \mathrm{IFFT}\left\{\left|\mathrm{FFT}\left\{w_{i}^{\prime}\right\}\right|^{2}\right\}, \tag{12}\] where \(\{w_{i}^{\prime}\tilde{x}_{i}^{\prime}\}\) and \(\{w_{i}^{\prime}\}\) are the zero-padded sets of weighted data values (after mean removal) and of the weights, respectively. This estimator has a systematic error similar to that of the variance estimator above (see the example in Fig. 1b). An appropriate prediction of the expectation of the covariance estimate is \[\mathrm{E}\{c_{k}\}=C_{k}+\varepsilon_{k}, \tag{13}\] with the true auto-covariance function \(C_{k}\) at lag time \(\tau_{k}\) and the bias \[\varepsilon_{k}=\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}C_{j-i}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}}-\frac{\sum\limits_{i=\max(0,-k)}^{\min(N,N-k)-1}\sum\limits_{j=0}^{N-1}w_{i}w_{i+k}w_{j}(C_{j-i}+C_{i+k-j})}{\left(\sum\limits_{i=\max(0,-k)}^{\min(N,N-k)-1}w_{i}w_{i+k}\right)\left(\sum\limits_{i=0}^{N-1}w_{i}\right)}, \tag{14}\] which is constant for uncorrelated data; otherwise it varies with \(k\). The first term again is the variance \(\sigma_{\bar{x}}^{2}\) of the mean estimator. Since the true covariance function \(C\) is unknown in real measurements, the prediction cannot be made directly. However, the relation between the true covariance function and its estimate is linear. Therefore, one can build a matrix2 Footnote 2: The notation has been chosen with respect to Vogelsang and Yang (2016). \(\mathbf{A}\), mapping the true covariance function \(C\) onto the expectation of the estimated one via \[\mathrm{E}\{c\}=\mathbf{A}\,C. \tag{15}\] If the matrix \(\mathbf{A}\) has the elements \(a_{kj}\), then the prediction of the estimated covariance at lag time \(\tau_{k}\) is \[\mathrm{E}\{c_{k}\}=\sum\limits_{j=K_{1}}^{K_{2}}a_{kj}C_{j}. \tag{16}\]
The range \(K_{1}\ldots K_{2}\) of covariances considered should include the full range of occurring correlations, such that all true covariances outside this interval can be neglected. The elements of this matrix are3 Footnote 3: If all \(w_{i}\) are constant, then the elements of this matrix become \[a_{kj}=\delta_{k-j}-2\frac{N-\max[|j|\,,|k|\,,\min(N,|k-j|)]}{N(N-|k|)}+\frac{N-|j|}{N^{2}}\quad|j|\,,|k|<N. \tag{17}\] with \[\delta_{i}=\left\{\begin{array}{ll}1&\mbox{for }i=0\\ 0&\mbox{otherwise}\end{array}\right. \tag{18}\] or \[a_{kj}=\delta_{k-j}+\frac{Y_{j}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}}-\frac{G_{kj}+H_{kj}}{Y_{k}\left(\sum\limits_{i=0}^{N-1}w_{i}\right)} \tag{19}\] with \[G_{k} = \mathrm{IFFT}\left\{\mathrm{FFT}\left\{w^{\prime}_{i}w^{\prime}_{i+k}\right\}^{*}\mathrm{FFT}\left\{w^{\prime}_{i}\right\}\right\} \tag{20}\] \[H_{k} = \mathrm{IFFT}\left\{\mathrm{FFT}\left\{w^{\prime}_{i}\right\}^{*}\mathrm{FFT}\left\{w^{\prime}_{i}w^{\prime}_{i-k}\right\}\right\}, \tag{21}\] with the conjugate complex \(\cdot^{*}\), involving again the (fast) discrete Fourier transform (FFT) and its inverse (IFFT). The inverse \(\mathbf{A}^{-1}\) of the matrix applied to the estimate \(c\) yields an improved, bias-free estimate \(\hat{c}\) of the covariance, \[\hat{c}=\mathbf{A}^{-1}c. \tag{22}\] For given \(N\) samples \(x_{i}\), the covariance function after zero padding has \(2N-1\) non-zero values \(c_{k}\) in the range \(-(N-1)\ldots N-1\). Unfortunately, the appropriate matrix \(\mathbf{A}\) then has some linearly dependent equations, and a direct inverse cannot be calculated. The inverse can be calculated only if the covariance function is limited to the range \(K_{1}\ldots K_{2}\) with \(-(N-1)<K_{1}\leq K_{2}<N-1\). The improved covariance estimate then is bias free as long as the true covariance of the original signal is zero outside the reduced interval of lag times \(\tau_{K_{1}}\ldots\tau_{K_{2}}\). This coincides with the requirement that the interval of investigated lag times is longer than the longest correlation lasts and that the observation interval of the signal is at least a little longer than the largest lag time investigated. The improved estimate \(\hat{c}\) of the covariance function then can be used to derive the estimation variance of the mean estimator \(\sigma_{\bar{x}}^{2}\) following Eq. (2), where the true covariance \(C\) is replaced by the improved estimate \(\hat{c}\), and finally to improve the variance estimation \(\hat{s}^{2}\) following Eq. (5).

## 3 Cross-covariance case

The cross-covariance \(C_{k}\) of two data sequences \(x_{1,i}\) and \(x_{2,i}\), at the time instance \(\tau_{k}=k\Delta t\), is defined as \[C_{k}=\langle(x_{1,i}-\mu_{1})(x_{2,i+k}-\mu_{2})\rangle \tag{23}\] with the true mean values \(\mu_{1}\) and \(\mu_{2}\) and the expectation \(\langle\cdot\rangle\). Assuming data sets of \(N_{1}\) samples \(\tilde{x}_{1,i},i=0\ldots N_{1}-1\) and \(N_{2}\) samples \(\tilde{x}_{2,i},i=0\ldots N_{2}-1\) after removing the estimated mean values \(\bar{x}_{1}\) and \(\bar{x}_{2}\), measured at time instances \(t_{i}=i\Delta t\) and with appropriate individual weights \(w_{1,i}\) and \(w_{2,i}\), an estimator of the cross-covariance function of an aperiodic signal could look like \[c_{k}=\frac{\sum\limits_{i=\max(0,-k)}^{\min(N_{1},N_{2}-k)-1}w_{1,i}w_{2,i+k}\tilde{x}_{1,i}\tilde{x}_{2,i+k}}{\sum\limits_{i=\max(0,-k)}^{\min(N_{1},N_{2}-k)-1}w_{1,i}w_{2,i+k}}=\frac{X_{k}}{Y_{k}}. \tag{24}\]
Assuming a zero padding of \(N_{2}\) concatenated zeros to the sequence \(x_{1,i}\) and \(N_{1}\) concatenated zeros to the sequence \(x_{2,i}\), the appropriate sums in the numerator (\(X\)) and in the denominator (\(Y\)) can also be calculated by means of the (fast) discrete Fourier transform as \[X = \mathrm{IFFT}\left\{\mathrm{FFT}\left\{w_{1,i}^{\prime}\tilde{x}_{1,i}^{\prime}\right\}^{*}\mathrm{FFT}\left\{w_{2,i}^{\prime}\tilde{x}_{2,i}^{\prime}\right\}\right\} \tag{25}\] \[Y = \mathrm{IFFT}\left\{\mathrm{FFT}\left\{w_{1,i}^{\prime}\right\}^{*}\mathrm{FFT}\left\{w_{2,i}^{\prime}\right\}\right\}, \tag{26}\] with the conjugate complex \(\cdot^{*}\), and where \(\left\{w_{1,i}^{\prime}\tilde{x}_{1,i}^{\prime}\right\}\) and \(\left\{w_{1,i}^{\prime}\right\}\) are the zero-padded sets of weighted data values (after mean removal) of the first data series and of its weights, respectively, and \(\left\{w_{2,i}^{\prime}\tilde{x}_{2,i}^{\prime}\right\}\) and \(\left\{w_{2,i}^{\prime}\right\}\) those of the second data series and its appropriate weights. This estimator has a systematic error similar to those of the variance estimator and the auto-covariance estimator above (see the example in Fig. 1c). An appropriate prediction of the expectation of the cross-covariance estimate is \[\mathrm{E}\{c_{k}\}=C_{k}+\varepsilon_{k}, \tag{27}\] with the true cross-covariance function \(C_{k}\) at lag time \(\tau_{k}\) and the bias \[\varepsilon_{k} = \frac{\sum\limits_{i=0}^{N_{1}-1}\sum\limits_{j=0}^{N_{2}-1}w_{1,i}w_{2,j}C_{j-i}}{\left(\sum\limits_{i=0}^{N_{1}-1}w_{1,i}\right)\left(\sum\limits_{i=0}^{N_{2}-1}w_{2,i}\right)}-\frac{\sum\limits_{i=\max(0,-k)}^{\min(N_{1},N_{2}-k)-1}\sum\limits_{j=0}^{N_{2}-1}w_{1,i}w_{2,i+k}w_{2,j}C_{j-i}}{\left(\sum\limits_{i=\max(0,-k)}^{\min(N_{1},N_{2}-k)-1}w_{1,i}w_{2,i+k}\right)\left(\sum\limits_{i=0}^{N_{2}-1}w_{2,i}\right)} \tag{28}\] \[-\frac{\sum\limits_{i=\max(0,-k)}^{\min(N_{1},N_{2}-k)-1}\sum\limits_{j=0}^{N_{1}-1}w_{1,i}w_{2,i+k}w_{1,j}C_{i+k-j}}{\left(\sum\limits_{i=\max(0,-k)}^{\min(N_{1},N_{2}-k)-1}w_{1,i}w_{2,i+k}\right)\left(\sum\limits_{i=0}^{N_{1}-1}w_{1,i}\right)},\] which is constant only for uncorrelated data with identical weights for the two data sets; otherwise it varies with \(k\). The matrix \(\mathbf{A}\), mapping a hypothetical covariance function \(C\) onto the estimated one \(c\) via \[\mathrm{E}\{c\}=\mathbf{A}\,C, \tag{29}\] can be used to predict the estimated covariance at time lag \(\tau_{k}\) as \[\mathrm{E}\{c_{k}\}=\sum_{j=K_{1}}^{K_{2}}a_{kj}C_{j} \tag{30}\] with the elements \(a_{kj}\) of the matrix \(\mathbf{A}\). The range \(K_{1}\ldots K_{2}\) of covariances considered should include the full range of occurring correlations, such that all true covariances outside this interval can be neglected. The elements of this matrix are4 Footnote 4: If all \(w_{i}\) are constant, then the elements of this matrix become \[a_{kj} = \delta_{k-j}-\frac{\min(N_{1},N_{2}-j,N_{2}-k)-\max(0,-j,-k)}{N_{2}\left[\min(N_{1},N_{2}-k)-\max(0,-k)\right]} \tag{31}\] \[-\frac{\min\left[N_{1},N_{2}-j,\max(0,N_{1}+k-j)\right]-\max\left[0,-j,\min(N_{1},k-j)\right]}{N_{1}\left[\min(N_{1},N_{2}-k)-\max(0,-k)\right]}\] \[+\frac{\min(N_{1},N_{2}-j)-\max(0,-j)}{N_{1}N_{2}}\quad-N_{1}<j,k<N_{2}.\] again with \[\delta_{i}=\left\{\begin{array}{ll}1&\mbox{for }i=0\\ 0&\mbox{otherwise}\end{array}\right. \tag{32}\]
or \[a_{kj}=\delta_{k-j}+\frac{Y_{j}}{\left(\sum\limits_{i=0}^{N_{1}-1}w_{1,i}\right)\left(\sum\limits_{i=0}^{N_{2}-1}w_{2,i}\right)}-\frac{G_{kj}}{Y_{k}\left(\sum\limits_{i=0}^{N_{2}-1}w_{2,i}\right)}-\frac{H_{kj}}{Y_{k}\left(\sum\limits_{i=0}^{N_{1}-1}w_{1,i}\right)} \tag{33}\] with \[G_{k} = \mathrm{IFFT}\left\{\mathrm{FFT}\left\{w_{1,i}^{\prime}w_{2,i+k}^{\prime}\right\}^{*}\mathrm{FFT}\left\{w_{2,i}^{\prime}\right\}\right\} \tag{34}\] \[H_{k} = \mathrm{IFFT}\left\{\mathrm{FFT}\left\{w_{1,i}^{\prime}\right\}^{*}\mathrm{FFT}\left\{w_{2,i}^{\prime}w_{1,i-k}^{\prime}\right\}\right\}, \tag{35}\] involving again the (fast) discrete Fourier transform (FFT) and its inverse (IFFT). The inverse \(\mathbf{A}^{-1}\) of the matrix applied to the estimate \(c\) yields an improved, bias-free estimate \(\hat{c}\) of the cross-covariance, \[\hat{c}=\mathbf{A}^{-1}c. \tag{36}\] For given \(N_{1}\) samples \(x_{1,i}\) and \(N_{2}\) samples \(x_{2,i}\), the covariance function after zero padding has \(N_{1}+N_{2}-1\) non-zero values \(c_{k}\) in the range \(-(N_{1}-1)\ldots N_{2}-1\). Unfortunately, the appropriate matrix \(\mathbf{A}\) then has some linearly dependent equations, and a direct inverse cannot be calculated. The inverse can be calculated only if the covariance function is limited to the range \(K_{1}\ldots K_{2}\) with \(-(N_{1}-1)<K_{1}\leq K_{2}<N_{2}-1\). The improved covariance estimate then is bias free as long as the true covariance of the original signal is zero outside the reduced interval of lag times \(\tau_{K_{1}}\ldots\tau_{K_{2}}\). This coincides with the requirement that the interval of investigated lag times is longer than the longest correlation lasts and that the observation interval of the signal is at least a little longer than the largest lag time investigated.

## 4 Numerical simulation

To demonstrate the effect of Bessel's correction, two linear random processes (moving average of order 10, all coefficients 0.1) with \(\Delta t=0.2\,\mathtt{atu}\) (\(\mathtt{atu}\) - arbitrary time unit) have been simulated, each with a normal distribution with a variance of \(4\,\mathtt{aau}^{2}\) (\(\mathtt{aau}\) - arbitrary amplitude unit) and a mean of \(8\,\mathtt{aau}\). The two series have been coupled, yielding a cross-covariance of \(3\,\mathtt{aau}^{2}\), and one series has been time shifted to obtain a delay of \(2\,\mathtt{atu}\) between the two time series, which finally are limited to \(N_{1}=N_{2}=50\) samples each. The weights have been random values from a uniform distribution between zero and one. To obtain the empirical mean of the auto-covariance and the cross-covariance estimation, \(10\,000\) individual realizations (Fig. 1a) have been simulated and analyzed (calculation of the mean values, mean removal, and estimation of the auto-covariance function of one of the data sets and of the cross-covariance function between the two data sets with \(K_{1}=-25\) and \(K_{2}=24\)). Fig. 1b and c compare the empirical means of the auto-covariance estimates and the cross-covariance estimates, respectively, without and with the proposed correction. Without the correction, the bias is obvious: all covariance values are underestimated here, and additionally a drift can be observed in the cross-covariance case, which in other cases may also lead to an over-estimation at certain lag times. The introduced correction efficiently removes the bias and yields bias-free estimates of the auto-covariance function and the cross-covariance function.
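For the auto-covariance case with constant weights, the whole procedure can be sketched in a few lines of Python; the code below implements Eqs. (10), (17), and (22) with direct loops instead of the FFT-based sums, and the toy process only loosely follows the simulation described above:

```python
import numpy as np

def biased_acov(x, K1, K2):
    """Auto-covariance estimate c_k of Eq. (10) for constant weights."""
    xt = x - x.mean()                              # remove the estimated mean
    N = len(xt)
    c = np.empty(K2 - K1 + 1)
    for n, k in enumerate(range(K1, K2 + 1)):
        i = np.arange(max(0, -k), min(N, N - k))   # overlapping part of the signal
        c[n] = np.sum(xt[i] * xt[i + k]) / len(i)  # normalization by N - |k|
    return c

def correction_matrix(N, K1, K2):
    """Bias-prediction matrix A with elements a_kj, Eq. (17), constant weights."""
    ks = range(K1, K2 + 1)
    A = np.empty((len(ks), len(ks)))
    for m, k in enumerate(ks):
        for n, j in enumerate(ks):
            A[m, n] = ((k == j)
                       - 2.0 * (N - max(abs(j), abs(k), min(N, abs(k - j)))) / (N * (N - abs(k)))
                       + (N - abs(j)) / N**2)
    return A

rng = np.random.default_rng(0)
x = np.convolve(rng.normal(8.0, 2.0, 500), np.full(10, 0.1), mode="valid")[:50]
K1, K2 = -25, 24
c_hat = np.linalg.solve(correction_matrix(len(x), K1, K2), biased_acov(x, K1, K2))
```

For the weighted and the cross-covariance cases, the matrix elements of Eqs. (19) and (33) would replace Eq. (17) accordingly.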
Figure 1: a) Single realization of the data set from the simulation. b) Estimate of the auto-covariance function (empirical mean from \(10\,000\) realizations) without and with Bessel's correction for the auto-covariance, in comparison to the expected auto-covariance function according to the simulation process. c) Estimate of the cross-covariance function (empirical mean from \(10\,000\) realizations) without and with Bessel's correction for the cross-covariance, in comparison to the expected cross-covariance function according to the simulation process (atu - arbitrary time unit, aau - arbitrary amplitude unit).

## 5 Conclusion

The removal of the estimated mean values from sampled, time-limited data sets causes a bias in the estimates of the auto-covariance and the cross-covariance functions. Based on the true covariance function, a prediction of the bias has been derived for such data sets with correlated samples, including individual weighting of the samples. From the linear equations of the bias prediction, an inverse matrix has been derived, which can be applied to the initial estimates of the covariance function to obtain an improved, bias-free estimate of the respective functions. The corrected estimates then can be used to correct also the variance estimate in the case of correlated data. Numerical simulations have shown the improvements in estimating the covariance functions by the introduced procedures. The findings agree well with the derivations of Vogelsang and Yang (2016), especially the linear dependencies of the respective system of equations and the feasibility of the inversion of an appropriate sub-matrix. The findings have been extended by the implementation of weighted averages in the estimation procedures, the investigation of the cross-covariance between different data sets, the implementation of the fast Fourier transform to accelerate the calculations, and the bias-free estimation of the variance under the condition of correlated data samples.

## Acknowledgement

The author gratefully acknowledges the fruitful discussion with Annette Witt.

## Appendix A Derivation of Eqs. (4) and (5)
From Eq. (3) it follows that
\[\begin{split} s^{2}&=\frac{\sum\limits_{i=0}^{N-1}w_{i}\tilde{x}_{i}^{2}}{\sum\limits_{i=0}^{N-1}w_{i}}=\frac{\sum\limits_{i=0}^{N-1}w_{i}\left(x_{i}-\bar{x}\right)^{2}}{\sum\limits_{i=0}^{N-1}w_{i}}=\frac{\sum\limits_{i=0}^{N-1}w_{i}x_{i}^{2}}{\sum\limits_{i=0}^{N-1}w_{i}}-2\,\frac{\sum\limits_{i=0}^{N-1}w_{i}x_{i}\bar{x}}{\sum\limits_{i=0}^{N-1}w_{i}}+\bar{x}^{2}\\ &=\frac{\sum\limits_{i=0}^{N-1}w_{i}x_{i}^{2}}{\sum\limits_{i=0}^{N-1}w_{i}}-2\,\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}x_{i}x_{j}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}}+\left(\frac{\sum\limits_{j=0}^{N-1}w_{j}x_{j}}{\sum\limits_{j=0}^{N-1}w_{j}}\right)^{2}\\ &=\frac{\sum\limits_{i=0}^{N-1}w_{i}x_{i}^{2}}{\sum\limits_{i=0}^{N-1}w_{i}}-\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}x_{i}x_{j}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}}=\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}\left(x_{i}^{2}-x_{i}x_{j}\right)}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}},\end{split} \tag{43}\]
where the weighted mean \(\bar{x}=\sum_{j}w_{j}x_{j}/\sum_{j}w_{j}\) has been inserted and the squared mean has been rewritten as a double sum. The expectation of \(s^{2}\) then is
\[\mathrm{E}\{s^{2}\}=\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}\left[\left(\sigma_{x}^{2}+\mu^{2}\right)-\left(C_{j-i}+\mu^{2}\right)\right]}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}} \tag{44}\]
\[=\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}\sigma_{x}^{2}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}}-\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}C_{j-i}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}} \tag{45}\]
\[=\sigma_{x}^{2}-\sigma_{\bar{x}}^{2}. \tag{46}\]
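The identity (46) can be checked numerically. Below is a minimal sketch (the MA(2) example, the weight distribution, and all names are our own assumptions): it compares the Monte Carlo mean of the weighted variance estimate \(s^{2}\) with \(\sigma_{x}^{2}-\sigma_{\bar{x}}^{2}\), where \(\sigma_{\bar{x}}^{2}\) is evaluated from the true autocovariance via the double sum above.

```python
import numpy as np

rng = np.random.default_rng(1)
N, b = 40, np.array([0.6, 0.3, 0.1])   # MA(2) coefficients (assumed example)
w = rng.uniform(0.5, 1.5, N)           # fixed weights
sw = np.sum(w)

# true autocovariance C_k of the MA process (unit-variance innovations)
C = np.zeros(2 * N - 1)                # lags -(N-1)..(N-1); C[N-1+k] = C_k
for m in range(len(b)):
    C[N - 1 + m] = C[N - 1 - m] = np.dot(b[: len(b) - m], b[m:])

sigma_x2 = C[N - 1]
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
sigma_xbar2 = np.sum(w[i] * w[j] * C[N - 1 + (j - i)]) / sw**2  # variance of weighted mean

s2 = []
for _ in range(20000):
    e = rng.normal(size=N + len(b) - 1)
    x = np.convolve(e, b, mode="valid")          # correlated, zero-mean samples
    xbar = np.sum(w * x) / sw
    s2.append(np.sum(w * (x - xbar) ** 2) / sw)  # weighted variance, Eq. (3)
print(np.mean(s2), sigma_x2 - sigma_xbar2)       # should agree up to MC error
```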
## Appendix B Derivation of Eqs. (13) and (14)

From Eq. (10) it follows, writing \(\sum_{i}\) as shorthand for \(\sum_{i=\max(0,-k)}^{\min(N,N-k)-1}\), that
\[c_{k}=\frac{\sum_{i}w_{i}w_{i+k}\tilde{x}_{i}\tilde{x}_{i+k}}{\sum_{i}w_{i}w_{i+k}} \tag{47}\]
\[=\frac{\sum_{i}w_{i}w_{i+k}\left(x_{i}-\bar{x}\right)\left(x_{i+k}-\bar{x}\right)}{\sum_{i}w_{i}w_{i+k}}=\frac{\sum_{i}w_{i}w_{i+k}x_{i}x_{i+k}}{\sum_{i}w_{i}w_{i+k}}-\bar{x}\,\frac{\sum_{i}w_{i}w_{i+k}\left(x_{i}+x_{i+k}\right)}{\sum_{i}w_{i}w_{i+k}}+\bar{x}^{2}.\]
Inserting \(\bar{x}=\sum_{j=0}^{N-1}w_{j}x_{j}/\sum_{j=0}^{N-1}w_{j}\) yields
\[c_{k}=\frac{\sum_{i}w_{i}w_{i+k}x_{i}x_{i+k}}{\sum_{i}w_{i}w_{i+k}}+\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}x_{i}x_{j}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}}-\frac{\sum_{i}\sum\limits_{j=0}^{N-1}w_{i}w_{i+k}w_{j}\left(x_{i}+x_{i+k}\right)x_{j}}{\left(\sum_{i}w_{i}w_{i+k}\right)\left(\sum\limits_{j=0}^{N-1}w_{j}\right)}. \tag{52}\]
The expectation of \(c_{k}\) then is
\[\mathrm{E}\{c_{k}\}=\frac{\sum_{i}w_{i}w_{i+k}\left(C_{k}+\mu^{2}\right)}{\sum_{i}w_{i}w_{i+k}}+\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}\left(C_{j-i}+\mu^{2}\right)}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}}-\frac{\sum_{i}\sum\limits_{j=0}^{N-1}w_{i}w_{i+k}w_{j}\left(C_{j-i}+C_{j-(i+k)}+2\mu^{2}\right)}{\left(\sum_{i}w_{i}w_{i+k}\right)\left(\sum\limits_{i=0}^{N-1}w_{i}\right)} \tag{53}\]
\[=C_{k}+\frac{\sum\limits_{i=0}^{N-1}\sum\limits_{j=0}^{N-1}w_{i}w_{j}C_{j-i}}{\left(\sum\limits_{i=0}^{N-1}w_{i}\right)^{2}}-\frac{\sum_{i}\sum\limits_{j=0}^{N-1}w_{i}w_{i+k}w_{j}\left(C_{j-i}+C_{i+k-j}\right)}{\left(\sum_{i}w_{i}w_{i+k}\right)\left(\sum\limits_{i=0}^{N-1}w_{i}\right)} \tag{54}\]
\[=C_{k}+\sigma_{\bar{x}}^{2}-\frac{\sum_{i}\sum\limits_{j=0}^{N-1}w_{i}w_{i+k}w_{j}\left(C_{j-i}+C_{i+k-j}\right)}{\left(\sum_{i}w_{i}w_{i+k}\right)\left(\sum\limits_{i=0}^{N-1}w_{i}\right)}, \tag{55}\]
where all \(\mu^{2}\) terms cancel and the evenness of the covariance function, \(C_{j-(i+k)}=C_{i+k-j}\), has been used.

## Appendix C Derivation of Eqs. (27) and (28)

From Eq. (24) it follows, now with \(\sum_{i}\) as shorthand for \(\sum_{i=\max(0,-k)}^{\min(N_{1},N_{2}-k)-1}\), that
\[c_{k}=\frac{\sum_{i}w_{1,i}w_{2,i+k}\tilde{x}_{1,i}\tilde{x}_{2,i+k}}{\sum_{i}w_{1,i}w_{2,i+k}} \tag{56}\]
\[=\frac{\sum_{i}w_{1,i}w_{2,i+k}\left(x_{1,i}-\bar{x}_{1}\right)\left(x_{2,i+k}-\bar{x}_{2}\right)}{\sum_{i}w_{1,i}w_{2,i+k}} \tag{57}\]
\[=\frac{\sum_{i}w_{1,i}w_{2,i+k}x_{1,i}x_{2,i+k}}{\sum_{i}w_{1,i}w_{2,i+k}}-\bar{x}_{2}\,\frac{\sum_{i}w_{1,i}w_{2,i+k}x_{1,i}}{\sum_{i}w_{1,i}w_{2,i+k}}-\bar{x}_{1}\,\frac{\sum_{i}w_{1,i}w_{2,i+k}x_{2,i+k}}{\sum_{i}w_{1,i}w_{2,i+k}}+\bar{x}_{1}\bar{x}_{2}.\]
Inserting \(\bar{x}_{1}=\sum_{j=0}^{N_{1}-1}w_{1,j}x_{1,j}/\sum_{j=0}^{N_{1}-1}w_{1,j}\) and \(\bar{x}_{2}=\sum_{j=0}^{N_{2}-1}w_{2,j}x_{2,j}/\sum_{j=0}^{N_{2}-1}w_{2,j}\) yields
\[c_{k}=\frac{\sum_{i}w_{1,i}w_{2,i+k}x_{1,i}x_{2,i+k}}{\sum_{i}w_{1,i}w_{2,i+k}}+\frac{\sum\limits_{i=0}^{N_{1}-1}\sum\limits_{j=0}^{N_{2}-1}w_{1,i}w_{2,j}x_{1,i}x_{2,j}}{\left(\sum\limits_{i=0}^{N_{1}-1}w_{1,i}\right)\left(\sum\limits_{i=0}^{N_{2}-1}w_{2,i}\right)}-\frac{\sum_{i}\sum\limits_{j=0}^{N_{2}-1}w_{1,i}w_{2,i+k}w_{2,j}x_{1,i}x_{2,j}}{\left(\sum_{i}w_{1,i}w_{2,i+k}\right)\left(\sum\limits_{i=0}^{N_{2}-1}w_{2,i}\right)}-\frac{\sum_{i}\sum\limits_{j=0}^{N_{1}-1}w_{1,i}w_{2,i+k}w_{1,j}x_{1,j}x_{2,i+k}}{\left(\sum_{i}w_{1,i}w_{2,i+k}\right)\left(\sum\limits_{i=0}^{N_{1}-1}w_{1,i}\right)}. \tag{61}\]
The expectation of \(c_{k}\) then is
\[\mathrm{E}\{c_{k}\}=\frac{\sum_{i}w_{1,i}w_{2,i+k}\left(C_{k}+\mu_{1}\mu_{2}\right)}{\sum_{i}w_{1,i}w_{2,i+k}}+\frac{\sum\limits_{i=0}^{N_{1}-1}\sum\limits_{j=0}^{N_{2}-1}w_{1,i}w_{2,j}\left(C_{j-i}+\mu_{1}\mu_{2}\right)}{\left(\sum\limits_{i=0}^{N_{1}-1}w_{1,i}\right)\left(\sum\limits_{i=0}^{N_{2}-1}w_{2,i}\right)}-\frac{\sum_{i}\sum\limits_{j=0}^{N_{2}-1}w_{1,i}w_{2,i+k}w_{2,j}\left(C_{j-i}+\mu_{1}\mu_{2}\right)}{\left(\sum_{i}w_{1,i}w_{2,i+k}\right)\left(\sum\limits_{i=0}^{N_{2}-1}w_{2,i}\right)}-\frac{\sum_{i}\sum\limits_{j=0}^{N_{1}-1}w_{1,i}w_{2,i+k}w_{1,j}\left(C_{i+k-j}+\mu_{1}\mu_{2}\right)}{\left(\sum_{i}w_{1,i}w_{2,i+k}\right)\left(\sum\limits_{i=0}^{N_{1}-1}w_{1,i}\right)} \tag{62}\]
\[=C_{k}+\frac{\sum\limits_{i=0}^{N_{1}-1}\sum\limits_{j=0}^{N_{2}-1}w_{1,i}w_{2,j}C_{j-i}}{\left(\sum\limits_{i=0}^{N_{1}-1}w_{1,i}\right)\left(\sum\limits_{i=0}^{N_{2}-1}w_{2,i}\right)}-\frac{\sum_{i}\sum\limits_{j=0}^{N_{2}-1}w_{1,i}w_{2,i+k}w_{2,j}C_{j-i}}{\left(\sum_{i}w_{1,i}w_{2,i+k}\right)\left(\sum\limits_{i=0}^{N_{2}-1}w_{2,i}\right)}-\frac{\sum_{i}\sum\limits_{j=0}^{N_{1}-1}w_{1,i}w_{2,i+k}w_{1,j}C_{i+k-j}}{\left(\sum_{i}w_{1,i}w_{2,i+k}\right)\left(\sum\limits_{i=0}^{N_{1}-1}w_{1,i}\right)},\]
where the four \(\mu_{1}\mu_{2}\) terms cancel.
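As a consistency check of the Appendix B result, the following sketch compares the Monte Carlo mean of the weighted auto-covariance estimate against the bias prediction (55). The MA example, weight choice, lag, and all names are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, k, b = 40, 1, np.array([0.6, 0.3, 0.1])   # lag k = 1 (assumed example)
w = rng.uniform(0.5, 1.5, N)
sw = np.sum(w)

C = np.zeros(2 * N + 1)                      # true autocovariance, lags -N..N
for m in range(len(b)):
    C[N + m] = C[N - m] = np.dot(b[: len(b) - m], b[m:])
Ck = lambda m: C[N + m]

lo, hi = max(0, -k), min(N, N - k)           # valid index range for lag k
ww = np.sum(w[lo:hi] * w[lo + k : hi + k])
sigma_xbar2 = sum(w[i] * w[j] * Ck(j - i) for i in range(N) for j in range(N)) / sw**2
bias = sum(w[i] * w[i + k] * w[j] * (Ck(j - i) + Ck(i + k - j))
           for i in range(lo, hi) for j in range(N)) / (ww * sw)
predicted = Ck(k) + sigma_xbar2 - bias       # Eq. (55)

est = []
for _ in range(20000):
    e = rng.normal(size=N + len(b) - 1)
    x = np.convolve(e, b, mode="valid")      # correlated, zero-mean samples
    xt = x - np.sum(w * x) / sw              # weighted mean removal
    est.append(np.sum(w[lo:hi] * w[lo + k : hi + k]
                      * xt[lo:hi] * xt[lo + k : hi + k]) / ww)
print(np.mean(est), predicted)               # should agree up to MC error
```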
2301.10520
Ultra-NeRF: Neural Radiance Fields for Ultrasound Imaging
We present a physics-enhanced implicit neural representation (INR) for ultrasound (US) imaging that learns tissue properties from overlapping US sweeps. Our proposed method leverages a ray-tracing-based neural rendering for novel view US synthesis. Recent publications demonstrated that INR models could encode a representation of a three-dimensional scene from a set of two-dimensional US frames. However, these models fail to consider the view-dependent changes in appearance and geometry intrinsic to US imaging. In our work, we discuss direction-dependent changes in the scene and show that a physics-inspired rendering improves the fidelity of US image synthesis. In particular, we demonstrate experimentally that our proposed method generates geometrically accurate B-mode images for regions with ambiguous representation owing to view-dependent differences of the US images. We conduct our experiments using simulated B-mode US sweeps of the liver and acquired US sweeps of a spine phantom tracked with a robotic arm. The experiments corroborate that our method generates US frames that enable consistent volume compounding from previously unseen views. To the best of our knowledge, the presented work is the first to address view-dependent US image synthesis using INR.
Magdalena Wysocki, Mohammad Farid Azampour, Christine Eilers, Benjamin Busam, Mehrdad Salehi, Nassir Navab
2023-01-25T11:02:09Z
http://arxiv.org/abs/2301.10520v2
# Ultra-NeRF: Neural Radiance Fields for Ultrasound Imaging

###### Abstract

We present a physics-enhanced implicit neural representation (INR) for ultrasound (US) imaging that learns tissue properties from overlapping US sweeps. Our proposed method leverages a ray-tracing-based neural rendering for novel view US synthesis. Recent publications demonstrated that INR models could encode a representation of a three-dimensional scene from a set of two-dimensional US frames. However, these models fail to consider the view-dependent changes in appearance and geometry intrinsic to US imaging. In our work, we discuss direction-dependent changes in the scene and show that a physics-inspired rendering improves the fidelity of US image synthesis. In particular, we demonstrate experimentally that our proposed method generates geometrically accurate B-mode images for regions with ambiguous representation owing to view-dependent differences of the US images. We conduct our experiments using simulated B-mode US sweeps of the liver and acquired US sweeps of a spine phantom tracked with a robotic arm. The experiments corroborate that our method generates US frames that enable consistent volume compounding from previously unseen views. To the best of our knowledge, the presented work is the first to address view-dependent US image synthesis using INR.

Computer Aided Medical Procedures & Augmented Reality, Technische Universitat Munchen

**Keywords:** ultrasound, neural radiance fields, implicit neural representation

## 1 Introduction

3D visualization of an anatomy significantly improves our understanding of the underlying pathology; however, most US machines used in practice deliver only a single cross-sectional view of an anatomy at a time. Sonographers, through extensive training and clinical expertise, fuse these 2D scans into a 3D model in their minds. The anisotropic nature of US imaging contributes to the increased difficulty of this task. Since an image of a specific region in the patient's body depends on the probe position, a mental 3D model is constantly updated with images that may carry contradicting information for the same region. Nevertheless, a trained operator approaches this problem effortlessly owing to their awareness of the anatomy and of the effect of the probe position on its 2D representation. However, this manual visual analysis remains expensive and error-prone. A system that can reconstruct an US volume could reduce the cost and error rate of US acquisition. Much research has recently been devoted to utilizing 3D US in diagnostic applications as well as in interventional radiology. 3D US volumes are conventionally generated using special wobbler probes, 2D transducers, or tracked probes to compound a 3D volume from 2D slices (Busam et al., 2018). In the last decade, new approaches such as computational sonography (Hennersperger et al., 2015), sensorless 3D US (Prevost et al., 2017), and deep learning-based image formation techniques (Simson et al., 2018) have aimed at improving the 3D compounding quality of this portable and affordable modality. Our proposed approach focuses on learning the 3D structure of an anatomy using 2D US images scanned from different viewpoints. This method enables us to generate isotropic 3D US volumes and introduces a new implicit 3D US representation for the medical image processing community to explore.
Although viewing-direction dependency is a prominent characteristic of US imaging, it is not a unique property of US. To some extent, a similar phenomenon characterizes natural images. For instance, since the Lambertian assumption does not hold for most real-world objects, appearance due to reflections might be inconsistent between views (Gao et al., 2022). 3D scene reconstruction from a set of 2D view-dependent observations has hence been extensively studied (Seitz and Dyer, 1999; Niemeyer et al., 2020). An important aspect of any reconstruction method is the scene representation, which can be either explicit (e.g., volumetric grids) or implicit (e.g., implicit functions). Implicit scene representations, such as (truncated) signed distance functions ((T)SDFs) (Newcombe et al., 2011), represent a 3D scene as a function. Since neural networks are universal function approximators, they can be used to parametrize an implicit representation (Tewari et al., 2022). This fact has been the basis of a recent development in neural volumetric representation. In particular, Neural Radiance Fields (NeRF) emerged as a new, potent method for generating photorealistic, view-dependent images of a static 3D scene from a collection of pose-annotated images (Mildenhall et al., 2022). In computer vision, NeRF became a baseline for various research directions such as dynamic scenes (Park et al., 2021), large-scale scenes (Rematas et al., 2022), or scene generalization (Yu et al., 2021). Moreover, as presented in iNeRF (Yen-Chen et al., 2021), representing a 3D model as a neural network provides a reference for 6DoF pose estimation, which can potentially find an application in US tracking. The idea behind NeRF, however, was primarily developed for the purpose of natural image synthesis and takes advantage of established methods from computer graphics. In this paper, we propose an implicit neural representation for US, exemplified with NeRF, that facilitates the synthesis of B-mode images from novel viewpoints. Our contributions are as follows:

* a method that synthesises accurate B-mode images by learning the view-dependent appearance and geometry of a scene from multiple US sweeps;
* a physically sound rendering formulation based on a ray-tracing model, which considers the isotropic tissue characteristics important to US;
* open source datasets1 comprising multiple tracked 2D US sweeps with highly accurate pose annotations and different viewpoints.

Footnote 1: The link to the data and implementation will be provided upon acceptance.

In our experiments, we use synthetic liver and spine phantom data. We evaluate our method quantitatively and qualitatively. In particular, we reason about the shortcomings of learning geometry without taking into account a rendering based on the physics behind US. To the best of our knowledge, this paper presents a new implicit neural representation for US that for the first time considers the anisotropic characteristics of US.

## 2 Related Work

Implicit representations in the form of (T)SDFs have been used for implicit geometric reconstruction (Newcombe et al., 2011). Recently, INR has been proposed to express signals as a neural network (Sitzmann et al., 2020); such a network can be seen as a universal function approximator that represents a scene as a continuous function parameterised by its weights. As a consequence, it allows for a mapping from a 3D continuous coordinate space to intensity to store information about a 3D scene. As presented by Gu et al.
(2022), we can exploit INR models to represent a 3D US volume learnt from a set of 2D US images. However, parametrizing an US volume using a 3D continuous coordinate space does not address the impact of the viewing direction on the observation. The progress in neural continuous shape representation sparked interest in its application to photorealistic novel view synthesis. In particular, in a seminal work introducing NeRF (Mildenhall et al., 2022), the authors propose a framework that combines a neural representation of a scene and fully differentiable volumetric rendering. In NeRF, the representation of a scene is expressed by a fully-connected neural network. The network maps a 5D vector (a spatial location \(\mathbf{x}\) and viewing direction \(\mathbf{d}\)) to volume density \(\sigma\) and radiance \(\mathbf{c}\). To learn this mapping, a per-pixel camera ray is defined as \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\), with the camera origin \(\mathbf{o}\) in the center of the pixel defining the near plane. The final colour value of each pixel is defined by the following formulas:
\[C(\mathbf{r})=\int_{t_{n}}^{t_{f}}T(t)\sigma(t)c(t)\,dt \tag{1}\]
\[\text{where }T(t)=\exp\left(-\int_{t_{n}}^{t}\sigma(\mathbf{r}(s))\,ds\right). \tag{2}\]
The volume rendering integral in Equation (1) accumulates a radiance field along a ray; therefore, each sampled position contributes to the final colour of a pixel. The contribution of each sample is controlled by the transmittance factor \(T(t)\) (Equation (2)). Finally, the rendered pixel value is compared with a value in an image using a photometric loss.

Figure 1: a) Each ray \(r\) corresponds to a single scan-line with origin \(\mathbf{o}\) at the top of the image plane and direction \(\mathbf{d}\) pointing along the scan-line. Query points are defined by their spatial location \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\). b) White intensities are the result of max-value compounding of images from all available angles. Red/blue/green intensities show the composition of the white intensities based on their view angle.

Since its introduction, NeRF-based methods have demonstrated impressive results in various fields, including medical imaging. For instance, MedNeRF proposes a NeRF framework to reconstruct CT-projections from X-ray (Corona-Figueroa et al., 2022), and EndoNeRF adopts NeRF for surgical scene 3D reconstruction (Wang et al., 2022). Yet, surprisingly little investigation has been done to explore the potential of neural volumetric implicit representations for medical US. One of the few studies (Yeung et al., 2021; Gu et al., 2022; Song et al., 2022) focuses on reconstruction of a spine using the NeRF algorithm (Li et al., 2021). In that work, the authors demonstrate that NeRF can render high-quality US images. However, they apply NeRF without considering a volumetric rendering method which respects US physics. To address this shortcoming, we reformulate the rendering step to include the underlying US physics and incorporate it into the NeRF framework.
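For illustration, here is a minimal sketch of the standard discretization of the rendering integral (1)-(2), \(C(\mathbf{r})\approx\sum_{i}T_{i}\left(1-e^{-\sigma_{i}\delta_{i}}\right)\mathbf{c}_{i}\), as used in NeRF-style pipelines. All function and variable names are our own, not from the paper:

```python
import numpy as np

def render_ray(sigmas, colors, ts):
    """Quadrature of Eq. (1): accumulate radiance along one ray."""
    deltas = np.diff(ts)                          # spacing between samples
    alphas = 1.0 - np.exp(-sigmas[:-1] * deltas)  # per-sample opacity
    # T_i from Eq. (2): transmittance accumulated up to each sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return np.sum(weights[:, None] * colors[:-1], axis=0)

ts = np.linspace(0.0, 1.0, 65)              # sample positions between near/far
sigmas = np.full(65, 5.0)                   # densities predicted by the MLP
colors = np.tile([0.2, 0.5, 0.7], (65, 1))  # radiance predicted by the MLP
print(render_ray(sigmas, colors, ts))       # rendered pixel colour
```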
## 3 Method

### Background: US Physics

US images are generated by mapping reflected sounds from the tissue within a thin transversal slice of the body. Intrinsic acoustic parameters such as the travelling speed of sound, acoustic impedance, attenuation coefficient, and the spatial distribution of sound-scattering micro-structures are the main contributing factors affecting the sound reflection within the tissue. Knowing the mapping of these parameters in space, one would be able to simulate renderings of 3D US from arbitrary views (Salehi et al., 2015).

### Ultrasound NeRF

Figure 2 presents our framework in the single-frame case. The method follows the original NeRF w.r.t. its two components: a neural network (Figure 2(a)) and volumetric rendering (Figure 2(b)). The network represents a volume as a 3D vector-valued continuous function that maps a position \(q=(x,y,z)\) in a Cartesian coordinate space to a parameter vector \(\theta\in\mathbb{R}^{5}\), whose elements correspond to attenuation \(\alpha\), reflectance \(\beta\), border probability \(\rho_{b}\), scattering density \(\rho_{s}\), and scattering intensity \(\phi\), and which composes a final pixel intensity as outlined in Section 3.3. The parameter vector consists of isotropic physical tissue properties, hence we do not provide explicit viewing directions to the network. This ensures that the regressed physical properties remain consistent between views, whereas the view-dependent changes are enforced by the rendering. Figure 1 illustrates the definition of a ray and of the query points. We encourage the reader to refer to Appendix A for the network details.

Figure 2: a) For a query point \(q\in\mathbb{R}^{3}\) sampled along a ray, the MLP extracts a parameter vector \(\theta\in\mathbb{R}^{5}\) from an implicit volume representation; b) from parameters at queried and preceding positions along the ray, the rendering computes a per-query intensity. Resulting intensities compose an US image. The output and target frames are compared using a weighted sum of the Structural Similarity Index Measure (SSIM) (Wang et al., 2004) and the squared error loss (L2).

### Ultrasound Volume Rendering

Our US volume rendering model builds upon the formulation presented by Salehi et al. (2015), which proposes a ray-tracing-based simulation model. The advantage of this model is its flexibility in representing US artifacts coming from backscattering effects. For each scan-line \(r\), Equation (3) defines a recorded US echo \(E(r,t)\), measured at distance \(t\) from the transducer, as a sum of reflected \(R(r,t)\) and backscattered \(B(r,t)\) energy:
\[E(r,t)=R(r,t)+B(r,t). \tag{3}\]
The reflected energy is defined by
\[R(r,t)=|I(r,t)\cdot\beta(r,t)|\cdot PSF(r)\otimes G(r^{\prime},t^{\prime}), \tag{4}\]
where the term \(I(r,t)\) is the remaining energy at the distance \(t\), \(\beta(r,t)\) represents the reflection coefficient, and \(PSF(r)\) is a predefined 2D point-spread function. \(G(r,t)\) takes the value 1 for points at a boundary and 0 otherwise. We compute it by sampling from a Bernoulli distribution parameterized by the border probability \(\rho_{b}\). This probabilistic approach to the border definition reflects the network's uncertainty about the interaction of the ray with a tissue border. The energy loss is traced along each scan-line, and the remaining energy \(I(r,t)\) is modelled using the loss of energy due to reflection \(\beta\) at the boundaries and attenuation, compensated by applying an unknown time-gain compensation (TGC) function. The final formulation for \(I(r,t)\) assumes an initial unit intensity \(I_{0}(r,0)\) and a loss of energy at each step \(dt\). We can further simplify the resulting equation by modeling the compensated attenuation \(\alpha\) with a single parameter, since TGC is a scaling factor:
\[I(r,t)=I_{0}\cdot\prod_{n=0}^{t-1}\left[1-\beta(r,n)\cdot G(r,n)\right]\cdot\exp\left(-\int_{n=0}^{t-1}\alpha\cdot f\,dt\right). \tag{5}\]
Consequently, the \(\alpha\)'s correspond to the physical attenuation only up to an unknown scaling factor.
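To make the recursion concrete, below is a minimal sketch of the reflection part of Eqs. (3)-(5) for a single scan-line; the backscattered term \(B(r,t)\), discussed next, and the PSF convolution are omitted, and all names are our own assumptions:

```python
import numpy as np

def render_scanline(alpha, beta, rho_b, dt=1.0, f=1.0, rng=None):
    """Remaining energy I(r,t) per Eq. (5) and reflected echo per Eq. (4),
    without the PSF convolution and backscatter (illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    G = rng.binomial(1, rho_b)          # boundary indicator sampled from rho_b
    T = len(alpha)
    I = np.empty(T)
    I[0] = 1.0                          # unit initial intensity I_0
    for t in range(1, T):               # per-step reflection and attenuation loss
        I[t] = I[t - 1] * (1.0 - beta[t - 1] * G[t - 1]) \
               * np.exp(-alpha[t - 1] * f * dt)
    return np.abs(I * beta) * G         # reflected energy along the scan-line

T = 128
alpha = np.full(T, 0.01)                # attenuation (up to the TGC scaling)
beta = np.zeros(T); beta[60] = 0.8      # one strong reflector mid-ray
rho_b = np.zeros(T); rho_b[60] = 1.0    # certain tissue boundary there
print(render_scanline(alpha, beta, rho_b).max())
```

Everything beyond the reflector receives exponentially less energy, which is how the model produces acoustic shadowing.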
The backscattered energy \(B(r,t)\) from the scattering medium is a function of the remaining energy \(I(r,t)\) and a 2D map of scattering points \(T(r,t)\):
\[B(r,t)=I(r,t)\cdot PSF(r)\otimes T(r^{\prime},t^{\prime}). \tag{6}\]
The map \(T(r,t)\) is learnt using a generative model inspired by (Zhang et al., 2020):
\[T(r,t)=H(r,t)\cdot\phi(r,t). \tag{7}\]
In this model, \(H(r,t)\) takes the value 1 if a query point is a scattering point and 0 otherwise. This function is sampled from a Bernoulli distribution parameterized by the scattering density \(\rho_{s}\) and represents the uncertainty of whether the scattering effect of a scattering point is observed. The intensity of a scattering point is controlled by its amplitude \(\phi\), which is modelled as a sample from a normal distribution with mean \(\phi\) and unit variance.

## 4 Experiments & Results

**Data.** We acquired two types of data: synthetic and phantom B-mode images. For both datasets, sweeps were recorded with different, constant perpendicular and tilt angles w.r.t. the acquisition direction (Figure 1(b)). We tested our method on 6 sweeps covering views not present in the training set. We encourage the reader to refer to Appendix B for details about the datasets.

Figure 3: Our method infers novel views in phantom and synthetic data (bottom row). However, it does not produce artifacts inconsistent with our ray-based model, such as reverberations (blue), and it fails at representing complex structures (yellow).

**Quantitative Results.** Table 1 presents an evaluation of the quality of novel view synthesis as measured in terms of SSIM (Wang et al., 2004) between synthetic and reference testing data. To analyze the effect of rendering, we compared Ultra-NeRF to an implicit neural representation model without rendering. With rendering, we achieve better or similar results on our phantom data (\(SSIM_{median}=0.54\) for tilted and \(SSIM_{median}=0.58\) for perpendicular views), whereas the method without rendering attains higher SSIM values on our synthetic dataset (\(SSIM_{median}=0.50\) - tilted, \(SSIM_{median}=0.54\) - perpendicular).

**Qualitative Results.** Figure 3 illustrates examples of synthetic B-mode images generated with Ultra-NeRF, while Figure 4 demonstrates the significance of rendering. We evaluated the quality of novel views by comparing volumes compounded from US images generated using Ultra-NeRF with and without the rendering function. We compounded volumes using the compounding algorithm of ImFusion.2

Footnote 2: ImFusion GmbH, Munich, Germany, software version 2.42

## 5 Discussion & Conclusion

In this paper, we present Ultra-NeRF, a volumetric INR of 3D US learnt from a set of 2D US images. Unlike prior methods, our approach considers the anisotropic characteristics of US and addresses US volumetric rendering in a way that follows the physics of US. The experiments corroborate that Ultra-NeRF incorporates information about the viewing direction into a volumetric INR, which allows for the view-dependent synthesis of US frames, resulting in high-quality B-mode images. The decomposition of a rendered B-mode image in the parameter space shown in Figure 5 further illustrates that Ultra-NeRF identifies the tissue characteristics leading to differences in observed intensities. For example, it correctly determines a strongly reflective structure (a rib) by regressing a region with a higher reflectance and therefore produces an acoustic shadow.

Table 1: SSIM between synthetic and reference B-mode images.

| dataset | views | with rendering: median / mean / min / max | w/o rendering: median / mean / min / max |
| --- | --- | --- | --- |
| liver synthetic | tilted | 0.47 / 0.45 / 0.41 / 0.60 | **0.50** / 0.51 / 0.46 / 0.59 |
| liver synthetic | perpendicular | 0.49 / 0.49 / 0.44 / 0.57 | **0.54** / 0.54 / 0.47 / 0.62 |
| spine phantom | tilted | **0.54** / 0.51 / 0.36 / 0.60 | 0.50 / 0.48 / 0.36 / 0.59 |
| spine phantom | perpendicular | **0.58** / 0.54 / 0.42 / 0.65 | **0.58** / 0.54 / 0.41 / 0.64 |
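For reference, a minimal sketch of how an SSIM score like those in Table 1 can be computed between a synthesized and a reference B-mode frame; the file names are hypothetical, and scikit-image is one common implementation (not necessarily the one used in the paper):

```python
import numpy as np
from skimage.metrics import structural_similarity

synth = np.load("bmode_synth.npy")  # hypothetical generated frame, values in [0, 1]
ref = np.load("bmode_ref.npy")      # hypothetical reference frame, values in [0, 1]
score = structural_similarity(ref, synth, data_range=1.0)
print(f"SSIM: {score:.2f}")
```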
We propose a physically sound rendering method; however, further progress towards more realistic B-mode rendering requires addressing ray interactions and the Fresnel effect. As shown in Figure 3, although the method learns accurate geometry, it does not allow rendering complex US artifacts, such as reverberations. Additionally, to improve rendering results, future work may involve using deep learning techniques to establish a point-spread function that reflects the underlying backscattering pattern. Another potential area for future research is regularization; the decomposition into the parameter space is under-constrained, and thus the outcome depends strongly on the initial network configuration.

Figure 4: Compounded volumes: without rendering (middle row), the model is not aware of the viewing direction, hence occluded parts of the lamina are reconstructed (red). By adding the rendering function, we introduce the view-direction dependency needed to reconstruct anisotropic phenomena.

Figure 5: Intermediate maps illustrating each element of the rendering parameter vector \(\theta\), corresponding to a tissue's physical properties.

To the best of our knowledge, this is the first work that explores the potential of implicit neural representations for medical US by addressing a rendering method specially designed for US. It therefore supports progress towards integrating the implicit 3D US representation exemplified with NeRF into medical applications. We believe that this work will inspire further exploration of implicit representations in US imaging for medical purposes.
2310.16610
Consensus-Based Optimization with Truncated Noise
Consensus-based optimization (CBO) is a versatile multi-particle metaheuristic optimization method suitable for performing nonconvex and nonsmooth global optimizations in high dimensions. It has proven effective in various applications while at the same time being amenable to a theoretical convergence analysis. In this paper, we explore a variant of CBO, which incorporates truncated noise in order to enhance the well-behavedness of the statistics of the law of the dynamics. By introducing this additional truncation in the noise term of the CBO dynamics, we achieve that, in contrast to the original version, higher moments of the law of the particle system can be effectively bounded. As a result, our proposed variant exhibits enhanced convergence performance, allowing in particular for wider flexibility in choosing the noise parameter of the method as we confirm experimentally. By analyzing the time-evolution of the Wasserstein-$2$ distance between the empirical measure of the interacting particle system and the global minimizer of the objective function, we rigorously prove convergence in expectation of the proposed CBO variant requiring only minimal assumptions on the objective function and on the initialization. Numerical evidences demonstrate the benefit of truncating the noise in CBO.
Massimo Fornasier, Peter Richtárik, Konstantin Riedl, Lukang Sun
2023-10-25T13:07:34Z
http://arxiv.org/abs/2310.16610v2
# Consensus-Based Optimization with Truncated Noise

###### Abstract

Consensus-based optimization (CBO) is a versatile multi-particle metaheuristic optimization method suitable for performing nonconvex and nonsmooth global optimizations in high dimensions. It has proven effective in various applications while at the same time being amenable to a theoretical convergence analysis. In this paper, we explore a variant of CBO, which incorporates truncated noise in order to enhance the well-behavedness of the statistics of the law of the dynamics. By introducing this additional truncation in the noise term of the CBO dynamics, we achieve that, in contrast to the original version, higher moments of the law of the particle system can be effectively bounded. As a result, our proposed variant exhibits enhanced convergence performance, allowing in particular for wider flexibility in choosing the parameters of the method, as we confirm experimentally. By analyzing the time-evolution of the Wasserstein-2 distance between the empirical measure of the interacting particle system and the global minimizer of the objective function, we rigorously prove convergence in expectation of the proposed CBO variant, requiring only minimal assumptions on the objective function and on the initialization. Numerical evidence clearly demonstrates the benefit of truncating the noise in CBO.

**Keywords:** global optimization, derivative-free optimization, nonsmoothness, nonconvexity, metaheuristics, consensus-based optimization, truncated noise

**AMS subject classifications:** 65K10, 90C26, 90C56, 35Q90, 35Q84

## 1 Introduction

The search for a global minimizer \(v^{*}\) of a potentially nonconvex and nonsmooth cost function \[f:\mathbb{R}^{d}\to\mathbb{R}\] holds significant importance in a variety of applications throughout applied mathematics, science and technology, engineering, and machine learning. Historically, a class of methods known as metaheuristics [3, 5] has been developed to address this inherently challenging and, in general, NP-hard problem. Examples include evolutionary programming [19], genetic algorithms [31], particle swarm optimization (PSO) [36], simulated annealing [1], and many others. These methods work by combining local improvement procedures and global strategies, orchestrating deterministic and stochastic advances with the aim of creating a method capable of robustly and efficiently finding the globally minimizing argument \(v^{*}\) of \(f\). However, despite their empirical success and widespread adoption in practice, most metaheuristics lack a solid mathematical foundation that could guarantee their robust convergence to global minimizers under reasonable assumptions. Motivated by the urge to devise algorithms which converge provably, a novel class of metaheuristics, so-called consensus-based optimization (CBO), originally proposed by the authors of [42], has recently emerged in the literature. Due to the inherent simplicity in the design of CBO, this class of optimization algorithms lends itself to a rigorous theoretical analysis, as demonstrated in particular in the works [11, 13, 23, 24, 27, 28, 39].
However, this recent line of research does not just offer a promising avenue for establishing a thorough mathematical framework for understanding the numerically observed successes of CBO methods [13, 15, 21, 24, 44], but beyond that allows one to explain the effective use of conceptually similar and widespread methods such as PSO, as well as of, at first glance, completely different optimization algorithms such as stochastic gradient descent (SGD). While the first connection is to be expected and has by now been made fairly rigorous [17, 34, 26], due to CBO indisputably taking PSO as inspiration, the second observation is somewhat surprising, as it builds a bridge between derivative-free metaheuristics and gradient-based learning algorithms. Despite CBO solely relying on evaluations of the objective function, recent work [45] reveals an intrinsic SGD-like behavior of CBO itself by interpreting it as a certain stochastic relaxation of gradient descent, which provably overcomes energy barriers of nonconvex functions. These perspectives, and in particular the already well-investigated convergence behavior of standard CBO, encourage the exploration of improvements to the method in order to allow overcoming the limitations of traditional metaheuristics mentioned at the start. For recent surveys on CBO we refer to [47, 25]. While the original CBO model [42] has been adapted to solve constrained optimizations [4, 9, 14], optimizations on manifolds [20, 21, 22, 37, 29], multi-objective optimization problems [7, 38, 8], saddle point problems [33] or the task of sampling [12], and has been extended to make use of memory mechanisms [6, 44, 48], gradient information [44, 46], momentum [16], jump-diffusion processes [35] or localization kernels for polarization [10], we focus in this work on a variation of the original model, which incorporates a truncation in the noise term of the dynamics. More formally, given a time horizon \(T>0\), a time discretization \(t_{0}=0<\Delta t<\dots<K\Delta t=t_{K}=T\) of \([0,T]\), and user-specified parameters \(\alpha,\lambda,\sigma>0\) as well as \(v_{b},R>0\), we consider the interacting particle system \[V^{i}_{(k+1)\Delta t}-V^{i}_{k\Delta t}= -\Delta t\lambda\left(V^{i}_{k\Delta t}-\mathcal{P}_{v_{b},R} \left(v_{\alpha}(\widehat{\rho}^{N}_{k\Delta t})\right)\right)+\sigma\left( \left\|V^{i}_{k\Delta t}-v_{\alpha}(\widehat{\rho}^{N}_{k\Delta t})\right\|_ {2}\wedge M\right)B^{i}_{k\Delta t}, \tag{1}\] \[V^{i}_{0}\sim\rho_{0}\quad\text{for all }i=1,\dots,N, \tag{2}\] where \(((B^{i}_{k\Delta t})_{k=0,\dots,K-1})_{i=1,\dots,N}\) are independent, identically distributed Gaussian random vectors in \(\mathbb{R}^{d}\) with zero mean and covariance matrix \(\Delta t\text{Id}_{d}\). Equation (1) originates from a simple Euler-Maruyama time discretization [30, 43] of the system of stochastic differential equations (SDEs), expressed in Ito's form as \[dV_{t}^{i} =-\lambda\left(V_{t}^{i}-\mathcal{P}_{v_{b},R}\left(v_{\alpha}( \widehat{\rho}_{t}^{N})\right)\right)dt+\sigma\left(\left\|V_{t}^{i}-v_{\alpha} (\widehat{\rho}_{t}^{N})\right\|_{2}\wedge M\right)dB_{t}^{i} \tag{3}\] \[V_{0}^{i} \sim\rho_{0}\quad\text{for all }i=1,\ldots,N, \tag{4}\] where \(((B_{t}^{i})_{t\geq 0})_{i=1,\ldots,N}\) are now independent standard Brownian motions in \(\mathbb{R}^{d}\).
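To make the scheme (1) concrete, here is a minimal sketch of one iteration; the consensus point (6) and the projection (5) it uses are defined just below. The log-sum-exp-style shift of the weights, the Rastrigin test function, and all names are our own choices, not the authors' reference implementation:

```python
import numpy as np

def cbo_truncated_step(V, f, lam, sigma, alpha, M, R, v_b, dt, rng):
    """One Euler-Maruyama step of the CBO scheme (1) with truncated noise."""
    fs = f(V)                             # objective values of all N particles
    w = np.exp(-alpha * (fs - fs.min()))  # Laplace weights, shifted for stability
    w /= w.sum()
    v_alpha = w @ V                       # consensus point, Eq. (6)
    diff = v_alpha - v_b                  # projection onto B_R(v_b), Eq. (5)
    norm = np.linalg.norm(diff)
    v_proj = v_b + R * diff / norm if norm > R else v_alpha
    scale = np.minimum(np.linalg.norm(V - v_alpha, axis=1), M)  # truncated noise
    B = rng.normal(0.0, np.sqrt(dt), size=V.shape)              # Brownian increments
    return V - dt * lam * (V - v_proj) + sigma * scale[:, None] * B

rng = np.random.default_rng(0)
V = rng.normal(1.0, np.sqrt(2000.0), size=(100, 2))  # initialization as in Figure 1
rastrigin = lambda X: np.sum(X**2 - 10 * np.cos(2 * np.pi * X) + 10, axis=1)
for _ in range(5000):
    V = cbo_truncated_step(V, rastrigin, lam=1.0, sigma=1.0, alpha=1e5,
                           M=1.0, R=np.inf, v_b=0.0, dt=0.01, rng=rng)
print(V.mean(axis=0))                                # should approach v* = 0
```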
The empirical measure of the particles at time \(t\) is denoted by \(\widehat{\rho}_{t}^{N}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{V_{t}^{i}}\), and \(\mathcal{P}_{v_{b},R}\) is the projection map defined as \[\mathcal{P}_{v_{b},R}\left(v\right):=\begin{cases}v,&\text{if }\left\|v-v_{b} \right\|_{2}\leq R,\\ v_{b}+R\frac{v-v_{b}}{\left\|v-v_{b}\right\|_{2}},&\text{if }\left\|v-v_{b} \right\|_{2}>R.\end{cases} \tag{5}\] As a crucial assumption in this paper, the parameters \(R\) and \(v_{b}\) of the projection map \(\mathcal{P}_{v_{b},R}\) are chosen such that \(v^{*}\in B_{R}(v_{b})\). Setting such parameters can be feasible under specific circumstances, as exemplified by the regularized optimization problem \(f(v):=\operatorname{Loss}(v)+\lambda\left\|v\right\|_{2}\), wherein \(v^{*}\in B_{\operatorname{Loss}(0)/\lambda}(0)\). In the absence of prior knowledge regarding \(v_{b}\) and \(R\), a practical approach is to designate \(v_{b}=0\) and assign a sufficiently large value to \(R\). The first terms in (3) and (1), respectively, impose a deterministic drift of each particle towards the possibly projected momentaneous consensus point \(v_{\alpha}(\widehat{\rho}_{t}^{N})\), which is a weighted average of the particles' positions and computed according to \[v_{\alpha}(\widehat{\rho}_{t}^{N}):=\int v\frac{\omega_{\alpha}(v)}{\left\| \omega_{\alpha}\right\|_{L_{1}(\widehat{\rho}_{t}^{N})}}\,d\widehat{\rho}_{t} ^{N}(v). \tag{6}\] The weights \(\omega_{\alpha}(v):=\exp(-\alpha f(v))\) are motivated by the well-known Laplace principle [18], which states for any absolutely continuous probability distribution \(\varrho\) on \(\mathbb{R}^{d}\) that \[\lim_{\alpha\to\infty}\left(-\frac{1}{\alpha}\log\left(\int\omega_{\alpha}(v) \,d\varrho(v)\right)\right)=\inf_{v\in\operatorname{supp}(\varrho)}f(v) \tag{7}\] and thus justifies that \(v_{\alpha}(\widehat{\rho}_{t}^{N})\) serves as a suitable proxy for the global minimizer \(v^{*}\) given the currently available information of the particles \((V_{t}^{i})_{i=1,\ldots,N}\). The second terms in (3) and (1), respectively, encode the diffusion or exploration mechanism of the algorithm, where, in contrast to standard CBO, we truncate the noise by some fixed constant \(M>0\). We conclude and re-iterate that both the introduction of the projection \(\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\widehat{\rho}_{t}^{N})\right)\) of the consensus point and the employment of a truncation of the noise variance, \(\left(\left\|V_{t}^{i}-v_{\alpha}(\widehat{\rho}_{t}^{N})\right\|_{2}\wedge M\right)\), are the main innovations with respect to the original CBO method. We shall explain and justify these modifications in the next section.

**Motivation for using truncated noise.** In what follows we provide a heuristic explanation of the theoretical benefits of employing a truncation in the noise of CBO as in (1), (3) and (18). Let us therefore first recall that the standard variant of CBO [42] can be retrieved from the model considered in this paper by setting \(v_{b}=0\), \(R=\infty\) and \(M=\infty\).
For instance, in place of the mean-field dynamics (18), we would have \[d\overline{V}_{t}^{\text{CBO}}=-\lambda\left(\overline{V}_{t}^{\text{CBO}}-v_{ \alpha}(\rho_{t}^{\text{CBO}})\right)dt+\sigma\left\|\overline{V}_{t}^{\text{ CBO}}-v_{\alpha}(\rho_{t}^{\text{CBO}})\right\|_{2}dB_{t}.\] Owing to the Laplace principle (7), we have \(v_{\alpha}(\rho_{t}^{\text{CBO}})\approx v^{*}\) for \(\alpha\) sufficiently large, i.e., as \(\alpha\to\infty\), the former dynamics converges to \[d\overline{Y}_{t}^{\text{CBO}}=-\lambda\left(\overline{Y}_{t}^{\text{CBO}}-v^{* }\right)dt+\sigma\left\|\overline{Y}_{t}^{\text{CBO}}-v^{*}\right\|_{2}dB_{t}. \tag{8}\] Firstly, observe that here the first term imposes a direct drift to the global minimizer \(v^{*}\) and thereby induces a contracting behavior, which is on the other hand counteracted by the diffusion term, which contributes a stochastic exploration around this point. In particular, with \(\overline{Y}_{t}^{\text{CBO}}\) approaching \(v^{*}\), the exploration vanishes, so that \(\overline{Y}_{t}^{\text{CBO}}\) eventually converges deterministically to \(v^{*}\). Conversely, as long as \(\overline{Y}_{t}^{\text{CBO}}\) is far away from \(v^{*}\), the order of the random exploration is strong. By Ito's formula we have \[\frac{d}{dt}\mathbb{E}\left[\left\|\overline{Y}_{t}^{\text{CBO}}-v^{*}\right\| _{2}^{p}\right]=p\left(-\lambda+\frac{\sigma^{2}}{2}\left(p+d-2\right)\right) \mathbb{E}\left[\left\|\overline{Y}_{t}^{\text{CBO}}-v^{*}\right\|_{2}^{p}\right] \tag{9}\] and thus \[\mathbb{E}\left[\left\|\overline{Y}_{t}^{\text{CBO}}-v^{*}\right\|_{2}^{p} \right]=\exp\left(p\left(-\lambda+\frac{\sigma^{2}}{2}\left(p+d-2\right) \right)t\right)\mathbb{E}\left[\left\|\overline{Y}_{0}^{\text{CBO}}-v^{*} \right\|_{2}^{p}\right] \tag{10}\] for any \(p\geq 1\). Denoting with \(\mu_{t}^{\text{CBO}}\) the law of \(\overline{Y}_{t}^{\text{CBO}}\), this means that, given any \(\lambda,\sigma>0\), there is some threshold exponent \(p^{*}(\lambda,\sigma,d)\) such that \[\begin{split}\lim_{t\to\infty}W_{p}\left(\mu_{t}^{\text{CBO}},\delta_{v^{*}} \right)&=\lim_{t\to\infty}\left(\mathbb{E}\left[\left\| \overline{Y}_{t}^{\text{CBO}}-v^{*}\right\|_{2}^{p}\right]\right)^{1/p}\\ &=\lim_{t\to\infty}\exp\left(\left(-\lambda+\frac{\sigma^{2}}{2} \left(p+d-2\right)\right)t\right)\left(\mathbb{E}\left[\left\|\overline{Y}_{0 }^{\text{CBO}}-v^{*}\right\|_{2}^{p}\right]\right)^{1/p}=0\end{split} \tag{11}\] for \(p<p^{*}\), while for \(p>p^{*}\) it holds that \[\begin{split}\lim_{t\to\infty}W_{p}\left(\mu_{t}^{\text{CBO}},\delta_{v^{*}} \right)&=\lim_{t\to\infty}\left(\mathbb{E}\left[\left\| \overline{Y}_{t}^{\text{CBO}}-v^{*}\right\|_{2}^{p}\right]\right)^{1/p}\\ &=\lim_{t\to\infty}\exp\left(\left(-\lambda+\frac{\sigma^{2}}{2} \left(p+d-2\right)\right)t\right)\left(\mathbb{E}\left[\left\|\overline{Y}_{0 }^{\text{CBO}}-v^{*}\right\|_{2}^{p}\right]\right)^{1/p}=\infty.\end{split} \tag{12}\] These computations suggest that the distribution of \(\mu_{t}^{\text{CBO}}\) exhibits characteristics of heavy tails, thereby increasing the likelihood of encountering outliers in a sample drawn from \(\mu_{t}^{\text{CBO}}\).
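This effect can be illustrated numerically: under the same driving noise, the second moment of the dynamics (8) grows when \(2\lambda<\sigma^{2}d\), while its truncated counterpart introduced below (cf. (13)) stays bounded. The following is a minimal sketch; the parameter values are our own example:

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam, sigma, M = 50, 1.0, 0.5, 1.0   # regime 2*lam < sigma^2*d
dt, steps, n = 1e-3, 2000, 2000        # time horizon T = 2, n sample paths
Y_plain = rng.normal(size=(n, d))      # dynamics (8), with v* = 0
Y_trunc = Y_plain.copy()               # truncated dynamics, cf. (13)
for _ in range(steps):
    B = rng.normal(0.0, np.sqrt(dt), size=(n, d))  # shared Brownian increments
    r_plain = np.linalg.norm(Y_plain, axis=1, keepdims=True)
    r_trunc = np.minimum(np.linalg.norm(Y_trunc, axis=1, keepdims=True), M)
    Y_plain += -lam * Y_plain * dt + sigma * r_plain * B
    Y_trunc += -lam * Y_trunc * dt + sigma * r_trunc * B
print(np.mean(np.sum(Y_plain**2, axis=1)))  # grows exponentially, cf. Eq. (10)
print(np.mean(np.sum(Y_trunc**2, axis=1)))  # stays of order sigma^2 M^2 d / lam
```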
On the contrary, for CBO with truncated noise (18), we get, thanks once again to the Laplace principle as \(\alpha\to\infty\), that (18) converges to \[d\overline{Y}_{t}=-\lambda\left(\overline{Y}_{t}-v^{*}\right)dt+\sigma\left( \left\|\overline{Y}_{t}-v^{*}\right\|_{2}\wedge M\right)dB_{t}, \tag{13}\] for which we can compute \[\begin{split}\frac{d}{dt}\mathbb{E}\left[\left\|\overline{Y}_{t}- v^{*}\right\|_{2}^{p}\right]&\leq-p\lambda\mathbb{E}\left[\left\| \overline{Y}_{t}-v^{*}\right\|_{2}^{p}\right]+p\frac{\sigma^{2}}{2}M^{2}\left(p +d-2\right)\mathbb{E}\left[\left\|\overline{Y}_{t}-v^{*}\right\|_{2}^{p-2} \right]\\ &\leq-\lambda\mathbb{E}\left[\left\|\overline{Y}_{t}-v^{*}\right\| _{2}^{p}\right]+\lambda\frac{\sigma^{p}M^{p}(d+p-2)^{\frac{p}{2}}}{\lambda^{ \frac{p}{2}}}\end{split} \tag{14}\] for any \(p\geq 2\). To obtain the second inequality we used Young's inequality, \[ab\leq\frac{p-2}{p}a^{\frac{p}{p-2}}+\frac{2}{p}b^{\frac{p}{2}},\quad\text{ with}\quad a=\lambda^{\frac{p-2}{p}}\mathbb{E}\left[\left\|\bar{Y}_{t}-v^{*} \right\|^{p-2}\right],\ b=\frac{\sigma^{2}M^{2}(d+p-2)}{\lambda^{\frac{p-2}{p}}}, \tag{15}\] as well as Jensen's inequality. By means of Gronwall's inequality, we have \[\mathbb{E}\left[\left\|\overline{Y}_{t}-v^{*}\right\|_{2}^{p}\right]\leq\exp \left(-\lambda t\right)\mathbb{E}\left[\left\|\overline{Y}_{0}-v^{*}\right\|_ {2}^{p}\right]+\frac{\sigma^{p}M^{p}(d+p-2)^{\frac{p}{2}}}{\lambda^{\frac{p}{2} }} \tag{16}\] and therefore, denoting with \(\mu_{t}\) the law of \(\overline{Y}_{t}\), \[\lim_{t\to\infty}W_{p}\left(\mu_{t},\delta_{v^{*}}\right)\leq\frac{\sigma M\sqrt{ d+p-2}}{\lambda^{\frac{1}{2}}}<\infty \tag{17}\] for any \(p\geq 2\). In conclusion, we observe from Equation (10) that the standard CBO dynamics as described in Equation (8) diverges in the setting \(2\lambda<\sigma^{2}d\) when considering the Wasserstein-2 distance \(W_{2}\). Contrarily, according to Equation (14), the CBO dynamics with truncated noise as presented in Equation (13) converges with exponential rate towards a neighborhood of \(v^{*}\) with radius \(\sigma M\sqrt{d}/\sqrt{\lambda}\). This implies that for a relatively small value of \(M\) the CBO dynamics with truncated noise exhibits greater robustness in relation to the parameter \(\sigma^{2}d/\lambda\). This effect is confirmed numerically in Figure 1.

**Remark 1** (Sub-Gaussianity of truncated CBO).: _An application of Ito's formula allows one to show that, for some \(K>0\), \(\mathbb{E}\left[\exp\left(\left\|\overline{Y}_{t}-v^{*}\right\|_{2}^{2}/K^{2} \right)\right]<\infty\), provided \(\mathbb{E}\left[\exp\left(\left\|\overline{Y}_{0}-v^{*}\right\|_{2}^{2}/K^{2} \right)\right]<\infty\). Thus, by incorporating a truncation in the noise term of the CBO dynamics, we ensure that the resulting distribution \(\mu_{t}\) exhibits sub-Gaussian behavior, and we therefore enhance the regularity and well-behavedness of the statistics of \(\mu_{t}\). As a consequence, more reliable and stable results when analyzing the properties and characteristics of the dynamics are to be expected._

**Contributions.** In view of the aforementioned enhanced regularity and well-behavedness of the statistics of CBO with truncated noise compared to standard CBO [42], together with the numerically observed improved performance as depicted in Figure 1, a rigorous convergence analysis of the implementable CBO algorithm with truncated noise as given in (1) is of theoretical interest.
In this work we provide theoretical guarantees of global convergence of (1) to the global minimizer \(v^{*}\) for possibly nonconvex and nonsmooth objective functions \(f\). The approach to analyze the convergence behavior of the implementable scheme (1) follows a route similar to the one initiated and explored in [13, 23, 11, 24]. In particular, we first investigate the mean-field behavior [23, 32] of the system (1). More precisely, we study the macroscopic behavior of the agent density \(\rho\in\mathcal{C}([0,T],\mathcal{P}(\mathbb{R}^{d}))\), where \(\rho_{t}=\text{Law}(\overline{V}_{t})\) with \[d\overline{V}_{t}=-\lambda\left(\overline{V}_{t}-\mathcal{P}_{v_{b},R}\left( v_{\alpha}(\rho_{t})\right)\right)dt+\sigma\left(\left\|\overline{V}_{t}-v_{ \alpha}(\rho_{t})\right\|_{2}\wedge M\right)dB_{t} \tag{18}\] and initial data \(\overline{V}_{0}\sim\rho_{0}\). Then, by establishing a quantitative estimate on the mean-field approximation, i.e., the proximity of the mean-field system (18) to the interacting particle system (3), we obtain a convergence result for the CBO algorithm (1) with truncated noise. Our proving technique nevertheless differs in crucial parts from the one in [23, 24]: on the one hand, we take advantage of the truncations; on the other hand, we require additional technical effort to exploit and deal with the enhanced flexibility of the truncated model. Specifically, the crucial innovation can be identified in the proof of sub-Gaussianity of the process; see Lemma 8.

### Organization

In Section 2 we present and discuss our main theoretical contribution about the global convergence of CBO with truncated noise in probability and expectation. Section 3 collects the necessary proof details for this result. In Section 4 we numerically demonstrate the benefits of using truncated noise, before we provide a conclusion of the paper in Section 5. For the sake of reproducible research, in the GitHub repository [https://github.com/KonstantinRiedl/CBOGlobalConvergenceAnalysis](https://github.com/KonstantinRiedl/CBOGlobalConvergenceAnalysis) we provide the Matlab code implementing CBO with truncated noise.

### Notation

We use \(\|\cdot\|_{2}\) to denote the Euclidean norm on \(\mathbb{R}^{d}\). Euclidean balls are denoted as \(B_{r}(u)\!:=\{v\in\mathbb{R}^{d}:\|v-u\|_{2}\leq r\}\). For the space of continuous functions \(f:X\to Y\) we write \(\mathcal{C}(X,Y)\), with \(X\subset\mathbb{R}^{n}\) and a suitable topological space \(Y\). For an open set \(X\subset\mathbb{R}^{n}\) and for \(Y=\mathbb{R}^{m}\), the spaces \(\mathcal{C}^{k}_{c}(X,Y)\) and \(\mathcal{C}^{k}_{b}(X,Y)\) contain functions \(f\in\mathcal{C}(X,Y)\) that are \(k\)-times continuously differentiable and have compact support or are bounded, respectively. We omit \(Y\) in the real-valued case. All stochastic processes are considered on the probability space \((\Omega,\mathscr{F},\mathbb{P})\). The main objects of study are laws of such processes, \(\rho\in\mathcal{C}([0,T],\mathcal{P}(\mathbb{R}^{d}))\), where the set \(\mathcal{P}(\mathbb{R}^{d})\) contains all Borel probability measures over \(\mathbb{R}^{d}\). With \(\rho_{t}\in\mathcal{P}(\mathbb{R}^{d})\) we refer to a snapshot of such a law at time \(t\). Measures \(\varrho\in\mathcal{P}(\mathbb{R}^{d})\) with finite \(p\)-th moment \(\int\|v\|_{2}^{p}\,d\varrho(v)\) are collected in \(\mathcal{P}_{p}(\mathbb{R}^{d})\).
For any \(1\leq p<\infty\), \(W_{p}\) denotes the Wasserstein-\(p\) distance between two Borel probability measures \(\varrho_{1},\varrho_{2}\in\mathcal{P}_{p}(\mathbb{R}^{d})\), see, e.g., [2]. \(\mathbb{E}\left[\cdot\right]\) denotes the expectation. ## 2 Global Convergence of CBO with Truncated Noise We now present the main theoretical result of this work about the global convergence of CBO with truncated noise for objective functions that satisfy the following conditions. **Definition 2** (Assumptions).: _Throughout we are interested in functions \(f\in\mathcal{C}(\mathbb{R}^{d})\), for which_ * **(A1)** _there exists_ \(v^{*}\in\mathbb{R}^{d}\) _such that_ \(f(v^{*})=\inf_{v\in\mathbb{R}^{d}}f(v)=:\underline{f}\)_, and there exist_ \(\underline{\alpha},L_{u}>0\) _such that_ \[\sup_{v\in\mathbb{R}^{d}}\left\|ve^{-\alpha(f(v)-\underline{f})}\right\|_{2}=:L_{u}<\infty\] (19) _for any_ \(\alpha\geq\underline{\alpha}\)_,_ * **(A2)** _there exist_ \(f_{\infty},R_{0},\nu,L_{\nu}>0\) _such that_ \[\left\|v-v^{*}\right\|_{2} \leq\frac{1}{L_{\nu}}(f(v)-\underline{f})^{\nu}\quad\text{ for all }v\in B_{R_{0}}(v^{*}),\] (20) \[f_{\infty} <f(v)-\underline{f}\quad\text{ for all }v\in\big{(}B_{R_{0}}(v^{*})\big{)}^{c},\] (21) * **(A3)** _there exist_ \(L_{\gamma}>0,\gamma\in[0,1]\) _such that_ \[\left|f(v)-f(w)\right| \leq L_{\gamma}(\left\|v-v^{*}\right\|_{2}^{\gamma}+\left\|w-v^{*}\right\|_{2}^{\gamma})\left\|v-w\right\|_{2}\quad\text{ for all }v,w\in\mathbb{R}^{d},\] (22) \[f(v)-\underline{f} \leq L_{\gamma}\left(1+\left\|v-v^{*}\right\|_{2}^{1+\gamma}\right)\quad\text{ for all }v\in\mathbb{R}^{d}.\] (23) Figure 1: A comparison of the success probabilities of isotropic CBO with (left phase diagrams) and without (right separate columns) truncated noise for different values of the truncation parameter \(M\) and the noise level \(\sigma\). (Note that standard CBO as investigated in [11, 23, 42] is retrieved when choosing \(M=\infty\), \(R=\infty\) and \(v_{b}=0\) in (1).) In both settings **(a)** and **(b)** the depicted success probabilities are averaged over 100 runs, and the implemented scheme is given by an Euler-Maruyama discretization of Equation (3) with time horizon \(T=50\), discrete time step size \(\Delta t=0.01\), \(R=\infty\), \(v_{b}=0\), \(\alpha=10^{5}\) and \(\lambda=1\). We use \(N=100\) particles, which are initialized according to \(\rho_{0}=\mathcal{N}((1,\ldots,1),2000)\). Panel **(a)** corresponds to the Ackley function and panel **(b)** to the Rastrigin function. We observe that truncating the noise term (by decreasing \(M\)) consistently allows a wider flexibility when choosing the noise level \(\sigma\) and thus increases the likelihood of successfully locating the global minimizer. A few comments are in order: Condition A1 establishes the existence of a minimizer and requires a certain growth of the function \(f\). Condition A2 ensures that the value of the function \(f(v)\) at a point \(v\) can locally serve as an indicator of the distance between \(v\) and the minimizer \(v^{*}\). This error bound condition was first introduced in [23]. Condition A3 sets controllable bounds on the local Lipschitz constant of \(f\) and on the growth of \(f\), which is required to be at most quadratic.
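As a simple illustration of these conditions (our example, not taken from the paper), consider, for \(\gamma\in[0,1]\), the function \(f(v)=\left\|v-v^{*}\right\|_{2}^{1+\gamma}\). Then \(\underline{f}=f(v^{*})=0\), and A1 holds for every \(\alpha>0\), since the exponential decay of \(e^{-\alpha\left\|v-v^{*}\right\|_{2}^{1+\gamma}}\) dominates the linear growth of \(\left\|v\right\|_{2}\). Condition A2 holds globally with \(\nu=1/(1+\gamma)\) and \(L_{\nu}=1\), because \(\left\|v-v^{*}\right\|_{2}=(f(v))^{1/(1+\gamma)}\), and for any \(R_{0}>0\) one may take any \(f_{\infty}<R_{0}^{1+\gamma}\). Finally, A3 holds with \(L_{\gamma}=1+\gamma\): writing \(a=\left\|v-v^{*}\right\|_{2}\) and \(b=\left\|w-v^{*}\right\|_{2}\), the mean value theorem together with \(|a-b|\leq\left\|v-w\right\|_{2}\) gives \[\left|f(v)-f(w)\right|=\left|a^{1+\gamma}-b^{1+\gamma}\right|\leq(1+\gamma)\max(a,b)^{\gamma}\left|a-b\right|\leq(1+\gamma)\left(a^{\gamma}+b^{\gamma}\right)\left\|v-w\right\|_{2},\] and the growth bound (23) is immediate since \(a^{1+\gamma}\leq(1+\gamma)(1+a^{1+\gamma})\).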
A growth requirement similar to A3 appears also in [11, 23], but there a quadratic lower bound was additionally imposed. ### Main Result We can now state the main result of the paper. Its proof is deferred to Section 3. **Theorem 3**.: _Let \(f\in\mathcal{C}(\mathbb{R}^{d})\) satisfy A1, A2 and A3. Moreover, let \(\rho_{0}\in\mathcal{P}_{4}(\mathbb{R}^{d})\) with \(v^{*}\in\operatorname{supp}(\rho_{0})\). Let \(V^{i}_{0\Delta t}\) be sampled i.i.d. from \(\rho_{0}\) and denote by \(((V^{i}_{k\Delta t})_{k=1,\dots,K})_{i=1,\dots,N}\) the iterations generated by the numerical scheme (1). Fix any \(\epsilon\in(0,W^{2}_{2}\left(\rho_{0},\delta_{v^{*}}\right))\), define the time horizon_ \[T^{*}:=\frac{1}{\lambda}\log\left(\frac{2W^{2}_{2}\left(\rho_{0},\delta_{v^{*}}\right)}{\epsilon}\right)\] _and let \(K\in\mathbb{N}\) and \(\Delta t\) satisfy \(K\Delta t=T^{*}\). Moreover, let \(R\in(\left\|v_{b}-v^{*}\right\|_{2}+\sqrt{\epsilon/2},\infty)\), \(M\in(0,\infty)\) and \(\lambda,\sigma>0\) such that \(\lambda\geq 2\sigma^{2}d\) or \(\sigma^{2}M^{2}d=\mathcal{O}(\epsilon)\). Then we can choose \(\alpha\) large enough and \(N\geq(16\alpha L_{\gamma}\sigma^{2}M^{2})/\lambda\) such that_ \[\mathbb{E}\left[\left\|\frac{1}{N}\sum_{i=1}^{N}V^{i}_{K\Delta t}-v^{*}\right\|_{2}^{2}\right]\leq\mathcal{O}\left(C_{\mathrm{NA}}(\Delta t)^{2m}+\frac{C_{\mathrm{MFA}}}{N}+\epsilon\right). \tag{24}\] _Here, \(C_{\mathrm{NA}}\) depends linearly on the dimension \(d\) and the number of particles \(N\) and exponentially on the time horizon \(T^{*}\), \(m\) is the order of accuracy of the numerical scheme (for the Euler-Maruyama scheme \(m=1/2\)), and \(C_{\mathrm{MFA}}=C_{\mathrm{MFA}}(\lambda,\sigma,d,\alpha,L_{\nu},\nu,L_{\gamma},L_{u},T^{*},R,v_{b},v^{*},M)\)._ **Remark 4**.: _In the statement of Theorem 3 the parameters \(R\) and \(v_{b}\) play a crucial role, and we already mentioned how they can be chosen in the text after Formula (5). The role of these parameters is bolstered in particular in the proof of Theorem 3, where it is demonstrated that, by selecting a sufficiently large \(\alpha\) depending on \(R,v_{b}\), the dynamics (18) can be equated to_ \[d\bar{V}_{t}=-\lambda\left(\bar{V}_{t}-\mathcal{P}_{v^{*},\delta}(v_{\alpha}(\rho_{t}))\right)dt+\sigma\left(\left\|\bar{V}_{t}-v_{\alpha}(\rho_{t})\right\|_{2}\wedge M\right)dB_{t},\quad\forall t\in[0,T^{*}], \tag{25}\] _where \(\delta\) represents a relatively small value. For the dynamics (3), we can similarly establish its equivalence to_ \[dV^{i}_{t}=-\lambda\left(V^{i}_{t}-\mathcal{P}_{v^{*},\delta}(v_{\alpha}(\hat{\rho}^{N}_{t}))\right)dt+\sigma\left(\left\|V^{i}_{t}-v_{\alpha}(\hat{\rho}^{N}_{t})\right\|_{2}\wedge M\right)dB^{i}_{t},\quad\text{for }i\in[N], \tag{26}\] _with high probability, contingent upon the selection of sufficiently large values for both \(\alpha\) and \(N\)._ **Remark 5**.: _The convergence result in form of Theorem 3 obtained in this work differs from the one presented in [23, Theorem 14] in the sense that we obtain convergence in expectation, while in [23] convergence with high probability is established. This distinction arises from the truncation of the noise term employed in our algorithm._ ## 3 Proof Details for Section 2 ### Well-Posedness of Equations (1) and (3) Since the projection map \(\mathcal{P}_{v_{b},R}\) is \(1\)-Lipschitz, existence and uniqueness of a strong solution to the SDEs (1) and (3) are assured by proofs essentially analogous to those of [11, Theorem 2.1, Theorem 3.1, Theorem 3.2]; we omit the details.
Let us remark, however, that due to the presence of the truncation and the projection map, we do not require that the function \(f\) be bounded from above or exhibit quadratic growth outside a ball, as required in [11, Theorem 2.1, Theorem 3.1, Theorem 3.2]. ### Proof Details for Theorem 3 **Remark 6**.: _Since adding a constant to \(f\) does not affect the dynamics of Equations (18) and (3), in the proof we will assume \(\underline{f}=0\) for simplicity._ We now sketch the proof of the above theorem. To begin, consider the left-hand side of Equation (24). We can upper bound it by splitting it into three parts: \[\begin{split}\mathbb{E}\left[\left\|\frac{1}{N}\sum_{i=1}^{N}V_{K\Delta t}^{i}-v^{*}\right\|_{2}^{2}\right] &\leq 3\left(\underbrace{\mathbb{E}\left[\left\|\frac{1}{N}\sum_{i=1}^{N}\left(V_{K\Delta t}^{i}-V_{T^{*}}^{i}\right)\right\|_{2}^{2}\right]}_{I}+\underbrace{\mathbb{E}\left[\left\|\frac{1}{N}\sum_{i=1}^{N}\left(V_{T^{*}}^{i}-\bar{V}_{T^{*}}^{i}\right)\right\|_{2}^{2}\right]}_{II}\right.\\ &\quad\left.+\underbrace{\mathbb{E}\left[\left\|\frac{1}{N}\sum_{i=1}^{N}\bar{V}_{T^{*}}^{i}-v^{*}\right\|_{2}^{2}\right]}_{III}\right).\end{split} \tag{27}\] We analyze each term separately: term \(I\) can be bounded by \(C_{\text{NA}}\left(\Delta t\right)^{2m}\) using classical results on the convergence of numerical schemes for stochastic differential equations (SDEs), as mentioned in [43]; the second and third terms are analyzed in separate subsections below, where we provide detailed explanations and bounds for each. Let us now provide some guidance for reading the proofs. As they are quite technical, for the reader's convenience we first present the main building blocks of the result and collect the most technical steps in subsequent lemmas. This arrangement should make it easier to grasp the structure of the proof and to dig deeper into the details along the way. #### 3.2.1 Upper Bound of the Second Term For the second term, we have the following upper bound. **Proposition 7**.: _Let A1, A2 and A3 hold, let \(R,M\) be finite with \(R\geq\left\|v_{b}-v^{*}\right\|_{2}\), and let \(N\geq(16\alpha L_{\gamma}\sigma^{2}M^{2})/\lambda\);_
By Ito formula, we have \[\begin{split} d\left\|\bar{V}_{t}^{i}-V_{t}^{i}\right\|_{2}^{2}& =\left[-2\lambda\left\langle\bar{V}_{t}^{i}-V_{t}^{i},\left(\bar{V }_{t}^{i}-V_{t}^{i}\right)-\left(\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{ t})\right)-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\widehat{\rho}_{t}^{N}) \right)\right)\right\rangle\\ +&\sigma^{2}d\left(\left\|\bar{V}_{t}^{i}-v_{\alpha}( \rho_{t})\right\|_{2}\wedge M-\left\|V_{t}^{i}-v_{\alpha}(\widehat{\rho}_{t}^ {N})\right\|_{2}\wedge M\right)^{2}\right]dt\\ &+2\sigma\left(\left\|\bar{V}_{t}^{i}-v_{\alpha}(\rho_{t})\right\| _{2}\wedge M-\left\|V_{t}^{i}-v_{\alpha}(\widehat{\rho}_{t}^{N})\right\|_{2} \wedge M\right)\left(\bar{V}_{t}^{i}-V_{t}^{i}\right)^{\top}dB_{t}^{i},\end{split} \tag{31}\] take expectation from both sides, we have \[\begin{split}\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_ {t}^{i}\right\|_{2}^{2}\right]&=-2\lambda\mathbb{E}\left[\left \langle\bar{V}_{t}^{i}-V_{t}^{i},\left(\bar{V}_{t}^{i}-V_{t}^{i}\right)-\left( \mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)-\mathcal{P}_{v_{b},R} \left(v_{\alpha}(\widehat{\rho}_{t}^{N})\right)\right)\right\rangle\right]\\ &+\sigma^{2}d\mathbb{E}\left[\left(\left\|\bar{V}_{t}^{i}-v_{ \alpha}(\rho_{t})\right\|_{2}\wedge M-\left\|V_{t}^{i}-v_{\alpha}(\widehat{\rho }_{t}^{N})\right\|_{2}\wedge M\right)^{2}\right]\\ &\leq-2\lambda\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_{t}^{i} \right\|_{2}^{2}\right]+\sigma^{2}d\mathbb{E}\left[\left\|\left(\bar{V}_{t}^{i }-V_{t}^{i}\right)-\left(v_{\alpha}(\rho_{t})-v_{\alpha}(\widehat{\rho}_{t}^{N })\right)\right\|_{2}^{2}\right]\\ &+2\lambda\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_{t}^{i}\right\| _{2}\left\|\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)-\mathcal{P}_{v _{b},R}\left(v_{\alpha}(\widehat{\rho}_{t}^{N})\right)\right\|_{2}\right]\\ &\leq-2\lambda\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_{t}^{i} \right\|_{2}^{2}\right]+2\lambda\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_{t}^{i }\right\|_{2}\left\|v_{\alpha}(\rho_{t})-v_{\alpha}(\widehat{\rho}_{t}^{N} \right\|_{2}\right]\\ &+\sigma^{2}d\mathbb{E}\left[\left\|\left(\bar{V}_{t}^{i}-V_{t}^{i }\right)-\left(v_{\alpha}(\rho_{t})-v_{\alpha}(\widehat{\rho}_{t}^{N})\right) \right\|_{2}^{2}\right].\end{split} \tag{32}\] By Young inequality, we have \[2\lambda\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_{t}^{i}\right\|_{2}\left\|v_{ \alpha}(\rho_{t})-v_{\alpha}(\hat{\rho}_{t})\right\|_{2}\right]\leq\lambda \left[\frac{\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_{t}^{i}\right\|_{2}^{2} \right]}{2}+2\mathbb{E}\left[\left\|v_{\alpha}(\rho_{t})-v_{\alpha}(\widehat{ \rho}_{t}^{N})\right\|_{2}^{2}\right]\right], \tag{33}\] and \[\mathbb{E}\left[\left\|\left(\bar{V}_{t}^{i}-V_{t}^{i}\right)-\left(v_{\alpha}( \rho_{t})-v_{\alpha}(\widehat{\rho}_{t}^{N})\right)\right\|_{2}^{2}\right]\leq 2 \mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_{t}^{i}\right\|_{2}^{2}+\left\|v_{\alpha}( \rho_{t})-v_{\alpha}(\widehat{\rho}_{t}^{N})\right\|_{2}^{2}\right], \tag{34}\] insert the above two inequalities into Equation (32), we have \[\begin{split}\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_ {t}^{i}\right\|_{2}^{2}\right]&\leq\left(-\frac{3\lambda}{2}+2 \sigma^{2}d\right)\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_{t}^{i}\right\|_{2}^{2} \right]\\ &+2\left(\lambda+\sigma^{2}d\right)\mathbb{E}\left[\left\|v_{ \alpha}(\rho_{t})-v_{\alpha}(\widehat{\rho}_{t}^{N})\right\|_{2}^{2}\right].\end{split} \tag{35}\] For term \(\mathbb{E}\left[\left\|v_{\alpha}(\rho_{t})-v_{\alpha}(\widehat{\rho}_{t}^{N}) 
\[\mathbb{E}\left[\left\|v_{\alpha}(\rho_{t})-v_{\alpha}(\widehat{\rho}_{t}^{N})\right\|_{2}^{2}\right]\leq 2\mathbb{E}\left[\left\|v_{\alpha}(\rho_{t})-v_{\alpha}(\bar{\rho}_{t}^{N})\right\|_{2}^{2}\right]+2\mathbb{E}\left[\left\|v_{\alpha}(\bar{\rho}_{t}^{N})-v_{\alpha}(\widehat{\rho}_{t}^{N})\right\|_{2}^{2}\right], \tag{36}\] where \[v_{\alpha}(\bar{\rho}_{t}^{N}):=\frac{\frac{1}{N}\sum\limits_{i=1}^{N}\bar{V}_{t}^{i}e^{-\alpha f(\bar{V}_{t}^{i})}}{\frac{1}{N}\sum\limits_{i=1}^{N}e^{-\alpha f(\bar{V}_{t}^{i})}}; \tag{37}\] for the first term in Equation (36), we have \[\mathbb{E}\left[\left\|v_{\alpha}(\rho_{t})-v_{\alpha}(\bar{\rho}_{t}^{N})\right\|_{2}^{2}\right]\leq C_{0}\frac{1}{N}, \tag{38}\] for some constant \(C_{0}\) depending on \(\lambda,\sigma,d,\alpha,L_{\gamma},L_{u},T^{*},R,v_{b},v^{*},M\) by Lemma 10; for the second term in Equation (36), by combining [11, Lemma 3.2.] and Lemma 10, we have \[\mathbb{E}\left[\left\|v_{\alpha}(\bar{\rho}_{t}^{N})-v_{\alpha}(\widehat{\rho}_{t}^{N})\right\|_{2}^{2}\right]\leq C_{1}\frac{1}{N}\sum\limits_{j=1}^{N}\mathbb{E}\left[\left\|\bar{V}_{t}^{j}-V_{t}^{j}\right\|_{2}^{2}\right], \tag{39}\] for some constant \(C_{1}\) depending on \(\lambda,\sigma,d,\alpha,L_{u},R,M\). (As previously mentioned, we prefer to postpone technical steps and collect them in Lemma 10 in order to allow the reader to grasp the structure of the proof first. We will follow this style of presentation throughout the paper.) Combining all of these, we have \[\begin{split}\frac{d}{dt}\frac{1}{N}\sum\limits_{i=1}^{N}\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_{t}^{i}\right\|_{2}^{2}\right]&\leq\left(-\frac{3\lambda}{2}+2\sigma^{2}d+4C_{1}\left(\lambda+\sigma^{2}d\right)\right)\frac{1}{N}\sum\limits_{i=1}^{N}\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_{t}^{i}\right\|_{2}^{2}\right]\\ &\quad+4\left(\lambda+\sigma^{2}d\right)C_{0}\frac{1}{N},\end{split} \tag{40}\] and by Gronwall's inequality, we have for any \(t\in[0,T^{*}]\) \[\frac{1}{N}\sum\limits_{i=1}^{N}\mathbb{E}\left[\left\|\bar{V}_{t}^{i}-V_{t}^{i}\right\|_{2}^{2}\right]\leq 4\left(\lambda+\sigma^{2}d\right)\frac{C_{0}}{N}te^{(-\frac{3\lambda}{2}+2\sigma^{2}d+4C_{1}\left(\lambda+\sigma^{2}d\right))t}. \tag{41}\] Finally, by Jensen's inequality and letting \(t=T^{*}\), we have \[\mathbb{E}\left[\left\|\frac{1}{N}\sum\limits_{i=1}^{N}\left(V_{T^{*}}^{i}-\bar{V}_{T^{*}}^{i}\right)\right\|_{2}^{2}\right]\leq\frac{C_{\text{MFA}}}{N}, \tag{42}\] where the constant \(C_{\text{MFA}}\) depends on \(\lambda,\sigma,d,\alpha,L_{u},L_{\gamma},T^{*},R,v_{b},v^{*},M\). The next lemma shows that the distribution of \(\bar{V}_{t}\) is sub-Gaussian.
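Let us record (for the reader's convenience) why such an exponential moment bound is the key quantitative ingredient: it controls all moments at once. Indeed, since \(x^{p}\leq p!\,e^{x}\) for all \(x\geq 0\) and integers \(p\geq 1\), a bound of the form \(\mathbb{E}[\exp(\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}/K^{2})]\leq C\) immediately yields \[\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2p}\right]\leq p!\,K^{2p}\,C\quad\text{for every integer }p\geq 1.\]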
**Lemma 8**.: _Let \(R,M\) be finite and \(R\geq\left\|v_{b}-v^{*}\right\|_{2}\). Then for any \(K>0\) and any \(N\) satisfying \(N\geq(4\sigma^{2}M^{2})/(\lambda K^{2})\), we have_ \[C_{K}:=\sup\limits_{t\in[0,T^{*}]}\mathbb{E}\left[e^{\frac{\sum_{i=1}^{N}\left\|\bar{V}_{t}^{i}-v^{*}\right\|_{2}^{2}}{NK^{2}}}\right]<\infty, \tag{43}\] _provided \(\mathbb{E}\left[\exp\!\left(\sum_{i=1}^{N}\left\|\bar{V}_{0}^{i}-v^{*}\right\|_{2}^{2}/NK^{2}\right)\right]<\infty\), where \(C_{K}\) depends on \(K,\lambda,\sigma,d,R,M,T^{*}\), and_ \[d\bar{V}_{t}^{i}=-\lambda\left(\bar{V}_{t}^{i}-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)\right)dt+\sigma\left(\left\|\bar{V}_{t}^{i}-v_{\alpha}(\rho_{t})\right\|_{2}\wedge M\right)dB_{t}^{i},\quad i\in[N], \tag{44}\] _where the \(B_{t}^{i}\) are independent Brownian motions and \(\text{Law}(\bar{V}_{t}^{i})=\rho_{t}\)._ Proof.: To apply Itô's formula, we first need to truncate the function \(\exp\bigl{(}\left\lVert v\right\rVert_{2}^{2}/K^{2}\bigr{)}\) from above. Define \[G_{W}(x):=\begin{cases}x&x\in[0,W-1]\\ \frac{1}{16}(x+1-W)^{4}-\frac{1}{4}(x+1-W)^{3}+x&x\in[W-1,W+1]\\ W&x\in[W+1,\infty)\end{cases}, \tag{45}\] where \(W>0\). It is easy to verify that \(G_{W}\) is a \(\mathcal{C}^{2}\) approximation of the function \(x\wedge W\) and satisfies \(G_{W}\in\mathcal{C}^{2}(\mathbb{R}^{+}),G_{W}(x)\leq x\wedge W,G_{W}^{\prime}\in[0,1]\) and \(G_{W}^{\prime\prime}\leq 0\). Since \(G_{W,N,K}(t):=\exp\bigl{(}G_{W}(\sum_{i=1}^{N}\left\lVert\bar{V}_{t}^{i}-v^{*}\right\rVert_{2}^{2}/N)/K^{2}\bigr{)}\) is now bounded from above, we can apply Itô's formula to it. Throughout the proof we will write \(G_{W}^{\prime}:=G_{W}^{\prime}(\sum_{i=1}^{N}\left\lVert\bar{V}_{t}^{i}-v^{*}\right\rVert_{2}^{2}/N)\) and \(G_{W}^{\prime\prime}:=G_{W}^{\prime\prime}(\sum_{i=1}^{N}\left\lVert\bar{V}_{t}^{i}-v^{*}\right\rVert_{2}^{2}/N)\). Let \(Y_{t}:=\bigl{(}(\bar{V}_{t}^{1})^{\top},\cdots,(\bar{V}_{t}^{N})^{\top}\bigr{)}^{\top}\); then the \(Nd\)-dimensional process \(Y_{t}\) satisfies \(dY_{t}=-\lambda(Y_{t}-\overline{\mathcal{P}}_{t})dt+\mathcal{M}dB_{t}\), where \(\overline{\mathcal{P}}_{t}:=\bigl{(}(\mathcal{P}_{v_{b},R}(v_{\alpha}(\rho_{t})))^{\top},\ldots,(\mathcal{P}_{v_{b},R}(v_{\alpha}(\rho_{t})))^{\top}\bigr{)}^{\top}\), \(\mathcal{M}=\operatorname{diag}\left(\mathcal{M}_{1},\ldots,\mathcal{M}_{N}\right)\) with \(\mathcal{M}_{i}=\sigma\left(\left\lVert\bar{V}_{t}^{i}-v_{\alpha}\left(\rho_{t}\right)\right\rVert_{2}\wedge M\right)\mathcal{I}_{d}\), and \(B_{t}\) is an \(Nd\)-dimensional Brownian motion.
With \(Y_{t}\) and \(\overline{v^{*}}:=\bigl{(}(v^{*})^{\top},\ldots,(v^{*})^{\top}\bigr{)}^{\top}\), we have \(G_{W,N,K}(t)=\exp\left(G_{W}(\left\lVert Y_{t}-\overline{v^{*}}\right\rVert_{2}^{2}/N)/K^{2}\right)\) and \[\begin{split} dG_{W,N,K}(t)&=\nabla_{Y_{t}}G_{W,N,K}(t)^{\top}dY_{t}+\frac{1}{2}\operatorname{tr}\left(\mathcal{M}\nabla_{Y_{t},Y_{t}}^{2}G_{W,N,K}(t)\mathcal{M}\right)dt\\ &=G_{W,N,K}(t)\frac{G_{W}^{\prime}}{K^{2}}\sum_{i=1}^{N}\left(2\frac{\bar{V}_{t}^{i}-v^{*}}{N}\right)^{\top}d\bar{V}_{t}^{i}\\ &\quad+\frac{1}{2}G_{W,N,K}(t)\sum_{i=1}^{N}\left(G_{W}^{\prime}\frac{2d}{NK^{2}}+G_{W}^{\prime\prime}\frac{4\left\lVert\bar{V}_{t}^{i}-v^{*}\right\rVert_{2}^{2}}{N^{2}K^{2}}+\left(G_{W}^{\prime}\right)^{2}\frac{4\left\lVert\bar{V}_{t}^{i}-v^{*}\right\rVert_{2}^{2}}{N^{2}K^{4}}\right)\left(\sigma\left\lVert\bar{V}_{t}^{i}-v_{\alpha}\left(\rho_{t}\right)\right\rVert_{2}\wedge M\right)^{2}dt.\end{split} \tag{46}\] The first term on the right-hand side in (46) is expanded as follows: \[\begin{split}& G_{W,N,K}(t)\frac{G_{W}^{\prime}}{K^{2}}\sum_{i=1}^{N}\left(2\frac{\bar{V}_{t}^{i}-v^{*}}{N}\right)^{\top}d\bar{V}_{t}^{i}\\ &=G_{W,N,K}(t)G_{W}^{\prime}\sum_{i=1}^{N}\left(2\frac{\bar{V}_{t}^{i}-v^{*}}{NK^{2}}\right)^{\top}\left(-\lambda\left(\bar{V}_{t}^{i}-v^{*}+v^{*}-\mathcal{P}_{v_{b},R}(v_{\alpha}(\rho_{t}))\right)dt+\sigma\left(\left\lVert\bar{V}_{t}^{i}-v_{\alpha}(\rho_{t})\right\rVert_{2}\wedge M\right)dB_{t}^{i}\right)\\ &=G_{W,N,K}(t)G_{W}^{\prime}\left\{\frac{-2\lambda}{NK^{2}}\sum_{i=1}^{N}\left\lVert\bar{V}_{t}^{i}-v^{*}\right\rVert_{2}^{2}dt-\frac{2\lambda}{NK^{2}}\sum_{i=1}^{N}\left\langle\bar{V}_{t}^{i}-v^{*},v^{*}-\mathcal{P}_{v_{b},R}(v_{\alpha}(\rho_{t}))\right\rangle dt\right.\\ &\quad\left.+2\sigma\sum_{i=1}^{N}\left(\left\lVert\bar{V}_{t}^{i}-v_{\alpha}(\rho_{t})\right\rVert_{2}\wedge M\right)\frac{(\bar{V}_{t}^{i}-v^{*})^{\top}}{NK^{2}}dB_{t}^{i}\right\}. \tag{47}\end{split}\] Notice additionally that \[\left\langle\bar{V}_{t}^{i}-v^{*},v^{*}-\mathcal{P}_{v_{b},R}(v_{\alpha}(\rho_{t}))\right\rangle\leq\left\lVert\bar{V}_{t}^{i}-v^{*}\right\rVert_{2}\left\lVert v^{*}-\mathcal{P}_{v_{b},R}(v_{\alpha}(\rho_{t}))\right\rVert_{2}\leq 2R\left\lVert\bar{V}_{t}^{i}-v^{*}\right\rVert_{2}, \tag{48}\] as \(v^{*}\) and \(\mathcal{P}_{v_{b},R}(v_{\alpha}(\rho_{t}))\) belong to the same ball \(B_{R}(v_{b})\) around \(v_{b}\) of radius \(R\).
Similarly, we can expand the coefficient of the second term and, using the properties \(G_{W}^{\prime}\in[0,1]\) and \(G_{W}^{\prime\prime}\leq 0\), bound it from above: \[\begin{split}&\frac{1}{2}G_{W,N,K}(t)\sum_{i=1}^{N}\left(G^{\prime}_{W}\frac{2d}{NK^{2}}+G^{\prime\prime}_{W}\frac{4\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}}{N^{2}K^{2}}+\left(G^{\prime}_{W}\right)^{2}\frac{4\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}}{N^{2}K^{4}}\right)\left(\sigma\left\|\bar{V}^{i}_{t}-v_{\alpha}\left(\rho_{t}\right)\right\|_{2}\wedge M\right)^{2}\\ &\leq G_{W,N,K}(t)G^{\prime}_{W}\frac{d\sigma^{2}M^{2}}{K^{2}}+G_{W,N,K}(t)\left(G^{\prime}_{W}\right)^{2}\frac{2\sigma^{2}M^{2}}{N^{2}K^{4}}\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}\\ &\leq G_{W,N,K}(t)G^{\prime}_{W}\frac{d\sigma^{2}M^{2}}{K^{2}}+G_{W,N,K}(t)G^{\prime}_{W}\frac{2\sigma^{2}M^{2}}{N^{2}K^{4}}\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}.\end{split} \tag{49}\] By taking expectations in (46) and combining (47), (48) and (49), we obtain \[\frac{d}{dt}\mathbb{E}\left[G_{W,N,K}(t)\right]\leq\mathbb{E}\left[G_{W,N,K}(t)G^{\prime}_{W}\left\{\frac{-2\lambda}{NK^{2}}\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}+\frac{4R\lambda}{NK^{2}}\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}+\frac{d\sigma^{2}M^{2}}{K^{2}}+\frac{2\sigma^{2}M^{2}}{N^{2}K^{4}}\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}\right\}\right]. \tag{50}\] Rearranging, we have \[\frac{d}{dt}\mathbb{E}\left[G_{W,N,K}(t)\right]\leq\mathbb{E}\left[G_{W,N,K}(t)G^{\prime}_{W}\left\{\left[\frac{4\lambda R}{NK^{2}}\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}+\frac{\sigma^{2}M^{2}d}{K^{2}}\right]-\left(\frac{2\lambda}{NK^{2}}-\frac{2\sigma^{2}M^{2}}{N^{2}K^{4}}\right)\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}\right\}\right]. \tag{51}\] By Young's inequality, we have \[4R\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}\leq 4R^{2}+\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}. \tag{52}\] By inserting the latter into Equation (51) we conclude with the estimate \[\begin{split}\frac{d}{dt}\mathbb{E}\left[G_{W,N,K}(t)\right]&\leq\mathbb{E}\left[G_{W,N,K}(t)G^{\prime}_{W}\left\{\frac{\sigma^{2}M^{2}d+4\lambda R^{2}}{K^{2}}-\left(\frac{\lambda}{NK^{2}}-\frac{2\sigma^{2}M^{2}}{N^{2}K^{4}}\right)\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}\right\}\right]\\ &\leq\mathbb{E}\left[G_{W,N,K}(t)G^{\prime}_{W}\left(-A\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}+B\right)\right],\end{split} \tag{53}\] where \[A:=\frac{\lambda}{NK^{2}}-\frac{2\sigma^{2}M^{2}}{N^{2}K^{4}},\quad B:=\frac{\sigma^{2}M^{2}d+4\lambda R^{2}}{K^{2}}, \tag{54}\] and \(A>0\) under the assumption on \(N\). If \(\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}\geq B/A\), we have \[G_{W,N,K}(t)G^{\prime}_{W}\left(-A\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}+B\right)\leq 0; \tag{55}\] if \(\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}\leq B/A\), we have \[G_{W,N,K}(t)G^{\prime}_{W}\left(-A\sum_{i=1}^{N}\left\|\bar{V}^{i}_{t}-v^{*}\right\|_{2}^{2}+B\right)\leq Be^{\frac{B}{NK^{2}A}}; \tag{56}\] so we always have \[G_{W,N,K}(t)G_{W}^{\prime}\left(-A\sum_{i=1}^{N}\left\|\bar{V}_{t}^{i}-v^{*}\right\|_{2}^{2}+B\right)\leq Be^{\frac{B}{NK^{2}A}}, \tag{57}\] and \[\frac{d}{dt}\mathbb{E}\left[G_{W,N,K}(t)\right]\leq Be^{\frac{B}{NK^{2}A}}.
\tag{58}\] Based on the above inequality, we have \[\mathbb{E}\left[G_{W,N,K}(t)\right]\leq\mathbb{E}\left[G_{W,N,K}(0)\right]+Be^{\frac{B}{NK^{2}A}}t\leq\mathbb{E}\left[e^{\frac{\sum_{i=1}^{N}\left\|\bar{V}_{0}^{i}-v^{*}\right\|_{2}^{2}}{NK^{2}}}\right]+Be^{\frac{B}{NK^{2}A}}t, \tag{59}\] and letting \(W\to\infty\), we obtain \[\mathbb{E}\left[e^{\frac{\sum_{i=1}^{N}\left\|\bar{V}_{t}^{i}-v^{*}\right\|_{2}^{2}}{NK^{2}}}\right]\leq\mathbb{E}\left[e^{\frac{\sum_{i=1}^{N}\left\|\bar{V}_{0}^{i}-v^{*}\right\|_{2}^{2}}{NK^{2}}}\right]+Be^{\frac{B}{NK^{2}A}}t<\infty, \tag{60}\] provided \(\mathbb{E}\left[\exp(\sum_{i=1}^{N}\left\|\bar{V}_{0}^{i}-v^{*}\right\|_{2}^{2}/NK^{2})\right]<\infty\). If \(N\geq(4\sigma^{2}M^{2})/(\lambda K^{2})\), we have \[\frac{B}{NK^{2}A}=\frac{N(\sigma^{2}M^{2}d+4\lambda R^{2})}{\lambda NK^{2}-2\sigma^{2}M^{2}}\leq C(K,\lambda,\sigma,M,R,d), \tag{61}\] and so \(C_{K}\) is bounded from above uniformly in \(N\). **Remark 9**.: _In Lemma 8, as the number of particles \(N\) increases, the condition on \(K\) ensuring \(C_{K}<\infty\) becomes more relaxed. Specifically, \(K\) can be chosen as small as needed once \(N\) is sufficiently large. This phenomenon can be easily understood by considering the limit as \(N\) approaches infinity: in this case, \(C_{K}\) tends to \(\sup_{t\in[0,T^{*}]}\exp\left(\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]/K^{2}\right)\). Therefore, once one establishes an upper bound on the second moment of \(\bar{V}_{t}\), it becomes evident that \(C_{K}\) remains finite as \(N\) tends to infinity._ With the help of Lemma 8, we can now prove the following lemma. **Lemma 10**.: _Assume A1 and A3 hold. Then for any \(t\in[0,T^{*}]\), any finite \(R,M\) with \(R\geq\left\|v_{b}-v^{*}\right\|_{2}\), and any \(N\) satisfying \(N\geq(16\alpha L_{\gamma}\sigma^{2}M^{2})/\lambda\), we have_ \[\mathbb{E}\left[\left\|v_{\alpha}(\rho_{t})-v_{\alpha}(\bar{\rho}_{t}^{N})\right\|_{2}^{2}\right]\leq\frac{C_{0}}{N}, \tag{62}\] _where \(C_{0}:=C_{0}(\lambda,\sigma,d,\alpha,L_{\nu},\nu,L_{\gamma},L_{u},T^{*},R,v_{b},v^{*},M)\)._ Proof.: Without loss of generality, we assume \(v^{*}=0\) and \(\underline{f}=f(v^{*})=0\) throughout the proof.
We have \[\begin{split}\mathbb{E}\left[\|v_{\alpha}(\rho_{t})-v_{\alpha}(\bar{\rho}_{t}^{N})\|_{2}^{2}\right]&=\mathbb{E}\left[\left\|\frac{\frac{1}{N}\sum_{i=1}^{N}\bar{V}_{t}^{i}e^{-\alpha f(\bar{V}_{t}^{i})}}{\frac{1}{N}\sum_{i=1}^{N}e^{-\alpha f(\bar{V}_{t}^{i})}}-\frac{\int_{\mathbb{R}^{d}}ve^{-\alpha f(v)}d\rho_{t}(v)}{\int_{\mathbb{R}^{d}}e^{-\alpha f(v)}d\rho_{t}(v)}\right\|_{2}^{2}\right]\\ &\leq 2\mathbb{E}\left[\left\|\frac{1}{\frac{1}{N}\sum_{i=1}^{N}e^{-\alpha f(\bar{V}_{t}^{i})}}\left[\frac{1}{N}\sum_{i=1}^{N}\bar{V}_{t}^{i}e^{-\alpha f(\bar{V}_{t}^{i})}-\int_{\mathbb{R}^{d}}ve^{-\alpha f(v)}d\rho_{t}(v)\right]\right\|_{2}^{2}\right]\\ &\quad+2\left\|v_{\alpha}(\rho_{t})\right\|_{2}^{2}\mathbb{E}\left[\left|e^{\alpha\frac{1}{N}\sum_{i=1}^{N}f(\bar{V}_{t}^{i})}\left[\frac{1}{N}\sum_{i=1}^{N}e^{-\alpha f(\bar{V}_{t}^{i})}-\int_{\mathbb{R}^{d}}e^{-\alpha f(v)}d\rho_{t}(v)\right]\right|^{2}\right]\\ &\leq 2\,\mathcal{I}\times\mathcal{II}+2\left\|v_{\alpha}(\rho_{t})\right\|_{2}^{2}\,\mathcal{I}\times\mathcal{III},\end{split} \tag{63}\] where \[\mathcal{I} :=\left(\mathbb{E}\left[e^{4\alpha\frac{1}{N}\sum_{i=1}^{N}f(\bar{V}_{t}^{i})}\right]\right)^{\frac{1}{2}}, \tag{64}\] \[\mathcal{II} :=\left(\mathbb{E}\left[\left\|\frac{1}{N}\sum_{i=1}^{N}\bar{V}_{t}^{i}e^{-\alpha f(\bar{V}_{t}^{i})}-\int_{\mathbb{R}^{d}}ve^{-\alpha f(v)}d\rho_{t}(v)\right\|_{2}^{4}\right]\right)^{\frac{1}{2}}, \tag{65}\] \[\mathcal{III} :=\left(\mathbb{E}\left[\left|\frac{1}{N}\sum_{i=1}^{N}e^{-\alpha f(\bar{V}_{t}^{i})}-\int_{\mathbb{R}^{d}}e^{-\alpha f(v)}d\rho_{t}(v)\right|^{4}\right]\right)^{\frac{1}{2}}. \tag{66}\] In the following, we upper bound the terms \(\mathcal{I},\mathcal{II}\) and \(\mathcal{III}\) separately. Firstly, by Lemma 8, we have \[\mathbb{E}\left[e^{\frac{\sum_{i=1}^{N}\left\|\bar{V}_{t}^{i}\right\|_{2}^{2}}{NK^{2}}}\right]\leq C_{K}<\infty,\quad t\in[0,T^{*}], \tag{67}\] where \(C_{K}\) only depends on \(K,\lambda,\sigma,d,R,M,T^{*}\). Then \[\begin{split}\mathbb{E}\left[e^{4\alpha\frac{1}{N}\sum_{i=1}^{N}f(\bar{V}_{t}^{i})}\right]&\leq\mathbb{E}\left[e^{4\alpha\frac{1}{N}\sum_{i=1}^{N}L_{\gamma}\left(1+\left\|\bar{V}_{t}^{i}\right\|_{2}^{1+\gamma}\right)}\right]\\ &\leq e^{4\alpha L_{\gamma}}\mathbb{E}\left[e^{4\alpha L_{\gamma}\frac{1}{N}\sum_{i=1}^{N}\left\|\bar{V}_{t}^{i}\right\|_{2}^{1+\gamma}}\right]\\ &\leq e^{8\alpha L_{\gamma}}\mathbb{E}\left[e^{4\alpha L_{\gamma}\frac{1}{N}\sum_{i=1}^{N}\left\|\bar{V}_{t}^{i}\right\|_{2}^{2}}\right]\\ &=e^{8\alpha L_{\gamma}}\mathbb{E}\left[e^{\frac{1}{K^{2}}\frac{1}{N}\sum_{i=1}^{N}\left\|\bar{V}_{t}^{i}\right\|_{2}^{2}}\right]\qquad\text{(choosing }K^{2}=1/(4\alpha L_{\gamma})\text{)}\\ &\leq e^{8\alpha L_{\gamma}}C_{K}\mid_{K=\frac{1}{2\sqrt{\alpha L_{\gamma}}}},\end{split} \tag{68}\] where \(N\) should satisfy \(N\geq(16\alpha L_{\gamma}\sigma^{2}M^{2})/\lambda\). Secondly, we have \[\mathbb{E}\left[\left\|\frac{1}{N}\sum_{i=1}^{N}\bar{V}_{t}^{i}e^{-\alpha f(\bar{V}_{t}^{i})}-\int_{\mathbb{R}^{d}}ve^{-\alpha f(v)}d\rho_{t}(v)\right\|_{2}^{4}\right]=\frac{1}{N^{4}}\mathbb{E}\left[\sum_{i_{1},i_{2},i_{3},i_{4}\in[N]}\left\langle\bar{Z}_{t}^{i_{1}},\bar{Z}_{t}^{i_{2}}\right\rangle\left\langle\bar{Z}_{t}^{i_{3}},\bar{Z}_{t}^{i_{4}}\right\rangle\right]\leq\frac{4!L_{u}^{4}}{N^{2}}, \tag{69}\] where \(\left\{\bar{Z}_{t}^{i}:=\bar{V}_{t}^{i}e^{-\alpha f(\bar{V}_{t}^{i})}-\int_{\mathbb{R}^{d}}ve^{-\alpha f(v)}d\rho_{t}(v)\right\}_{i\in[N]}\) are i.i.d.
and have zero mean, and so \[\left(\mathbb{E}\left[\left\|\frac{1}{N}\sum_{i=1}^{N}\bar{V}_{t}^{i}e^{-\alpha f(\bar{V}_{t}^{i})}-\int_{\mathbb{R}^{d}}ve^{-\alpha f(v)}d\rho_{t}(v)\right\|_{2}^{4}\right]\right)^{\frac{1}{2}}\leq\frac{5L_{u}^{2}}{N}; \tag{70}\] similarly, we have \[\left(\mathbb{E}\left[\left|\frac{1}{N}\sum_{i=1}^{N}e^{-\alpha f(\bar{V}_{t}^{i})}-\int_{\mathbb{R}^{d}}e^{-\alpha f(v)}d\rho_{t}(v)\right|^{4}\right]\right)^{\frac{1}{2}}\leq\frac{5}{N}. \tag{71}\] Combining all the terms, we obtain \[\mathbb{E}\left[\left\|v_{\alpha}(\rho_{t})-v_{\alpha}(\bar{\rho}_{t}^{N})\right\|_{2}^{2}\right]\leq 10e^{6\alpha L_{\gamma}}C_{K}^{\frac{1}{2}}\mid_{K=\frac{1}{2\sqrt{\alpha L_{\gamma}}}}\left(L_{u}^{2}+\sup_{t\in[0,T^{*}]}\|v_{\alpha}(\rho_{t})\|_{2}^{2}\right)\frac{1}{N}; \tag{72}\] by Lemmas 13, 15 and 16, we know that \(\|v_{\alpha}(\rho_{t})\|_{2}\) can be uniformly bounded by a constant depending on \(\alpha,\lambda,\sigma,d,R,v_{b},v^{*},M,L_{\nu},\nu\) (see in particular Equation (87), which combines the aforementioned lemmas), so we have \[\mathbb{E}\left[\|v_{\alpha}(\rho_{t})-v_{\alpha}(\bar{\rho}_{t}^{N})\|_{2}^{2}\right]\leq\frac{C_{0}}{N}, \tag{73}\] for some constant \(C_{0}\) depending on \(\lambda,\sigma,d,\alpha,L_{\nu},\nu,L_{\gamma},L_{u},T^{*},R,v_{b},v^{*},M\). #### 3.2.2 Upper Bound of the Third Term In this section, we upper bound term \(III\). Before we state the main result (Proposition 14) of this section, we first need two auxiliary lemmas. **Lemma 11**.: _Let \(R,M\in(0,\infty)\) and assume that SDE (18) has a strong solution. Then we have_ \[\begin{split}\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]&\leq-\lambda\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]+\lambda\left(\left\|\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)-v^{*}\right\|_{2}^{2}+\|v_{\alpha}(\rho_{t})-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)\|_{2}^{2}\right)\\ &\quad+\sigma^{2}M^{2}d;\end{split} \tag{74}\] _if, in addition, \(\lambda\geq 2\sigma^{2}d\), we have_ \[\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]\leq-\lambda\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]+\lambda\left(\left\|\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)-v^{*}\right\|_{2}^{2}+\left\|v_{\alpha}(\rho_{t})-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)\right\|_{2}^{2}\right). \tag{75}\]
\tag{75}\] Proof.: By Ito formula, we have \[d\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2} =2\left(\bar{V}_{t}-v^{*}\right)^{\top}d\bar{V}_{t}+\sigma^{2}d \left(\left\|\bar{V}_{t}-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\wedge M^{2} \right)dt \tag{76}\] \[=-2\lambda\left\langle\bar{V}_{t}-v^{*},\bar{V}_{t}-\mathcal{P}_{ v_{b},R}\left(v_{\alpha}(\rho_{t})\right)\right\rangle dt+2\sigma\left(\left\| \bar{V}_{t}-v_{\alpha}(\rho_{t})\right\|_{2}\wedge M\right)\left(\bar{V}_{t}-v ^{*}\right)^{\top}dB_{t}\] \[+\sigma^{2}d\left(\left\|\bar{V}_{t}-v_{\alpha}(\rho_{t})\right\| _{2}^{2}\wedge M^{2}\right)dt\] \[=-\lambda\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}+\left\| \bar{V}_{t}-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)\right\|_{2 }^{2}-\left\|\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)-v^{*} \right\|_{2}^{2}\right]dt\] \[+2\sigma\left(\left\|\bar{V}_{t}-v_{\alpha}(\rho_{t})\right\|_{2 }\wedge M\right)\left(\bar{V}_{t}-v^{*}\right)^{\top}dB_{t}+\sigma^{2}d\left( \left\|\bar{V}_{t}-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\wedge M^{2}\right)dt,\] take expectation on both side, we have \[\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^ {2}\right] \tag{77}\] \[+\sigma^{2}d\mathbb{E}\left[\left\|\bar{V}_{t}-v_{\alpha}(\rho_{ t})\right\|_{2}^{2}\wedge M^{2}\right].\] For the term \(\mathbb{E}\left[\left\|\bar{V}_{t}-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t })\right)\right\|_{2}^{2}\right]\), we have \[\mathbb{E}\left[\left\|\bar{V}_{t}-\mathcal{P}_{v_{b},R}\left(v _{\alpha}(\rho_{t})\right)\right\|_{2}^{2}\right] \tag{78}\] \[+2\mathbb{E}\left[\left\langle\bar{V}_{t}-v_{\alpha}(\rho_{t}),v _{\alpha}(\rho_{t})-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right) \right\rangle\right]\] \[\geq\mathbb{E}\left[\left\|\bar{V}_{t}-v_{\alpha}(\rho_{t})\right\| _{2}^{2}\right]+\mathbb{E}\left[\left\|v_{\alpha}(\rho_{t})-\mathcal{P}_{v_{b}, R}\left(v_{\alpha}(\rho_{t})\right)\right\|_{2}^{2}\right]\] \[-\left(\frac{1}{2}\mathbb{E}\left[\left\|\bar{V}_{t}-v_{\alpha}( \rho_{t})\right\|_{2}^{2}\right]+2\mathbb{E}\left[\left\|v_{\alpha}(\rho_{t}) -\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)\right\|_{2}^{2}\right]\right)\] \[=\frac{1}{2}\mathbb{E}\left[\left\|\bar{V}_{t}-v_{\alpha}(\rho_{ t})\right\|_{2}^{2}\right]-\mathbb{E}\left[\left\|v_{\alpha}(\rho_{t})-\mathcal{P}_{v_{b},R} \left(v_{\alpha}(\rho_{t})\right)\right\|_{2}^{2}\right],\] insert this into Equation (77), we have \[\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^ {2}\right] \tag{79}\] \[-\frac{1}{2}\lambda\mathbb{E}\left[\left\|\bar{V}_{t}-v_{\alpha}( \rho_{t})\right\|_{2}^{2}\right]+\sigma^{2}d\left(\left\|\bar{V}_{t}-v_{\alpha}( \rho_{t})\right\|_{2}^{2}\wedge M^{2}\right).\] From Equation (79), we know \[\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^ {2}\right] \tag{80}\] \[+\sigma^{2}M^{2}d,\quad\text{for any }\lambda,\sigma;\] and \[\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^ {2}\right] \tag{81}\] \[+\left(-\frac{1}{2}\lambda+\sigma^{2}d\right)\mathbb{E}\left[ \left\|\bar{V}_{t}-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\right],\quad\text{for any }\lambda,\sigma;\] if \(\lambda\geq 2\sigma^{2}d\), by Equation (81), we have \[\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right] \leq-\lambda\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2} \right]+\lambda\left(\left\|\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t}) \right)-v^{*}\right\|_{2}^{2}+\left\|v_{\alpha}(\rho_{t})-\mathcal{P}_{v_{b},R 
**Remark 12**.: _When \(R=M=\infty\), we can show_ \[\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]=-\lambda\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]+\lambda\left\|v_{\alpha}(\rho_{t})-v^{*}\right\|_{2}^{2}-\left(\lambda-\sigma^{2}d\right)\mathbb{E}\left[\left\|\bar{V}_{t}-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\right], \tag{83}\] _and if, in addition, \(\lambda\geq\sigma^{2}d\), we have_ \[\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]\leq-\lambda\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]+\lambda\left\|v_{\alpha}(\rho_{t})-v^{*}\right\|_{2}^{2}; \tag{84}\] _this form differs from the one in [23, Lemma 18]._ The next result is a quantitative version of the Laplace principle and is taken from [23, Proposition 21]; hence, we state it here without proof. **Lemma 13**.: _For any \(r>0\) we define \(f_{r}:=\sup_{v\in B_{r}(v^{*})}f(v)\). Then, under the inverse continuity condition A2, for any \(r\in(0,R_{0}]\) and \(q>0\) such that \(q+f_{r}\leq f_{\infty}\), we have_ \[\left\|v_{\alpha}(\rho)-v^{*}\right\|_{2}\leq\frac{(q+f_{r})^{\nu}}{L_{\nu}}+\frac{\exp(-\alpha q)}{\rho\left(B_{r}\left(v^{*}\right)\right)}\int\left\|v-v^{*}\right\|_{2}d\rho(v). \tag{85}\] With the above preparation, we can now upper bound term \(III\). By Jensen's inequality, we have \[III:=\mathbb{E}\left[\left\|\frac{1}{N}\sum_{i=1}^{N}\bar{V}_{T^{*}}^{i}-v^{*}\right\|_{2}^{2}\right]\leq\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\left[\left\|\bar{V}_{T^{*}}^{i}-v^{*}\right\|_{2}^{2}\right], \tag{86}\] so it suffices to upper bound \(\mathbb{E}\left[\left\|\bar{V}_{T^{*}}-v^{*}\right\|_{2}^{2}\right]\). **Proposition 14**.: _Assume A1, A2 and A3 hold, \(\rho_{0}\in\mathcal{P}_{4}\left(\mathbb{R}^{d}\right)\) and satisfy_ \[\rho_{0}\left(B_{r}\left(v^{*}\right)\right)>0\quad\text{ for all }\quad r>0.\] _For any \(\epsilon\in(0,W_{2}^{2}(\rho_{0},\delta_{v^{*}}))\), \(R\in(\left\|v_{b}-v^{*}\right\|_{2}+\sqrt{\epsilon/2},\infty)\) and \(M\in(0,\infty)\), set_ \[T^{*}:=\frac{1}{\lambda}\log\left(\frac{2W_{2}^{2}(\rho_{0},\delta_{v^{*}})}{\epsilon}\right).\] _Then, if \(\lambda\geq 2\sigma^{2}d\) or \(\sigma^{2}M^{2}d=\mathcal{O}(\epsilon)\), we can choose \(\alpha\) large enough, depending on \(\lambda,\sigma,d,T^{*},R,v_{b},M,\epsilon\) and properties of \(f\), such that \(\mathbb{E}\left[\left\|\bar{V}_{T^{*}}-v^{*}\right\|_{2}^{2}\right]=\mathcal{O}(\epsilon)\)._ Proof.: We only prove the case \(\lambda\geq 2\sigma^{2}d\) in detail; the case \(\sigma^{2}M^{2}d=\mathcal{O}(\epsilon)\) is similar. By Lemma 13 and Lemma 16, we have \[\left\|v_{\alpha}(\rho_{t})-v^{*}\right\|_{2} \leq\frac{(q+f_{r})^{\nu}}{L_{\nu}}+\frac{\exp(-\alpha q)}{\rho_{t}\left(B_{r}\left(v^{*}\right)\right)}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}\right] \leq\frac{(q+f_{r})^{\nu}}{L_{\nu}}+\exp(-\alpha q)C_{2}C_{3}, \tag{87}\] where \(C_{2}:=\exp(q^{\prime}T^{*})/C_{4}<\infty\), with \(q^{\prime},C_{4}\) from Lemma 16, and \(C_{3}:=\sup_{t\in[0,T^{*}]}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}\right]<\infty\) by Lemma 15. Next, we deal with the term \((q+f_{r})^{\nu}/L_{\nu}\).
Let \(q=f_{r}\). By A2 and A3, we can choose a suitable \(r\) such that \(2L_{\nu}r^{1/\nu}\leq 2f_{r}\leq f_{\infty}\); further, by A3, we have \[\frac{(q+f_{r})^{\nu}}{L_{\nu}}=\frac{(2f_{r})^{\nu}}{L_{\nu}}\leq\frac{(2L_{\gamma})^{\nu}r^{(1+\gamma)\nu}}{L_{\nu}}, \tag{88}\] so if \[r<r_{0}:=\min\left\{\left(\frac{\epsilon}{8}\right)^{\frac{1}{2(1+\gamma)\nu}}\left(\frac{L_{\nu}}{(2L_{\gamma})^{\nu}}\right)^{\frac{1}{(1+\gamma)\nu}},\sqrt{\frac{\epsilon}{2}}\right\}, \tag{89}\] we will have \[\frac{\left(q+f_{r}\right)^{\nu}}{L_{\nu}}=\frac{(2f_{r})^{\nu}}{L_{\nu}}\leq\frac{\sqrt{\epsilon}}{2\sqrt{2}}. \tag{90}\] For the term \(\exp(-\alpha q)C_{2}C_{3}\), we can choose \(\alpha\) large enough such that \[\exp(-\alpha q)C_{2}C_{3}\leq\frac{\sqrt{\epsilon}}{2\sqrt{2}}. \tag{91}\] With these choices of \(r\) and \(\alpha\), combining with Equation (87), we have \[\left\|v_{\alpha}(\rho_{t})-v^{*}\right\|_{2}^{2}<\frac{\epsilon}{2},\quad\forall\ t\in[0,T^{*}], \tag{92}\] and \[\left\|v_{\alpha}(\rho_{t})-v_{b}\right\|_{2}\leq\left\|v_{\alpha}(\rho_{t})-v^{*}\right\|_{2}+\left\|v^{*}-v_{b}\right\|_{2}\leq\sqrt{\frac{\epsilon}{2}}+\left\|v^{*}-v_{b}\right\|_{2}\leq R. \tag{93}\] Then, by Lemma 11, we have \[\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]\leq-\lambda\left(\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]-\left\|v_{\alpha}(\rho_{t})-v^{*}\right\|_{2}^{2}\right)\leq-\lambda\left(\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]-\frac{\epsilon}{2}\right), \tag{94}\] since now \(\mathcal{P}_{v_{b},R}(v_{\alpha}(\rho_{t}))=v_{\alpha}(\rho_{t})\). Finally, by Gronwall's inequality, we obtain \(\mathbb{E}\left[\left\|\bar{V}_{T^{*}}-v^{*}\right\|_{2}^{2}\right]\leq\epsilon\). **Lemma 15**.: _Let \(\left\|v_{b}-v^{*}\right\|_{2}<R<\infty\), \(0<M<\infty\), and assume that SDE (18) has a strong solution. Then we have_ \[\sup_{t\in[0,T^{*}]}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}\right]\leq\sqrt{\max\left\{\mathbb{E}\left[\left\|\bar{V}_{0}-v^{*}\right\|_{2}^{2}\right],\lambda R^{2}+\sigma^{2}dM^{2}\right\}}. \tag{95}\] Proof.: By Equation (77), we have \[\begin{split}\frac{d}{dt}\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]&\leq-\lambda\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]+\lambda\left\|\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)-v^{*}\right\|_{2}^{2}\\ &\quad-\lambda\mathbb{E}\left[\left\|\bar{V}_{t}-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)\right\|_{2}^{2}\right]+\sigma^{2}d\mathbb{E}\left[\left\|\bar{V}_{t}-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\wedge M^{2}\right]\\ &\leq-\lambda\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]+\lambda R^{2}+\sigma^{2}dM^{2},\end{split} \tag{96}\] so by Gronwall's inequality, we have \[\mathbb{E}\left[\left\|\bar{V}_{t}-v^{*}\right\|_{2}^{2}\right]\leq\max\left\{\mathbb{E}\left[\left\|\bar{V}_{0}-v^{*}\right\|_{2}^{2}\right],\lambda R^{2}+\sigma^{2}dM^{2}\right\}, \tag{97}\] for any \(t\geq 0\).
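For the reader's convenience, we also record the elementary Gronwall-type comparison used repeatedly in this work (e.g., in (16) and (94)): if \(y\geq 0\) is differentiable and satisfies \(y^{\prime}(t)\leq-\lambda y(t)+b\) for constants \(\lambda,b>0\), then \[y(t)\leq e^{-\lambda t}y(0)+\frac{b}{\lambda}\left(1-e^{-\lambda t}\right)\leq\max\left\{y(0),\frac{b}{\lambda}\right\}\quad\text{for all }t\geq 0.\]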
**Lemma 16**.: _Assume SDE (18) has a strong solution and let \(\rho_{t}\) be the distribution of \(\bar{V}_{t}\). Then for any \(M\in(0,\infty),\tau\geq 1,r>0\) and \(R\in(\left\|v_{b}-v^{*}\right\|_{2}+r,\infty)\), we have_ \[\rho_{t}\left(B_{r}\left(v^{*}\right)\right)\geq C_{4}\exp(-q^{\prime}t)>0, \tag{98}\] _where_ \[C_{4}:=\int_{B_{r}(v^{*})}1+(\tau-1)\left\|\tfrac{v-v^{*}}{r}\right\|_{2}^{\tau}-\tau\left\|\tfrac{v-v^{*}}{r}\right\|_{2}^{\tau-1}d\rho_{0}(v), \tag{99}\] _and \(q^{\prime}\) depends on \(\tau,\lambda,\sigma,d,r,R,v_{b},M\)._ Proof.: We know that the law of \(\bar{V}_{t}\) satisfies the Fokker-Planck equation \[\partial_{t}\rho_{t}=\lambda\operatorname{div}\left(\left(v-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)\right)\rho_{t}\right)+\frac{\sigma^{2}}{2}\Delta\left(\left(\left\|v-v_{\alpha}\left(\rho_{t}\right)\right\|^{2}\wedge M^{2}\right)\rho_{t}\right). \tag{100}\] We first define the test function \[\phi_{r}^{\tau}(v):=\begin{cases}1+\left(\tau-1\right)\left\|\frac{v}{r}\right\|_{2}^{\tau}-\tau\left\|\frac{v}{r}\right\|_{2}^{\tau-1}&\left\|v\right\|_{2}\leq r\\ 0&\text{else}\end{cases},\quad\tau\geq 1; \tag{101}\] it is easy to verify that \(\phi_{r}^{\tau}\in\mathcal{C}_{c}^{1}(\mathbb{R}^{d},[0,1])\). Since \(\phi_{r}^{\tau}(v)\in[0,1]\), we have \(\rho_{t}(B_{r}(v^{*}))\geq\int_{B_{r}(v^{*})}\phi_{r}^{\tau}(v)d\rho_{t}(v)\), so to lower bound \(\rho_{t}(B_{r}(v^{*}))\), we only need to lower bound \(\int_{B_{r}(v^{*})}\phi_{r}^{\tau}(v)d\rho_{t}(v)\). By Green's formula, we have \[\begin{split}&\frac{d}{dt}\int_{B_{r}(v^{*})}\phi_{r}^{\tau}(v-v^{*})d\rho_{t}(v)=-\lambda\int_{B_{r}(v^{*})}\left\langle v-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right),\nabla\phi_{r}^{\tau}(v-v^{*})\right\rangle d\rho_{t}(v)\\ &\quad+\frac{\sigma^{2}}{2}\int_{B_{r}(v^{*})}\left(\left\|v-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\wedge M^{2}\right)\Delta\phi_{r}^{\tau}(v-v^{*})d\rho_{t}(v)\\ &=\tau(\tau-1)\int_{B_{r}(v^{*})}\frac{\left\|v-v^{*}\right\|_{2}^{\tau-3}}{r^{\tau-3}}\left\{\left(1-\frac{\left\|v-v^{*}\right\|_{2}}{r}\right)\left[\lambda\left\langle\frac{v-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)}{r},\frac{v-v^{*}}{r}\right\rangle\right.\right.\\ &\quad\left.\left.-\frac{\sigma^{2}}{2}\left(d+\tau-2\right)\frac{\left\|v-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\wedge M^{2}}{r^{2}}\right]+\frac{\sigma^{2}}{2}\frac{\left\|v-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\wedge M^{2}}{r^{2}}\right\}d\rho_{t}(v).\end{split} \tag{102}\] For simplicity, we will denote \[\begin{split}\Theta:&=\left(1-\frac{\left\|v-v^{*}\right\|_{2}}{r}\right)\left[\lambda\left\langle\frac{v-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)}{r},\frac{v-v^{*}}{r}\right\rangle\right.\\ &\quad\left.-\frac{\sigma^{2}}{2}\left(d+\tau-2\right)\frac{\left\|v-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\wedge M^{2}}{r^{2}}\right]+\frac{\sigma^{2}}{2}\frac{\left\|v-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\wedge M^{2}}{r^{2}}.\end{split} \tag{103}\] We can choose \(\epsilon_{1}\) small enough, depending on \(\lambda,\tau,\sigma,d\), such that when \(\left\|v-v^{*}\right\|_{2}/r>1-\epsilon_{1}\), we have \[\Theta\geq\left(1-\frac{\left\|v-v^{*}\right\|_{2}}{r}\right)\lambda\left\langle\frac{v-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right)}{r},\frac{v-v^{*}}{r}\right\rangle+\frac{\sigma^{2}}{3}\frac{\left\|v-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\wedge M^{2}}{r^{2}}. \tag{104}\] When \(v_{\alpha}(\rho_{t})\not\in B_{R}(v_{b})\), we have \(\left|\left\langle\right.\)
\(v-\mathcal{P}_{v_{b},R}\left(v_{\alpha}(\rho_{t})\right),v-v^{*}\left.\right\rangle\right|/r^{2}\leq C(r,R,v_{b})\) and \((\left\|v-v_{\alpha}(\rho_{t})\right\|_{2}^{2}\wedge M^{2})/r^{2}\geq C(r,M,R,v_{b})\), since \(R>\left\|v_{b}-v^{*}\right\|_{2}+r\); so we can choose \(\epsilon_{2}\) small enough, depending on \(\lambda,r,\sigma,R,v_{b},M\), such that when \(\left\|v-v^{*}\right\|_{2}/r>1-\min\{\epsilon_{1},\epsilon_{2}\}\), we have \(\Theta>0\). When \(v_{\alpha}(\rho_{t})\in B_{R}(v_{b})\) and \(\left\|v-v_{\alpha}(\rho_{t})\right\|_{2}\leq M\), we have \[\begin{split}\Theta&\geq\left(1-\frac{\left\|v-v^{*}\right\|_{2}}{r}\right)\lambda\left\langle\frac{v-v_{\alpha}(\rho_{t})}{r},\frac{v-v^{*}}{r}\right\rangle+\frac{\sigma^{2}}{3}\frac{\left\|v-v_{\alpha}(\rho_{t})\right\|_{2}^{2}}{r^{2}}\\ &=\left[\frac{\sigma^{2}}{3}+\left(1-\frac{\left\|v-v^{*}\right\|_{2}}{r}\right)\lambda\right]\frac{\left\|v-v^{*}\right\|_{2}^{2}}{r^{2}}+\frac{\sigma^{2}}{3}\frac{\left\|v_{\alpha}(\rho_{t})-v^{*}\right\|_{2}^{2}}{r^{2}}\\ &\quad-\left[\frac{2\sigma^{2}}{3}+\left(1-\frac{\left\|v-v^{*}\right\|_{2}}{r}\right)\lambda\right]\left\langle\frac{v_{\alpha}(\rho_{t})-v^{*}}{r},\frac{v-v^{*}}{r}\right\rangle\\ &\geq 0,\quad\text{when }\frac{\left\|v-v^{*}\right\|_{2}}{r}\in[1-\frac{2\sigma^{2}}{3\lambda},1],\end{split} \tag{105}\] by Lemma 17. When \(v_{\alpha}(\rho_{t})\in B_{R}(v_{b})\) and \(\left\|v-v_{\alpha}(\rho_{t})\right\|_{2}>M\), we have \[\Theta\geq\left(1-\frac{\left\|v-v^{*}\right\|_{2}}{r}\right)C(\lambda,r,R,v_{b})+\frac{\sigma^{2}}{3}M^{2}, \tag{106}\] so we can choose \(\epsilon_{3}\) small enough, depending on \(\lambda,r,\sigma,R,v_{b},M\), such that when \(\left\|v-v^{*}\right\|_{2}/r>1-\min\{\epsilon_{1},\epsilon_{2},\epsilon_{3},2\sigma^{2}/3\lambda\}\), we have \(\Theta\geq 0\). Combining all the above cases, we have \(\Theta\geq 0\) when \(\left\|v-v^{*}\right\|_{2}/r\geq 1-\min\{\epsilon_{1},\epsilon_{2},\epsilon_{3},2\sigma^{2}/3\lambda\}\). When \(\left\|v-v^{*}\right\|_{2}/r\leq 1-\min\{\epsilon_{1},\epsilon_{2},\epsilon_{3},2\sigma^{2}/3\lambda\}\), we have \[\tau(\tau-1)\frac{\left\|v-v^{*}\right\|_{2}^{\tau-3}}{r^{\tau-3}}\Theta=\tau(\tau-1)\frac{\left\|v-v^{*}\right\|_{2}^{\tau-3}}{r^{\tau-3}}\frac{\Theta}{\phi_{r}^{\tau}(v-v^{*})}\phi_{r}^{\tau}(v-v^{*})\geq-C_{5}\phi_{r}^{\tau}(v-v^{*}), \tag{107}\] for some constant \(C_{5}\) depending on \(r,R,M,v_{b},\lambda,\sigma,d,\tau\), since \(|\Theta|\) is bounded from above and \(\phi_{r}^{\tau}(v-v^{*})\) is bounded from below by a positive constant on the region \(\left\|v-v^{*}\right\|_{2}/r\leq 1-\min\{\epsilon_{1},\epsilon_{2},\epsilon_{3},2\sigma^{2}/3\lambda\}\). All in all, we have \[\frac{d}{dt}\int_{B_{r}(v^{*})}\phi_{r}^{\tau}(v-v^{*})d\rho_{t}(v)\geq-q^{\prime}\int_{B_{r}(v^{*})}\phi_{r}^{\tau}(v-v^{*})d\rho_{t}(v), \tag{108}\] where \[q^{\prime}:=\max\{C_{5},0\}, \tag{109}\] and by Gronwall's inequality, we have \[\rho_{t}(B_{r}(v^{*}))\geq\int_{B_{r}(v^{*})}\phi_{r}^{\tau}(v-v^{*})d\rho_{t}(v)\geq e^{-q^{\prime}t}\int_{B_{r}(v^{*})}\phi_{r}^{\tau}(v-v^{*})d\rho_{0}(v). \tag{110}\] **Lemma 17**.: _Assume \(a,b>0\). Then we have_ \[(a+b(1-x))x^{2}+ay^{2}-(2a+b(1-x))xy\geq 0, \tag{111}\] _for any \(x\in[1-2a/b,1]\cap(0,\infty),y\geq 0\)._ Proof.: When \(y=0\), the claim is true. When \(y>0\), dividing both sides by \(ay^{2}\) and denoting \(c=b/a\), the claim is equivalent to showing \[(1+c(1-x))(\frac{x}{y})^{2}-(2+c(1-x))\frac{x}{y}+1\geq 0, \tag{112}\] so it is enough to show \[\min_{r\geq 0}(1+c(1-x))r^{2}-(2+c(1-x))r+1\geq 0, \tag{113}\] when \(x\in[1-2/c,1]\).
We have \[\arg\min_{r}(1+c(1-x))r^{2}-(2+c(1-x))r+1=\frac{2+c(1-x)}{2+2c(1-x)}, \tag{114}\] and so \[\begin{split}&\min_{r\geq 0}(1+c(1-x))r^{2}-(2+c(1-x))r+1\\ &\quad=(1+c(1-x))(\frac{2+c(1-x)}{2+2c(1-x)})^{2}-(2+c(1-x))\frac{2+c(1-x)}{2+2c(1-x)}+1\\ &\quad=-\frac{1}{2}\frac{(2+c(1-x))^{2}}{2+2c(1-x)}+1\geq 0,\quad\text{when }x\in[1-\frac{2}{c},1].\end{split} \tag{115}\] This finishes the proof. ## 4 Numerical Experiments In this section we numerically demonstrate the benefit of using CBO with truncated noise. We compare our variant with standard CBO (with isotropic [11, 23, 42] and anisotropic noise [13, 24]) on several benchmark problems in optimization, which we summarize in Table 1. We consider the following benchmark test functions for global optimization. The criterion for success is defined as achieving the condition \(\left\|\frac{1}{N}\sum_{i=1}^{N}V_{K\Delta t}^{i}-v^{*}\right\|_{2}\leq 0.1\); we run \(1000\) tests and report \(N_{success}/1000\) as the success rate, where \(N_{success}\) is the number of successful tests. Isotropic Case. In the isotropic noise case, we always set \(\lambda=1,d=15,\sigma=0.3,v_{b}=0,R=\infty,\alpha=10^{5}\) and step-size \(\Delta t=0.02\). We choose \(\rho_{0}=\mathcal{N}(0,I_{d})\) and the initial points \(\{V_{0}^{i}\}_{i=1}^{N}\) are sampled i.i.d. from \(\rho_{0}\). The following tables show the success rates of the CBO method with truncation \(M=1\) and the original CBO method [11, 42, 23] without truncation for a growing number \(N\) of particles. Among the \(5\) test functions, Rastrigin and Alpine are harder than the rest, so for these we use more particles and a larger iteration number \(K\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline Name & Objective function \(f(v)\) & \(v^{*}\) & \(f\left(v^{*}\right)\) \\ \hline \hline Ackley & \(-20\exp\left(-0.2\sqrt{\frac{1}{d}\sum_{i=1}^{d}\left(v_{i}\right)^{2}}\right)-\exp\left(\frac{1}{d}\sum_{i=1}^{d}\cos\left(2\pi v_{i}\right)\right)+20+e\) & \((0,\ldots,0)\) & \(0\) \\ \hline Griewank & \(1+\sum_{i=1}^{d}\frac{\left(v_{i}\right)^{2}}{4000}-\prod_{i=1}^{d}\cos\left(\frac{v_{i}}{\sqrt{i}}\right)\) & \((0,\ldots,0)\) & \(0\) \\ \hline Rastrigin & \(10d+\sum_{i=1}^{d}\left[\left(v_{i}\right)^{2}-10\cos\left(2\pi v_{i}\right)\right]\) & \((0,\ldots,0)\) & \(0\) \\ \hline Alpine & \(10\sum_{i=1}^{d}\left|\left(v_{i}-v_{i}^{*}\right)\sin\left(10\left(v_{i}-v_{i}^{*}\right)\right)-0.1\left(v_{i}-v_{i}^{*}\right)\right|\) & \((0,\ldots,0)\) & \(0\) \\ \hline Salomon & \(1-\cos\left(200\pi\sqrt{\sum_{i=1}^{d}\left(v_{i}\right)^{2}}\right)+10\sqrt{\sum_{i=1}^{d}\left(v_{i}\right)^{2}}\) & \((0,\ldots,0)\) & \(0\) \\ \hline \end{tabular} \end{table} Table 1: Benchmark test functions
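For illustration, here is a hedged Python sketch (ours, not the released Matlab code) of two of the objectives from Table 1 and of the success criterion just described; both objectives are vectorized over a particle array of shape \((N,d)\).

```python
import numpy as np

def ackley(V):
    """Ackley function from Table 1, vectorized over rows; minimum 0 at the origin."""
    sq = np.sqrt(np.mean(V**2, axis=1))
    cos = np.mean(np.cos(2 * np.pi * V), axis=1)
    return -20 * np.exp(-0.2 * sq) - np.exp(cos) + 20 + np.e

def rastrigin(V):
    """Rastrigin function from Table 1; minimum 0 at the origin."""
    d = V.shape[1]
    return 10 * d + np.sum(V**2 - 10 * np.cos(2 * np.pi * V), axis=1)

def is_success(V, v_star, tol=0.1):
    """Success criterion of the experiments: particle mean within 0.1 of v*."""
    return np.linalg.norm(V.mean(axis=0) - v_star) <= tol
```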
Table 2: Success rates of the CBO method with truncation (\(M=1\)) and of the original CBO method (\(M=+\infty\)) for the Ackley, Griewank and Salomon functions with \(N\in\{150,300,600,900,1200\}\) particles and iteration number \(K=200\). For the functions Ackley and Salomon, the CBO method with truncation (\(M=1\)) demonstrates the ability to locate the global minimum using only \(300\) particles. Conversely, even with an increased number of particles (up to \(1200\)), the original CBO method cannot achieve a flawless success rate. In the case of Griewank, the original CBO method exhibits a notably low success rate, even when utilizing \(1200\) particles. However, under the same conditions, the CBO method with truncation (\(M=1\)) attains a success rate of \(0.791\). Anisotropic Case. In the anisotropic noise case, we set \(\lambda=1,v_{b}=0,R=\infty,\alpha=10^{5}\) and step-size \(\Delta t=0.02\). The following tables show the success rates of the anisotropic CBO method with truncation \(M=1\) and the original anisotropic CBO method [13, 24] without truncation in terms of \(N\). The test function Alpine is more difficult: we find that if we set \(\rho_{0}=\mathcal{N}(0,100I_{d})\), neither of the two algorithms works, so in this test we set \(d=15,\sigma=1,\rho_{0}=\mathcal{N}(0,I_{d})\). \begin{table} \begin{tabular}{c|c|c c c c c} \hline \hline \multicolumn{7}{c}{Iteration Number \(K=1000\), \(\sigma=5,d=20,\rho_{0}=\mathcal{N}(0,100I_{d})\)} \\ \hline \hline Test Function & \(M\) & N=75 & N=150 & N=300 & N=600 & N=900 \\ \hline \hline \multirow{2}{*}{Rastrigin} & 1 & 0.285 & 0.928 & 0.990 & 1 & 1 \\ & \(+\infty\) & 0.728 & 0.952 & 0.993 & 1 & 1 \\ \hline \multirow{2}{*}{Ackley} & 1 & 0.510 & 0.997 & 1 & 1 & 1 \\ & \(+\infty\) & 0.997 & 1 & 1 & 1 & 1 \\ \hline \multirow{2}{*}{Griewank} & 1 & 0.097 & 0.458 & 0.576 & 0.625 & 0.665 \\ & \(+\infty\) & 0.093 & 0.101 & 0.157 & 0.159 & 0.167 \\ \hline \multirow{2}{*}{Salomon} & 1 & 0.010 & 0.434 & 0.925 & 0.998 & 1 \\ & \(+\infty\) & 0.622 & 0.954 & 0.970 & 0.934 & 0.891 \\ \hline \hline \end{tabular} \end{table} Table 4: In this test, we find that \(\sigma=5\) works well. In the case of Rastrigin, Ackley and Salomon, the original anisotropic CBO method works better than the anisotropic CBO method with truncation when the particle number is small. In the case of Salomon, when increasing the number of particles to \(N=900\), the success rate of the original anisotropic CBO method decreases. In the case of Griewank, we find that the anisotropic CBO method with truncation (\(M=1\)) works better than the original anisotropic CBO method.
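Putting the pieces together, the success-rate protocol described at the beginning of this section could be sketched as follows. This is again a hedged illustration of ours, reusing `cbo_truncated_step`, `ackley` and `is_success` from the earlier sketches; the default parameters follow the isotropic setting, and the default particle number is our placeholder.

```python
import numpy as np

def success_rate(f, d=15, N=300, K=200, dt=0.02, lam=1.0, sigma=0.3,
                 M=1.0, alpha=1e5, R=np.inf, runs=1000, seed=0):
    """Fraction of independent runs in which the particle mean ends within 0.1
    of the known minimizer v* = 0; initial particles are i.i.d. N(0, I_d)."""
    rng = np.random.default_rng(seed)
    v_b = np.zeros(d)
    v_star = np.zeros(d)
    hits = 0
    for _ in range(runs):
        V = rng.standard_normal((N, d))
        for _ in range(K):
            V = cbo_truncated_step(V, f, lam, sigma, M, alpha, v_b, R, dt, rng)
        hits += int(is_success(V, v_star))
    return hits / runs

# Example: estimated success rate of truncated CBO (M = 1) on Ackley.
# print(success_rate(ackley))
```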
\begin{table} \begin{tabular}{c|c|c c c c c} \hline \hline \multicolumn{7}{c}{Iteration Number \(K=200\)} \\ \hline \hline Test Function & \(M\) & N=300 & N=600 & N=900 & N=1200 & N=1500 \\ \hline \hline \multirow{2}{*}{Rastrigin} & 1 & 0.180 & 0.256 & 0.298 & 0.322 & 0.337 \\ & \(+\infty\) & 0 & 0 & 0.004 & 0.004 & 0.007 \\ \hline \multirow{2}{*}{Alpine} & 1 & 0.029 & 0.049 & 0.051 & 0.070 & 0.080 \\ & \(+\infty\) & 0 & 0.001 & 0.004 & 0.004 & 0.004 \\ \hline \hline \multicolumn{7}{c}{Iteration Number \(K=500\)} \\ \hline \hline Test Function & \(M\) & N=300 & N=600 & N=900 & N=1200 & N=1500 \\ \hline \hline \multirow{2}{*}{Rastrigin} & 1 & 0.213 & 0.265 & 0.316 & 0.326 & 0.343 \\ & \(+\infty\) & 0.001 & 0.004 & 0.005 & 0.009 & 0.010 \\ \hline \multirow{2}{*}{Alpine} & 1 & 0.103 & 0.115 & 0.147 & 0.165 & 0.173 \\ & \(+\infty\) & 0.010 & 0.015 & 0.033 & 0.037 & 0.040 \\ \hline \hline \end{tabular} \end{table} Table 3: Both algorithms have difficulty finding the global minimizer; however, the success rates for the CBO method with truncation (\(M=1\)) are significantly higher compared to those of the original CBO method. ## 5 Conclusions In this paper we establish the convergence to a global minimizer of a potentially nonconvex and nonsmooth objective function for a variant of consensus-based optimization (CBO) which incorporates truncated noise. We observe that truncating the noise in CBO enhances the well-behavedness of the statistics of the law of the dynamics, which improves the convergence behavior and, in particular, allows a wider flexibility in choosing the parameters of the method. ## Acknowledgements and Competing Interests This work has been funded by the KAUST Baseline Research Scheme and the German Federal Ministry of Education and Research, and the Bavarian State Ministry for Science and the Arts. In addition to this, MF acknowledges the support of the Munich Center for Machine Learning; PR acknowledges the support of the Extreme Computing Research Center at KAUST; KR furthermore acknowledges the financial support from the Technical University of Munich - Institute for Ethics in Artificial Intelligence (IEAI); and LS thanks the support of the KAUST Optimization and Machine Learning Lab. LS also thanks the hospitality of the Chair of Applied Numerical Analysis of the Technical University of Munich for discussions that contributed to the finalization of this work.
\begin{table} \begin{tabular}{c|c|c c c c c} \hline \hline \multicolumn{7}{c}{Iteration Number \(K=200\)} \\ \hline \hline Test Function & \(M\) & N=300 & N=600 & N=900 & N=1200 & N=1500 \\ \hline \hline \multirow{2}{*}{Alpine} & 1 & 0 & 0.006 & 0.006 & 0.008 & 0.025 \\ & \(+\infty\) & 0.001 & 0.004 & 0.008 & 0.007 & 0.021 \\ \hline \hline \multicolumn{7}{c}{Iteration Number \(K=500\)} \\ \hline \hline Test Function & \(M\) & N=300 & N=600 & N=900 & N=1200 & N=1500 \\ \hline \hline \multirow{2}{*}{Alpine} & 1 & 0.130 & 0.224 & 0.291 & 0.336 & 0.365 \\ & \(+\infty\) & 0.083 & 0.175 & 0.250 & 0.292 & 0.330 \\ \hline \hline \multicolumn{7}{c}{Iteration Number \(K=1000\)} \\ \hline \hline Test Function & \(M\) & N=300 & N=600 & N=900 & N=1200 & N=1500 \\ \hline \hline \multirow{2}{*}{Alpine} & 1 & 0.102 & 0.198 & 0.293 & 0.340 & 0.368 \\ & \(+\infty\) & 0.097 & 0.179 & 0.250 & 0.295 & 0.331 \\ \hline \hline \end{tabular} \end{table} Table 5: In this test, the anisotropic CBO method with truncated noise (\(M=1\)) works better than the original anisotropic CBO method in most cases.
2306.14939
The Art of Embedding Fusion: Optimizing Hate Speech Detection
Hate speech detection is a challenging natural language processing task that requires capturing linguistic and contextual nuances. Pre-trained language models (PLMs) offer rich semantic representations of text that can improve this task. However, there is still limited knowledge about ways to effectively combine representations across PLMs and leverage their complementary strengths. In this work, we shed light on various combination techniques for several PLMs and comprehensively analyze their effectiveness. Our findings show that combining embeddings leads to slight improvements but at a high computational cost, and that the choice of combination has a marginal effect on the final outcome. We also make our codebase public at https://github.com/aflah02/The-Art-of-Embedding-Fusion-Optimizing-Hate-Speech-Detection .
Mohammad Aflah Khan, Neemesh Yadav, Mohit Jain, Sanyam Goyal
2023-06-26T17:30:35Z
http://arxiv.org/abs/2306.14939v2
# The Art of Embedding Fusion: Optimizing Hate Speech Detection

###### Abstract

Hate speech detection is a challenging natural language processing task that requires capturing linguistic and contextual nuances. Pre-trained language models (PLMs) offer rich semantic representations of text that can improve this task. However, there is still limited knowledge about ways to effectively combine representations across PLMs and leverage their complementary strengths. In this work, we shed light on various combination techniques for several PLMs and comprehensively analyze their effectiveness. Our findings show that combining embeddings leads to slight improvements but at a high computational cost, and that the choice of combination has a marginal effect on the final outcome. We also make our codebase public here.

## 1 Introduction

Recent advances in deep learning have been significantly influenced by the introduction of pretrained models Zhou et al. (2023), which serve as a strong foundation for various downstream tasks such as classification, generation, and sequence labeling. In particular, these models generate dense vector representations of input text that have been effective across a wide range of models, replacing older techniques such as TF-IDF, Word2Vec, and GloVe. The success of pretrained language models (PLMs) has led to the development of domain-specific versions, such as HateBERT Caselli et al. (2020) and BERTweet Nguyen et al. (2020), which use the same architecture as BERT Devlin et al. (2019). In this study, we aim to identify the most effective model or combination of models (BERT, HateBERT, and BERTweet) for hate speech classification tasks.

## 2 Related Work

Hate speech detection has been a prevalent task in the NLP community for a long time. Various techniques have been used to recognize hate speech, such as combining n-gram and linguistic features with machine learning models Davidson et al. (2017), contrastive learning Kim et al. (2022), and retraining language models on hateful data Caselli et al. (2020). Pre-trained language models (PLMs) have been successful in generating context-rich word embeddings, which can be combined to generate sentence embeddings using different methods like pooling embeddings or training siamese networks Reimers & Gurevych (2019). However, as different PLMs were trained on different datasets and have different sizes, their capabilities are expected to differ. Although previous works have shown that combining embeddings from different sources can boost performance (Lester et al., 2020; Badri et al., 2022), no work has compared all the well-known ways to combine word embeddings for hate speech detection. Overall, hate speech detection is an important task in NLP, and various techniques have been used to achieve it. PLMs have been successful in generating context-rich word embeddings, but their capabilities differ depending on their training dataset and size. Combining embeddings from different sources has been shown to improve performance, but there is currently no work that compares all the well-known ways to combine word embeddings for hate speech detection.

## 3 Dataset

Three datasets are utilized in this study: OLID Zampieri et al. (2019) for offensive vs. non-offensive Twitter post classification, Latent Hatred ElSherief et al. (2021) for implicit hate vs. explicit hate vs. non-hate classification, and DynaHate Vidgen et al. (2021) for hate vs. non-hate classification with human-in-the-loop generated sentences.
Dataset statistics are provided in A.1 and preprocessing steps are outlined in A.2.

## 4 Methodology

For each sentence, we first produce an embedding with BERT, HateBERT, and BERTweet using the pooler output. The pooler output is the last-layer hidden state of the first token of the sequence (the classification token), further processed by a linear layer and a tanh activation function; this is a model endpoint exposed by the HuggingFace API, Wolf et al. (2020). We conduct experiments using three random seeds, utilizing five combination strategies (addition, concatenation, interleaving, multiplication, and random interleaving) to combine two or all three embeddings (a minimal sketch of these strategies is given after this section). Each standalone/combined embedding is used to train a multi-layer perceptron (MLP) for the classification task using five-fold cross-validation. We anticipate that the concatenation and interleave methods will perform similarly, as MLPs do not use positional information. We expect random interleaving to perform poorly, as the embeddings become degenerate and dimensions lose meaning. Finally, we expect combining multiple embeddings to outperform using a single embedding. More detailed explanations of these methods can be found in A.3 and A.4.

## 5 Results

From Tables 1 and 2, we observe that the performances of the classifiers are very similar irrespective of the combination of embeddings. Only random interleaving is a poor choice, as it makes the embeddings degenerate. Combinations that increase the dimensionality seem marginally better, which can be attributed to the fact that they give the model more data from which to extrapolate relations. In all three tables (6, 7, and 8), the top three combination methods remain interleaving, concatenation, and multiplication of embeddings, but only marginally. In general, having more than one embedding seems marginally better, and among the three models, HateBERT and BERTweet are more likely to perform better, which can be attributed to their training on hateful and Twitter data.

\begin{table} \begin{tabular}{|c|c|} \hline Model & Accuracy \\ \hline \hline bert bertweet hatebert interleaved & 0.716 \\ bert bertweet hatebert concat & 0.705 \\ hatebert bert interleaved & 0.704 \\ hatebert bert concat & 0.700 \\ bert hatebert concat & 0.693 \\ \hline \end{tabular} \end{table} Table 1: DynaHate Results: Top 5 Combinations

\begin{table} \begin{tabular}{|c|c|} \hline Model & Accuracy \\ \hline \hline hatebert bert interleaved & 0.703 \\ hatebert bertweet multiplied & 0.700 \\ bert bertweet hatebert concat & 0.700 \\ bert bertweet hatebert multiplied & 0.700 \\ bert bertweet interleaved & 0.700 \\ \hline \end{tabular} \end{table} Table 2: Latent Hatred Results: Top 5 Combinations

## 6 Conclusion

The results indicate that concatenation and interleaving have similar performance, as expected. Addition, a commonly used embedding combination, also shows good performance. Although multiplication is rarely used, its performance is comparable to addition across tasks. Therefore, in low-compute settings, an embedding combination such as addition can be used to achieve performance similar to concatenation without increasing the input dimensionality by 2-3x, which would require more training time and resources.

### Acknowledgements

We express our gratitude to Devansh Gupta & Rishi Singhal for their valuable feedback on the initial drafts of our paper. We also extend our thanks to Dr. Md. Shad Akhtar for providing guidance during the early stages of this work when it was just a course project.
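To make the combination strategies concrete, here is a minimal sketch. The use of `pooler_output` follows the description above, while the HuggingFace hub ids, function names, and tensor shapes are our assumptions for illustration rather than details taken from the paper's released code.

```python
import torch
from transformers import AutoTokenizer, AutoModel

def pooler_embedding(model_name: str, sentence: str) -> torch.Tensor:
    # Pooler output: last-layer hidden state of the first ([CLS]) token,
    # passed through a linear layer and tanh, as exposed by HuggingFace.
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    with torch.no_grad():
        out = model(**tok(sentence, return_tensors="pt"))
    return out.pooler_output.squeeze(0)          # shape: (hidden_size,)

def combine(embs: list, how: str) -> torch.Tensor:
    stacked = torch.stack(embs)                  # (k, hidden_size)
    if how == "addition":
        return stacked.sum(dim=0)
    if how == "multiplication":
        return stacked.prod(dim=0)
    if how == "concatenation":
        return torch.cat(embs)                   # (k * hidden_size,)
    if how == "interleaving":
        # e1[0], e2[0], ..., ek[0], e1[1], e2[1], ...
        return stacked.t().reshape(-1)
    if how == "random_interleaving":
        # Dimensions are shuffled, so they lose their meaning (degenerate case).
        flat = torch.cat(embs)
        return flat[torch.randperm(flat.numel())]
    raise ValueError(f"unknown strategy: {how}")

models = ["bert-base-uncased", "GroNLP/hateBERT", "vinai/bertweet-base"]
embs = [pooler_embedding(m, "an example post") for m in models]
features = combine(embs, "concatenation")        # input to the MLP classifier
```

Note that interleaving and concatenation produce the same set of values in a different order, which is why an MLP, having no notion of position, is expected to treat them near-identically.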
### URM Statement

The authors acknowledge that all authors of this work meet the URM criteria of the ICLR 2023 Tiny Papers Track.
2309.02706
HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models
Large language models (LLMs) trained on massive corpora demonstrate impressive capabilities in a wide range of tasks. While there are ongoing efforts to adapt these models to languages beyond English, the attention given to their evaluation methodologies remains limited. Current multilingual benchmarks often rely on back translations or re-implementations of English tests, limiting their capacity to capture unique cultural and linguistic nuances. To bridge this gap for the Korean language, we introduce the HAE-RAE Bench, a dataset curated to challenge models lacking Korean cultural and contextual depth. The dataset encompasses six downstream tasks across four domains: vocabulary, history, general knowledge, and reading comprehension. Unlike traditional evaluation suites focused on token and sequence classification or mathematical and logical reasoning, the HAE-RAE Bench emphasizes a model's aptitude for recalling Korean-specific knowledge and cultural contexts. Comparative analysis with prior Korean benchmarks indicates that the HAE-RAE Bench presents a greater challenge to non-Korean models by disrupting the transfer of abilities and knowledge learned from English.
Guijin Son, Hanwool Lee, Suwan Kim, Huiseo Kim, Jaecheol Lee, Je Won Yeom, Jihyu Jung, Jung Woo Kim, Songseong Kim
2023-09-06T04:38:16Z
http://arxiv.org/abs/2309.02706v5
# HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models

###### Abstract

Large Language Models (LLMs) trained on massive corpora demonstrate impressive capabilities in a wide range of tasks. While there are ongoing efforts to adapt these models to languages beyond English, the attention given to their evaluation methodologies remains limited. Current multilingual benchmarks often rely on back translations or re-implementations of English tests, limiting their capacity to capture unique cultural and linguistic nuances. To bridge this gap for the Korean language, we introduce HAE-RAE Bench, a dataset curated to challenge models lacking Korean cultural and contextual depth. The dataset encompasses six downstream tasks across four domains: vocabulary, history, general knowledge, and reading comprehension. In contrast to traditional evaluation suites focused on token or sequence classification and specific mathematical or logical reasoning, HAE-RAE Bench emphasizes a model's aptitude for recalling Korean-specific knowledge and cultural contexts. Comparative analysis with prior Korean benchmarks indicates that the HAE-RAE Bench presents a greater challenge to non-native models by disrupting the transfer of abilities and knowledge learned from English.

## 1 Introduction

Over time, both language models and benchmark datasets have evolved in tandem, continually becoming more sophisticated and challenging, in recognition of the reciprocal relationship between them. Despite the pivotal role played by benchmark datasets in advancing the capabilities of language models, the evaluation of their multilingual abilities remains largely limited. Existing evaluation efforts often rely on translated versions of English datasets Shi et al. (2022) or translation-specific benchmarks such as WMT 21 Akhbardeh et al. (2021). This approach, while providing some insights into the models' performance across languages, may not fully capture the intricacies, nuances, and knowledge specific to each linguistic context. Some of the existing efforts to evaluate language models in Korean include Korean-NLI & STS Ham et al. (2020), KLUE Park et al. (2021), and KoBEST Kim et al. (2022). Korean-NLI & STS is derived from machine and human translations of English datasets for natural language inference (NLI) and semantic textual similarity (STS); accordingly, it hardly captures the unique nuances of the Korean language. KLUE is a Korean version of the GLUE benchmark Wang et al. (2018), which supports a variety of tasks including NLI, STS, and topic classification. Unfortunately, its adoption was limited due to its relatively simple tasks. The latest benchmark, KoBEST, is designed to assess a language model's ability to address questions that require advanced reasoning, like understanding passages of time or causality. However, with the advent of Large Language Models (LLMs) such as GPT-4 OpenAI (2023) and conversational agents built upon them, there is an increasing need to evaluate the cultural knowledge of language models, to ensure they converse with native speakers without sounding incoherent. To address this issue, we introduce **HAE-RAE Bench**,* a Korean benchmark dataset originally crafted to capture culture-specific nuances inherent to the Korean language.

Footnote *: The link for the dataset will be added after the review due to anonymity issues.

## 2 Related Work

### Language Model

Since the introduction of the Transformer architecture Vaswani et al. (2017) and early derivatives like BERT Devlin et al.
(2018) and GPT Radford et al. (2018), research in English language models has expanded rapidly. The debut of Instruct-GPT (Ouyang et al., 2022) and Flan-T5 (Chung et al., 2022), with their instruction-following capabilities, further invigorated this interest. This has led to the development of various instruction-tuned models, such as Llama-2-Chat (Touvron et al., 2023), Vicuna (Vicuna, 2023), WizardLM (Xu et al., 2023), and Platypus (Lee et al., 2023). While most of these models primarily focus on English, there are notable exceptions for Chinese, with Qwen (QwenLM, 2023), Baichuan (Yang et al., 2023), and GLM (Zeng et al., 2022). To adapt these advancements to various languages, several approaches are being researched, including: 1. building language-specific models from scratch, such as Polyglot-Ko (Korean) (Ko et al., 2023), Hyperclova (Korean) (Kim et al., 2021), Japanese StableLM (Japanese) (StabilityAI, 2023), and ruGPT (Russian) (ai-forever, 2023); 2. developing multilingual models like BLOOM (Scao et al., 2022), mT5 (Xue et al., 2020), and UMT5 (Chung et al., 2023); 3. adapting English models for other languages, as seen with Sabia (Pires et al., 2023) and Chinese-LLaMA (Cui et al., 2023). Accordingly, essential research questions emerge, including: "When can we consider a model to have been trained on a sufficient number of language tokens to produce culturally and grammatically coherent sentences?" This highlights the importance of benchmarks curated to evaluate the multilingualism of language models.

### Multilingual Evaluation

Along with the English language models, multi-task benchmarks like GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) were introduced. Once these were saturated, they were followed by even bigger benchmarks such as HELM (Liang et al., 2022), MMLU (Hendrycks et al., 2020), and BIG-bench (Srivastava et al., 2022). Non-English evaluation research has mirrored this trend, predominantly through translation or re-implementation of existing English benchmarks. Examples include JGLUE (Kurihara et al., 2022), KLUE (Park et al., 2021), and CMMLU (Li et al., 2023), which are Japanese and Korean adaptations of GLUE and a Chinese re-implementation of MMLU, respectively. Yet, these benchmarks fall short of capturing the native knowledge in language models. This highlights the need for evaluation suites curated to assess the cultural context of a model. Recent research in this direction is BHASA (Leong et al., 2023), aimed at gauging the cultural depth of language models in Southeast Asian languages. Nonetheless, limitations are apparent: only 34 questions for Indonesian and 28 for Tamil in the entire dataset specifically address cultural representation tasks. HAE-RAE Bench advances the field by introducing an evaluation set of 1.5K questions, curated to assess Korean-specific knowledge in language models.

### Korean Evaluation

Korean language model evaluation is also a field of interest, with resources emerging after English and Chinese. Examples include Korean-NLI & STS (Ham et al., 2020), KorFin-ASC (Son et al., 2023), KLUE (Park et al., 2021), and KoBEST (Kim et al., 2022). Korean-NLI & STS are based on translations of English datasets for natural language inference (NLI) and semantic textual similarity (STS), potentially missing Korean nuances. KorFin-ASC was originally built from Korean news; however, it concentrates on sentiment classification in the financial domain.
KLUE mirrors the GLUE benchmark for Korean, covering tasks like Topic Classification, Semantic Textual Similarity, Natural Language Inference, and more. However, these benchmarks, whether translated or task-oriented, cannot fully assess language-specific models. Large English models can excel in these evaluation suites by leveraging multilingual capabilities derived from scale. Recent research includes KoBEST (Kim et al., 2022), which features Korean re-implementations of HellaSwag (Zellers et al., 2019), COPA (Gordon et al., 2012), BOOLQ (Clark et al., 2019), SentiNeg (Savanur and Sumathi, 2023), and WiC (Pilehvar and Camacho-Collados, 2018). KoBEST advances the field by introducing reasoning tasks previously overlooked. HAE-RAE Bench distinguishes itself from the above-mentioned Korean benchmarks by evaluating the depth of knowledge encoded in language models instead of their natural language understanding or reasoning abilities.

## 3 HAE-RAE Bench

The design principle behind HAE-RAE Bench significantly differs from earlier Korean benchmark suites like KLUE (Park et al., 2021) or KoBEST (Kim et al., 2022). While previous benchmarks focused on evaluating natural language understanding or reasoning abilities, HAE-RAE emphasizes the depth of knowledge itself. This change is driven by the emergence of LLMs and the conversational agents or search engines built on them. We posit that knowledge of Korean vocabulary, culture, geography, and history might be as crucial as, if not more crucial than, traditional NLU tasks such as token or sequence classification in conversational situations. Accordingly, the resulting benchmark encompasses six downstream tasks: Loan Words (LW), Standard Nomenclature (SN), Rare Words (RW), General Knowledge (GK), History (HI), and Reading Comprehension (RC). Statistics for the HAE-RAE Bench dataset are provided in Table 1. "Type" indicates the structure of the question. "Q" denotes that the instance comprises a question with multiple choices, while "Q, P" indicates the inclusion of an associated passage. We also present the fertility rate of the dataset, tokenized using different models: Polyglot-Ko (Ko et al., 2023), UMT5 (Chung et al., 2023), and Llama-2 (Touvron et al., 2023). The fertility rate (Acs, 2019) calculates the average number of sub-tokens generated per word. A fertility rate of 1 implies that the tokenizer's vocabulary encompasses every word in the text. A higher fertility rate may suggest potential challenges for the tokenizer in grasping context. In our observations, the fertility rate increases for models with less emphasis on Korean. To assess the relative complexity of the vocabularies in the HAE-RAE Bench, we compared its fertility rate with that of KoBEST, as shown in Table 2. Using the polyglot-ko tokenizer, the fertility rates for HAE-RAE Bench and KoBEST are 3.0 and 2.7, respectively. This suggests that the HAE-RAE Bench comprises less common words. Examples for each subset of the dataset are presented in section 9.3.

### Loan Words

**Task Description** Loan words refer to words directly adopted from foreign languages. In South Korea, the National Institute of Korean Language (NIKL) * formulates corresponding Korean terms for such words. In this task, language models are given a foreign word along with five choices and are tasked to identify the correct Korean equivalent.

**Creation** The pairs of foreign words and their Korean equivalents are sourced from NIKL.
Some Korean terms are infrequently used, either because the foreign word has been entrenched in society for a long time or because it is a recent addition and not yet widely recognized. To ensure we focus on reasonably common terms, we filter the list to only include words present in both "Naver Knowledge Encyclopedia" * and "Daum Encyclopedia" *, the two most widely used online encyclopedias in Korea. From the refined list, we randomly sampled 200 words. Incorrect options were selected from the remaining terms based on their Levenshtein distance (Levenshtein, 1966) to the correct answer. While Levenshtein distance may initially seem to prioritize syntax over semantics, it effectively captures both in the Korean language. "Han" (Chinese logograms) constitute about 55% of the Korean vocabulary; accordingly, words with the same Korean letter have related meanings. Moreover, the structure of the Korean language involves compounding, where multiple "roots" (fundamental word units) merge to form new words. Consequently, words sharing similar meanings often include the same "root", making the syntactic and semantic distances in Korean words largely aligned. Finally, we applied a Levenshtein distance threshold of 3, omitting samples with fewer than four incorrect options meeting this criterion.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{Total N.} & \multicolumn{2}{c}{Avg. Words (std)} & \multicolumn{3}{c}{Fertility Rate (std)} \\ \cline{3-9} Category & Type & Question & Unique Morpheme & per question & per passage & Polyglot-Ko & UMT5 & Llama-2 \\ \hline Loan Words & Q & 169 & 960 & 5.1 (0.3) & - & 3.9 (0.3) & 4.1 (0.3) & 6.7 (0.6) \\ Rare Words & Q & 405 & 2721 & 13.0 (3.4) & - & 3.1 (0.3) & 3.5 (0.3) & 6.1 (0.4) \\ Standard Nomenclature & Q & 153 & 1018 & 8.3 (0.5) & - & 3.2 (0.4) & 3.6 (0.4) & 6.4 (0.6) \\ Reading Comprehension & Q, P & 447 & 5825 & 7.1 (1.8) & 69.6 (44.6) & 2.5 (0.4) & 2.8 (0.4) & 6.0 (0.5) \\ General Knowledge & Q, P & 176 & 2099 & 7.0 (3.0) & 9.1 (13.6) & 3.4 (0.6) & 3.7 (0.6) & 6.4 (0.9) \\ History & Q & 188 & 1595 & 12.8 (3.5) & - & 3.3 (0.4) & 3.8 (0.4) & 6.3 (0.6) \\ \hline \hline \end{tabular} \end{table} Table 1: HAE-RAE Bench Statistics.

\begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Polyglot-Ko & UMT5 & Llama-2 \\ \hline HAE-RAE Bench & 3.00 (0.38) & 3.56 (0.37) & 6.33 (0.59) \\ KoBEST & 2.70 (0.34) & 3.39 (0.45) & 6.44 (0.75) \\ \hline \hline \end{tabular} \end{table} Table 2: Fertility rate (std) of HAE-RAE Bench and KoBEST.

### Standard Nomenclature

**Task Description** Standard Nomenclatures, published by NIKL, are unified terminology for domain-specific words. In this task, language models are presented with a specialized term along with five options, with the objective of identifying the official term endorsed by NIKL.

**Creation** Pairs of domain-specific words and their official terms are collected from NIKL. We follow the approach used in 3.1 to create questions.

### Rare Words

**Task Description** The Rare Words task aims to probe language models' understanding of challenging vocabulary. Given a definition and five words, models are tasked with selecting the word that best suits the provided definition.

**Creation** We sourced pairs of definitions and challenging words from past episodes of the TV program "Woorimal Battle," * known for its challenging Korean vocabulary quizzes. We follow the approach used in 3.1 to create questions.
Footnote *: [https://program.kbs.co.kr/1tv/culture/woorimal/pc/index.html](https://program.kbs.co.kr/1tv/culture/woorimal/pc/index.html)

### General Knowledge

**Task Description** General Knowledge evaluates the model's familiarity with various aspects of Korean culture, using five-option multiple-choice questions.

**Creation** We first identified five primary categories for general knowledge: law, tradition, geography, Korean pop, and Korean drama. We then crowd-sourced questions to fit these subcategories. Overlapping and factually incorrect questions were removed, along with those not aligning with the defined category. Additional investigations were conducted to ensure no superficial artifacts were inadvertently introduced. Basic statistics for each sub-category are shown in Table 3.

**Investigation** Following Kaushik and Lipton (2018), we examined the performance of Polyglot-Ko-12.8B in question-only (Q-only) and context-only (C-only) settings. Polyglot-Ko-12.8B achieved scores of 25.57% and 23.86% for Q-only and C-only respectively, while the full setting outperformed both with a score of 32.95%. Although the Q-only and C-only settings are within 10% accuracy of the full setting, it is worth noting that the model's lower bound is set at 20%. Therefore, we conclude that the dataset was crafted correctly to require both question and context to answer.

### History

**Task Description** The history task assesses the model's understanding of historical events. Presented with a question and five options, the model must identify the correct answer.

**Creation** We first sourced web pages tagged "Korean history" from Namuwiki, Korea's equivalent of Wikipedia, and randomly selected 40 pages. From each page, the authors manually crafted five questions. Following Malaviya et al. (2022), we filtered out 12 questions with overlapping tokens between questions and answers. Moreover, to investigate potential biases introduced while creating the wrong options, we analyzed two simple linguistic indicators: the probability of the longest option being correct was 21.53%, and for the shortest option, it was 17.01%. Through this process, we removed overly simplistic questions and investigated potential biases.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Metric & Full & Q-Only & C-Only & \(\Delta\) (_min_) \\ \hline Acc & **32.95** & 25.57 & 23.86 & -7.38 \\ Macro F1 & **32.01** & 24.35 & 23.64 & -7.56 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of Polyglot-Ko-12.8B on General Knowledge with truncated inputs.

\begin{table} \begin{tabular}{l c c} \hline \hline Category & Sample N. & Average Length \\ \hline Tradition & 17 & 35.2 \\ Law & 10 & 32.2 \\ Geography & 49 & 46.0 \\ Korean Pop & 50 & 42.3 \\ Korean Drama & 50 & 36.7 \\ \hline \hline \end{tabular} \end{table} Table 3: The number of data instances for each category.

### Reading Comprehension

**Task Description** Reading comprehension tasks involve providing paired questions and passages along with four options. The materials for our Reading Comprehension (RC) tests were sourced from the Korean Language Ability Test (KLAT), an exam designed to evaluate proficiency in Korean as a second language.

**Creation** The tests were gathered from sample materials publicly released by the Korea Educational Testing Service (KETS). We omitted questions that required interpreting images.
The sourced KLAT is divided into four proficiency tiers: three that correspond to the Common European Framework of Reference (CEFR) levels--A (beginner), B (intermediate), and C (advanced)--plus an introductory level below A for absolute beginners.

### Quality Check

To further filter the collected questions, we reviewed the entire dataset and conducted factual verification using online resources. In this process, we manually corrected 23 questions with labeling or crawling errors.

## 4 Evaluation Settings

### Language Models

We evaluated 10 models of varying sizes from four model families. From openly available models we selected (1) Korean-focused models: Polyglot-ko-1.3B/3.8B/5.8B/12.8B (Ko et al., 2023), (2) multilingual models: UMT5-XL/XXL (Chung et al., 2023), and (3) English-centric models: Llama-2-7B/13B (Touvron et al., 2023). To investigate the influence of the number of pretrained Korean tokens on model performance, we excluded models that do not disclose the relevant statistics. This leaves out Falcon (Penedo et al., 2023) and BLOOM (Scao et al., 2022) from our experiments. Additionally, we included GPT-3.5-Turbo and GPT-4 in our evaluation to gauge HAE-RAE Bench's efficacy in assessing LLMs.

**Polyglot-Ko** (Ko et al., 2023) is available in four sizes: 1.3B, 3.8B, 5.8B, and 12.8B, all built using the GPT-NeoX codebase. The models were pretrained on a Korean-only corpus, with sizes ranging from 167B to 212B tokens. Despite its smaller pretraining budget compared to similar-sized English models, Polyglot-Ko achieved state-of-the-art results on KoBEST, a benchmark comprising five Korean language understanding and reasoning tasks (Kim et al., 2022).

**UMT5** (Chung et al., 2023) was originally trained in five sizes: small (77M), base (250M), large (800M), xlarge (3B), and xxlarge (13B), closely following the mT5 architecture (Xue et al., 2020). However, the large variant was not released publicly due to pretraining instability. The models are trained on a corpus of 1T tokens, which includes 14.8 billion Korean tokens. UMT5 surpasses mT5 on benchmarks such as XNLI (Conneau et al., 2018) and TyDi QA (Clark et al., 2020). As the small and base models do not have counterparts in the Polyglot-Ko suite, our experiments focus on the xlarge and xxlarge models.

**Llama-2** (Touvron et al., 2023) is available in three sizes: 7B, 13B, and 70B. It is trained on a corpus of 2T tokens, predominantly in English (89.7%), with Korean comprising a mere 0.06%, or about 0.6B tokens. We utilize the version without fine-tuning. The Llama-2-70B model is excluded from our study due to the absence of a corresponding Korean model.

### Methodology

To evaluate the models, we employ the "log-likelihood" method implemented via LM-Eval-Harness (Gao et al., 2021), using accuracy as our primary metric. This involves computing the log-likelihood of each response and selecting the option with the highest likelihood. All evaluations are implemented using bfloat16 precision in 0-shot, 5-shot, and 10-shot settings. The aim of HAE-RAE Bench is to curate a dataset that challenges models lacking depth in Korean culture and knowledge, thereby guiding researchers in creating better Korean language models. To compare this benchmark's ability to differentiate less native language models against prior benchmarks, we use KoBEST (Kim et al., 2022) as our baseline. We selected KoBEST as it offers a broad range of language understanding and reasoning tasks. KoBEST comprises five tasks: BoolQ, COPA, HellaSwag, WiC, and SentiNeg.
However, given the findings of Ko et al. (2023) that both monolingual and multilingual language models exhibit inconsistent performance on WiC, we opted to omit this task from our assessment. While there are other available datasets that may be adopted as baselines, they come with limitations. For instance, Korean-NLI & STS Ham et al. (2020), being translated from English, is inherently easier for English models. KLUE Park et al. (2021), despite being handcrafted, primarily focuses on basic NLU tasks like topic classification and NER. This makes it incapable of evaluating complex reasoning capabilities. Additionally, its test set is not publicly available.

## 5 Evaluation Results

**Is HAE-RAE Bench harder for foreign models?** In Tables 5 and 6, we observe that the performance of LMs scales with model size and the number of exemplars within the same suite. Yet, despite their extensive training budgets, UMT5 and Llama-2 consistently fall short of their Polyglot-Ko counterparts. Furthermore, they rarely surpass the results of Polyglot-Ko-1.3B (0-shot). These results reaffirm the importance of language-specific corpora in learning cultural context and knowledge. They also highlight the effectiveness of HAE-RAE Bench in assessing a language model's proficiency in Korean. Our results, illustrated in Tables 7, 8, and 9, suggest that HAE-RAE Bench is particularly challenging for non-Korean models compared to the KoBEST benchmark. The performance gap between Polyglot-Ko and its counterparts is more pronounced on HAE-RAE Bench than on KoBEST across all exemplar counts. Notably, for Llama-2-13B, the margin narrows considerably on KoBEST with an increase in exemplars. This discrepancy highlights that the proposed benchmark is especially challenging for models not tailored to Korean, while being difficult to mitigate through in-context learning.

\begin{table} \begin{tabular}{l r r r r r r r r r r} \hline \hline & & \multicolumn{3}{c}{History} & \multicolumn{3}{c}{General Knowledge} & \multicolumn{3}{c}{Reading Comprehension} \\ \cline{3-11} Model & Params & n=0 & n=5 & n=10 & n=0 & n=5 & n=10 & n=0 & n=5 & n=10 \\ \hline \multirow{4}{*}{Polyglot-Ko} & 1.3B & 60.11 & 78.19 & 77.13 & 26.70 & 30.68 & 28.98 & 34.45 & 37.81 & 37.14 \\ & 3.8B & 69.15 & 86.17 & 85.11 & 28.41 & **33.52** & 33.52 & 40.49 & 42.06 & 40.04 \\ & 5.8B & 79.79 & 85.11 & 81.91 & 29.55 & 27.84 & 28.41 & 40.72 & 42.73 & 41.39 \\ & 12.8B & **80.32** & **88.30** & **90.43** & **32.95** & **33.52** & **34.66** & **41.61** & **45.41** & **46.76** \\ \multirow{2}{*}{UMT5} & 3B & 14.36 & 12.77 & 14.36 & 22.73 & 19.32 & 19.32 & 25.28 & 24.83 & 25.28 \\ & 13B & 21.59 & 18.09 & 19.15 & 21.81 & 25.00 & 19.32 & 29.75 & 25.28 & 27.74 \\ \multirow{2}{*}{LLaMA-2} & 7B & 28.72 & 35.64 & 35.64 & 21.02 & 24.43 & 25.00 & 29.98 & 32.89 & 31.32 \\ & 13B & 35.11 & 38.83 & 40.96 & 28.41 & 31.82 & 28.98 & 31.99 & 36.47 & 34.00 \\ \hline \hline \end{tabular} \end{table} Table 6: Evaluation results on the History, General Knowledge, and Reading Comprehension tasks.
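As a concrete illustration of the evaluation protocol described in Section 4.2, the following is a minimal sketch of log-likelihood scoring for one multiple-choice instance. The hub id and prompt handling are our assumptions for illustration, and LM-Eval-Harness handles tokenization boundaries more carefully than this simplified version.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def option_loglikelihood(model, tok, context: str, option: str) -> float:
    # Sum of log-probabilities of the option's tokens given the context;
    # logits at position t-1 predict the token at position t.
    ctx_len = tok(context, return_tensors="pt").input_ids.shape[1]
    ids = tok(context + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = model(ids).logits.log_softmax(dim=-1)
    total = 0.0
    for pos in range(ctx_len, ids.shape[1]):
        total += logprobs[0, pos - 1, ids[0, pos]].item()
    return total

name = "EleutherAI/polyglot-ko-1.3b"   # assumed hub id, for illustration
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "..."                          # question (plus exemplars, if any)
options = ["...", "...", "...", "...", "..."]   # the five answer choices
pred = max(range(len(options)),
           key=lambda i: option_loglikelihood(model, tok, prompt, options[i]))
```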
\begin{table} \begin{tabular}{l c c c c} \hline \hline & & Polyglot-Ko & \multicolumn{2}{c}{\(\Delta\)} \\ \cline{4-5} Dataset & Params & Average & UMT5 & Llama-2 \\ \hline \multirow{4}{*}{HAE-RAE Bench} & 1.3B & 51.0 & -16.5 & -15.1 \\ & 3.8B & 54.6 & -20.1 & -18.7 \\ & 5.8B & 59.4 & -25.0 & -23.5 \\ & 12.8B & 59.5 & -25.1 & -23.6 \\ \hline \multirow{4}{*}{KoBEST} & 1.3B & 56.3 & -6.3 & -6.1 \\ & 3.8B & 55.7 & -5.7 & -5.5 \\ & 5.8B & 56.0 & -6.0 & -5.8 \\ & 12.8B & 65.2 & -15.2 & -15.0 \\ \hline \hline \end{tabular} \end{table} Table 7: Average performance of Polyglot-Ko vs. UMT5-XXL and Llama-2-13B on HAE-RAE and KoBEST (0-shot).

\begin{table} \begin{tabular}{l r r r r r r r r r} \hline \hline & & \multicolumn{3}{c}{Loan Words} & \multicolumn{3}{c}{Standard Nomenclature} & \multicolumn{3}{c}{Rare Words} \\ \cline{3-11} Model & Params & n=0 & n=5 & n=10 & n=0 & n=5 & n=10 & n=0 & n=5 & n=10 \\ \hline \multirow{4}{*}{Polyglot-Ko} & 1.3B & 76.92 & 88.76 & 91.72 & 60.13 & 69.93 & 71.24 & 47.41 & 61.48 & 61.23 \\ & 3.8B & 78.70 & 88.76 & 91.72 & 63.40 & 79.74 & 77.78 & 47.16 & 70.62 & 72.10 \\ & 5.8B & 82.84 & 93.49 & 94.08 & **66.67** & 82.35 & 83.66 & **56.79** & 73.09 & 74.57 \\ & 12.8B & **87.57** & **94.67** & **94.67** & 61.44 & **84.97** & **86.93** & 53.09 & **75.31** & **76.05** \\ \multirow{2}{*}{UMT5} & 3B & 58.58 & 61.54 & 59.76 & 41.83 & 37.25 & 33.33 & 25.68 & 25.43 & 24.44 \\ & 13B & 58.58 & 59.76 & 60.36 & 41.83 & 43.79 & 44.44 & 33.09 & 30.37 & 28.64 \\ \multirow{2}{*}{LLaMA-2} & 7B & 66.86 & 73.96 & 75.15 & 39.22 & 49.02 & 50.98 & 29.38 & 39.26 & 39.01 \\ & 13B & 66.86 & 77.51 & 78.11 & 49.02 & 57.52 & 64.05 & 32.35 & 42.47 & 43.95 \\ \hline \hline \end{tabular} \end{table} Table 5: Evaluation results on the Loan Words, Standard Nomenclature, and Rare Words tasks.

The entire result for KoBEST is illustrated in section 9.1.

**Does language frequency in the training corpora matter?** Despite UMT5 being trained on a larger volume of Korean tokens, it underperforms Llama-2 on HAE-RAE Bench. Moreover, the advantage of in-context learning is relatively minimal for UMT5. Our findings support previous claims that the language-specific reasoning capabilities of language models are not solely tied to the number of dedicated tokens in the pretraining corpus (Shi et al., 2022). These results indicate that language models under the size of 20B parameters also transfer their in-context learning abilities to low-resource languages.

**How important is the model size for HAE-RAE Bench?** In Table 10, we employ regression and analysis of variance (ANOVA) to examine the correlation between the parameter count of Polyglot-Ko models and their performance. To narrow the focus solely to the impact of model size, the analysis is limited to the Polyglot-Ko family, thus setting aside variables like corpus quality or model architecture. For the KoBEST benchmark, the results demonstrate a marked relationship between performance and model size, as indicated by the high \(R^{2}\) value of 0.71 and the significant F-statistic and p-value. In contrast, for HAE-RAE Bench, model size explains only about a quarter of the performance variability. Additionally, the absence of statistical significance in both the regression and ANOVA for HAE-RAE Bench implies that its evaluation is influenced by a broader spectrum of factors, pointing to challenges beyond just model size.
**Can GPT-3.5/4 ace HAE-RAE Bench?** In Table 11, the performance of GPT-3.5 and GPT-4 on HAE-RAE Bench and KoBEST is presented. Unlike the openly available models, for which we leveraged a log-probability method to gauge accuracy, these models do not provide log probabilities for individual tokens. Accordingly, we prompted the models to generate the number of the option they deemed correct. Direct comparison between these evaluation methods is not feasible; however, the method used for proprietary models is more challenging than the log-likelihood method applied to open models. The former entails generating answers from the entire vocabulary, whereas the latter restricts choices to five options. Notably, GPT-3.5 and GPT-4 achieved scores of 51.2% and 67.8% on the HAE-RAE Bench, respectively, indicating potential for further improvement. Conversely, their performances on KoBEST were 68.0% and 81.1%, suggesting narrower margins for improvement. In summary, state-of-the-art language models such as GPT-3.5 and GPT-4 have yet to master either HAE-RAE Bench or KoBEST, though more room is left on HAE-RAE Bench. The entire evaluation results for the GPT-3.5 and GPT-4 models are available in section 9.2.

**Can knowledge be transferred from English?** Past research indicates that LLMs can internally transfer knowledge acquired in English to low-resource languages Huang et al. (2023); Zhou et al. (2023). To investigate whether LLMs leverage abilities derived from English corpora to solve HAE-RAE Bench, we employed cross-lingual thought prompting (XLT) with GPT-3.5 and GPT-4. XLT Shi et al. (2022) is a technique that aids the transfer of abilities learned in English to other languages. As illustrated in Table 11, English prompting enhances the performance of LLMs on both HAE-RAE Bench and KoBEST. However, the gains for HAE-RAE Bench are modest: 4.2 for GPT-3.5 and 0.4 for GPT-4. In contrast, the improvements on KoBEST are more substantial, with margins of 11.4 and 9.9, respectively. Given KoBEST's focus on language understanding and reasoning, we suspect that such skills are more seamlessly transferable across languages within models. On the other hand, HAE-RAE Bench probes the nuances of cultural context and knowledge, aspects that are challenging to learn from English tokens, thereby undermining the benefits of extensive training across various languages.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{3}{c}{Regression} & ANOVA \\ Benchmark & \(\beta_{0}\) & \(\beta_{1}\) & \(R^{2}\) & \(F\)-statistic \\ \hline HAE-RAE Bench & 58.79 & 0.73 & 0.26 & 1.42 \\ KoBEST & 56.49 & 1.17 & 0.71* & 8.23* \\ \hline \hline \end{tabular} \end{table} Table 10: Results from regression and ANOVA for the HAE-RAE and KoBEST benchmarks. An asterisk (*) denotes outcomes with a p-value less than 0.01, indicating statistical significance.

## 6 Error Analysis

Error analysis is important for understanding the common errors or likely biases in language model mistakes and for identifying areas of future research. Accordingly, we compare the results of Polyglot-Ko-12.8B (0-shot) and GPT-4 (Korean prompting) for possible errors. We first examine the answer distribution to see if either model has a bias toward selecting certain numbers. This is shown in Figure 1. We find that both models are less likely to guess "5" compared to other numbers. This pattern can be traced back to the dataset composition: while most questions offer five multiple-choice options, the reading comprehension subset provides only four.
Beyond this, neither model displays any notable trends. In the **Rare Words**, **Loan Words**, and **Standard Nomenclature** subsets of HAE-RAE Bench, incorrect options were generated using a sorting method based on Levenshtein distance. To investigate the impact of Levenshtein distance on model performance, we compare the average distances of the options based on whether the model answered the question correctly. As shown in Table 12, no discernible difference in Levenshtein distance is observed for either model between correct and incorrect answers. We suspect that the chosen Levenshtein distance threshold of 3 may not lead to meaningful variations in question difficulty. To delve deeper, we reviewed all incorrectly answered questions for Polyglot-Ko-12.8B and GPT-4. However, given the questions' simple structures, such as _"What is the [official loan word / correct standard nomenclature] for [word]?"_ or _"Which word is suitable for the definition [def]?"_, we did not identify any syntactic characteristics that might explain the errors.

For the **General Knowledge** subset, we assess the performance of Polyglot-Ko-12.8B and GPT-4 across the subcategories, as shown in Figure 2. GPT-4 consistently outperforms Polyglot-Ko-12.8B. Polyglot-Ko-12.8B fares better in law and tradition but lags in geography, K-pop, and especially K-drama. The model's weaker performance in K-pop and K-drama may stem from its knowledge cutoff, given the need for up-to-date information in these areas. GPT-4 excels across all categories, likely benefiting from a diverse training set. Both models have the lowest scores in the K-drama category, suggesting either model limitations or ambiguous questions.

Figure 1: Density distribution of answer choices by Polyglot-Ko-12.8B, GPT-4, and Gold Labels.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{GPT-3.5-Turbo} & \multicolumn{3}{c}{GPT-4} \\ \cline{2-7} Dataset & Ko & En & \(\Delta\) & Ko & En & \(\Delta\) \\ \hline HAE-RAE Bench & 51.2 & 55.4 & 4.2 & 67.8 & 68.2 & 0.4 \\ KoBEST & 68.0 & 79.3 & 11.4 & 81.1 & 91.0 & 9.9 \\ \hline \hline \end{tabular} \end{table} Table 11: Evaluation results of GPT-3.5-Turbo and GPT-4 on HAE-RAE Bench and KoBEST in the zero-shot setting. We use the snapshot from June 13th, 2023 for both models. Ko and En denote the language of the prompt used.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{GPT-4} & \multicolumn{2}{c}{Polyglot-Ko-12.8B} \\ \cline{2-5} Dataset & Correct & Incorrect & Correct & Incorrect \\ \hline Rare Words & 1.58 (0.81) & 1.58 (0.80) & 1.58 (0.81) & 1.58 (0.80) \\ Loan Words & 1.58 (0.80) & 1.56 (0.80) & 1.58 (0.80) & 1.57 (0.80) \\ Standard Nomenclature & 1.58 (0.80) & 1.55 (0.80) & 1.57 (0.81) & 1.56 (0.80) \\ \hline \hline \end{tabular} \end{table} Table 12: Average Levenshtein distance of options for correct and incorrect questions.

The **Reading Comprehension** subset of HAE-RAE Bench comprises four difficulty levels: Introductory (for absolute beginners), A (beginner), B (intermediate), and C (advanced), based on the Common European Framework of Reference (CEFR). In Figure 3, we examine the performance of each model across these levels. Our findings indicate that GPT-4 consistently outperforms Polyglot-Ko-12.8B across all difficulty tiers, with the performance gap becoming more pronounced at higher levels (B and C). The performance of Polyglot-Ko-12.8B peaks at difficulty level A and declines thereafter, suggesting a limitation in handling more challenging questions.
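Since several subsets hinge on this Levenshtein-based distractor construction, a minimal sketch of the procedure described in Section 3.1 may be helpful. The function names are ours, and the pool-filtering details beyond the stated distance threshold are assumptions for illustration.

```python
def levenshtein(a: str, b: str) -> int:
    # Wagner-Fischer dynamic program over insert/delete/substitute edits.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

def make_distractors(answer: str, vocabulary, k: int = 4, max_dist: int = 3):
    # Nearest k terms within the distance threshold of 3; instances with
    # fewer than k qualifying candidates are dropped, as in Section 3.1.
    pool = sorted((levenshtein(answer, w), w)
                  for w in vocabulary if w != answer)
    pool = [w for d, w in pool if d <= max_dist]
    return pool[:k] if len(pool) >= k else None
```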
## 7 License HAE-RAE Bench is released under a CC BY-NC-ND license. This license prohibits remixing, redistribution, and commercial use of the dataset. This constraint is due to the reading comprehension subset, for which the copyright holder of KLAT has restricted commercial alterations. However, we do not anticipate this as a significant issue since benchmark datasets are seldom used for commercial purposes. Researchers can still freely download and evaluate their models using this dataset. ## 8 Conclusion In this paper, we introduce HAE-RAE Bench, a dataset curated to evaluate the cultural knowledge encoded in language models. Unlike previous Korean language model evaluation suites, HAE-RAE Bench is crafted to present a greater challenge to non-Korean models, disrupting their ability to guess answers based on in-context learning or scale-derived multilingualism. Our work is among the first to propose a non-task-oriented dataset aimed at assessing whether a language model's knowledge is adequate for roles like a domestic conversational agent or search engine. This research suggests a pathway for advancing non-English NLP, emphasizing the need for language models that are as proficient in language-specific knowledge as they are in language understanding and reasoning tasks.
2302.00263
Dictionary-based Manifold Learning
We propose a paradigm for interpretable Manifold Learning for scientific data analysis, whereby we parametrize a manifold with $d$ smooth functions from a scientist-provided dictionary of meaningful, domain-related functions. When such a parametrization exists, we provide an algorithm for finding it based on sparse non-linear regression in the manifold tangent bundle, bypassing more standard manifold learning algorithms. We also discuss conditions for the existence of such parameterizations in function space and for successful recovery from finite samples. We demonstrate our method with experimental results from a real scientific domain.
Hanyu Zhang, Samson Koelle, Marina Meila
2023-02-01T06:13:09Z
http://arxiv.org/abs/2302.00263v2
# Dictionary-based Manifold Learning

###### Abstract

We propose a paradigm for interpretable Manifold Learning for scientific data analysis, whereby we parametrize a manifold with \(d\) smooth functions from a scientist-provided _dictionary_ of meaningful, domain-related functions. When such a parametrization exists, we provide an algorithm for finding it based on sparse _non-linear_ regression in the manifold tangent bundle, bypassing more standard manifold learning algorithms. We also discuss conditions for the existence of such parameterizations in function space and for successful recovery from finite samples. We demonstrate our method with experimental results from a real scientific domain.

## 1 Introduction

Dimension reduction algorithms map high-dimensional data into a low-dimensional space by a learned function \(f\). However, it is often difficult to ascribe an interpretable meaning to the learned representation. For example, in non-linear methods such as Laplacian Eigenmaps [3] and t-SNE [25], \(f\) is learned without construction of an explicit function in terms of the features. In contrast, when scientists describe or model a system using knowledge from their domain, the resulting model is often expressed in terms of domain-relevant features, which are continuous functions of other domain variables (e.g., equations of motion). For example, in molecular dynamics simulation (MDS) studies, data are often high-dimensional, with non-trivial topology and non-i.i.d. noise. Figure 1 (left) shows pairwise scatterplots of six toluene molecule features, and Figure 1 (middle) displays a single scientifically relevant function that (approximately) models the state space of the toluene molecule; it is an angle of rotation. A functional form \(f\) can also be used to compare embeddings from different sources, derive out-of-sample extensions, and interrogate mechanistic properties of the analyzed system. In Figure 1 (right), we compare the scientifically identified functional mapping \(f\) with existing manifold learning algorithms. This paper proposes to construct a manifold model that interpolates between the two above modalities. Specifically, our algorithm will map samples \(\xi_{i}\) from a manifold to new coordinates \(f(\xi_{i})\), as in purely data-driven manifold learning, but the coordinate functions will be selected from a predefined _finite_ set of smooth functions \(\mathcal{F}\), called a _dictionary_, to represent intrinsic coordinates of the data manifold \(\mathcal{M}\). Thus, the obtained embedding is smooth, has a closed-form expression, can map new points from the manifold \(\mathcal{M}\) to \(f(\mathcal{M})\) exactly, and is interpretable with respect to the dictionary. This method, which we call TSLasso, requires the key assumption that the manifold \(\mathcal{M}\) is parametrized by a subset of functions in the dictionary. However, creating dictionaries of meaningful concepts for a scientific domain and finding those elements that describe the data manifold well is an everyday task in scientific research. We put the subset-selection task on a formal mathematical basis, and exhibit in Section 5 a scientific domain where the assumptions we make hold, and where our method replaces dictionary-based visual inspection of the data manifold.

Problem Statement. Suppose data \(\mathcal{D}=\{\xi_{i},i\in[n]\}\) are sampled from a \(d\)-dimensional connected smooth1 submanifold \(\mathcal{M}\) embedded in the Euclidean space \(\mathbb{R}^{D}\), where typically \(D\gg d\).
Assume that the intrinsic dimension \(d\) is known. \(\mathcal{M}\) has the Riemannian metric induced from \(\mathbb{R}^{D}\). We are also given a dictionary of functions \(\mathcal{F}=\{f_{j},j\in[p]\}\). All of the functions \(f_{j}\) are defined in a neighborhood of \(\mathcal{M}\) in \(\mathbb{R}^{D}\) and take values in some connected subset of \(\mathbb{R}\). We require that they are smooth on \(\mathcal{M}\) (as a subset of \(\mathbb{R}^{D}\)) and have analytically computable gradients in \(\mathbb{R}^{D}\). Our goal is to select \(d\) functions in the dictionary so that the mapping \(f_{S}=(f_{j})_{j\in S\subset\mathcal{F}}\) is a diffeomorphism from an open neighborhood \(U\subset\mathcal{M}\) to \(f_{S}(U)\subset\mathbb{R}^{|S|}\) at almost every point of \(\mathcal{M}\); \(f_{S}\) is then a _global_ mapping with a fixed number of functions. The learned mapping \(f_{S}\) will be a _valid parametrization_ of \(\mathcal{M}\).

Footnote 1: In this paper, by _smooth_ manifold or function we mean of class \(C^{l}\), \(l\geq 1\), to be defined in Section 4.

The "almost everywhere" in the previous definition relaxes the usual definition of a smooth embedding. Consider the circle embedded in \(\mathbb{R}^{2}\) by the map \(g:t\mapsto(\cos t,\sin t)\) for \(t\in\mathbb{R}\), and consider the function defined for \((x,y):|x^{2}+y^{2}-1|\leq 1/2\). Then \[\Theta:(x,y)\mapsto\begin{cases}\arcsin\frac{y}{\sqrt{x^{2}+y^{2}}},&x\geq 0\\ \pi-\arcsin\frac{y}{\sqrt{x^{2}+y^{2}}},&x<0\end{cases} \tag{1}\] is a valid parametrization for \(\mathcal{M}\). We have made two adjustments relative to standard differential geometry [16]. First, in differential geometry terminology, \((U\subseteq\mathcal{M},f_{S})\) is locally a coordinate _chart_ for \(\mathcal{M}\) and \(f_{S}^{-1}\) is called a _parameterization_ of \(U\). In this paper, we often refer to \(f_{S}\) as the 'parameterization', as \(f_{S}\) and \(f_{S}^{-1}\) are diffeomorphisms and both are representative. We argue that \(f_{S}\) is of more immediate interest, since this map consists of interpretable and analytically computable dictionary functions, whereas \(f_{S}^{-1}\), while guaranteed to exist on \(f_{S}(U)\), is defined only implicitly in many scenarios.

Figure 1: Example of toluene molecular dynamics data. **Left:** pairwise scatterplots of the first six coordinates in \(\mathbb{R}^{50}\) and histograms of each coordinate on the diagonal. The preprocessing procedure is described in section 5. **Middle:** Atoms in a toluene molecule. Scientists previously discovered that the torsion associated with the peripheral methyl group bond governs the state space of the toluene molecule as a one-dimensional manifold. **Right:** Embedding of the toluene data into \(\mathbb{R}^{2}\) by a diffusion map, colored by the labeled bond torsion. The variation of the color along the circle demonstrates that this function parametrizes the data manifold.

Second, since a manifold may require multiple charts, we relax the requirement that \(f_{S}\) is locally a diffeomorphism everywhere to _almost everywhere_. In the circle example, since the manifold \(\mathcal{M}\) is compact, it is not possible to find a single smooth function that is locally a diffeomorphism everywhere. This relaxation allows us to find \(d\) functions parametrizing a \(d\)-dimensional compact manifold under our definition. Our main technique is to operate on gradient fields on \(\mathcal{M}\), which extends Meila et al. [18]. In Section 2, we introduce some background on gradient fields on manifolds.
In Section 3, we present our algorithm TSLasso in detail. In Section 4, we provide sufficient conditions for selection consistency. Section 5 shows experimental results on simulations and molecular dynamics datasets. Section 6 discusses related work and interesting features of our approach.

## 2 Preliminaries: Gradients on Manifolds

The reader is referred to Lee [16] for more background on differential geometry. In this section, we review gradient fields on manifolds, which play a central role in our algorithm. Consider a \(d\)-dimensional manifold \(\mathcal{M}\). At a point \(\xi\), its tangent space \(\mathcal{T}_{\xi}\mathcal{M}\) can be viewed as the equivalence class of directions of infinitesimal curves passing through \(\xi\). For a smooth function \(f:\mathcal{M}\mapsto\mathbb{R}\), its differential \(Df:\mathcal{T}_{\xi}\mathcal{M}\mapsto\mathbb{R}\) is a linear map that generalizes the directional derivative of calculus in Euclidean space, characterizing how the value of \(f\) varies along different directions in \(\mathcal{T}_{\xi}\mathcal{M}\). The chain rule also holds for compositions of functions on manifolds. When \(\mathcal{M}\) is Riemannian with metric \(\mathbf{g}\), the gradient is a collection of tangent vectors \(X(\xi)\), one at each point \(\xi\), such that for all \(\xi\in\mathcal{M}\) and all \(v\in\mathcal{T}_{\xi}\mathcal{M}\) \[\langle X(\xi),v\rangle_{\mathbf{g}}=Df(v)|_{\xi}. \tag{2}\] For example, under the usual Euclidean metric, a function \(f:\mathbb{R}^{D}\mapsto\mathbb{R}\) has a gradient vector \(\nabla f(\xi)\) at each point \(\xi\in\mathbb{R}^{D}\), as defined in ordinary multivariate calculus. For our problem, \(\mathcal{M}\) is a \(d\)-dimensional manifold embedded in \(\mathbb{R}^{D}\) with the inherited metric. \(\mathcal{T}_{\xi}\mathcal{M}\) can be identified with a \(d\)-dimensional linear subspace of \(\mathcal{T}_{\xi}\mathbb{R}^{D}\), whose basis can be represented by an orthogonal \(D\times d\) matrix \(\mathbf{T}_{\xi}\). Let \(f\) be a smooth real-valued function defined on an open neighborhood of \(\mathcal{M}\). There are two points of view for \(f\) when it is restricted to \(\mathcal{M}\): (i) as a function on \(\mathbb{R}^{D}\), which has the gradient \(\nabla f\) as usual; (ii) as a function on \(\mathcal{M}\), for which one can show that the gradient field \(\operatorname{grad}f\) given by the coordinate representation \(\operatorname{grad}f:=\mathbf{T}_{\xi}^{\top}\nabla f\) satisfies (2) [16]. More generally, consider a map \(F=(f_{1},\cdots,f_{s}):\mathcal{M}\mapsto\mathbb{R}^{s}\). The differential \(DF=(Df_{1},\cdots,Df_{s})\) is then a linear mapping from \(\mathcal{T}_{\xi}\mathcal{M}\) to \(\mathcal{T}_{\xi}\mathbb{R}^{s}\). Under the basis \(\mathbf{T}_{\xi}\), a coordinate representation of \(DF\) is \(\mathbf{T}_{\xi}^{\top}\nabla F\), where \(\nabla F\) is a \(D\times s\) matrix constructed by stacking the gradients \(\nabla f_{1},\cdots,\nabla f_{s}\) column-wise.

## 3 The TSLasso algorithm

The idea of the TSLasso algorithm is to express the orthonormal bases \(\mathbf{T}_{\xi}\in\mathbb{R}^{D\times d}\) of the manifold tangent spaces \(\mathcal{T}_{\xi}\mathcal{M}\) as sparse linear combinations of the dictionary-function gradient vector fields. This simplifies the non-linear problem of selecting a best functional approximation to \(\mathcal{M}\) to the linear problem of selecting best local approximations in the tangent bundle.
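Before developing the estimation machinery, a small numerical sketch of the projection step from Section 2 may help; the variable names are ours, and the toy values are taken from the circle example above.

```python
import numpy as np

def tangent_design_matrix(T_xi: np.ndarray, grad_F: np.ndarray) -> np.ndarray:
    # T_xi:   (D, d) orthonormal basis of the tangent space at xi.
    # grad_F: (D, p) ambient gradients of the p dictionary functions at xi,
    #         stacked column-wise.
    # Returns the coordinate representation T_xi^T grad_F of DF, whose
    # columns are the projected dictionary gradients grad f_j.
    return T_xi.T @ grad_F

# Toy check on the unit circle in R^2 at xi = (1, 0): the tangent space is
# spanned by (0, 1), and the ambient gradient of the angle function Theta
# from (1) at that point is also (0, 1), so its tangent coordinate is 1.
T = np.array([[0.0], [1.0]])           # D = 2, d = 1
grad_theta = np.array([[0.0], [1.0]])  # p = 1 dictionary function
print(tangent_design_matrix(T, grad_theta))   # [[1.]]
```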
If a subset \(S\) with \(|S|=d\) gives a valid parametrization, then in a neighborhood \(U_{\xi}\subset\mathcal{M}\) of almost every point \(\xi\), \(f_{S}\) is a diffeomorphism, i.e. there is some mapping \(g:f_{S}(U_{\xi})\mapsto U_{\xi}\) such that \(f_{S}\circ g\) is the identity map on \(f_{S}(U_{\xi})\) and \(g\circ f_{S}\) is the identity map on \(U_{\xi}\). Thus, in coordinates we can denote a matrix representation of \(Df_{S}(\xi)\) by \(\mathbf{X}_{\xi,S}=\mathbf{T}_{\xi}^{\top}\nabla f_{S}(\xi)\in\mathbb{R}^{d\times d}\), and further there is some matrix \(\mathbf{B}_{\xi,S}\in\mathbb{R}^{d\times d}\) such that for almost every \(\xi\in\mathcal{M}\)

\[\mathbf{I}_{d}=\mathbf{X}_{\xi,S}\mathbf{B}_{\xi,S} \tag{3}\]

according to the chain rule for function composition on manifolds. For notational simplicity, we will write \(\mathbf{X}_{iS},\mathbf{B}_{iS},\mathcal{T}_{i}\mathcal{M}\) for the corresponding quantities at the point \(\xi_{i}\) when discussing finite samples. We can select \(S=[p]\) and simplify the notation \(\mathbf{X}_{iS},\mathbf{B}_{iS}\) to \(\mathbf{X}_{i}\in\mathbb{R}^{d\times p},\mathbf{B}_{i}\in\mathbb{R}^{p\times d}\); crucially, if we do not have colinear gradients, then we can restrict all but \(d\) rows of \(\mathbf{B}_{i}\) to be zero. We can also select \(S=\{j\}\), and define \(\mathbf{B}_{.j}\in\mathbb{R}^{nd}\) as the vector formed by concatenating \(\mathbf{B}_{i\{j\}}\). Stacking the \(\mathbf{B}_{.j}\) together forms \(\mathbf{B}\in\mathbb{R}^{p\times nd}\).

### Loss Function

We now seek a subset \(S\subset[p]\) such that (1) only the corresponding \(nd\)-vectors \(\mathbf{B}_{.j}:j\in S\) have non-zero entries and (2) each submatrix \(\mathbf{X}_{iS}\) forms a rank-\(d\) matrix. The previous observation inspires minimizing the Frobenius norm of \(\mathbf{I}_{d}-\mathbf{X}_{i}\mathbf{B}_{i}\) with joint sparsity constraints over the rows of \(\mathbf{B}_{i}\). This sparsity is also induced jointly over all data points.

\[J_{\lambda_{n}}(\mathbf{B})=\frac{1}{2}\sum_{i=1}^{n}\lVert\mathbf{I}_{d}-\mathbf{X}_{i}\mathbf{B}_{i}\rVert_{F}^{2}+\frac{\lambda_{n}}{\sqrt{dn}}\sum_{j=1}^{p}\lVert\mathbf{B}_{.j}\rVert_{2}. \tag{4}\]

Note that this optimization problem is a variant of Group Lasso [30] that forces groups of coefficients of size \(dn\) to be zero simultaneously along the regularization path. The details of the tangent space estimation are deferred to Section 3.2. It can be shown that this loss function is invariant to local tangent space rotations.

### Tangent Space Estimation

So far we have solved our problem assuming we have access to the tangent space at each point \(\xi\in\mathcal{M}\). However, this is rarely true. In practice, the first step in realizing the previous idea is to estimate the tangent spaces. The _Weighted Local Principal Component Analysis_ (WL-PCA) algorithms proposed in Singer and Wu [23], Chen et al. [6], and Aamari and Levrard [1] are examples of methods for estimating such bases. These methods are shown to give accurate tangent space estimates when the hyperparameters are selected appropriately. Intuitively, estimating tangent spaces amounts to estimating local covariance matrices centered at each point \(\xi_{i}\).
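To make this intuition concrete, here is a minimal numpy sketch of a weighted local PCA tangent estimate; it anticipates Algorithm 1 below, and the convention that the first row of the local dataset is \(\xi_{i}\) itself is our illustrative choice, not part of the formal algorithm.

```python
import numpy as np

def tangent_space_basis(Xi_i, d, eps):
    """Estimate a D x d orthonormal tangent basis at xi_i from its neighbors.
    Xi_i: (k_i, D) array of neighbor positions; we assume row 0 is xi_i itself."""
    w = np.exp(-(np.linalg.norm(Xi_i - Xi_i[0], axis=1) / eps) ** 2)  # Gaussian weights
    mean = (w @ Xi_i) / w.sum()                   # weighted local mean
    Z = np.sqrt(w)[:, None] * (Xi_i - mean)       # weighted difference matrix
    # top-d right singular vectors of Z = top-d eigenvectors of Z^T Z
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt[:d].T
```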
We therefore select a neighborhood radius parameter \(r_{N}\) and identify \(\mathcal{N}_{i}=\{i^{\prime}\in[n]:||\xi_{i}-\xi_{i^{\prime}}||_{2}\leq r_{N}\}\), the set of all neighbor points of \(\xi_{i}\) within Euclidean (in \(\mathbb{R}^{D}\)) distance \(r_{N}\), which we then pass to this algorithm. When computing local covariance matrices, one may weight different points. The weight of each \(\xi_{j}\) in \(\mathcal{N}_{i}\) can be chosen proportional to some kernel function \(K\), so that for all \(j\in\mathcal{N}_{i}\) the weight is proportional to \(K_{ij}=K(||\xi_{i}-\xi_{j}||/\epsilon_{N})\), where \(\epsilon_{N}\) is a tuning parameter proportional to \(r_{N}\), in the sense that kernel values of pairs of non-neighboring points should be close to zero. Any \(C^{2}\) positive monotonically decreasing function \(K(u)\) with compact support is valid; examples include the constant kernel \(K(u)=1_{[0,1]}(u)\), the Epanechnikov kernel \(K(u)=(1-u^{2})1_{[0,1]}(u)\) and the Gaussian kernel \(K(u)=\exp(-u^{2})1_{[0,1]}(u)\). We specifically choose the Gaussian kernel in our experiments since it provides better tangent space estimates empirically, as it puts more weight on points that are close to where the tangent space is of interest. Given these weights \(K_{ij}\) for the \(\xi_{j}\), the local weighted mean and weighted covariance at \(\xi_{i}\) can be estimated, and a singular value decomposition is used to find the basis. Let \(k_{i}=\lvert\mathcal{N}_{i}\rvert\) be the number of neighbors of the point \(\xi_{i}\) and \(\mathbf{\Xi}_{i}=\{\xi_{i^{\prime}},i^{\prime}\in\mathcal{N}_{i}\}\in\mathbb{R}^{\lvert\mathcal{N}_{i}\rvert\times D}\) be the corresponding local position matrix. Also denote a column vector of ones of length \(k\) by \(\mathbf{1}_{k}\), and define the Singular Value Decomposition algorithm \(\text{SVD}(\mathbf{X},d)\) of a matrix \(\mathbf{X}\) as outputting \(\mathbf{V},\Lambda\), where \(\Lambda\) and \(\mathbf{V}\) are the largest \(d\) eigenvalues and their corresponding eigenvectors. The tangent space estimation procedure is displayed in Algorithm 1 (TangentSpaceBasis).

### The TSLasso Algorithm

We now present the full TSLasso approach. Following the logic in Section 3, we transform our non-linear manifold parameterization support recovery problem into a collection of sparse linear problems in which we express coordinates of individual tangent spaces as linear combinations of gradients of functions from our dictionary. Tangent spaces at each point are estimated in step 4, which lets us represent the gradients of the dictionary functions in \(\mathcal{T}_{\xi}\mathcal{M}\) by projecting each gradient \(\nabla f_{j}(\xi_{i})\in\mathbb{R}^{D}\) onto the estimated tangent space \(\mathbf{T}_{i}\). Finally we feed these projected gradients into the objective function (4) to solve for the support.

```
1: Input: Local dataset \(\mathbf{\Xi}_{i}\), intrinsic dimension \(d\), kernel parameter \(\epsilon_{N}\)
2: Compute local kernel weights \(K_{i,\mathcal{N}_{i}}=(K_{ij})_{j\in\mathcal{N}_{i}}\in\mathbb{R}^{k_{i}}\).
3: Compute weighted mean \(\bar{\xi}_{i}=(K_{i,\mathcal{N}_{i}}^{\top}\mathbf{1}_{k_{i}})^{-1}K_{i,\mathcal{N}_{i}}^{\top}\mathbf{\Xi}_{i}\)
4: Compute weighted local difference matrix \(\mathbf{Z}_{i}=\operatorname{diag}(K_{i,\mathcal{N}_{i}}^{\frac{1}{2}})(\mathbf{\Xi}_{i}-\mathbf{1}_{k_{i}}\bar{\xi}_{i})\)
5: Compute \(\mathbf{T}_{i},\Lambda\leftarrow\text{SVD}(\mathbf{Z}_{i}^{\top}\mathbf{Z}_{i},d)\)
6: Output: \(\mathbf{T}_{i}\)
```
**Algorithm 1** TangentSpaceBasis

```
1: Input: Dataset \(\mathcal{D}\), dictionary \(\mathcal{F}\), intrinsic dimension \(d\), regularization parameter \(\lambda_{n}\), radius parameter \(r_{N}\), kernel parameter \(\epsilon_{N}\).
2: for \(i=1,2,\ldots n\) (or a subset \(I\subset[n]\)) do
3:   Compute \(\mathcal{N}_{i}\) and \(\mathbf{\Xi}_{i}\) using \(\mathcal{D},r_{N}\)
4:   Compute the orthonormal tangent space basis \(\mathbf{T}_{i}\leftarrow\)TangentSpaceBasis\((\mathbf{\Xi}_{i},d,\epsilon_{N})\)
5:   Compute \(\nabla f_{j}(\xi_{i})\) for \(j\in[p]\).
6:   Project onto the tangent space: \(\mathbf{X}_{i}=\mathbf{T}_{i}^{\top}[\nabla f_{j}(\xi_{i})]_{j\in[p]}\)
7: end for
8: Solve for \(\mathbf{B}\) by minimizing \(J_{\lambda_{n}}(\mathbf{B})\) in (4).
9: Output: \(S=\{j\in[p]:||\mathbf{B}_{.j}||_{2}>0\}\)
```
**Algorithm 2** TSLasso

### Other considerations

**Normalization.** The rescaling of the functions \(f_{j}\) affects the solution of the Group Lasso objective, since functions with larger gradient norm will tend to have smaller \(\lVert\mathbf{B}_{.j}\rVert\). This can affect the support \(S\) recovered. Therefore, we compute \(\gamma_{j}^{2}=\frac{1}{n}\sum_{i=1}^{n}\lVert\nabla f_{j}(\xi_{i})\rVert^{2}\) and set \(f_{j}\gets f_{j}/\gamma_{j}\). This approximates normalization by \(||\nabla f_{j}||_{L_{2}(\mathcal{M})}\). Since \(|\nabla f_{j}(\xi_{i})|^{2}=|\mathrm{grad}\,f_{j}(\xi_{i})|^{2}+|\nabla f_{j}^{\perp}(\xi_{i})|^{2}\), where \(\nabla f_{j}^{\perp}\) denotes the component of \(\nabla f_{j}\) orthogonal to \(\mathcal{M}\), normalization prior to projection penalizes functions with large \(\nabla f_{j}^{\perp}\) and favors functions whose gradients are more parallel to the tangent space of \(\mathcal{M}\). Note that, in the high-dimensional setting, we expect random functions to have gradients nearly perpendicular to \(\mathcal{T}\mathcal{M}\), and so these will be penalized by our normalization strategy.

**Computation.** Note that we do not need to run TSLasso on the whole dataset in order to take advantage of all of our data; we can instead run it on a subset \(I\subset[n]\) with \(|I|=n^{\prime}\). In particular, the search task in identifying the local datasets \(\mathbf{\Xi}_{i}\) is \(O(Dnn^{\prime})\), which is significantly less than the time to construct a full neighbor graph for an embedding. For each \(i\), computing the local mean is \(O(k_{i}D)\), and finding the tangent space is \(O(k_{i}D^{2}+k_{i}^{3})\). The gradient computation runtime is \(O(D)\), but the constant may be large. Projection is \(O(dDp)\). For each Group Lasso iteration, the compute time is \(O(n^{\prime}mpd)\) [18].

**Tuning.** For the real data experiments, we select \(\epsilon_{N}\) using the method of Joncas et al. [14], while in simulation we set it proportional to the noise level. As explained in the next section, we are theoretically motivated by the definition of parameterization to select a support \(S\) whose cardinality equals \(d\), which is assumed to be given, although dimension estimation as in Levina and Bickel [17] could also be appropriate.
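For concreteness, a minimal numpy sketch that evaluates the objective (4), the quantity the tuning of \(\lambda\) below targets, is given here; an actual solver (e.g. block coordinate descent with group soft-thresholding, not shown) would be used to minimize it.

```python
import numpy as np

def objective(B, X, lam):
    """Evaluate J_lambda(B) in Eq. (4).
    X: (n, d, p) projected gradients X_i; B: (n, p, d) coefficient blocks B_i."""
    n, d, p = X.shape
    fit = 0.5 * sum(np.linalg.norm(np.eye(d) - X[i] @ B[i], "fro") ** 2
                    for i in range(n))
    # one group per dictionary function j, of size n*d, jointly over all points
    penalty = sum(np.linalg.norm(B[:, j, :]) for j in range(p))
    return fit + lam / np.sqrt(d * n) * penalty
```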
For \(\lambda\), we apply binary search along the regularization path from \(\lambda=0\) to \(\lambda_{\max}=\max_{j\in[p]}\big(\sum_{i=1}^{n}\lVert\operatorname{grad}f_{j}(\xi_{i})\rVert_{2}^{2}\big)^{1/2}\) to find \(\lambda\) such that the cardinality of the selected support is \(d\). In the next section, we introduce support recovery conditions for the success of this approach, and introduce a variation of TSLasso for when they are violated.

## 4 Support Recovery Guarantee

In this section, we discuss the behavior of TSLasso theoretically. First, we discuss the existence and uniqueness of a group of functions \(f_{S}\subset\mathcal{F}\) that can serve as a valid parametrization. When such a minimal parametrization exists and is unique, we provide sufficient conditions so that TSLasso correctly selects this group with high probability with respect to sampling on the manifold, and this probability converges to one as the sample size tends to infinity.

**Assumption 4.1**.: Throughout this section, we assume the following to be true.

1. \(\mathcal{M}\) is a \(d\)-dimensional \(C^{\ell},\ell\geq 1\) compact manifold with reach \(\tau>0\) embedded in \(\mathbb{R}^{D}\) with the inherited Euclidean metric.
2. Data \(\{\xi_{i}\}_{i=1}^{n}\) are sampled from some probability measure \(P\) on the manifold that has a Radon-Nikodym derivative \(\pi(\xi)\) with respect to the Hausdorff measure. There exist two positive constants \(\pi_{\min},\pi_{\max}\) such that \(0<\pi_{\min}\leq\pi(\xi)\leq\pi_{\max}\) for all \(\xi\in\mathcal{M}\).
3. The dictionary \(\mathcal{F}=\{f_{j}(\xi):j\in[p]\}\) contains \(p\) \(C^{1}\) functions defined on a neighborhood of \(\mathcal{M}\) in \(\mathbb{R}^{D}\). Further assume that \(\delta:=\inf_{\xi\in\mathcal{M}}\min_{j\in[p]}||\nabla f_{j}(\xi)||>0\) and denote \(\Gamma:=\sup_{\xi\in\mathcal{M}}\max_{j\in[p]}||\nabla f_{j}(\xi)||\).
4. \(S\subset[p],|S|=d\) is the only subset such that \(\operatorname{rank}Df_{S}=d\) a.e. on \(\mathcal{M}\) w.r.t. the Hausdorff measure.

Assumptions 1 (on the manifold) and 2 (on sampling) are common in the manifold estimation literature (e.g. Aamari and Levrard [1]). The positive reach in 1 rules out extreme curvature and bizarre behavior of the manifold, and assumption 2 on the density enforces uniformity of the sampling. Assumption 3 restricts the smoothness of all dictionary functions and ensures that no dictionary function has critical points on \(\mathcal{M}\) as a function on \(\mathbb{R}^{D}\). One should also notice that \(\Gamma<\infty\) by the compactness assumption on \(\mathcal{M}\).

Now we are ready to prove support recovery consistency under suitable conditions. Let \(\hat{\mathbf{B}}\) be the solution of problem (4) and \(S(\hat{\mathbf{B}})\) be the set of nonzero rows of \(\hat{\mathbf{B}}\). We will show that the probability of \(S(\hat{\mathbf{B}})=S\) converges to 1 as \(n\) increases. We start by defining

\[b_{S}=\inf_{\xi:\operatorname{rank}Df_{S}(\xi)=d}\min_{j\in S}||\mathbf{B}_{\xi,\{j\}}||_{2}. \tag{5}\]

A larger \(b_{S}\) indicates a stronger signal. Further consider the matrix \(\tilde{\mathbf{X}}_{\xi}\) whose \(j\)-th column is \(\mathbf{X}_{\xi,\cdot j}/||\nabla f_{j}(\xi)||\). Correspondingly we can define \(\tilde{\mathbf{X}}_{\xi,S}\) as the submatrix of \(\tilde{\mathbf{X}}_{\xi}\) with columns in \(S\).
Let \(\mathbf{G}_{\xi,S}=\operatorname{diag}\{||\nabla f_{j}(\xi)||\}_{j\in S}\) and define

\[\mu_{S} =\sup_{\xi\in\mathcal{M},j\in S,j^{\prime}\notin S}|\tilde{\mathbf{X}}_{\xi,\cdot j}^{\top}\tilde{\mathbf{X}}_{\xi,\cdot j^{\prime}}|\,, \tag{6}\]
\[\nu_{S} =\sup_{\xi\in\mathcal{M}}||(\tilde{\mathbf{X}}_{\xi,S}^{\top}\tilde{\mathbf{X}}_{\xi,S})^{-1}-\mathbf{G}_{\xi,S}^{2}||. \tag{7}\]

Here \(\nu_{S}\) is finite if \(\mu_{S}<1/(d-1)\), guaranteed by the Gershgorin circle theorem. The parameter \(\mu_{S}\) can be thought of as a renormalized incoherence between the functions in \(S\) and those not in \(S\); \(\nu_{S}\) is an internal colinearity parameter, which is small when the columns of \(\mathbf{X}_{S}(\xi)\) are closer to orthogonal and the gradients of the functions in \(S\) are more parallel to the tangent space. We also define

\[\phi_{S}=\sup_{\xi\in\mathcal{M}}\max_{j\in S}||\nabla f_{j}(\xi)||_{2}, \tag{8}\]

which upper bounds the Euclidean gradients of the functions in \(S\).

**Proposition 4.2**.: _Suppose Assumptions 4.1 hold. In Algorithm 2, suppose the tangent spaces are estimated by WL-PCA as in Section 3.2 using the Gaussian kernel and the bandwidth parameter choice \(\epsilon_{N}=r_{N}=C((\log n/(n-1))^{1/d})\) with a large enough constant \(C\), and normalization of the dictionary is performed as in Section 3.4. If \((1+\nu_{S}/\delta^{2})^{2}\mu_{S}\phi_{S}\Gamma d<1\) and \(\lambda_{n}(1+\nu_{S}/\delta^{2})^{2}<b_{S}\sqrt{n}/2\), then there is a constant \(N\) depending only on \(\mathcal{M},\pi_{\min},\pi_{\max}\) such that when \(n>N\), it holds that_

\[Pr(S(\widehat{\mathbf{B}})=S)\geq 1-4\left(\frac{1}{n}\right)^{\frac{2}{d}}. \tag{9}\]

The proof is contained in the supplementary material. The main idea is first to find a sufficient condition under which, given the correct gradient of each function and a correct estimate of the tangent space, TSLasso finds the correct support. We then consider this condition in the case where the gradients are estimated from data and obtain the guarantee from the fact that the tangent spaces can be estimated consistently as the sample size grows. Several differences distinguish this recovery result from classical recovery guarantees for Group Lasso type problems, e.g. in Wainwright [26], Obozinski et al. [19], Elyaderani et al. [10]. First, we cannot directly adopt the usual assumption in the Lasso literature that each column of \(\mathbf{X}\) has unit norm, given the normalization in Section 3.2. Second, the asymptotic regime we consider is only \(n\rightarrow\infty\): although we use a Group Lasso type optimization problem, the dimension \(p\) is fixed since we only consider a fixed dictionary, and no further conditions relating \(p\) and \(n\) are required in our result, unlike in much of the literature. Third, the noise structure is not the same as in a generic Group Lasso problem, since the source of noise is the estimation of the tangent spaces. Since we sample _on the manifold_, there is no noise level parameter of the kind that appears in the standard Lasso literature. In a simulation experiment, we also explore the behavior of our method in noisy settings.

## 5 Experiments

We illustrate the behavior of TSLasso on both synthetic and real data. Our synthetic datasets include a swiss roll in \(\mathbb{R}^{49}\) and a rigid ethanol dataset in \(\mathbb{R}^{50}\), and our real datasets come from molecular dynamics simulations (MDS) of three different molecules (Ethanol, Malonaldehyde and Toluene). Due to space limitations we only present results on the real datasets here.
Results on the synthetic datasets are included in the supplementary materials. For all of the experiments, the data consist of \(n\) data points in \(D\) dimensions. TSLasso is applied to a uniformly random subset of size \(n^{\prime}=|\mathcal{I}|\) using \(p\) dictionary functions, and this process is repeated \(\omega\) times. Note that the entire dataset is used for tangent space estimation. In our experiments, the intrinsic dimension \(d\) is assumed known, but it could be estimated by a method such as Levina and Bickel [17]. The local tangent space kernel bandwidth \(\epsilon_{N}\) is estimated using the algorithm of Joncas et al. [14] for the molecular dynamics data. Parameters are summarized in Table 1. Experiments were performed in Python on a 16-core Linux Debian cluster with 768 gigabytes of RAM. Code is available at github.com/codanonymous/tslasso. Data is available at [https://figshare.com/s/fbd95c10b09f1140389d](https://figshare.com/s/fbd95c10b09f1140389d).

These simulations dynamically generate atomic configurations which, due to interatomic interactions, exhibit non-linear, multiscale, non-i.i.d. noise, as well as non-trivial topology and geometry. That is, they lie near a low-dimensional manifold [9]. Such simulations are a reasonable application for TSLasso because no sparse parameterization of the data manifold is known a priori. Such parameterizations are useful: they provide scientific insight about the data generating mechanism, and can be used to bias future simulations. However, these parameterizations are typically detected by a trained human expert manually inspecting embedded data manifolds for covariates of interest. We therefore instead apply TSLasso to identify functional covariates that parameterize this manifold.

**Experiment Setups.** We obtain a Euclidean group-invariant featurization of the atomic coordinates as a vector of planar angles \(a_{i}\in\mathbb{R}^{3\binom{N_{a}}{3}}\): the planar angles formed by triplets of atoms in the molecule [7]. We then perform an SVD on this featurization, and project the data onto the top \(D=50\) singular vectors to remove linear redundancies. Note that this represents a particular metric on the molecular _shape space_. The dictionaries we consider are constructed from the _bond diagram_, a priori information about molecular structure garnered from historical work. Building a dictionary based on this structure is akin to many other methods in the field [15; 27]. Specifically, this dictionary consists of all equivalence classes of 4-tuples of atoms implicitly defined along the molecule skeletons.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \hline Dataset & \(n\) & \(N_{a}\) & \(D\) & \(d\) & \(\epsilon_{N}\) & \(n^{\prime}\) & \(p\) & \(\omega\) \\ \hline Eth & 50000 & 9 & 50 & 2 & 3.5 & 100 & 12 & 25 \\ Mal & 50000 & 9 & 50 & 2 & 3.5 & 100 & 12 & 25 \\ Tol & 50000 & 15 & 50 & 1 & 1.9 & 100 & 30 & 25 \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters of the different experiments: Eth (Ethanol), Mal (Malonaldehyde) and Tol (Toluene)

Since the original angular featurization is an overparameterization of the shape space, one cannot use automatically obtained gradients in TSLasso directly. We therefore project the gradients, prior to normalization, onto the tangent bundle of the shape space as it is embedded in \(\mathbb{R}^{D}\). For TSLasso, the regularization parameter \(\lambda_{n}\) ranges from 0 to the value for which \(||\mathbf{B}_{\cdot j}||_{2}=0\) for all \(j\).
The last \(d\) surviving dictionary functions are chosen as the parameterization of the manifold.

**Results on MDS Data.** The toluene case is a manifold with \(d=1\). We observe that in all replicates, TSLasso successfully selects one of the six torsions associated with the peripheral methyl group bond, which shows the ability of our algorithm to automatically select appropriate parametrizing functions. We plot the incoherences for ethanol and malonaldehyde as the heatmaps in Figures 2b and 2f, which present two groups of highly linearly dependent torsions, corresponding to the two bonds between heavy atoms in the molecules. Therefore, we expect to select a pair of incoherent torsions out of these dictionaries. Figures 2d and 2h show support recovery frequencies for sets of size \(d=s=2\) using TSLasso on the ethanol and malonaldehyde data, respectively. As expected, TSLasso selects one function from each of the two groups of highly colinear functions in most replicates. These results show that our approach is able to identify embedding coordinates that are comparable or preferable to the a priori known functional support. Results such as these are usually generated subsequent to running a non-parametric manifold learning algorithm, either through visual or saliency-based analyses, but we are able to achieve comparable results without the use of such an algorithm. See the supplementary materials for a comparison of our algorithm with other manifold learning algorithms. These results also suggest that the local denoising property of the tangent space estimation, coupled with the global regularity imposed by the assumption that the manifold is parameterized by the same functions throughout, is sufficient to replicate the denoising effect of a manifold learning algorithm. Moreover, with the help of domain functions, our embeddings come with good interpretability. We also point out that in our experiments, the subsample size \(n^{\prime}=100\) is only around 0.2% of the whole dataset, and in almost all replicates this subsample is sufficient to obtain a valid parametrization. Tangent space estimation is only needed for these points. Therefore, by bypassing the usual manifold embedding procedure (on the whole dataset), we are able to obtain interpretable embeddings with fewer samples and in a shorter time.

## 6 Discussion and Related Work

Our method has several good properties. As long as the dictionary is constructed from functions that have meaning in the domain of the problem, our learned embedding is _interpretable_ by definition. Furthermore, as discussed in Section 2, the mapping \(f_{S}\) is smooth, (implicitly) invertible, and can be naturally extended to values \(\xi\in\mathcal{M}\) not in the data. Finally, our method is flexible with respect to a range of non-linearities. These features contrast with standard approaches in non-linear dimension reduction. Parametrizing high-dimensional data by a small subset of smooth functions has been studied outside the context of manifold learning as _autoencoders_ [11]. Early work on parametric manifold learning includes Saul and Roweis [22] and Teh and Roweis [24], who proposed a mixture of local linear models whose coordinates are aligned. In a _non-parametric_ setting, LTSA [31] also gives a global parametrization by aligning locally estimated tangent spaces.
When the principal eigenvectors of the Laplace-Beltrami operator on the manifold are used for embedding, as in Diffusion Maps [8], it can be shown [20] that in the limit of large \(n\), with properly selected eigenfunctions and geometric conditions on the manifold, the eigenfunctions provide a smooth embedding of the manifold into Euclidean space. However, both the parametric and non-parametric methods above produce learned embeddings \(f\) that are abstract in the sense that they do not have a concise functional form. In this sense, we draw a parallel between our approach and _factor models_ [28]. Group Lasso type regression for gradient-based variable selection was previously explored in Haufe et al. [12] and Ye and Xie [29], but both have a simpler group structure and are not utilized in the setting of dimension reduction. More recently, so-called _symbolic regression_ methods such as Brunton et al. [4], Rudy et al. [21], and Champion et al. [5] have been used for linear, non-linear, and machine-learned systems, respectively; these methods may be regarded as univariate relatives of our approach, since they are concerned with dynamics through time, while we consider the data manifold independently of time.

We also draw several distinctions between the TSLasso method and the ManifoldLasso method in Meila et al. [18]. First, ManifoldLasso uses the same essential idea of sparse linear regression in gradient space, but in order to explain individual embedding coordinate functions. In contrast, we have no consistent matching between the unit vectors in \(\mathbf{I}_{d}\), and so can only provide an overall regularization path, rather than one corresponding to individual tangent basis vectors. The tangent bases are not themselves gradients of a known function, and, indeed, it may not be the case that such a function even exists. Second, the TSLasso method dispenses with _the entire embedding algorithm_, the Riemannian metric estimation, and the pulling back of embedding gradients in ManifoldLasso, while providing almost everything a user can get from ManifoldLasso. Apart from this simplification, TSLasso can be run on \(n^{\prime}\ll n\) data points, about \(1/500\) of the data in our experiments (Table 1), while the algorithm in ManifoldLasso computes an embedding from all data points. Hence, all operations before the actual GroupLasso are hundreds of times faster than in ManifoldLasso. Theoretically, Meila et al. [18] only provides (i) analysis in function spaces, and (ii) recovery guarantees for the final step, GroupLasso, based on generic assumptions about the noise. Our paper provides end-to-end recovery guarantees from a sample in Section 4.

The reliance on domain prior knowledge in the form of the dictionary \(\mathcal{F}\) is essential for TSLasso, and can restrict its usability in practice, especially given the restrictions on gradient field colinearity. However, as the experiments have illustrated, there are domains where construction of a dictionary is reasonable, and explaining the behavior of organic molecules in terms of torsions and planar angles is common in chemistry and drug design [2, 13]. More generally, it would be desirable to utilize a completely agnostic dictionary that also contained the features themselves, and the development of an optimization strategy capable of handling the large amount of colinearity intrinsic to such a set-up is an active area of research.

Figure 2: Results from molecular dynamics data.
2a, 2e show the bond diagrams for ethanol and malonaldehyde, respectively. 2b and 2f show the heatmaps of cosines (incoherences) of the dictionary functions; the color is darker where there is more colinearity. 2c, 2g are regularization paths for a single replicate of ethanol and malonaldehyde; note that in both figures there is a redundant trajectory in which two functions overlap. 2d, 2h show the selection of pairs of functions for ethanol and malonaldehyde over replicates using TSLasso. The node points on the circles represent the functions in the dictionary, and the numbers along the lines are the frequencies with which each pair was selected over 25 repetitions. 2d shows that in all 25 repetitions TSLasso selects \(g_{1,1}\) and \(g_{2,1}\), which are the bond torsions around the C-C bond and the C-O bond, respectively. 2h shows that in 24 out of 25 replicates, TSLasso is able to select one function from each highly colinear function group.

## Acknowledgement

The authors acknowledge the support from NSF DMS award 1810975 and DMS award 2015272. The authors also thank the Tkatchenko lab, and especially Stefan Chmiela, for providing both data and expertise. This work was completed at the Institute for Pure and Applied Mathematics (IPAM). Marina also gratefully acknowledges a Simons Fellowship from IPAM, which made her stay during Fall 2019 possible.
2308.15153
In-hand manipulation planning using human motion dictionary
Dexterous in-hand manipulation is a peculiar and useful human skill. This ability requires the coordination of many senses and hand motions to adhere to many constraints. These constraints vary and can be influenced by the object characteristics or the specific application. One of the key elements for a robotic platform to implement reliable in-hand manipulation skills is to be able to integrate those constraints into its motion generation. These constraints can be implicitly modelled, learned through experience or from human demonstrations. We propose a method based on motion primitive dictionaries to learn and reproduce in-hand manipulation skills. In particular, we focus on fingertip motions during the manipulation, and we define an optimization process to combine motion primitives to reach specific fingertip configurations. The results of this work show that the proposed approach can generate manipulation motions coherent with the human ones and that manipulation constraints are inherited even without an explicit formalization.
Ali Hammoud, Valerio Belcamino, Alessandro Carfi, Veronique Perdereau, Fulvio Mastrogiovanni
2023-08-29T09:35:11Z
http://arxiv.org/abs/2308.15153v1
# In-hand manipulation planning using human motion dictionary ###### Abstract Dexterous in-hand manipulation is a peculiar and useful human skill. This ability requires the coordination of many senses and hand motions to adhere to many constraints. These constraints vary and can be influenced by the object characteristics or the specific application. One of the key elements for a robotic platform to implement reliable in-hand manipulation skills is to be able to integrate those constraints into its motion generation. These constraints can be implicitly modelled, learned through experience or from human demonstrations. We propose a method based on motion primitive dictionaries to learn and reproduce in-hand manipulation skills. In particular, we focus on fingertip motions during the manipulation, and we define an optimization process to combine motion primitives to reach specific fingertip configurations. The results of this work show that the proposed approach can generate manipulation motions coherent with the human ones and that manipulation constraints are inherited even without an explicit formalization.

## I Introduction

Humans' daily activities often require interacting with objects and dexterously manipulating them in hand. These abilities result from a continuous learning process during human lives, based on the observation of other humans' actions and on personal trial and error. For robots to successfully integrate and operate in the human environment, they should interact with the environment as humans do [1]. Therefore, robots should be able to manipulate unknown objects dexterously, adapting their previous experience to new scenarios. Furthermore, robots should be able to learn new manipulation skills by observing other agents' actions. A robotic platform can achieve this by integrating advanced perception tools and flexible learning methods to represent and plan new manipulation actions. Given a predefined manipulation goal, planning a dexterous manipulation consists in determining the necessary finger trajectories to reach it. Following the literature, we can divide approaches for planning robotic manipulations into two categories: data-driven and classical [2, 3, 4, 5, 6, 7, 8, 9, 10]. In data-driven approaches, dexterous manipulation models are trained either by robot trial and error or by observing human demonstrations [2, 3, 4, 5]. Instead, classical approaches are based on robotics principles, dividing complex tasks into sets of elementary actions [6, 7, 8, 9]. In the context of in-hand manipulation path planning, most data-driven approaches rely on dynamic movement primitives (DMP) [2]. This solution is elegant and generates smooth trajectories while keeping a small number of parameters. A DMP is made up of a set of generalized dynamical system equations that flexibly express movements. With DMPs, it is possible to produce smooth movements of any shape by altering a simple linear dynamical system with a non-linear component [3]. The non-linear component of a DMP system can be determined using data either from human demonstrations or from robot ones. Alternative systems use Markov Decision Processes (MDP). With an MDP, a Dynamic Bayesian Network models the evolution of an agent's state according to its actions and the environment dynamics [4]. The MDP approach consists in finding an in-hand manipulation grasp sequence to pass from an initial hand position to a final one [5]. However, this does not include movement between grasps.
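To make the DMP formulation above concrete, here is a minimal Python sketch of a one-dimensional discrete DMP; the gain values and the phase-gated forcing form are common choices from the DMP literature, and `forcing` stands in for the non-linear component that would be learned from demonstrations.

```python
import numpy as np

def dmp_rollout(y0, g, forcing, tau=1.0, dt=0.01,
                alpha_z=25.0, beta_z=6.25, alpha_x=3.0):
    """Roll out a discrete DMP: a critically damped spring-damper system
    driven by a non-linear forcing term gated by the phase variable x."""
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(int(1.0 / dt)):
        f = forcing(x) * x * (g - y0)                # phase-gated forcing
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)               # canonical (phase) system
        traj.append(y)
    return np.array(traj)

# With zero forcing the DMP converges smoothly from y0 to the goal g.
traj = dmp_rollout(y0=0.0, g=1.0, forcing=lambda x: 0.0)
```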
In general, in-hand manipulation based on training methods necessitates a large amount of training data and significant processing time. On the other hand, in-hand manipulation planning in classical robotics is solved by modelling the environment and the robotic hand. In this context, robot hand actions are decomposed into atomic sequences such as in-hand regrasping and finger reallocation. In-hand regrasping consists in repositioning the object to the desired pose within the robotic hand. Sundaralingam and Hermans [6] proposed an optimization-based approach to this problem.

Fig. 1: Example of an in-hand manipulation supported by the fingertips.
2306.12390
Functional data analysis: Application to the second and third wave of COVID-19 pandemic in Poland
In this article we use the methods of functional data analysis to analyze the number of positive tests, deaths, convalescents, hospitalized and intensive care people during second and third wave of the COVID-19 pandemic in Poland. For this purpose firstly we convert the data to smooth functions. Then we use principal component analysis and multiple function-on-function linear regression model to analyze waves of COVID-19 pandemic in Polish voivodeships.
Patrycja Hęćka
2023-06-21T17:24:03Z
http://arxiv.org/abs/2306.12390v1
# Functional data analysis: Application to the second and third wave of COVID-19 pandemic in Poland ###### Abstract In this article we use the methods of functional data analysis to analyze the number of positive tests, deaths, convalescents, hospitalized and intensive care people during the second and third wave of the COVID-19 pandemic in Poland. For this purpose we first convert the data to smooth functions. Then we use principal component analysis and a multiple function-on-function linear regression model to analyze the waves of the COVID-19 pandemic in Polish voivodeships. keywords: functional data analysis, COVID-19, functional principal component analysis, smooth functions, function-on-function regression Msc: [2020] 62R10, 62P10

## 1 Introduction

The SARS-CoV-2 virus has become a global problem since it was revealed at the end of 2019 in China. On March 4, 2020 the first case was detected in Poland. In this paper, we use the methods of functional data analysis to analyze the number of hospitalized and intensive care people during the second and third wave of the COVID-19 pandemic in selected Polish voivodeships. Determining the boundaries of the individual coronavirus waves in Poland is a matter of convention. The first case of infection was found on March 4, 2020 in a 66-year-old man in a hospital in Zielona Gora. The beginning of the first wave in Poland is therefore assumed to be spring 2020. The second COVID-19 wave in Poland lasted five months - from September to January. The peak of the second wave fell in November, with a record increase of 27,875 infections on November 7, 2020. The third wave began three months after the peak of the second wave. The beginning of the third wave is considered to be February 16, 2021. Its peak was on April 1, with 35,251 new cases of SARS-CoV-2 - the highest daily number of infections since the beginning of the pandemic in Poland. However, due to deficiencies in the data published to the public, we take as the second wave the period between October 23, 2020 and February 15, 2021, and as the third wave the time from February 16 to July 5, 2021. The data we used have been collected and published by Michal Rogalski ([1], contact: [email protected], see data source: [http://bit.ly/covid19-poland](http://bit.ly/covid19-poland), accessed: March 1, 2022). Figure (1) shows the daily, discrete observations of the number of positive COVID-19 test results and the numbers of deaths, convalescents, hospitalized people and people in serious condition. In order to reduce the influence of the population size of a given voivodeship on the analysis, we divide the number of cases, hospitalized people, convalescents, deaths, and people in a serious condition by the number of inhabitants of that voivodeship. The numbers of inhabitants were downloaded from the website of the Statistical Information Centre (see [2], _"Area and population in the territorial profile in 2021. Area, population number and density, as of 1 January 2021"_, date of publication: July 22, 2021, accessed: March 7, 2022). Then the obtained numbers were multiplied by 100,000.

Figure 1: Daily observations of the number of positive test results, deaths, convalescents, hospitalized people and people in critical condition from October 23, 2020 to July 5, 2021 in all voivodeships in Poland.
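A minimal Python sketch of this per-capita scaling is shown below; the file and column names are hypothetical placeholders for the data sources cited above.

```python
import pandas as pd

# Hypothetical file and column names for the data sources cited above.
daily = pd.read_csv("covid19_poland_daily.csv")      # counts per voivodeship and day
population = pd.read_csv("population_2021.csv")      # inhabitants per voivodeship

scaled = daily.merge(population, on="voivodeship")
for col in ["positive_tests", "deaths", "convalescents", "hospitalized", "critical"]:
    # number of people per 100,000 inhabitants of the given voivodeship
    scaled[col] = scaled[col] / scaled["inhabitants"] * 100_000
```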
From the analysis of the appropriately scaled values, it turns out that the largest numbers of hospitalized people and people in serious condition per 100,000 inhabitants between October 23, 2020 and July 5, 2021 were recorded in the Swietokrzyskie Voivodeship. The most positive tests and deaths were noted in the Kujawsko-Pomorskie Voivodeship. The largest number of convalescents was recorded in the Warminsko-Mazurskie Voivodeship. The fewest hospitalized people were recorded in the Wielkopolskie Voivodeship, the fewest people in a serious condition in the Pomorskie Voivodeship, the fewest deaths in the Malopolskie Voivodeship, the fewest positive tests in Podkarpackie, and the fewest convalescents in Podlaskie. For this reason, for the test sample of the model analyzed in the next sections we select the Swietokrzyskie, Wielkopolskie, Podkarpackie and Malopolskie voivodeships. From now on, the variability over time of positive test results, deaths, recoveries, hospitalized people and people in a serious condition will be described by the functional variables \(X_{1}(t),X_{2}(t),X_{3}(t),Y_{1}(t),Y_{2}(t)\), respectively. The variables \(X\) will be treated as predictors, and \(Y\) as response variables in the functional regression models. The observed data are the daily values of these five functional variables for the sixteen voivodeships in Poland. Figure (2) shows the development of the disease in the selected, most diverse voivodeships through discrete, daily, scaled observations, i.e. a set of curves \(\{(x_{ij}(t),y_{ik}(t)):i=1,...,16;j=1,2,3;k=1,2\}\). An important limitation of FDA methods is that the functional variables should be observed on the same interval. The classic solution to such a problem is to rescale all curves to the same interval. In this work, we will consider curves on the interval \(T=[0,1]\). Hence, from this point on, \(x_{ij}\) and \(y_{ik}\) denote curves defined on the interval \([0,1]\).

## 2 From functional data to smooth functions

There are many methods for converting data to smooth functions (see [3], [5], [6]). Our goal is to use discrete data \(y_{k}\), \(k=1,...,m\) to estimate the function \(x\). The main procedure in statistics, mathematics, and engineering for converting discrete data into smooth functions is basis expansion. Consider the curves \(x_{i}(t):i=1,...,n;t\in T=[0,1]\) and assume that the observations \(y_{ik}\) are available at the knots \(t_{i1},t_{i2},...,t_{im_{i}}\in T\). Then we can represent each observation \(y_{ik}\) as follows:

\[y_{ik}=x_{i}(t_{ik})+\epsilon_{ik},i=1,...,n;k=1,...,m_{i}, \tag{1}\]

where \(\epsilon_{ik}\) is noise that contributes to the roughness of the analyzed data. Suppose that the sample curves belong to the finite dimensional space generated by the set of basis functions \(\{\phi_{1}(t),...,\phi_{p}(t)\}\). Then we can represent each curve \(x_{i}\) by a linear expansion of the form:

\[x_{i}(t)=\sum_{j=1}^{p}\alpha_{ij}\phi_{j}(t),i=1,...,n. \tag{2}\]

We denote by \(\alpha_{i}\) the vectors of the basis coefficients, \(\alpha_{i}=(\alpha_{i1},...,\alpha_{ip})^{\prime}\), whose lengths are equal to \(p\) and which can be estimated by different methods. One of the most popular is the least squares method (see [3], section 4.2). The least squares estimators of \(\alpha_{i}\) are of the form \(\hat{\alpha}_{i}=(\Phi_{i}^{\prime}\Phi_{i})^{-1}\Phi_{i}^{\prime}y_{i}\), where \(\Phi_{i}=(\phi_{j}(t_{ik}))_{m_{i}\times p},j=1,...,p;k=1,...,m_{i}\). The smoothness of the function is controlled by the number of basis functions used.
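A minimal numpy/scipy sketch of this least squares fit is given below, assuming a cubic B-spline basis with equally spaced knots on \([0,1]\) (one natural choice, consistent with the 20 basis functions with equally spaced knots used for Figure 3):

```python
import numpy as np
from scipy.interpolate import BSpline

def fit_basis_coefficients(t, y, p, degree=3):
    """Least squares basis coefficients (Section 2): alpha = (Phi'Phi)^{-1} Phi' y.
    t: observation points in [0, 1]; y: observed values; p: number of basis functions."""
    n_interior = p - degree - 1                       # equally spaced interior knots
    knots = np.concatenate([np.zeros(degree + 1),
                            np.linspace(0, 1, n_interior + 2)[1:-1],
                            np.ones(degree + 1)])
    Phi = BSpline(knots, np.eye(p), degree)(t)        # m x p design matrix
    alpha, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return alpha, knots

# Smooth a noisy curve with p = 20 basis functions.
t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(200)
alpha, knots = fit_basis_coefficients(t, y, p=20)
smooth = BSpline(knots, alpha, 3)(t)
```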
The greater the number of basis functions \(p\), the better the curve fits the discrete points. The smaller the \(p\), the smoother the curve is. The decision about increasing or decreasing the number of basis functions is related to achieving a compromise between bias and variance - the "bias-variance trade-off" (see [3], section 4.5.1). One of the methods used to achieve a trade-off between variance and bias is minimization of the mean squared error (MSE). Figure (3) shows the fit of 20 basis functions with equally spaced knots used to approximate the curves showing the numbers of cases, deaths, recoveries, hospitalized people and people in a serious condition because of COVID-19. The coefficients for each functional form were fitted by least squares. The number of basis functions was chosen so that the value of the mean squared error was the smallest, but also to avoid overfitting the model.

Figure 2: Daily observations of the number of positive test results, deaths, convalescents, hospitalized people and people in critical condition from October 23, 2020 to July 5, 2021 in selected voivodeships in Poland. The number of people was divided by the number of inhabitants of a given voivodeship and then multiplied by 100,000.

## 3 Multiple Function-on-Function Linear Model

In this section we present the multiple function-on-function linear regression (MFFLR) model, which is described in detail e.g. by Acal, Escabias, Aguilera and Valderrama (see [4], section 3.1) and by Xiong Cai, Liugen Xue and Jiguo Cao [7]. Such models have often been used to characterize the pandemic in Europe, for example in France [10], Italy [11] and Spain [4]. The MFFLR model allows estimating the functional response variable \(Y\) from a vector of \(J\) functional predictor variables denoted by \(X=(X_{1},...,X_{J})^{\prime}.\) Consider a random sample from \((X,Y)\) denoted by \((x_{i},y_{i}):i=1,...,n\) with \(x_{i}=(x_{i1},x_{i2},...,x_{iJ})^{\prime}.\) Then we define the functional linear model as follows:

\[y_{i}(t)=\alpha(t)+\sum_{j=1}^{J}\int_{T}x_{ij}(s)\beta_{j}(s,t)ds+\epsilon_{i}(t),i=1,...,n, \tag{3}\]

where \(\alpha(t)\) is the intercept function, \(\beta_{j}(s,t)\) are coefficient functions and \(\epsilon_{i}(t)\) are independent functional errors. Consider the principal component decompositions of the functional response variable and the functional predictor variables, given by

\[x_{ij}(t)=\overline{x}_{j}(t)+\sum_{l=1}^{n-1}\xi_{il}^{x_{j}}f_{l}^{x_{j}}(t), \tag{4}\]

\[y_{i}(t)=\overline{y}(t)+\sum_{l=1}^{n-1}\xi_{il}^{y}f_{l}^{y}(t), \tag{5}\]

where \(\xi_{il}^{x_{j}}\) and \(\xi_{il}^{y}\) are the principal component scores. The eigenfunctions of the sample covariances of \(x_{ij}(t)\) and \(y_{i}(t)\) are denoted by \(f_{l}^{x_{j}}\) and \(f_{l}^{y}\). The principal component decompositions (4)-(5) allow transforming the MFFLR model (3) into a linear regression model for each principal component of the response \(Y\) in terms of the principal components of the functional predictors, given by formula (6).

\[\hat{\xi}_{ik}^{y}=\sum_{j=1}^{J}\sum_{l=1}^{n-1}b_{kl}^{x_{j}}\xi_{il}^{x_{j}}+\epsilon_{ik},i=1,...,n;k=1,...,n-1.
\tag{6}\]

The functional coefficients are given here by \(\beta_{j}(s,t)=\sum_{k=1}^{n-1}\sum_{l=1}^{n-1}b_{kl}^{x_{j}}f_{k}^{x_{j}}(s)f_{l}^{y}(t).\) Finally we get the following PC-MFFLR model for the functional response:

\[\hat{y}_{i}(s)=\overline{y}(s)+\sum_{k=1}^{K}\hat{\xi}_{ik}^{y}f_{k}^{y}(s)=\overline{y}(s)+\sum_{k=1}^{K}(\sum_{j=1}^{J}\sum_{l\in L_{kj}}\hat{b}_{kl}^{x_{j}}\xi_{il}^{x_{j}})f_{k}^{y}(s), \tag{7}\]

where \(K\) is the number of principal components selected for the model, and \(\hat{b}_{kl}^{x_{j}}\) are the linear least-squares estimators of the regression coefficients \(b_{kl}\).

Figure 3: Fitted curves to daily observations of the number of positive test results, deaths, convalescents, hospitalized people and people in critical condition from October 23, 2020 to July 5, 2021 in selected voivodeships in Poland. 20 basis functions were used.

Let us assume that we have \(n\) completely observed curves for all variables and \(m\) missing curves for the response variable. For the missing response curves, the parameters \(b_{kl}\) in model (6) are estimated using the \(n\) complete response curves and predictors. Then the missing response curves \(y_{i}^{miss}(s):i=n+1,...,n+m\) are estimated by computing the principal component scores of the predictors, \(\xi_{il}^{x_{j}}:i=n+1,...,n+m,l=1,...,n-1\), and inserting them into equation (7). The estimated PC-MFFLR model can then be used to predict the value of the new response variable \(Y\) on the test sample. We solve the imputation problem by using the multiple function-on-function linear regression model for each response variable \(Y_{1}(t)\) (hospitalized people) and \(Y_{2}(t)\) (people in critical condition). Both functional regression models are estimated from the full data of the 12 voivodeships that constitute the training sample. Then predictions are made for the four selected voivodeships: Malopolskie, Wielkopolskie, Swietokrzyskie and Podkarpackie.

## 4 Data analysis

In this section, we use principal component analysis and a PC-MFFLR model in order to analyze the functional data and predict the missing response curves. The results were obtained with the software R (packages 'fda' [9], 'ggplot2' [8]).

### Principal components analysis for functional data

Many papers describe PCA in the functional context (for example see [3], section 8). Our first step is to estimate the functional principal components for each of the five functional variables. It turns out that the first principal components explain respectively 46.93%, 42.27%, 38.78%, 41.17%, 44.30% of the variability of \(X_{1}\), \(X_{2}\), \(X_{3}\), \(Y_{1}\), \(Y_{2}\) in the training sample. The second principal components explain 26.59%, 19.30%, 26.79%, 29.41%, 31.70% of the variability. The third principal components explain much less, respectively: 13.96%, 14.97%, 18.91%, 15.45%, 12.71%.

#### 4.1.1 Weight functions

Figure (4) presents the weight functions (harmonics) related to the first 3 principal components. The presented weight functions are coefficients that enable the eigenvectors to be computed from the original basis. For the number of hospitalized people, the first, second and third principal components together explain about 86.03% of the variance, and for the number of people in critical condition about 88.71%. The other principal components explain a small percentage of the variance. Graphs of weight functions are difficult to interpret.
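For illustration, the following minimal numpy sketch approximates the sample functional PCA behind these scores and weight functions for curves sampled on a common, equally spaced grid (the analysis in this paper uses the R package 'fda'; quadrature weights for the \(L^{2}\) inner product are omitted here for brevity):

```python
import numpy as np

def functional_pca(Y):
    """Approximate sample functional PCA on a discretized grid.
    Y: (n, m) matrix of n curves evaluated at m grid points.
    Returns the mean curve, scores, discretized weight functions,
    and the fraction of variance explained per component."""
    mean = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    scores = U * s                    # principal component scores xi_il
    harmonics = Vt                    # rows approximate the weight functions f_l
    explained = s ** 2 / (s ** 2).sum()
    return mean, scores, harmonics, explained
```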
In the next subsubsection we present plots that can be helpful in analyzing the functional principal components. However, looking at figure (4), we can gain some intuition about them. The first principal component shows the general variability of the number of hospitalized people depending on the wave of the pandemic. We can see that the biggest difference between the second and the third wave in the number of hospitalized people occurs in the second half of the analyzed period - the third wave of the COVID-19 pandemic. We can see that the smallest differences between voivodeships occur during the second wave of the pandemic. This may suggest that predicting the number of hospitalized people during the third wave of COVID-19 may be the hardest. Between the second and the third wave we see negative values of the coefficients, which indicate a decrease in the number of hospitalized people. Voivodeships for which the value of \(\xi_{il}^{y_{1}}\) is high will have large differences between the number of hospitalized people in the second and third wave. During the third wave, the number of hospitalized people significantly exceeded the number of such people during the second wave of the pandemic. It turns out that the highest value of this coefficient is achieved for the Lodzkie Voivodeship.

Figure 4: Weight functions \(f_{l}^{x_{j}};j=1,2,3\) and \(f_{l}^{y_{k}};k=1,2\), \(l=1,2,3\) for the first three principal components.

Since the weight function \(f_{2}^{y_{k}};k=1,2\) must be orthogonal to \(f_{1}^{y_{k}}\), we cannot expect the second principal component to explain a greater percentage of the variance than the first principal component. In the case of the number of hospitalized people, the second principal component explains about 29.41% of the variability. We see that it takes positive values throughout the second and third waves of the pandemic. The second principal component can be interpreted as an indicator of the number of people hospitalized during the entire analyzed period. The first principal component for the number of people in a critical condition suggests that the difference in the number of people in a serious condition between voivodeships was at a similar level for almost the entire second and third waves of COVID-19. During this time, the values of the coefficients are positive. The highest value of \(\xi_{i1}^{y_{2}}\) is obtained for the Kujawsko-Pomorskie Voivodeship. The third and further principal components explain a much smaller proportion of the variance than the first two components. This is influenced by the fact that they must be orthogonal to the first two principal components and also orthogonal to each other. They are more difficult to interpret than the first two components. The interpretation of weight functions is not always simple in the context of functional PCA. In subsubsection (4.1.2) we show a more commonly used form of presenting the results.

#### 4.1.2 Mean curve

A method that is helpful in the analysis of functional principal components is to present the mean curve together with the functions obtained by adding and subtracting suitably multiplied harmonic (weight) functions of the principal components to and from the mean. Such a plot makes sense because the principal components represent the variation around the mean.
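A minimal matplotlib sketch of such a plot is given below; scaling the harmonic by twice the standard deviation of its scores is one common convention (cf. plot.pca.fd in the R 'fda' package), not necessarily the exact constant used for Figures 5 and 6.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_mean_perturbations(t, mean, harmonics, scores, k):
    """Plot the mean curve plus/minus a multiple of the k-th weight function."""
    c = 2 * np.std(scores[:, k])                 # a common scaling choice
    plt.plot(t, mean, "k", label="mean")
    plt.plot(t, mean + c * harmonics[k], "g+", label="mean (+)")
    plt.plot(t, mean - c * harmonics[k], "r-", label="mean (-)")
    plt.legend()
    plt.show()
```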
Figures (5) and (6) present the mean curves and the perturbations of the sample mean curves obtained by adding and subtracting a multiple of the weight functions, for hospitalized people and people in critical condition. Analyzing figure (5), we can see that, in terms of the number of hospitalized people, the first principal component shows the differences between the second and third wave of the pandemic, while the second principal component reflects the general number of hospitalized people during the pandemic. Figure (6) suggests that for the number of people in a serious condition the situation is reversed. The first principal component captures the general number of people in critical condition during the pandemic, while the second principal component shows the differences between the waves. The subsequent components explain a smaller percentage of the variance. This can be seen by examining the plots of the fourth harmonic function, where the (+) and (-) plots often coincide with the plot of the mean. The smaller the percentage of variance explained by a principal component, the more the (+) and (-) plots coincide with the mean function.

#### 4.1.3 Plotting principal component scores

An important aspect of PCA is the examination of the scores of each curve on each component (see [3], section 8.3.2). Figures (7) and (8) show further graphs of interest in FPCA. They represent the values of the first and second principal component scores.

Figure 5: Mean curve of hospitalized people with curves resulting from adding (+) and subtracting (-) appropriately scaled harmonic coefficients from the mean.

Figure 6: Mean curve of people in critical condition with curves resulting from adding (+) and subtracting (-) appropriately scaled harmonic coefficients from the mean.

Figure 7: Values of the first and second scores for the number of hospitalized people (voivodeships from the training sample).

At the bottom of figure (7) there are voivodeships with a small number of hospitalized people per 100,000 inhabitants. According to figure (7), the largest number of hospitalized people was recorded in the Lubuskie, Podlaskie, Lubelskie and Lodzkie voivodeships. Comparing with the appropriately scaled, discrete, real observations, the largest number of hospitalized people per 100,000 inhabitants was achieved in the following voivodeships: Lodzkie, Lubuskie and Lubelskie. The voivodeships with the smallest number of hospitalized people, i.e. Slaskie, Pomorskie and Opolskie, also coincide with the real data. Hence, we can confirm the conclusion from the analysis of figure (4) that the first principal component shows the differences between the second and third wave, and the second principal component is related to the number of hospitalized people. To conclude, in the upper right corner we have voivodeships with a large number of hospitalized cases and with large differences between the second and third wave. Therefore, the Lubuskie and Lodzkie voivodeships had the highest numbers of hospitalized people per 100,000 inhabitants, and additionally in these voivodeships the two waves differed significantly from each other. We will now perform an analogous analysis for figure (8), showing the values of the first and second scores for the number of people in a critical condition. In this case, at the top of figure (8) we can see the voivodeships for which the difference between the second and third wave of the pandemic is high, and at the bottom those for which it is low.
On the left we have voivodeships for which the number of people in a critical condition was low, and on the right - high. Here the situation is the opposite compared to the analysis of the scores for the number of hospitalized people. According to the data presented in figure (2), the largest number of people in a serious condition per 100,000 inhabitants of a given voivodeship was reached in the Kujawsko-Pomorskie, Lubuskie and Mazowieckie voivodeships, and the smallest in Opolskie, Pomorskie and Slaskie. Thus, we can see that the analysis of figure (8) seems to be performed correctly. Hence, the Kujawsko-Pomorskie and Lubuskie voivodeships had the largest number of people in a critical condition, while the Podlaskie and Warminsko-Mazurskie voivodeships had the largest differences between the second and third wave of the pandemic in the number of people in a serious condition. Similar analyses can be made for the number of positive test results, recoveries and deaths.

Figure 8: Values of the first and second scores for the number of people in critical condition (voivodeships from the training sample).

### Function-on-Function Model

Let us consider a training sample composed of all voivodeships except Swietokrzyskie, Malopolskie, Wielkopolskie and Podkarpackie. The listed voivodeships will be the test sample and we will make predictions for them. Equation (8) presents the reduction of the linear function-on-function model to a linear model for the first principal components in terms of the first principal components of each predictor:

\[\hat{\xi}_{i1}^{y_{k}}=\gamma_{0}+\xi_{i1}^{x_{1}}\gamma_{1}^{y_{k}}+\xi_{i1}^{x_{2}}\gamma_{2}^{y_{k}}+\xi_{i1}^{x_{3}}\gamma_{3}^{y_{k}}+\epsilon_{i}^{y_{k}},\ k=1,2,i=1,...,16, \tag{8}\]

where \(\gamma_{0},\gamma_{1}^{y_{k}},\gamma_{2}^{y_{k}},\gamma_{3}^{y_{k}}\) are the appropriate coefficients obtained by fitting the linear model to the data. Based on such models, we estimate the first principal component scores of \(Y_{1}(t)\) and \(Y_{2}(t)\) from the first principal component scores of \(X_{1}(t)\), \(X_{2}(t)\) and \(X_{3}(t)\). We predict \(Y_{1}(t)\) and \(Y_{2}(t)\) using the following equation (9):

\[\hat{y}_{ik}(t)=\overline{y}_{k}(t)+\hat{\xi}_{i1}^{y_{k}}f_{1}^{y_{k}}(t),\ k=1,2,i=1,...,16. \tag{9}\]

In order to test the models on the training sample, we will use the mean squared error. Figure (9) shows the graphs of the observed curves fitted in Section 2, together with the predicted curves obtained by applying formula (9), for several voivodeships selected from the training sample. In table (1) you can see the values of the mean squared error calculated for all voivodeships from the training sample.

Figure 9: Observed and predicted curves for selected voivodeships from the training sample.
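A minimal Python sketch of fitting (8) and predicting with (9) is shown below; the array names are illustrative, with the principal component scores and weight functions assumed to be computed as in Section 4.1.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_pc_regression(xi_x, xi_y):
    """Fit model (8): the first response PC score regressed on the first
    PC scores of the three predictors. xi_x: (n_train, 3); xi_y: (n_train,)."""
    return LinearRegression().fit(xi_x, xi_y)

def predict_curves(model, xi_x_test, y_mean, f1_y):
    """Prediction (9): y_hat(t) = mean_y(t) + xi_hat * f_1^y(t).
    y_mean, f1_y: discretized mean curve and first weight function (length m)."""
    xi_hat = model.predict(xi_x_test)                # (n_test,)
    return y_mean[None, :] + np.outer(xi_hat, f1_y)  # (n_test, m) predicted curves

# Mean squared error against the observed test curves, as reported in Table 1:
# mse = ((y_obs - y_pred) ** 2).mean(axis=1)
```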
In order to compare the predictions with the true observed data, the predictions in table (2) have been properly scaled and present the numbers of hospitalized and seriously ill people for each voivodeship on a given day, not the numbers per 100,000 inhabitants. Analyzing figure (10) and table (2), we can see that the predictions for the Wielkopolskie Voivodeship turned out to be very close to the true values, while the worst predictions were obtained for the Malopolskie Voivodeship. For example, on November 1, 102 people in a serious condition were recorded in the Wielkopolskie Voivodeship, and the model predicted 107 people; 217 were observed in the Malopolskie Voivodeship, while the model predicted 126. On the same day, 95 people were found in a serious condition in the Podkarpackie Voivodeship, against the predicted 76, and in the Swietokrzyskie Voivodeship, 55 against 41. In turn, the number of hospitalized people on November 1 in the Malopolskie Voivodeship amounted to 2,366 people, and the model predicted 1,591 cases; in the Podkarpackie Voivodeship 976 were observed against the predicted 977; in the Swietokrzyskie Voivodeship 829 against 589; and in the Wielkopolskie Voivodeship 1,292 against 1,700. The predictions closest to the observed values were obtained at the beginning and at the end of the analyzed period. The largest differences between the observed and predicted values occur for the Malopolskie Voivodeship, where the model sometimes predicts values almost two times lower than those observed.

\begin{table} \begin{tabular}{|l||l|l|} \hline \multicolumn{1}{|l||}{voivodeship} & \(\text{MSE}(y_{i1})\) & \(\text{MSE}(y_{i2})\) \\ \hline dolnoslaskie & 4.087584 & 0.8424024 \\ kujawsko-pomorskie & 5.801758 & **0.5078765** \\ lódzkie & 8.146502 & 0.7999353 \\ lubelskie & 8.434905 & **1.1796471** \\ lubuskie & **10.79992** & 0.9807714 \\ mazowieckie & 4.569325 & 0.6134904 \\ opolskie & 9.170555 & 0.7959462 \\ podlaskie & 9.612532 & 1.0092404 \\ pomorskie & 7.526031 & 0.9541513 \\ slaskie & 7.451072 & 1.1456427 \\ warminsko-mazurskie & **3.906237** & 1.0330436 \\ zachodnio-pomorskie & 6.692964 & 0.9105502 \\ \hline \end{tabular} \end{table} Table 1: Values of the mean squared error for \(y_{i1}\) (the number of hospitalized people) and \(y_{i2}\) (the number of people in a serious condition) for the voivodeships from the training sample.

Figure 10: Observed and predicted curves for the test sample.

\begin{table} \begin{tabular}{|l||l|l|l|l|} \hline & \multicolumn{4}{|c|}{Hospitalized people} \\ \hline & malopolskie & podkarpackie & swietokrzyskie & wielkopolskie \\ \hline time & obs/pred & obs/pred & obs/pred & obs/pred \\ \hline \multicolumn{5}{|c|}{\(\dots\)} \\ \hline \end{tabular} \end{table} Table 2: Predicted values for the numbers of hospitalized people and people in a serious condition compared to the observed values. Observations are denoted by "obs" and predictions by "pred".

## 5 Conclusions

The COVID-19 pandemic has shocked the whole world. An epidemic of this scale is a relatively new phenomenon, which is why it has attracted the attention of a large number of analysts around the world. In the presented work, our target was to fit a model based on functional data analysis to data from the second and third wave of the COVID-19 pandemic in Poland.
The aim of the model was to predict the numbers of people in a serious condition and of hospitalized people. The following were used as predictors: the numbers of deaths, convalescents and positive test results. The parameters were estimated on a training group consisting of twelve voivodeships, and the quality of the fitted models was then assessed on this sample. Next, predictions were made for the Malopolskie, Podkarpackie, Swietokrzyskie and Wielkopolskie voivodeships. The model predicts the numbers well for most voivodeships. The biggest problem turned out to be the Malopolskie Voivodeship, while the best predictions were observed for the Wielkopolskie and Swietokrzyskie voivodeships. The analysis of principal components also turned out to be interesting. Figures (7) and (8) showed that during the second and third wave, most people were hospitalized in the Lubuskie and Podlaskie voivodeships, and the largest number of people in a serious condition was in the Kujawsko-Pomorskie and Lubuskie voivodeships. The biggest difference between the second and third wave in the number of people hospitalized was in the Lodzkie and Slaskie voivodeships. The biggest difference between the second and third wave in the number of people in a serious condition was in the Podlaskie and Warminsko-Mazurskie voivodeships.

## Acknowledgements

The author is grateful to Prof. Krzysztof Topolski (University of Wroclaw) for helpful comments on the master's thesis.

## Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

## Declaration of interest

None.
2303.10359
A conforming discontinuous Galerkin finite element method for Brinkman equations
In this paper, we present a conforming discontinuous Galerkin (CDG) finite element method for Brinkman equations. The velocity stabilizer is removed by employing the higher degree polynomials to compute the weak gradient. The theoretical analysis shows that the CDG method is actually stable and accurate for the Brinkman equations. Optimal order error estimates are established in $H^1$ and $L^2$ norm. Finally, numerical experiments verify the stability and accuracy of the CDG numerical scheme.
Haoning Dang, Qilong Zhai, Zhongshu Zhao
2023-03-18T08:21:46Z
http://arxiv.org/abs/2303.10359v1
# A conforming discontinuous Galerkin finite element method for Brinkman equations ###### Abstract In this paper, we present a conforming discontinuous Galerkin (CDG) finite element method for Brinkman equations. The velocity stabilizer is removed by employing higher degree polynomials to compute the weak gradient. The theoretical analysis shows that the CDG method is stable and accurate for the Brinkman equations. Optimal order error estimates are established in the \(H^{1}\) and \(L^{2}\) norms. Finally, numerical experiments verify the stability and accuracy of the CDG numerical scheme. keywords: Brinkman equations, discontinuous Galerkin, discrete weak gradient operators, polyhedral meshes. + Footnote †: journal: computational and applied mathematics ## 1 Introduction The Brinkman equations frequently appear in modeling the incompressible flow of a viscous fluid in complex porous media. These equations extend Darcy's law to describe the dissipation of kinetic energy caused by viscous forces, similarly to the Navier-Stokes equations [5]. The Brinkman equations are also applied in many other fields, such as environmental science, geophysics, petroleum engineering and biotechnology [3; 9; 10; 14; 18]. Mathematically speaking, the Brinkman equations combine the Darcy equations and the Stokes equations through a highly varying parameter. For simplicity, we consider the following Brinkman equations in a bounded polygonal domain \(\Omega\subset\mathbb{R}^{2}\): find the unknown fluid velocity \(\mathbf{u}\) and pressure \(p\) satisfying \[-\mu\Delta\mathbf{u}+\mu\kappa^{-1}\mathbf{u}+\nabla p=\mathbf{f}\quad\text{ in }\Omega, \tag{1.1}\] \[\nabla\cdot\mathbf{u} =0\quad\text{in }\Omega, \tag{1.2}\] \[\mathbf{u} =\mathbf{0}\quad\text{on }\partial\Omega, \tag{1.3}\] where \(\kappa\) is the permeability tensor and \(\mu\) is the fluid viscosity coefficient. For the convenience of analysis, we assume the permeability tensor \(\kappa\) is piecewise constant. Since \(\mu\) can be used to scale the solution \(\mathbf{u}\), we take \(\mu=1\) for simplicity. \(\mathbf{f}\) represents the momentum source term. In addition, assume that there exist two positive constants \(\lambda_{1}\) and \(\lambda_{2}\) such that \[\lambda_{1}\xi^{t}\xi\leq\xi^{t}\kappa^{-1}\xi\leq\lambda_{2}\xi^{t}\xi,\quad \forall\xi\in\mathbb{R}^{d}. \tag{1.4}\] The main challenge in designing the numerical algorithm comes from the different regularity requirements of the velocity in two extreme cases: the variational form of the Brinkman equations in the Stokes limit requires \(H^{1}\)-regularity, while the Darcy limit requires \(H(\text{div})\)-regularity. To overcome this difficulty, extensive research has been carried out. A natural attempt is to modify the existing Stokes or Darcy elements; corresponding work can be found on Stokes-based elements [4, 8, 25] and Darcy-based elements [7, 12]. Another strategy is to design new formulations for the Brinkman equations, such as the dual-mixed formulation [11, 13], the pseudostress-velocity formulation [17] and the vorticity-velocity-pressure formulation [2]. In addition, some new numerical methods have been applied to the Brinkman equations, such as weak Galerkin methods [16, 29], virtual element methods [6, 19, 30] and many more. The purpose of this paper is to introduce a new conforming discontinuous Galerkin (CDG) method for Brinkman equations. The CDG method, which builds on the weak Galerkin (WG) method proposed in [20], was first proposed by Ye and Zhang in 2020 [26].
It retains the key idea of WG method, which uses the weak differential operators to approximate the classical differential operators in the variational form. In [26], the authors prove that no stabilizer is required for Poisson problem when the local Raviart-Thomas (RT) element is used to approximate the classic gradient operator. However, we know that the RT elements are only applicable to triangular and rectangular meshes. In subsequent work [27], they found that the stabilizer term can be removed from the numerical scheme constructed on polygon meshes by raising the degree of polynomial approximating the discrete weak gradient operator. The CDG method has been applied to Stokes equations [28], elliptic interface problem [23] and linear elasticity interface problem [24]. In this paper, we consider the same variational form based on gradient-gradient operators as [29]: find the unknown functions \(\mathbf{u}\in[H^{1}_{0}(\Omega)]^{d}\) and \(p\in L^{2}_{0}(\Omega)\) satisfying \[(\nabla\mathbf{u},\nabla\mathbf{v})+(\kappa^{-1}\mathbf{u}, \mathbf{v})+(\nabla p,\mathbf{v}) =(\mathbf{f},\mathbf{v}), \forall\mathbf{v}\in[H^{1}_{0}(\Omega)]^{d}, \tag{1.5}\] \[(\nabla q,\mathbf{u}) =0, \forall q\in L^{2}_{0}(\Omega). \tag{1.6}\] We construct the conforming discontinuous Galerkin scheme to discrete this variational form on polygon meshes. The stable term for the velocity \(\mathbf{u}\) is removed by a new definition of weak gradient operator, thus our numerical formulation is simpler compared with the standard WG method [29]. Furthermore, we prove the well-posedness of the CDG scheme and derive the optimal error estimates for velocity and pressure, which implies the optimal convergence order for both the Stokes and Darcy dominated problems. Some numerical experiments are provided to verify our theoretical analysis. The rest of the paper is organized as follows. In Section 2, we define two discrete weak gradient operators and construct the conforming discontinuous Galerkin scheme for Brinkman equations. Then, the well-posedness is proved in Section 3. The error equations for the CDG scheme are established in Section 4. And we prove optimal error estimates for both velocity and pressure in \(H^{1}\) and \(L^{2}\) norms in Section 5. In Section 6, we present some numerical experiments to verify the stability and accuracy of the CDG scheme. ## 2 Discrete Weak Gradient Operators In this section, we define two discrete weak gradient operators that we're going to use later. Let \(\mathcal{T}_{h}\) be a polygonal or polyhedral partition of the domain \(\Omega\) and \(\mathcal{E}_{h}\) be the set of all edges or faces in \(\mathcal{T}_{h}\). Assume that all cells in \(\mathcal{T}_{h}\) are closed and simply connected, and satisfy some specific shape regular conditions in [21]. Denote the set of all interior edges or faces by \(\mathcal{E}_{h}^{0}=\mathcal{E}_{h}\backslash\partial\Omega\). For each \(T\in\mathcal{T}_{h}\), \(e\subset\partial T\), let \(h_{T}\) and \(h_{e}\) be the diameter of \(T\) and \(e\), respectively. And we define the size of \(\mathcal{T}_{h}\) as \(h=\max\limits_{T\in\mathcal{T}_{h}}h_{T}\). For a given integer \(k\geq 1\), the space of polynomial with degree no more than \(k\) on a cell \(T\) denotes by \(P_{k}(T)\). 
We define the space for the vector-valued functions as \[V_{h}=\left\{\mathbf{v}\in[L^{2}(\Omega)]^{d}:\mathbf{v}|_{T}\in[P_{k}(T)]^{d},\forall T\in\mathcal{T}_{h}\right\}.\] Denote by \(V_{h}^{0}\) the subspace of \(V_{h}\) that \[V_{h}^{0}=\left\{\mathbf{v}:\mathbf{v}\in V_{h},\mathbf{v}=\mathbf{0}\text{ on }\partial\Omega\right\}.\] For the scalar-valued functions, we define \[W_{h}=\left\{q:q\in L^{2}_{0}(\Omega),q|_{T}\in P_{k-1}(T)\right\}.\] Let \(T_{1}\) and \(T_{2}\) be two cells in \(\mathcal{T}_{h}\) sharing \(e\in\mathcal{E}_{h}^{0}\), \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) be the unit outward normal vectors of \(T_{1}\) and \(T_{2}\) on \(e\). In particular, when \(e\subset\partial\Omega\), we denote the unit outward normal vector of \(T\) on \(e\) by \(\mathbf{n}_{e}\). For a vector-valued function \(\mathbf{v}\in V_{h}+[H^{1}(\Omega)]^{d}\), we define the average \(\{\cdot\}\) and jump \([\cdot]\) as follows \[\left\{\mathbf{v}\right\}=\left\{\begin{array}{ll}\frac{1}{2}(\mathbf{v}|_{T _{1}}+\mathbf{v}|_{T_{2}})&e\in\mathcal{E}_{h}^{0},\\ \mathbf{v}|_{e}&e\subset\partial\Omega,\end{array}\right.\quad\left[\mathbf{v }\right]=\left\{\begin{array}{ll}\mathbf{v}|_{T_{1}}\cdot\mathbf{n}_{1}+ \mathbf{v}|_{T_{2}}\cdot\mathbf{n}_{2}&e\in\mathcal{E}_{h}^{0},\\ \mathbf{v}|_{e}\cdot\mathbf{n}_{e}&e\subset\partial\Omega.\end{array}\right.\] For a scalar-valued function \(q\in W_{h}\), the average \(\{\cdot\}\) and the jump \([\![\cdot]\!]\) are \[\{q\}=\left\{\begin{array}{ll}\frac{1}{2}(q|_{T_{1}}+q|_{T_{2}})&e\in{\cal E}_{ h}^{0},\\ q|_{e}&e\subset\partial\Omega,\end{array}\right.\quad[\![q]\!]=\left\{\begin{array}{ ll}q|_{T_{1}}{\bf n}_{1}+q|_{T_{2}}{\bf n}_{2}&e\in{\cal E}_{h}^{0},\\ q|_{e}{\bf n}_{e}&e\subset\partial\Omega.\end{array}\right.\] In addition, for \({\bf v}\in V_{h}^{0}+[H_{0}^{1}(\Omega)]^{d}\), the following equations hold true: \[\|{\bf v}-\{{\bf v}\}\|_{e}=\|[{\bf v}]\|_{e},\quad\text{if }e\subset\partial \Omega,\quad\|{\bf v}-\{{\bf v}\}\|_{e}=\frac{1}{2}\|[{\bf v}]\|_{e},\quad \text{if }e\in{\cal E}_{h}^{0}. \tag{2.1}\] Similarly, for \(q\in W_{h}\), we have \[\|q-\{q\}\|_{e}=0,\quad\text{if }e\subset\partial\Omega,\quad\|q-\{q\}\|_{e}= \frac{1}{2}\|[\![q]\!]\|_{e},\quad\text{if }e\in{\cal E}_{h}^{0}. \tag{2.2}\] Then we give the definition of the discrete weak gradient operators. For a vector-valued function \({\bf v}\in V_{h}+[H^{1}(\Omega)]^{d}\), the discrete weak gradient \(\nabla_{w}{\bf v}\) on each cell \(T\) is a unique polynomial function in \([P_{j}(T)]^{d\times d}(j>k)\) satisfying \[(\nabla_{w}{\bf v},\tau)_{T}=-({\bf v},\nabla\cdot\tau)_{T}+\left\langle\{{ \bf v}\},\tau{\bf n}\right\rangle_{\partial T},\quad\forall\tau\in[P_{j}(T)]^ {d\times d}. \tag{2.3}\] Note that \(j\) depends on \(k\) and \(n\) is the number of edges of polygon cell \(T\). For a polygonal mesh, \(j=n+k-1\)[27], and in particular, \(j=k+1\) when the domain is partitioned into triangles [1]. Similarly, for a scalar-valued function \(q\in W_{h}\), the discrete weak gradient \(\tilde{\nabla}_{w}q\) on each cell \(T\) is a unique polynomial function in \([P_{k}(T)]^{d}\) satisfying \[(\tilde{\nabla}_{w}q,\boldsymbol{\phi})_{T}=-(q,\nabla\cdot\boldsymbol{\phi} )_{T}+\left\langle\{q\}\,,\boldsymbol{\phi}\cdot{\bf n}\right\rangle_{ \partial T},\quad\forall\boldsymbol{\phi}\in[P_{k}(T)]^{d}. 
\tag{2.4}\] For simplicity of notation, we introduce three bilinear forms as follows: \[a({\bf v},{\bf w})=(\nabla_{w}{\bf v},\nabla_{w}{\bf w})+(\kappa^{-1}{\bf v},{\bf w}), \tag{2.5}\] \[b({\bf v},q)=({\bf v},\tilde{\nabla}_{w}q), \tag{2.6}\] \[s(p,q)=\sum_{e\in{\cal E}_{h}^{0}}h\left\langle[\![p]\!],[\![q]\!]\right\rangle_{e}. \tag{2.7}\] Now we have the following conforming discontinuous Galerkin finite element scheme for the Brinkman equations (1.1)-(1.3). **Conforming Discontinuous Galerkin Algorithm 2.1**.: _Find \({\bf u}_{h}\in V_{h}^{0}\) and \(p_{h}\in W_{h}\) such that_ \[a({\bf u}_{h},{\bf v})+b({\bf v},p_{h})=({\bf f},{\bf v}),\quad\forall{\bf v}\in V_{h}^{0}, \tag{2.8}\] \[b({\bf u}_{h},q)-s(p_{h},q)=0,\quad\forall q\in W_{h}. \tag{2.9}\] ## 3 The Well-Posedness of CDG Scheme In this section, we show that the CDG numerical scheme (2.8)-(2.9) has a unique solution. In order to analyse the well-posedness of the CDG scheme, we first define the following tri-bar norm. For a vector-valued function \(\mathbf{v}\in V_{h}+[H^{1}(\Omega)]^{d}\), \[|\!|\!|\mathbf{v}|\!|\!|^{2}=\|\nabla_{w}\mathbf{v}\|^{2}+\|\kappa^{-\frac{1}{2}}\mathbf{v}\|^{2},\] and for a scalar-valued function \(q\in W_{h}\) we write \[\|q\|_{h}^{2}=s(q,q)=\sum_{e\in\mathcal{E}_{h}^{0}}h\|[\![q]\!]\|_{e}^{2},\] together with a discrete \(H^{1}\)-type norm \(\|q\|_{1}\). The key ingredient is the following inf-sup estimate: for any \(q\in W_{h}\) there exists \(\mathbf{v}\in V_{h}\) such that \[\frac{b(\mathbf{v},q)}{|\!|\!|\mathbf{v}|\!|\!|}\geq C_{1}h\|q\|_{1}-C_{2}\|q\|_{h}. \tag{3.4}\] To verify it, first note that, by the definition of the average \(\{\cdot\}\), \[\sum_{T\in\mathcal{T}_{h}}\|\mathbf{v}-\{\mathbf{v}\}\|_{\partial T}^{2}\]
\[\leq C\sum_{e\in\mathcal{E}_{h}}\left\|\frac{\mathbf{v}|_{T_{1}}-\mathbf{v}|_{T_{2}}}{2}\right\|_{e}^{2}\] \[\leq C\sum_{e\in\mathcal{E}_{h}}\left(\|\mathbf{v}\|_{\partial T_{1}\cap e}^{2}+\|\mathbf{v}\|_{\partial T_{2}\cap e}^{2}\right)\] \[\leq C\sum_{T\in\mathcal{T}_{h}}\|\mathbf{v}\|_{\partial T}^{2}.\] Using the above inequality, the trace inequality (A.4) and the inverse inequality (A.5), we obtain \[\|\nabla_{w}\mathbf{v}\|^{2}\leq\sum_{T\in\mathcal{T}_{h}}(\|\nabla\mathbf{v}\|_{T}^{2}+\|\mathbf{v}-\{\mathbf{v}\}\|_{\partial T}\|\nabla_{w}\mathbf{v}+\nabla\mathbf{v}\|_{\partial T})\] \[\leq\sum_{T\in\mathcal{T}_{h}}(\|\nabla\mathbf{v}\|_{T}^{2}+h^{-\frac{1}{2}}\|\mathbf{v}-\{\mathbf{v}\}\|_{\partial T}(\|\nabla_{w}\mathbf{v}\|_{T}+\|\nabla\mathbf{v}\|_{T}))\] \[\leq\sum_{T\in\mathcal{T}_{h}}(\|\nabla\mathbf{v}\|_{T}^{2}+Ch^{-\frac{1}{2}}\|\mathbf{v}\|_{\partial T}(\|\nabla_{w}\mathbf{v}\|_{T}+\|\nabla\mathbf{v}\|_{T}))\] \[\leq\sum_{T\in\mathcal{T}_{h}}(\|\nabla\mathbf{v}\|_{T}^{2}+\frac{1}{2}\|\nabla_{w}\mathbf{v}\|_{T}^{2}+\frac{1}{2}\|\nabla\mathbf{v}\|_{T}^{2}+Ch^{-1}\|\mathbf{v}\|_{\partial T}^{2})\] \[\leq\sum_{T\in\mathcal{T}_{h}}\left(\frac{1}{2}\|\nabla_{w}\mathbf{v}\|_{T}^{2}+Ch^{-2}\|\mathbf{v}\|_{T}^{2}\right).\] Taking \(\mathbf{v}=\kappa\tilde{\nabla}_{w}q\in V_{h}\) and using (1.4), we get the following estimate \[|\!|\!|\mathbf{v}|\!|\!|^{2}=\|\nabla_{w}\mathbf{v}\|^{2}+\|\kappa^{-\frac{1}{2}}\mathbf{v}\|^{2}\] \[\leq Ch^{-2}\|\mathbf{v}\|^{2}+\|\kappa^{-\frac{1}{2}}\mathbf{v}\|^{2}\] \[\leq Ch^{-2}\|\kappa\tilde{\nabla}_{w}q\|^{2}+\|\kappa^{\frac{1}{2}}\tilde{\nabla}_{w}q\|^{2}\] \[\leq Ch^{-2}\lambda_{1}^{-1}\|\kappa^{\frac{1}{2}}\tilde{\nabla}_{w}q\|^{2}+\|\kappa^{\frac{1}{2}}\tilde{\nabla}_{w}q\|^{2}\] \[\leq Ch^{-2}\|q\|_{1}^{2}.\] Therefore, we have \[\frac{b(\mathbf{v},q)}{|\!|\!|\mathbf{v}|\!|\!|}\geq\frac{\|q\|_{1}^{2}-h^{-2}\|q\|_{h}^{2}}{Ch^{-1}\|q\|_{1}}\geq\frac{\|q\|_{1}^{2}-h^{-1}\|q\|_{1}\|q\|_{h}}{Ch^{-1}\|q\|_{1}}\geq C_{1}h\|q\|_{1}-C_{2}\|q\|_{h},\] which proves (3.4). **Lemma 3.3**.: _The conforming discontinuous Galerkin finite element scheme (2.8)-(2.9) has a unique solution._ Proof.: Consider the corresponding homogeneous equation with \(\mathbf{f}=\mathbf{0}\), let \(\mathbf{v}=\mathbf{u}_{h}\) in (2.8) and \(q=p_{h}\) in (2.9). Then subtracting (2.9) from (2.8), we have \[|\!|\!|\mathbf{u}_{h}|\!|\!|^{2}+\|p_{h}\|_{h}^{2}=a(\mathbf{u}_{h},\mathbf{u}_{h})+s(p_{h},p_{h})=0,\] which implies \(\mathbf{u}_{h}=\mathbf{0}\) and \(\|p_{h}\|_{h}=0\). Let \(\mathbf{u}_{h}=\mathbf{0}\) and \(\mathbf{f}=\mathbf{0}\) in (2.8); then \(b(\mathbf{v},p_{h})=0\) for any \(\mathbf{v}\in V_{h}^{0}\).
According to the definition of \(b(\cdot,\cdot)\) and the fact that \(\|p_{h}\|_{h}=0\), taking \(\mathbf{v}=\nabla p_{h}\) on each cell \(T\), it holds that \[0=b(\mathbf{v},p_{h})=\sum_{T\in\mathcal{T}_{h}}(\mathbf{v},\tilde{\nabla}_{w}p_{h})_{T}\] \[=\sum_{T\in\mathcal{T}_{h}}(-(p_{h},\nabla\cdot\mathbf{v})_{T}+\left\langle\{p_{h}\},\mathbf{v}\cdot\mathbf{n}\right\rangle_{\partial T})\] \[=\sum_{T\in\mathcal{T}_{h}}(-(p_{h},\nabla\cdot\mathbf{v})_{T}+\left\langle p_{h},\mathbf{v}\cdot\mathbf{n}\right\rangle_{\partial T}-\left\langle p_{h}-\{p_{h}\},\mathbf{v}\cdot\mathbf{n}\right\rangle_{\partial T})\] \[=\sum_{T\in\mathcal{T}_{h}}(\mathbf{v},\nabla p_{h})_{T}-\sum_{e\in\mathcal{E}_{h}^{0}}\left\langle\llbracket p_{h}\rrbracket,\{\mathbf{v}\}\right\rangle_{e}\] \[=\sum_{T\in\mathcal{T}_{h}}(\mathbf{v},\nabla p_{h})_{T}=\sum_{T\in\mathcal{T}_{h}}(\nabla p_{h},\nabla p_{h})_{T}.\] Then, we have \(\nabla p_{h}=0\) on each cell \(T\). Moreover, \(\|p_{h}\|_{h}=0\) implies that the jumps of \(p_{h}\) vanish across interior edges, so \(p_{h}\) is continuous; together with \(\nabla p_{h}=0\) on each cell and \(p_{h}\in L_{0}^{2}(\Omega)\), this yields \(p_{h}=0\), which completes the proof of the lemma. ## 4 Error Equations In this section, we establish the error equations between the numerical solution and the exact solution. Let \(Q_{h}\), \(\mathbf{Q}_{h}\) and \(\mathbb{Q}_{h}\) be the standard \(L^{2}\) projection operators onto \([P_{k}(T)]^{d}\), \([P_{j}(T)]^{d\times d}\) and \(P_{k-1}(T)\), respectively. First, we recall some properties of the projection operators. **Lemma 4.1**.: _For the projection operators \(Q_{h}\), \(\mathbf{Q}_{h}\) and \(\mathbb{Q}_{h}\), the following properties hold_ \[\nabla_{w}\mathbf{v}=\mathbf{Q}_{h}(\nabla\mathbf{v}),\quad\forall\mathbf{v}\in[H^{1}(\Omega)]^{d}, \tag{4.1}\] \[(\tilde{\nabla}_{w}(\mathbb{Q}_{h}q),\boldsymbol{\phi})_{T}=(Q_{h}(\nabla q),\boldsymbol{\phi})_{T}-\left\langle q-\mathbb{Q}_{h}q,\boldsymbol{\phi}\cdot\mathbf{n}\right\rangle_{\partial T},\quad\forall q\in H^{1}(\Omega),\ \forall\boldsymbol{\phi}\in[P_{k}(T)]^{d}. \tag{4.2}\] Proof.: For any \(\tau\in[P_{j}(T)]^{d\times d}\), we have \[(\nabla_{w}\mathbf{v},\tau)_{T}=-(\mathbf{v},\nabla\cdot\tau)_{T}+\left\langle\{\mathbf{v}\},\tau\mathbf{n}\right\rangle_{\partial T}\] \[=-(\mathbf{v},\nabla\cdot\tau)_{T}+\left\langle\mathbf{v},\tau\mathbf{n}\right\rangle_{\partial T}\] \[=(\nabla\mathbf{v},\tau)_{T}\] \[=(\mathbf{Q}_{h}(\nabla\mathbf{v}),\tau)_{T}.\] Similarly, for any \(\boldsymbol{\phi}\in[P_{k}(T)]^{d}\), we have \[(\tilde{\nabla}_{w}(\mathbb{Q}_{h}q),\boldsymbol{\phi})_{T}=-(\mathbb{Q}_{h}q,\nabla\cdot\boldsymbol{\phi})_{T}+\left\langle\{\mathbb{Q}_{h}q\},\boldsymbol{\phi}\cdot\mathbf{n}\right\rangle_{\partial T}\] \[=-(q,\nabla\cdot\boldsymbol{\phi})_{T}+\left\langle\mathbb{Q}_{h}q,\boldsymbol{\phi}\cdot\mathbf{n}\right\rangle_{\partial T}\] \[=-(q,\nabla\cdot\boldsymbol{\phi})_{T}+\left\langle q,\boldsymbol{\phi}\cdot\mathbf{n}\right\rangle_{\partial T}-\left\langle q-\mathbb{Q}_{h}q,\boldsymbol{\phi}\cdot\mathbf{n}\right\rangle_{\partial T}\] \[=(\nabla q,\boldsymbol{\phi})_{T}-\left\langle q-\mathbb{Q}_{h}q,\boldsymbol{\phi}\cdot\mathbf{n}\right\rangle_{\partial T},\] which completes the proof of the lemma. Let \(\mathbf{e}_{h}=Q_{h}\mathbf{u}-\mathbf{u}_{h}\) and \(\varepsilon_{h}=\mathbb{Q}_{h}p-p_{h}\) be the error functions, where \((\mathbf{u};p)\) is the solution of (1.1)-(1.3) and \((\mathbf{u}_{h};p_{h})\) is the solution of (2.8)-(2.9). We shall derive the error equations that \(\mathbf{e}_{h}\) and \(\varepsilon_{h}\) satisfy.
**Lemma 4.2**.: _For any \(\mathbf{v}\in V_{h}^{0}\) and \(q\in W_{h}\), the following equations hold true_ \[a(\mathbf{e}_{h},\mathbf{v})+b(\mathbf{v},\varepsilon_{h}) =-l_{1}(\mathbf{u},\mathbf{v})+l_{2}(\mathbf{u},\mathbf{v})-l_{3} (p,\mathbf{v}), \tag{4.3}\] \[b(\mathbf{e}_{h},q)-s(\varepsilon_{h},q) =l_{4}(\mathbf{u},q)-s(\mathbb{Q}_{h}p,q), \tag{4.4}\] _where_ \[l_{1}(\mathbf{u},\mathbf{v}) =\sum_{T\in\mathcal{T}_{h}}(\nabla_{w}(\mathbf{u}-Q_{h}\mathbf{u }),\nabla_{w}\mathbf{v})_{T},\] \[l_{2}(\mathbf{u},\mathbf{v}) =\sum_{T\in\mathcal{T}_{h}}\left\langle(\nabla\mathbf{u}-\mathbf{ Q}_{h}\nabla\mathbf{u})\cdot\mathbf{n},\mathbf{v}-\{\mathbf{v}\}\right\rangle_{ \partial T},\] \[l_{3}(p,\mathbf{v}) =\sum_{T\in\mathcal{T}_{h}}\left\langle p-\mathbb{Q}_{h}p, \mathbf{v}\cdot\mathbf{n}\right\rangle_{\partial T},\] \[l_{4}(\mathbf{u},q) =\sum_{T\in\mathcal{T}_{h}}\left\langle(\mathbf{u}-Q_{h}\mathbf{ u})\cdot\mathbf{n},q-\{q\}\right\rangle_{\partial T},\] \[s(\mathbb{Q}_{h}p,q) =\sum_{e\in\mathcal{E}_{h}^{0}}h\left\langle[\mathbb{Q}_{h}p],[q] \right\rangle.\] Proof.: Testing (1.1) by \(\mathbf{v}\in V_{h}^{0}\) yields \[-(\Delta\mathbf{u},\mathbf{v})+(\kappa^{-1}\mathbf{u},\mathbf{v})+(\nabla p, \mathbf{v})=(\mathbf{f},\mathbf{v}).\] Applying the definition of projection operators \(Q_{h}\), \(\mathbf{Q}_{h}\) and \(\mathbb{Q}_{h}\), the definition of the weak gradient \(\nabla_{w}\) and \(\tilde{\nabla}_{w}\) and (4.1), we get \[-(\Delta\mathbf{u},\mathbf{v})=\sum_{T\in\mathcal{T}_{h}}((\nabla\mathbf{u}, \nabla\mathbf{v})_{T}-\left\langle\nabla\mathbf{u}\cdot\mathbf{n},\mathbf{v} \right\rangle_{\partial T})\] \[=\sum_{T\in\mathcal{T}_{h}}((\mathbf{Q}_{h}\nabla\mathbf{u},\nabla \mathbf{v})_{T}-\left\langle\nabla\mathbf{u}\cdot\mathbf{n},\mathbf{v}\right\rangle _{\partial T})\] \[=\sum_{T\in\mathcal{T}_{h}}(-(\mathbf{v},\nabla\cdot(\mathbf{Q}_{ h}\nabla\mathbf{u}))_{T}+\left\langle\mathbf{v},(\mathbf{Q}_{h}\nabla\mathbf{u}- \nabla\mathbf{u})\cdot\mathbf{n}\right\rangle_{\partial T})\] \[=\sum_{T\in\mathcal{T}_{h}}((\nabla_{w}\mathbf{v},\mathbf{Q}_{h} \nabla\mathbf{u})_{T}+\left\langle\mathbf{v}-\left\{\mathbf{v}\right\},( \mathbf{Q}_{h}\nabla\mathbf{u}-\nabla\mathbf{u})\cdot\mathbf{n}\right\rangle_{ \partial T})\] \[=\sum_{T\in\mathcal{T}_{h}}(\nabla_{w}(Q_{h}\mathbf{u}),\nabla_{w }\mathbf{v})_{T}+l_{1}(\mathbf{u},\mathbf{v})-l_{2}(\mathbf{u},\mathbf{v}).\] According to the definition of \(Q_{h}\) and (4.2), we have \[(\nabla p,\mathbf{v}) =\sum_{T\in\mathcal{T}_{h}}(\nabla p,\mathbf{v})_{T}\] \[=\sum_{T\in\mathcal{T}_{h}}(Q_{h}(\nabla p),\mathbf{v})_{T}\] \[=\sum_{T\in\mathcal{T}_{h}}(\tilde{\nabla}_{w}(\mathbb{Q}_{h}p), \mathbf{v})_{T}+\sum_{T\in\mathcal{T}_{h}}\left\langle p-\mathbb{Q}_{h}p, \mathbf{v}\cdot\mathbf{n}\right\rangle_{\partial T}\] \[=\sum_{T\in\mathcal{T}_{h}}(\tilde{\nabla}_{w}(\mathbb{Q}_{h}p), \mathbf{v})_{T}+l_{3}(p,\mathbf{v}),\] and \[(\kappa^{-1}\mathbf{u},\mathbf{v})=(\mathbf{u},\kappa^{-1}\mathbf{v})=(Q_{h} \mathbf{u},\kappa^{-1}\mathbf{v})=(\kappa^{-1}Q_{h}\mathbf{u},\mathbf{v}).\] Using the definition of \(a(\cdot,\cdot)\) and \(b(\cdot,\cdot)\) and the above equations, we have \[a(Q_{h}\mathbf{u},\mathbf{v})+b(\mathbf{v},\mathbb{Q}_{h}p)+l_{1}(\mathbf{u}, \mathbf{v})-l_{2}(\mathbf{u},\mathbf{v})+l_{3}(p,\mathbf{v})=(\mathbf{f}, \mathbf{v}). 
\tag{4.5}\] Subtracting (2.8) from (4.5), we arrive at \[a(\mathbf{e}_{h},\mathbf{v})+b(\mathbf{v},\varepsilon_{h})+l_{1}(\mathbf{u},\mathbf{v})-l_{2}(\mathbf{u},\mathbf{v})+l_{3}(p,\mathbf{v})=0,\] which completes the proof of (4.3). Similarly, testing (1.2) by \(q\in W_{h}\) yields \[0=(\nabla\cdot\mathbf{u},q)=\sum_{T\in\mathcal{T}_{h}}(-(\nabla q,\mathbf{u})_{T}+\left\langle q,\mathbf{u}\cdot\mathbf{n}\right\rangle_{\partial T})\] \[=\sum_{T\in\mathcal{T}_{h}}(-(\nabla q,Q_{h}\mathbf{u})_{T}+\left\langle q,\mathbf{u}\cdot\mathbf{n}\right\rangle_{\partial T})\] \[=\sum_{T\in\mathcal{T}_{h}}((q,\nabla\cdot(Q_{h}\mathbf{u}))_{T}+\left\langle q,(\mathbf{u}-Q_{h}\mathbf{u})\cdot\mathbf{n}\right\rangle_{\partial T})\] \[=\sum_{T\in\mathcal{T}_{h}}(-(Q_{h}\mathbf{u},\tilde{\nabla}_{w}q)_{T}+\left\langle q-\{q\},(\mathbf{u}-Q_{h}\mathbf{u})\cdot\mathbf{n}\right\rangle_{\partial T}),\] that is, \(b(Q_{h}\mathbf{u},q)=l_{4}(\mathbf{u},q)\). Using (2.9), we have \[b(\mathbf{e}_{h},q)=\sum_{T\in\mathcal{T}_{h}}(Q_{h}\mathbf{u}-\mathbf{u}_{h},\tilde{\nabla}_{w}q)_{T}=l_{4}(\mathbf{u},q)-s(p_{h},q).\] Adding \(-s(\varepsilon_{h},q)=-s(\mathbb{Q}_{h}p,q)+s(p_{h},q)\) to the left-hand side and the corresponding terms to the right-hand side of this equation, it follows that \[\sum_{T\in\mathcal{T}_{h}}(Q_{h}\mathbf{u}-\mathbf{u}_{h},\tilde{\nabla}_{w}q)_{T}-\sum_{e\in\mathcal{E}_{h}^{0}}h\left\langle\llbracket\mathbb{Q}_{h}p-p_{h}\rrbracket,\llbracket q\rrbracket\right\rangle_{e}=l_{4}(\mathbf{u},q)-s(\mathbb{Q}_{h}p,q),\] which implies (4.4) and completes the proof of this lemma. ## 5 Error Estimates In this section, we aim to derive error estimates for the numerical scheme (2.8)-(2.9). **Theorem 5.1**.: _Let \((\mathbf{u};p)\in[H_{0}^{1}(\Omega)\cap H^{k+1}(\Omega)]^{d}\times(L_{0}^{2}(\Omega)\cap H^{k}(\Omega))\) be the exact solution of (1.1)-(1.3) and \((\mathbf{u}_{h};p_{h})\in V_{h}^{0}\times W_{h}\) be the solution of (2.8)-(2.9), respectively. Then there exists a constant \(C\) such that_ \[|\!|\!|\mathbf{e}_{h}|\!|\!|+\|\varepsilon_{h}\|_{h}\leq Ch^{k}(\|\mathbf{u}\|_{k+1}+\|p\|_{k}), \tag{5.1}\] \[\|\varepsilon_{h}\|_{1}\leq Ch^{k-1}(\|\mathbf{u}\|_{k+1}+\|p\|_{k}). \tag{5.2}\] Proof.: Let \(\mathbf{v}=\mathbf{e}_{h}\) in (4.3) and \(q=\varepsilon_{h}\) in (4.4); we have \[|\!|\!|\mathbf{e}_{h}|\!|\!|^{2}+\|\varepsilon_{h}\|_{h}^{2}=-l_{1}(\mathbf{u},\mathbf{e}_{h})+l_{2}(\mathbf{u},\mathbf{e}_{h})-l_{3}(p,\mathbf{e}_{h})-l_{4}(\mathbf{u},\varepsilon_{h})+s(\mathbb{Q}_{h}p,\varepsilon_{h}).\] From the estimates (A.7)-(A.11), we obtain \[|\!|\!|\mathbf{e}_{h}|\!|\!|^{2}+\|\varepsilon_{h}\|_{h}^{2}\leq Ch^{k}(\|\mathbf{u}\|_{k+1}+\|p\|_{k})(|\!|\!|\mathbf{e}_{h}|\!|\!|+\|\varepsilon_{h}\|_{h}),\] which implies (5.1). Taking \(q=\varepsilon_{h}\) in the inf-sup estimate (3.4), and using (4.3), Lemma 3.1 and (A.7)-(A.9), it follows that \[(C_{1}h\|\varepsilon_{h}\|_{1}-C_{2}\|\varepsilon_{h}\|_{h})|\!|\!|\mathbf{v}|\!|\!|\leq|b(\mathbf{v},\varepsilon_{h})|\leq|a(\mathbf{e}_{h},\mathbf{v})|+|l_{1}(\mathbf{u},\mathbf{v})|+|l_{2}(\mathbf{u},\mathbf{v})|+|l_{3}(p,\mathbf{v})|\leq|\!|\!|\mathbf{e}_{h}|\!|\!|\,|\!|\!|\mathbf{v}|\!|\!|+Ch^{k}(\|\mathbf{u}\|_{k+1}+\|p\|_{k})|\!|\!|\mathbf{v}|\!|\!|.\] Combining the estimates above, we arrive at \[h\|\varepsilon_{h}\|_{1}\leq|\!|\!|\mathbf{e}_{h}|\!|\!|+\|\varepsilon_{h}\|_{h}+Ch^{k}(\|\mathbf{u}\|_{k+1}+\|p\|_{k})\leq Ch^{k}(\|\mathbf{u}\|_{k+1}+\|p\|_{k}),\] which completes the proof.
In order to obtain the \(L^{2}\) error estimate, we consider the following dual problem: seek \((\mathbf{\psi};\xi)\in[H^{2}(\Omega)]^{d}\times H^{1}(\Omega)\) satisfying \[-\Delta\mathbf{\psi}+\kappa^{-1}\mathbf{\psi}+\nabla\xi =\mathbf{e}_{h}\quad\text{in }\Omega, \tag{5.3}\] \[\nabla\cdot\mathbf{\psi} =0\quad\text{in }\Omega,\] (5.4) \[\mathbf{\psi} =\mathbf{0}\quad\text{on }\partial\Omega. \tag{5.5}\] Assume that the following regularity condition holds, \[\|\mathbf{\psi}\|_{2}+\|\xi\|_{1}\leq C\|\mathbf{e}_{h}\|. \tag{5.6}\] **Theorem 5.2**.: _Let \((\mathbf{u};p)\in[H^{1}_{0}(\Omega)\cap H^{k+1}(\Omega)]^{d}\times(L^{2}_{0}(\Omega)\cap H^{k}(\Omega))\) be the exact solution of (1.1)-(1.3) and \((\mathbf{u}_{h};p_{h})\in V^{0}_{h}\times W_{h}\) be the solution of (2.8)-(2.9), respectively. Then there exists a constant \(C\) such that_ \[\|\mathbf{e}_{h}\|\leq Ch^{k+1}(\|\mathbf{u}\|_{k+1}+\|p\|_{k}). \tag{5.7}\] Proof.: Testing (5.3) by \(\mathbf{e}_{h}\) yields \[\|\mathbf{e}_{h}\|^{2}=(\mathbf{e}_{h},\mathbf{e}_{h})=-(\Delta\mathbf{\psi},\mathbf{e}_{h})+(\kappa^{-1}\mathbf{\psi},\mathbf{e}_{h})+(\nabla\xi,\mathbf{e}_{h}).\] Similar to the proof of Lemma 4.2, we have \[-(\Delta\mathbf{\psi},\mathbf{e}_{h})=(\nabla_{w}(Q_{h}\mathbf{\psi}),\nabla_{w}\mathbf{e}_{h})+l_{1}(\mathbf{\psi},\mathbf{e}_{h})-l_{2}(\mathbf{\psi},\mathbf{e}_{h}),\] and \[(\nabla\xi,\mathbf{e}_{h})=(\tilde{\nabla}_{w}(\mathbb{Q}_{h}\xi),\mathbf{e}_{h})+l_{3}(\xi,\mathbf{e}_{h}).\] Combining the definition of \(a(\cdot,\cdot)\) and \(b(\cdot,\cdot)\) gives \[\|\mathbf{e}_{h}\|^{2}=a(Q_{h}\mathbf{\psi},\mathbf{e}_{h})+b(\mathbf{e}_{h},\mathbb{Q}_{h}\xi)+l_{1}(\mathbf{\psi},\mathbf{e}_{h})-l_{2}(\mathbf{\psi},\mathbf{e}_{h})+l_{3}(\xi,\mathbf{e}_{h}). \tag{5.8}\] Similarly, testing (5.4) by \(\varepsilon_{h}\) yields \[b(Q_{h}\mathbf{\psi},\varepsilon_{h})=l_{4}(\mathbf{\psi},\varepsilon_{h}). \tag{5.9}\] Let \(\mathbf{v}=Q_{h}\mathbf{\psi}\) and \(q=\mathbb{Q}_{h}\xi\) in (4.3) and (4.4); we have \[a(\mathbf{e}_{h},Q_{h}\mathbf{\psi})+b(Q_{h}\mathbf{\psi},\varepsilon_{h}) =-l_{1}(\mathbf{u},Q_{h}\mathbf{\psi})+l_{2}(\mathbf{u},Q_{h}\mathbf{\psi})-l_{3}(p,Q_{h}\mathbf{\psi}), \tag{5.10}\] \[b(\mathbf{e}_{h},\mathbb{Q}_{h}\xi)-s(\varepsilon_{h},\mathbb{Q}_{h}\xi) =l_{4}(\mathbf{u},\mathbb{Q}_{h}\xi)-s(\mathbb{Q}_{h}p,\mathbb{Q}_{h}\xi). \tag{5.11}\] With (5.8)-(5.11), we obtain \[\|\mathbf{e}_{h}\|^{2}= -l_{1}(\mathbf{\psi},\mathbf{e}_{h})+l_{2}(\mathbf{\psi},\mathbf{e}_{h})-l_{3}(\xi,\mathbf{e}_{h})+l_{4}(\mathbf{\psi},\varepsilon_{h})-s(\mathbb{Q}_{h}\xi,\varepsilon_{h})\] \[+(l_{1}(\mathbf{u},Q_{h}\mathbf{\psi})-l_{2}(\mathbf{u},Q_{h}\mathbf{\psi})+l_{3}(p,Q_{h}\mathbf{\psi})-l_{4}(\mathbf{u},\mathbb{Q}_{h}\xi)+s(\mathbb{Q}_{h}p,\mathbb{Q}_{h}\xi)).\] Let \(\hat{\mathbf{Q}}_{h}\) be the projection operator from \([L^{2}(T)]^{d\times d}\) onto \([P_{1}(T)]^{d\times d}\). For any \(q\in[P_{1}(T)]^{d\times d}\), we have \[(\hat{\mathbf{Q}}_{h}\nabla\boldsymbol{\psi},q)_{T}=(\nabla\boldsymbol{\psi},q)_{T}=-(\boldsymbol{\psi},\nabla\cdot q)_{T}+\left\langle\boldsymbol{\psi},q\mathbf{n}\right\rangle_{\partial T}=(\nabla_{w}\boldsymbol{\psi},q)_{T}=(\hat{\mathbf{Q}}_{h}\nabla_{w}\boldsymbol{\psi},q)_{T},\] which implies \(\hat{\mathbf{Q}}_{h}\nabla\boldsymbol{\psi}\) is equal to \(\hat{\mathbf{Q}}_{h}\nabla_{w}\boldsymbol{\psi}\) on each cell \(T\).
According to the definition of \(\nabla_{w}\), the following equation holds true \[(\nabla_{w}(\mathbf{u}-Q_{h}\mathbf{u}),\hat{\mathbf{Q}}_{h}\nabla_{w}\boldsymbol{\psi})_{T}= -(\mathbf{u}-Q_{h}\mathbf{u},\nabla\cdot\hat{\mathbf{Q}}_{h}\nabla_{w}\boldsymbol{\psi})_{T}+\left\langle\left\{\mathbf{u}-Q_{h}\mathbf{u}\right\},\hat{\mathbf{Q}}_{h}\nabla_{w}\boldsymbol{\psi}\cdot\mathbf{n}\right\rangle_{\partial T}.\] Since \(k\) is an integer not less than one, from the definition of the projection operator \(Q_{h}\), we obtain \[(\nabla_{w}(\mathbf{u}-Q_{h}\mathbf{u}),\hat{\mathbf{Q}}_{h}\nabla\boldsymbol{\psi})_{T}=(\nabla_{w}(\mathbf{u}-Q_{h}\mathbf{u}),\hat{\mathbf{Q}}_{h}\nabla_{w}\boldsymbol{\psi})_{T}=0. \tag{5.12}\] Then, using the projection inequalities (A.1)-(A.2), and summing over all the cells, we arrive at \[\sum_{T\in\mathcal{T}_{h}}(\nabla_{w}(\mathbf{u}-Q_{h}\mathbf{u}),\nabla_{w}\boldsymbol{\psi})_{T}\] \[= \sum_{T\in\mathcal{T}_{h}}(\nabla_{w}(\mathbf{u}-Q_{h}\mathbf{u}),\nabla_{w}\boldsymbol{\psi}-\hat{\mathbf{Q}}_{h}\nabla\boldsymbol{\psi})_{T}\] \[= \sum_{T\in\mathcal{T}_{h}}(\nabla_{w}(\mathbf{u}-Q_{h}\mathbf{u}),\nabla\boldsymbol{\psi}-\hat{\mathbf{Q}}_{h}\nabla\boldsymbol{\psi})_{T}\] \[\leq \left(\sum_{T\in\mathcal{T}_{h}}\|\nabla_{w}(\mathbf{u}-Q_{h}\mathbf{u})\|_{T}^{2}\right)^{\frac{1}{2}}\left(\sum_{T\in\mathcal{T}_{h}}\|\nabla\boldsymbol{\psi}-\hat{\mathbf{Q}}_{h}\nabla\boldsymbol{\psi}\|_{T}^{2}\right)^{\frac{1}{2}}\] \[\leq Ch^{k+1}\|\mathbf{u}\|_{k+1}\|\boldsymbol{\psi}\|_{2}.\] For \(l_{1}(\mathbf{u},Q_{h}\boldsymbol{\psi})\), we get \[|l_{1}(\mathbf{u},Q_{h}\boldsymbol{\psi})|\] \[= \Bigg{|}\sum_{T\in\mathcal{T}_{h}}(\nabla_{w}(\mathbf{u}-Q_{h}\mathbf{u}),\nabla_{w}Q_{h}\boldsymbol{\psi})_{T}\Bigg{|}\] \[\leq \Bigg{|}\sum_{T\in\mathcal{T}_{h}}(\nabla_{w}(\mathbf{u}-Q_{h}\mathbf{u}),\nabla_{w}\boldsymbol{\psi})_{T}\Bigg{|}+\Bigg{|}\sum_{T\in\mathcal{T}_{h}}(\nabla_{w}(\mathbf{u}-Q_{h}\mathbf{u}),\nabla_{w}(Q_{h}\boldsymbol{\psi}-\boldsymbol{\psi}))_{T}\Bigg{|}\] \[\leq Ch^{k+1}\|\mathbf{u}\|_{k+1}\|\boldsymbol{\psi}\|_{2}.\] Similarly, we have \[|l_{2}(\mathbf{u},Q_{h}\boldsymbol{\psi})|=\Bigg{|}\sum_{T\in\mathcal{T}_{h}}\left\langle(\nabla\mathbf{u}-\mathbf{Q}_{h}\nabla\mathbf{u})\cdot\mathbf{n},Q_{h}\boldsymbol{\psi}-\{Q_{h}\boldsymbol{\psi}\}\right\rangle_{\partial T}\Bigg{|}\] \[= \Bigg{|}\sum_{T\in\mathcal{T}_{h}}\left\langle(\nabla\mathbf{u}-\mathbf{Q}_{h}\nabla\mathbf{u})\cdot\mathbf{n},Q_{h}\boldsymbol{\psi}-\boldsymbol{\psi}+\left\{\boldsymbol{\psi}-Q_{h}\boldsymbol{\psi}\right\}\right\rangle_{\partial T}\Bigg{|}\] \[\leq C\left(\sum_{T\in\mathcal{T}_{h}}h\|\nabla\mathbf{u}-\mathbf{Q}_{h}\nabla\mathbf{u}\|_{\partial T}^{2}\right)^{\frac{1}{2}}\left(\sum_{T\in\mathcal{T}_{h}}h^{-1}\|Q_{h}\boldsymbol{\psi}-\boldsymbol{\psi}\|_{\partial T}^{2}\right)^{\frac{1}{2}}\] \[\leq Ch^{k+1}\|\mathbf{u}\|_{k+1}\|\boldsymbol{\psi}\|_{2}.\] According to the projection inequality (A.3), we have \[|l_{3}(p,Q_{h}\boldsymbol{\psi})| =\Bigg{|}\sum_{T\in\mathcal{T}_{h}}\left\langle p-\mathbb{Q}_{h}p,Q_{h}\boldsymbol{\psi}\cdot\mathbf{n}\right\rangle_{\partial T}\Bigg{|}\] \[=\Bigg{|}\sum_{T\in\mathcal{T}_{h}}\left\langle p-\mathbb{Q}_{h}p,(Q_{h}\boldsymbol{\psi}-\boldsymbol{\psi})\cdot\mathbf{n}\right\rangle_{\partial T}\Bigg{|}\] \[\leq C\left(\sum_{T\in\mathcal{T}_{h}}h\|p-\mathbb{Q}_{h}p\|_{\partial T}^{2}\right)^{\frac{1}{2}}\left(\sum_{T\in\mathcal{T}_{h}}h^{-1}\|Q_{h}\boldsymbol{\psi}-\boldsymbol{\psi}\|_{e}^{2}\right)^{\frac{1}{2}}\] \[\leq Ch^{k+1}\|p\|_{k}\|\boldsymbol{\psi}\|_{2}.\] In the above derivation, we have used the following fact \[\sum_{T\in\mathcal{T}_{h}}\left\langle
p-\mathbb{Q}_{h}p,\boldsymbol{\psi} \cdot\mathbf{n}\right\rangle_{\partial T}=0.\] Similar to \(l_{2}(\mathbf{u},Q_{h}\boldsymbol{\psi})\), we get \[|l_{4}(\mathbf{u},\mathbb{Q}_{h}\xi)|\leq Ch^{k+1}\|\mathbf{u}\|_{k+1}\|\xi\|_ {1}.\] Using the definition of \(s(\cdot,\cdot)\) and the projection inequality (A.3), we have \[|s(\mathbb{Q}_{h}p,\mathbb{Q}_{h}\xi)| =\Bigg{|}\sum_{e\in\mathcal{E}_{h}^{0}}h\left\langle[\mathbb{Q}_{ h}p],[\mathbb{Q}_{h}\xi]\right\rangle_{e}\Bigg{|}\] \[=\Bigg{|}\sum_{e\in\mathcal{E}_{h}^{0}}h\left\langle[\mathbb{Q}_ {h}p-p],[\mathbb{Q}_{h}\xi-\xi]\right\rangle_{e}\Bigg{|}\] \[\leq C\left(\sum_{T\in\mathcal{T}_{h}}h\|p-\mathbb{Q}_{h}p\|_{ \partial T}^{2}\right)^{\frac{1}{2}}\left(\sum_{T\in\mathcal{T}_{h}}h\|\xi- \mathbb{Q}_{h}\xi\|_{\partial T}^{2}\right)^{\frac{1}{2}}\] \[\leq Ch^{k+1}\|p\|_{k}\|\xi\|_{1}.\] According to (A.7)-(A.11) and the above five estimates, it follows that \[\|\mathbf{e}_{h}\|^{2}\leq(Ch(\|\mathbf{e}_{h}\|+\|\varepsilon_{h}\|_{h})+Ch^ {k+1}(\|\mathbf{u}\|_{k+1}+\|p\|_{k}))(\|\boldsymbol{\psi}\|_{2}+\|\xi\|_{1}).\] With the regularity assumption (5.6) and \(H^{1}\) error estimates (5.1), we have \[\|\mathbf{e}_{h}\|^{2}\leq Ch^{k+1}(\|\mathbf{u}\|_{k+1}+\|p\|_{k})\|\mathbf{e }_{h}\|,\] which implies (5.7). ## 6 Numerical Experiments In this section, we present several examples in two dimensional domains to verify the stability and order of convergence established in Section 5. As before, let \((\mathbf{u};p)\) be the exact solution of (1.1)-(1.3), \((\mathbf{u}_{h};p_{h})\) be the solution of (2.8)-(2.9). Denote \(\mathbf{e}_{h}=Q_{h}\mathbf{u}-\mathbf{u}_{h}\) and \(\varepsilon_{h}=\mathbb{Q}_{h}p-p_{h}\). ### Example 1 Taking \(\Omega=(0,1)\times(0,1)\), the exact solution is given as follows: \[\mathbf{u}=\begin{pmatrix}sin(2\pi x)cos(2\pi y)\\ -cos(2\pi x)sin(2\pi y)\end{pmatrix},\quad p=x^{2}y^{2}-\frac{1}{9}.\] Consider the following permeability \[\kappa^{-1}=a(sin(2\pi x)+1.1),\] where \(a\) is a given positive constant. According to the above parameters, the momentum source term \(\mathbf{f}\) and the boundary value \(\mathbf{g}=\mathbf{u}|_{\partial\Omega}\) can be calculated. For uniform triangular partition, we choose \(k=1\), \(\mu=1\), \(0.01\) and \(a=1\), \(10^{4}\). Table 1-4 show the errors and orders of convergence accordingly. For uniform rectangular partition and polygonal partition, we choose \(k=2\), \(3\), \(\mu=10^{4}\) and \(a=1\). Table 5-8 show the errors and orders of convergence accordingly. As can be seen from the data in the tables, the numerical experiment results are consistent with the theoretical analysis, and both reach the optimal order of convergence. At the same time, the accuracy and stability of the numerical scheme is verified when the permeability \(\kappa\) is highly varying. 
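As a quick sanity check on the reported orders, note that the observed order of convergence between two successive uniform refinements (with \(h\) halved) is \(\log_{2}(e_{h}/e_{h/2})\). The snippet below, which is our own illustration and not part of any reference implementation, reproduces the order column attached to the first error norm of Table 1 (presumably the discrete \(H^{1}\)-type error):

```python
import math

# First error column of Table 1 (triangular partition, k=1, j=2, mu=1, a=1),
# on meshes h = 1/4, 1/8, 1/16, 1/32, 1/64, 1/128:
errors = [1.5213e+00, 6.5903e-01, 2.5867e-01, 1.0300e-01, 4.3307e-02, 1.9323e-02]

# Observed order between successive refinements: order = log2(e_h / e_{h/2}).
orders = [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]
print(orders)  # ~ [1.2069, 1.3493, 1.3285, 1.2499, 1.1643], matching the table
```

The same two-line computation applies to every error column in Tables 1-8, which is how the "order" entries can be checked against the theoretical rates of Theorems 5.1 and 5.2.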
The rest of examples in this section have the following setting: \[k=1,\quad\Omega=(0,1)\times(0,1),\quad\mu=0.01,\quad\mathbf{f}=\begin{pmatrix}0 \\ 0\end{pmatrix},\quad\mathbf{g}=\begin{pmatrix}1\\ 0\end{pmatrix}.\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(h\) & \(\|\mathbf{e}_{h}\|\) & order & \(\|\mathbf{e}_{h}\|\) & order & \(\|\varepsilon_{h}\|\) & order \\ \hline \(1/4\) & 1.5213e+00 & & 1.02630e-01 & & 4.3130e-01 & \\ \hline \(1/8\) & 6.5903e-01 & 1.2069 & 3.7485e-02 & 1.4531 & 3.1397e-01 & 0.4581 \\ \hline \(1/16\) & 2.5867e-01 & 1.3493 & 1.0962e-02 & 1.7738 & 1.3347e-01 & 1.2341 \\ \hline \(1/32\) & 1.0300e-01 & 1.3285 & 2.8870e-03 & 1.9249 & 4.7939e-02 & 1.4773 \\ \hline \(1/64\) & 4.3307e-02 & 1.2499 & 7.3057e-04 & 1.9825 & 1.8019e-02 & 1.4116 \\ \hline \(1/128\) & 1.9323e-02 & 1.1643 & 1.8291e-04 & 1.9979 & 7.5708e-03 & 1.2510 \\ \hline \end{tabular} \end{table} Table 1: Errors and orders of convergence on triangular partition as \(k=1\), \(j=2\), \(\mu=1\), \(a=1\) \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(h\) & \(\|\mathbf{e}_{h}\|\) & order & \(\|\mathbf{e}_{h}\|\) & order & \(\|\varepsilon_{h}\|\) & order \\ \hline 1/4 & 1.0369e+00 & & 6.7825e-02 & & 1.2495e-01 & \\ \hline 1/8 & 4.4656e-01 & 1.2154 & 2.0320e-02 & 1.7389 & 8.4347e-02 & 0.5670 \\ \hline 1/16 & 1.7634e-01 & 1.3405 & 7.3711e-03 & 1.4630 & 4.7227e-02 & 0.8367 \\ \hline 1/32 & 6.9966e-02 & 1.3336 & 2.4262e-03 & 1.6032 & 1.9975e-02 & 1.2414 \\ \hline 1/64 & 2.9008e-02 & 1.2702 & 6.8638e-04 & 1.8216 & 6.4779e-03 & 1.6246 \\ \hline 1/128 & 1.2675e-02 & 1.1945 & 1.7914e-04 & 1.9379 & 1.9103e-03 & 1.7618 \\ \hline \end{tabular} \end{table} Table 4: Errors and orders of convergence on triangular partition as \(k=1,\ j=2,\ \mu=0.01,\ a=10^{4}\) \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(h\) & \(\|\mathbf{e}_{h}\|\) & order & \(\|\mathbf{e}_{h}\|\) & order & \(\|\varepsilon_{h}\|\) & order \\ \hline 1/4 & 1.2093e+00 & & 2.2360e-01 & & 4.1153e-02 & \\ \hline 1/8 & 4.8033e-01 & 1.3321 & 7.8492-02 & 1.5103 & 1.8900e-02 & 1.1226 \\ \hline 1/16 & 1.8275e-01 & 1.3941 & 2.2513e-02 & 1.8018 & 8.5580e-03 & 1.1430 \\ \hline 1/32 & 7.1280e-02 & 1.3583 & 5.8547e-03 & 1.9431 & 3.9645e-03 & 1.1101 \\ \hline 1/64 & 2.9236e-02 & 1.2858 & 1.4752e-03 & 1.9886 & 1.8795e-03 & 1.0768 \\ \hline 1/128 & 1.2709e-02 & 1.2019 & 3.6902e-04 & 1.9992 & 9.1017e-04 & 1.0462 \\ \hline \end{tabular} \end{table} Table 2: Errors and orders of convergence on triangular partition as \(k=1,\ j=2,\ \mu=0.01,\ a=1\) \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(h\) & \(\|\mathbf{e}_{h}\|\) & order & \(\|\mathbf{e}_{h}\|\) & order & \(\|\varepsilon_{h}\|\) & order \\ \hline 1/4 & 1.2093e+00 & & 2.2360e-01 & & 4.1153e-02 & \\ \hline 1/8 & 4.8033e-01 & 1.3321 & 7.8492-02 & 1.5103 & 1.8900e-02 & 1.1226 \\ \hline 1/16 & 1.8275e-01 & 1.3941 & 2.2513e-02 & 1.8018 & 8.5580e-03 & 1.1430 \\ \hline 1/32 & 7.1280e-02 & 1.3583 & 5.8547e-03 & 1.9431 & 3.9645e-03 & 1.1101 \\ \hline 1/64 & 2.9236e-02 & 1.2858 & 1.4752e-03 & 1.9886 & 1.8795e-03 & 1.0768 \\ \hline 1/128 & 1.2709e-02 & 1.2019 & 3.6902e-04 & 1.9992 & 9.1017e-04 & 1.0462 \\ \hline \end{tabular} \end{table} Table 3: Errors and orders of convergence on triangular partition as \(k=1,\ j=2,\ \mu=1,\ a=10^{4}\) \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(h\) & \(\|\mathbf{e}_{h}\|\) & order & \(\|\mathbf{e}_{h}\|\) & order & \(\|\varepsilon_{h}\|\) & order \\ \hline 1/4 & 6.4024e-01 & & 5.3190e-03 & & 4.8987e-01 & \\ \hline 1/8 & 7.5994e-02 & 3.0746 & 
3.6500e-04 & 3.8652 & 4.0092e-02 & 3.6110 \\ \hline 1/16 & 7.9936e-03 & 3.2490 & 2.2508e-05 & 4.0194 & 2.7254e-03 & 3.8788 \\ \hline 1/32 & 8.2601e-04 & 3.2746 & 1.3259e-06 & 4.0854 & 1.8280e-04 & 3.8981 \\ \hline 1/64 & 8.8561e-05 & 3.2214 & 7.8119e-08 & 4.0852 & 1.1620e-05 & 3.9756 \\ \hline \end{tabular} \end{table} Table 6: Errors and orders of convergence on rectangular partition as \(k=3,\ j=6,\ \mu=1,\ a=10^{4}\) \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(h\) & \(\|\mathbf{e}_{h}\|\) & order & \(\|\mathbf{e}_{h}\|\) & order & \(\|\varepsilon_{h}\|\) & order \\ \hline 1/4 & 1.3518e+00 & & 1.5787e-02 & & 5.9253e-01 & \\ \hline 1/8 & 3.4537e-01 & 1.9687 & 2.3238e-03 & 2.7642 & 1.8185e-01 & 1.7041 \\ \hline 1/16 & 1.0534e-01 & 1.7130 & 3.8524e-04 & 2.5927 & 3.8610e-02 & 2.2357 \\ \hline 1/32 & 2.7395e-02 & 1.9431 & 5.2055e-05 & 2.8876 & 4.9117e-03 & 2.9747 \\ \hline 1/64 & 7.0084e-03 & 1.9668 & 6.6314e-06 & 2.9727 & 7.1932e-04 & 2.7715 \\ \hline \end{tabular} \end{table} Table 7: Errors and orders of convergence on polygonal partition as \(k=2,\ j=8,\ \mu=1,\ a=10^{4}\) \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(h\) & \(\|\mathbf{e}_{h}\|\) & order & \(\|\mathbf{e}_{h}\|\) & order & \(\|\varepsilon_{h}\|\) & order \\ \hline 1/4 & 6.2524e-01 & & 5.2453e-03 & & 4.0361e-01 & \\ \hline 1/8 & 7.9629e-02 & 2.9730 & 3.7527e-04 & 3.8050 & 3.5666e-02 & 3.5003 \\ \hline 1/16 & 8.0684e-03 & 3.3029 & 2.4399e-05 & 3.9431 & 2.1461e-03 & 4.0548 \\ \hline 1/32 & 8.2383e-04 & 3.2919 & 1.5631e-06 & 3.9644 & 1.5572e-04 & 3.7847 \\ \hline 1/64 & 8.9162e-05 & 3.2078 & 9.5552e-08 & 4.0319 & 1.1355e-05 & 3.7775 \\ \hline \end{tabular} \end{table} Table 8: Errors and orders of convergence on polygonal partition as \(k=3,\ j=9,\ \mu=1,\ a=10^{4}\) 6.2 Example 2 In this example, the permeability coefficient \(\kappa\) is selected as the piecewise constant function with highly varying. The profile of the permeability inverse is shown in Fig. 1(a). As we know, this example has no analytic solutions. In Fig. 1, a 128\(\times\)128 rectangular partition is used to solve this example. The profiles of the pressure and the two components of the velocity for CDG are plotted in Fig. 1(b)-1(d) ### Example 3 In this example, we choose a vuggy medium with the permeability coefficient \(\kappa\) highly varying. The profile of the permeability inverse is plotted in Fig. 2(a). For solving this example, a 128\(\times\)128 rectangular partition is used. And the pressure obtained by CDG method is present in Fig. 2(b). The velocity profiles are shown in Fig. 2(c)-2(d). Figure 1: Profiles of \(\kappa^{-1}\) and numerical solution in Ex. 2 Figure 2: Profiles of \(\kappa^{-1}\) and numerical solution in EX. 3 6.4 Example 4 The fluid flowing in a fibrous material can also be described by Brinkman equations. Fig. 3(a) shows the inverse of permeability in a common fibrous material. The parameters are the same as in the previous example. We can get the corresponding pressure in Fig. 3(b) and velocity in Fig. 3(c)-3(d) by CDG method. ## Appendix A Some Inequality Estimates In this Appendix, we provide inequalities for projection operators \(Q_{h}\), \(\mathbf{Q}_{h}\), \(\mathbb{Q}_{h}\) and inequality estimates used in the previous paper. **Lemma A.1**.: _[_22_]_ _Let \(\mathcal{T}_{h}\) be a shape regular partition of \(\Omega\), \(\mathbf{w}\in[H^{k+1}(\Omega)]^{d}\) and \(\rho\in H^{k}(\Omega)\). 
Then we have the following projection inequalities_ \[\sum_{T\in\mathcal{T}_{h}}\|\mathbf{w}-Q_{h}\mathbf{w}\|_{T}^{2}\leq Ch^{2(k+1)}\|\mathbf{w}\|_{k+1}^{2},\] (A.1) \[\sum_{T\in\mathcal{T}_{h}}\|\nabla\mathbf{w}-\mathbf{Q}_{h}(\nabla\mathbf{w})\|_{T}^{2} \leq Ch^{2k}\|\mathbf{w}\|_{k+1}^{2},\] (A.2) \[\sum_{T\in\mathcal{T}_{h}}\|\rho-\mathbb{Q}_{h}\rho\|_{T}^{2} \leq Ch^{2k}\|\rho\|_{k}^{2},\] (A.3) _where \(C\) is a constant independent of the mesh size \(h\) and the functions \(\mathbf{w}\) and \(\rho\)._

Figure 3: Profiles of \(\kappa^{-1}\) and numerical solution in EX. 4

Let \(T\) be a cell with \(e\) as an edge/face. For any function \(\rho\in H^{1}(T)\), the following trace inequality has been proved to be valid in [22]: \[\|\rho\|_{e}^{2}\leq C(h_{T}^{-1}\|\rho\|_{T}^{2}+h_{T}\|\nabla\rho\|_{T}^{2}).\] (A.4) Furthermore, if \(\rho\in P_{k}(T)\), we have the following inverse inequality \[\|\nabla\rho\|_{T}\leq Ch_{T}^{-1}\|\rho\|_{T}.\] (A.5) **Lemma A.2**.: _For any \(\mathbf{v}\in V_{h}\), the following inequality holds true_ \[\sum_{e\in\mathcal{E}_{h}}h^{-1}\|[\mathbf{v}]\|_{e}^{2}\leq C|\!|\!|\mathbf{v}|\!|\!|^{2},\] (A.6) _where \(C\) is a positive constant._ The proof of this lemma is given in Lemma 3.2 in [27]. **Lemma A.3**.: _For any \(\mathbf{w}\in[H^{k+1}(\Omega)]^{d}\), \(r\in H^{k}(\Omega)\), \(\mathbf{v}\in V_{h}\) and \(q\in W_{h}\), we have_ \[|l_{1}(\mathbf{w},\mathbf{v})| \leq Ch^{k}\|\mathbf{w}\|_{k+1}|\!|\!|\mathbf{v}|\!|\!|,\] (A.7) \[|l_{2}(\mathbf{w},\mathbf{v})| \leq Ch^{k}\|\mathbf{w}\|_{k+1}|\!|\!|\mathbf{v}|\!|\!|,\] (A.8) \[|l_{3}(r,\mathbf{v})| \leq Ch^{k}\|r\|_{k}|\!|\!|\mathbf{v}|\!|\!|,\] (A.9) \[|l_{4}(\mathbf{w},q)| \leq Ch^{k}\|\mathbf{w}\|_{k+1}\|q\|_{h},\] (A.10) \[|s(\mathbb{Q}_{h}r,q)| \leq Ch^{k}\|r\|_{k}\|q\|_{h}.\] (A.11) Proof.: Using the definition of \(\nabla_{w}\), the Cauchy-Schwarz inequality, the trace inequality (A.4) and the projection inequality (A.1), we obtain \[|l_{1}(\mathbf{w},\mathbf{v})| =\Bigg{|}\sum_{T\in\mathcal{T}_{h}}(\nabla_{w}(\mathbf{w}-Q_{h}\mathbf{w}),\nabla_{w}\mathbf{v})_{T}\Bigg{|}\] \[=\Bigg{|}\sum_{T\in\mathcal{T}_{h}}(-(\mathbf{w}-Q_{h}\mathbf{w},\nabla\cdot\nabla_{w}\mathbf{v})_{T}+\left\langle\left\{\mathbf{w}-Q_{h}\mathbf{w}\right\},\nabla_{w}\mathbf{v}\cdot\mathbf{n}\right\rangle_{\partial T})\Bigg{|}\] \[=\Bigg{|}\sum_{T\in\mathcal{T}_{h}}((\nabla(\mathbf{w}-Q_{h}\mathbf{w}),\nabla_{w}\mathbf{v})_{T}+\left\langle Q_{h}\mathbf{w}-\left\{Q_{h}\mathbf{w}\right\},\nabla_{w}\mathbf{v}\cdot\mathbf{n}\right\rangle_{\partial T})\Bigg{|}\] \[=\Bigg{|}\sum_{T\in\mathcal{T}_{h}}((\nabla(\mathbf{w}-Q_{h}\mathbf{w}),\nabla_{w}\mathbf{v})_{T}+\left\langle Q_{h}\mathbf{w}-\mathbf{w}+\left\{\mathbf{w}-Q_{h}\mathbf{w}\right\},\nabla_{w}\mathbf{v}\cdot\mathbf{n}\right\rangle_{\partial T})\Bigg{|}\] \[\leq\sum_{T\in\mathcal{T}_{h}}(\|\nabla(\mathbf{w}-Q_{h}\mathbf{w})\|_{T}\|\nabla_{w}\mathbf{v}\|_{T}+Ch^{-\frac{1}{2}}\|\mathbf{w}-Q_{h}\mathbf{w}\|_{\partial T}\|\nabla_{w}\mathbf{v}\|_{T})\] \[\leq\left(\sum_{T\in\mathcal{T}_{h}}(\|\nabla(\mathbf{w}-Q_{h}\mathbf{w})\|_{T}+Ch^{-1}\|\mathbf{w}-Q_{h}\mathbf{w}\|_{T})^{2}\right)^{\frac{1}{2}}\left(\sum_{T\in\mathcal{T}_{h}}\|\nabla_{w}\mathbf{v}\|_{T}^{2}\right)^{\frac{1}{2}}\] \[\leq Ch^{k}\|\mathbf{w}\|_{k+1}|\!|\!|\mathbf{v}|\!|\!|.\] Combining (A.6), (2.1) and the projection inequality (A.2) gives \[|l_{2}(\mathbf{w},\mathbf{v})| =\Bigg{|}\sum_{T\in\mathcal{T}_{h}}\left\langle(\nabla\mathbf{w}-\mathbf{Q}_{h}\nabla\mathbf{w})\cdot\mathbf{n},\mathbf{v}-\left\{\mathbf{v}
\right\}\right\rangle_{\partial T}\Bigg{|}\] \[\leq C\left(\sum_{T\in\mathcal{T}_{h}}h\|\nabla\mathbf{w}- \mathbf{Q}_{h}\nabla\mathbf{w}\|_{\partial T}^{2}\right)^{\frac{1}{2}}\left( \sum_{e\in\mathcal{E}_{h}}h^{-1}\|[\mathbf{v}]\|_{e}^{2}\right)^{\frac{1}{2}}\] \[\leq Ch^{k}\|\mathbf{w}\|_{k+1}\|\mathbf{v}\|.\] According to the projection inequality (A.3), we get \[|l_{3}(r,\mathbf{v})| =\Bigg{|}\sum_{T\in\mathcal{T}_{h}}\left\langle r-\mathbb{Q}_{h}r,\mathbf{v}\cdot\mathbf{n}\right\rangle_{\partial T}\Bigg{|}\] \[\leq C\left(\sum_{T\in\mathcal{T}_{h}}h\|r-\mathbb{Q}_{h}r\|_{ \partial T}^{2}\right)^{\frac{1}{2}}\left(\sum_{e\in\mathcal{E}_{h}}h^{-1}\|[ \mathbf{v}]\|_{e}^{2}\right)^{\frac{1}{2}}\] \[\leq Ch^{k}\|r\|_{k}\|\mathbf{v}\|.\] Similarly, with (2.2), then \[|l_{4}(\mathbf{w},q)| =\Bigg{|}\sum_{T\in\mathcal{T}_{h}}\left\langle(\mathbf{u}-Q_{h} \mathbf{u})\cdot\mathbf{n},q-\left\{q\right\}\right\rangle_{\partial T}\Bigg{|}\] \[\leq C\left(\sum_{T\in\mathcal{T}_{h}}h^{-1}\|\mathbf{w}-Q_{h} \mathbf{w}\|_{\partial T}^{2}\right)^{\frac{1}{2}}\left(\sum_{e\in\mathcal{E} _{h}^{0}}h\|[\![q]\!]\|_{e}^{2}\right)^{\frac{1}{2}}\] \[\leq Ch^{k}\|\mathbf{w}\|_{k+1}\|q\|_{h}.\] Using the definition of \(s(\cdot,\cdot)\), we get \[|s(\mathbb{Q}_{h}r,q)| =\Bigg{|}\sum_{e\in\mathcal{E}_{h}^{0}}h\left\langle[\mathbb{Q}_{ h}r],[\![q]\!]\right\rangle_{e}\Bigg{|}\] \[=\Bigg{|}\sum_{e\in\mathcal{E}_{h}^{0}}h\left\langle[\mathbb{Q}_{ h}r-r],[\![q]\!]\right\rangle_{e}\Bigg{|}\] \[\leq C\left(\sum_{T\in\mathcal{T}_{h}}h\|r-\mathbb{Q}_{h}r\|_{ \partial T}^{2}\right)^{\frac{1}{2}}\left(\sum_{e\in\mathcal{E}_{h}^{0}}h\|[q]\|_ {e}^{2}\right)^{\frac{1}{2}}\] \[\leq Ch^{k}\|r\|_{k}\|q\|_{h},\] which completes the proof. ## Acknowledgments This work was supported by National Natural Science Foundation of China (Grant No. 1901015, 12271208), and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University. We sincerely thank the anonymous reviewers for their insightful comments, which have helped improve the quality of this paper.
2302.10070
Metrization of powers of the Jensen-Shannon divergence
Metrization of statistical divergences is useful in both theoretical and practical aspects. Considering the fractional powers of statistical divergences is one way to obtain metrics associated with divergences. With this motivation, Os\'an, Bussandri, and Lamberti (2018) considered metrizations for the fractional powers of the Jensen-Shannon divergences between multinomial distributions and gave an open problem. In this short note, we give an affirmative answer to their conjecture. This method is also applicable to powers of f-divergences between the Cauchy distribution.
Kazuki Okamura
2023-02-20T16:21:55Z
http://arxiv.org/abs/2302.10070v2
# Metrization of powers of the Jensen-Shannon divergence ###### Abstract Metrization of statistical divergences is useful in both theoretical and practical aspects. Considering the fractional powers of statistical divergences is one way to obtain metrics associated with divergences. With this motivation, Osan, Bussandri, and Lamberti (2018) considered metrizations for the fractional powers of the Jensen-Shannon divergences between multinomial distributions and gave an open problem. In this short note, we give an affirmative answer to their conjecture. This method is also applicable to powers of \(f\)-divergences between the Cauchy distribution. Keywords:Jensen-Shannon divergence metrization multinomial distribution Cauchy distribution. ## 1 Introduction Dissimilarity between distributions is an important topic in probability and statistics and related fields such as machine learning, and has been investigated extensively. ([15]). Statistical divergences are canonical measures of dissimilarity. One of the most standard divergences is the Kullback-Leibler divergence (KLD). It is also known as relative entropy. It has many applications, both theoretical and practical. It naturally appears as a rate function of Sanov's theorem in large deviation theory, which describes a decay rate of rate events. In the context of information geometry, it is a generalization of squared distance, and for an exponential family, it satisfies a Pythagorean theorem. In general, however, the square root of the KLD is not a metric. In fact, it can be asymmetric and may not satisfy the triangle inequality. It is non-negative but can be infinite. Another widely-used divergence is the total variation distance (TVD). It is a bounded metric. However, the TVD between two singular distributions is always 2. It is often difficult to find explicit formulae and we have to rely on numerical computations. The Jensen-Shannon divergence (JSD) is a divergence defined by the Kullback-Leibler divergence. It is also known as the information radius or total divergence from the average. It is always well-defined, symmetric, and bounded ([8]). The JSD has been applied in many research disciplines and has statistical and information-theoretical interpretations. In statistical inference theory, the JSD gives both the lower and upper bounds of Bayes' probability error, and in information theory, the JSD can be related to mutual information ([6]). Some generalizations and related notions of the JSD have been considered ([10, 12]). Theoretically, metric spaces are one of the most fundamental mathematical frameworks. Considering the metrization of divergences is also important in practical applications, particularly for designing efficient algorithms in computational geometry ([2]). Indeed, the triangle inequality can be used to speed up proximity queries ([18]) and \(k\)-means clustering ([3]). In general, symmetric divergences themselves are not metrics, so it is natural to consider fractional powers (moments) in order to obtain metrics associated with the symmetric divergences. [7, 14] give sufficient conditions for powers of Csiszar's \(f\)-divergences to be metrics. It is well-known that the square root of the JSD is a metric ([4, 16, 1]). This metric as well as the TVD are canonical statistical metric distances. Recently, Osan, Bussandri, and Lamberti [13] regarded the JSD as a special case of a Csiszar divergence and gave a sufficient condition for the power of the JSD between multinomial distributions to be a metric. 
[13, Conjecture 1] states that the \(p\)-th power of the JSD between multinomial distributions is not a metric if \(p>1/2\). The square root of the JSD can be isometrically embedded into a Hilbert space ([5]). On a Hilbert space, the \(p\)-th power of the distance is not a distance for any \(p>1\). This observation indirectly supports the conjecture. However, the embedding is far from being surjective, so this fact cannot be used in the proof. To our knowledge, the problem has remained open. The aim of this paper is to prove the conjecture. We take a bare-handed approach, completely different from [13]. It is somewhat similar to the proof of [11, Theorem 28], but we cannot use the metric transformation employed there. Furthermore, we give an alternative proof of [13, Proposition 1], which is much simpler than the proof in [13]. Our bare-handed approach is also applicable to the Cauchy distribution. The Cauchy distribution is a canonical example of a heavy-tailed distribution. \(f\)-divergences between Cauchy distributions are always symmetric ([11, 17]), so it is natural to ask whether powers of \(f\)-divergences are metrics or not, for a general convex function \(f\). We show that the \(p\)-th power of the \(f\)-divergence between Cauchy distributions is not a metric if \(p>1/2\), for \(f\) in a large class of differentiable convex functions on \((0,\infty)\); this class contains the generators of the KLD and the JSD, but not that of the TVD. Our proof depends on an expression for \(f\)-divergences recently obtained by Verdu [17].

## 2 Framework and main result

Let \(X\) be a sample space with a sigma-algebra and \(\mu\) be a reference measure on \(X\). For discrete and continuous distributions, \(\mu\) is usually taken as the counting measure and the Lebesgue measure, respectively. Let \(P\) and \(Q\) be two probability measures on \(X\) with density functions \(p\) and \(q\) with respect to \(\mu\), respectively. The Kullback-Leibler divergence between \(P\) and \(Q\) is defined by \[D_{KL}(P:Q):=\int_{X}\log\left(\frac{p(x)}{q(x)}\right)p(x)\mu(dx).\] The Jensen-Shannon divergence between \(P\) and \(Q\) is defined by \[D_{JS}(P:Q):=\frac{1}{2}\left(D_{KL}\left(P:\frac{P+Q}{2}\right)+D_{KL}\left(Q:\frac{P+Q}{2}\right)\right).\] Both can also be defined via Radon-Nikodym derivatives. The Kullback-Leibler divergence is asymmetric in general, but the Jensen-Shannon divergence is always symmetric. We also remark that \(P\) and \(Q\) are both absolutely continuous with respect to \((P+Q)/2\), so the Jensen-Shannon divergence is always defined, whereas if \(P\) is not absolutely continuous with respect to \(Q\), then \(D_{KL}(P:Q)=+\infty\). Both are canonical examples of \(f\)-divergences. We let the entropy be \[H(P):=\int_{X}-p(x)\log p(x)\mu(dx).\] Then, \[D_{JS}(P:Q)=H\left(\frac{P+Q}{2}\right)-\frac{H(P)+H(Q)}{2}.\] We take logarithms to base 2 instead of natural logarithms. Then \(0\leq D_{JS}(P:Q)\leq 1\), and \(2D_{JS}(P:Q)\) is at most the total variation distance. Hereafter, we let \(X=\{1,2,\cdots,n\}\) and \(\mu\) be the counting measure. We let \(p_{i}=p(\{i\})\) for ease of notation. For \(n\geq 2\), let \(\mathcal{P}_{n}:=\{(p_{i})_{i}:\sum_{i}p_{i}=1,p_{i}>0\}\) and \(\overline{\mathcal{P}_{n}}:=\{(p_{i})_{i}:\sum_{i}p_{i}=1,p_{i}\geq 0\}\).
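In the discrete setting just fixed, both divergences can be computed directly from the probability vectors. The following minimal Python sketch (ours, not part of the note) implements \(D_{KL}\) and \(D_{JS}\) with base-2 logarithms, matching the conventions above; the function names are our own.

```python
import math

def kl_divergence(p, q):
    """D_KL(P:Q) = sum_i p_i * log2(p_i / q_i); +inf if P is not
    absolutely continuous with respect to Q."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0.0:
            continue            # 0 * log(0/q) = 0 by convention
        if qi == 0.0:
            return math.inf     # P not absolutely continuous w.r.t. Q
        total += pi * math.log2(pi / qi)
    return total

def js_divergence(p, q):
    """D_JS(P:Q) = (D_KL(P:M) + D_KL(Q:M)) / 2 with M = (P+Q)/2.
    Always defined, symmetric, and in [0, 1] with base-2 logs."""
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    return 0.5 * (kl_divergence(p, m) + kl_divergence(q, m))

if __name__ == "__main__":
    # Two distributions with disjoint supports attain the maximum value 1.
    print(js_divergence([1.0, 0.0], [0.0, 1.0]))  # -> 1.0
    print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # -> 0.0
```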
For \(P=(p_{i})_{i=1}^{n}\) and \(Q=(q_{i})_{i=1}^{n}\) in \(\mathcal{P}_{n}\), \[D_{KL}(P:Q)=\sum_{i=1}^{n}p_{i}\log_{2}\left(\frac{p_{i}}{q_{i}}\right),\] and \[D_{JS}(P:Q)=\sum_{i=1}^{n}-\frac{p_{i}}{2}\log_{2}\left(\frac{p_{i}+q_{i}}{2p_{i}}\right)-\frac{q_{i}}{2}\log_{2}\left(\frac{p_{i}+q_{i}}{2q_{i}}\right).\] We now recall the definition of a metric. Let \(S\) be a non-empty set. We call a function \(d:S\times S\to[0,\infty)\) a distance function if it satisfies the following three conditions: (1) (identity of indiscernibles) \(d(x,y)=0\) if and only if \(x=y\); (2) (symmetry) \(d(x,y)=d(y,x)\) for \(x,y\in S\); (3) (triangle inequality) \(d(x,z)\leq d(x,y)+d(y,z)\) for \(x,y,z\in S\). For such \(d\), we call the pair \((S,d)\) a metric space. This is a very fundamental notion in geometry. Our main result is the following.

Theorem 1: _Let \(\alpha>1/2\). Then \(D_{JS}(P:Q)^{\alpha}\) is not a metric on \(\mathcal{P}_{n}\)._

We prove this in the following section.

## 3 Proof

We first deal with the case \(n=2\). Let \(P_{t}:=(t,1-t)\), \(0\leq t\leq 1\). For \(t\in[0,1/2)\), let \(f(t):=D_{JS}(P_{1/2-t}:P_{1/2+t})\) and \(g(t):=D_{JS}(P_{1/2-t}:P_{1/2})=D_{JS}(P_{1/2}:P_{1/2+t})\). Let \(F(t):=f(t)^{\alpha}-2g(t)^{\alpha}\). It suffices to show that \(F(t)>0\) for some \(t\), since this exhibits a violation of the triangle inequality for the triple \(P_{1/2-t},P_{1/2},P_{1/2+t}\). Since \(F(0)=0\), by the mean-value theorem, it suffices to show that \(F^{\prime}(t)>0\) for every \(t\) sufficiently close to \(0\). Since \(F^{\prime}(t)=\alpha(f^{\prime}(t)f(t)^{\alpha-1}-2g^{\prime}(t)g(t)^{\alpha-1})\), it suffices to show that \[\left(\frac{g(t)}{f(t)}\right)^{1-\alpha}>2\frac{g^{\prime}(t)}{f^{\prime}(t)}, \tag{1}\] for every \(t\) sufficiently close to \(0\). We see that \(f(t)=1-H(P_{1/2+t})\) and \(g(t)=H(P_{(1+t)/2})-\frac{H(P_{1/2+t})+1}{2}\). Hence, \[f^{\prime}(t)=-\frac{d}{dt}H(P_{1/2+t})\text{ and }g^{\prime}(t)=\frac{d}{dt}H(P_{(1+t)/2})-\frac{1}{2}\frac{d}{dt}H(P_{1/2+t}).\] Since \(H(P_{s})=-s\log_{2}s-(1-s)\log_{2}(1-s)\), \(0\leq s\leq 1\), we see that \(\frac{d}{dt}H(P_{(1+t)/2})=-\frac{1}{2}\log\left(\frac{1+t}{1-t}\right)\) and \(\frac{d}{dt}H(P_{1/2+t})=-\log\left(\frac{1+2t}{1-2t}\right).\) Hence, \[2\frac{g^{\prime}(t)}{f^{\prime}(t)}=1-2\frac{\frac{d}{dt}H(P_{(1+t)/2})}{\frac{d}{dt}H(P_{1/2+t})}=1-\frac{\log\frac{1+t}{1-t}}{\log\frac{1+2t}{1-2t}}.\] Since \(\lim_{s\to 0}\frac{\log\left(\frac{1+s}{1-s}\right)}{2s}=1\), we see that \(\lim_{t\to 0}2\frac{g^{\prime}(t)}{f^{\prime}(t)}=\frac{1}{2}\). We recall that \(f(0)=g(0)=0\). Then, by l'Hospital's rule, \[\lim_{t\to 0}\frac{g(t)}{f(t)}=\lim_{t\to 0}\frac{g^{\prime}(t)}{f^{\prime}(t)}=\frac{1}{4}.\] Hence, \(\lim_{t\to 0}\left(\frac{g(t)}{f(t)}\right)^{1-\alpha}=\left(\frac{1}{4}\right)^{1-\alpha}.\) Since \(\alpha>1/2\), \(\left(\frac{1}{4}\right)^{1-\alpha}>\frac{1}{2}\). Thus we have Eq. (1). The proof of Theorem 1 is completed for \(n=2\). We now deal with the case \(n\geq 3\). We can naturally embed \(\overline{\mathcal{P}_{2}}\) into \(\overline{\mathcal{P}_{n}}\) by the map \((p_{1},p_{2})\mapsto(p_{1},p_{2},0,\cdots,0)\); under this embedding, the counterexample from the case \(n=2\) lies on the boundary of \(\overline{\mathcal{P}_{n}}\). Since \(P\mapsto H(P)\) is continuous with respect to \(P\) on \(\overline{\mathcal{P}_{n}}\), so is \(D_{JS}\), and perturbing the three points slightly into the interior preserves the strict inequality; hence we can find \(P_{1},P_{2}\) and \(P_{3}\) in \(\mathcal{P}_{n}\) such that \(D_{JS}(P_{1}:P_{3})^{\alpha}>D_{JS}(P_{1}:P_{2})^{\alpha}+D_{JS}(P_{2}:P_{3})^{\alpha}\). The proof of Theorem 1 is completed for \(n\geq 3\).

Remark 1: In general, \(x^{\beta}+y^{\beta}\leq(x+y)^{\beta}\) for \(x,y\geq 0\) and \(\beta\geq 1\), and if \(x^{\beta}+y^{\beta}=(x+y)^{\beta}\), then \(\beta=1\) or \(xy=0\).
Hence, if a function \(d:S\times S\rightarrow[0,\infty)\) is _not_ a metric on a set \(S\), then \(d^{\beta}\) is _not_ a metric on \(S\) for any \(\beta\geq 1\). Since it is known that \(D_{JS}(P:Q)^{1/2}\) is a metric, this gives an alternative proof of [13, Proposition 1], which is much easier than the proof given there: if \(D_{JS}^{\alpha}\) with \(0<\alpha\leq 1/2\) were not a metric, then applying the above observation with \(\beta=1/(2\alpha)\geq 1\) would imply that \(D_{JS}^{1/2}\) is not a metric, a contradiction.

## 4 Metrization of \(f\)-divergences between Cauchy distributions

For \(\mu\in\mathbb{R}\) and \(\sigma>0\), the density function of the univariate Cauchy distribution is given by \(p_{\mu,\sigma}(x):=\frac{\sigma}{\pi}\frac{1}{(x-\mu)^{2}+\sigma^{2}},\ x\in\mathbb{R}\). For a continuous function \(f\) on \((0,\infty)\), the \(f\)-divergence is defined by \[D_{f}(p_{\mu_{1},\sigma_{1}}:p_{\mu_{2},\sigma_{2}}):=\int_{\mathbb{R}}f\left(\frac{p_{\mu_{2},\sigma_{2}}(x)}{p_{\mu_{1},\sigma_{1}}(x)}\right)p_{\mu_{1},\sigma_{1}}(x)dx.\] The following result is crucial in our proof.

Theorem 4.1 ([17, Eq. (189) in Theorem 10]): _Let \(f\) be a continuous function on \((0,\infty)\). Then,_ \[D_{f}(p_{\mu_{1},\sigma_{1}}:p_{\mu_{2},\sigma_{2}})=\int_{0}^{\pi}f\left(\frac{1}{\zeta+\sqrt{\zeta^{2}-1}\cos\theta}\right)\frac{d\theta}{\pi},\] _where \(\zeta:=1+\frac{(\mu_{2}-\mu_{1})^{2}+(\sigma_{2}-\sigma_{1})^{2}}{2\sigma_{1}\sigma_{2}}\)._

In particular, every \(f\)-divergence is a function of \(\zeta\). This quantity is also known as the maximal invariant with respect to the action of the special linear group \(SL(2,\mathbb{R})\) on the complex parameter space \(\mathbb{H}:=\{\mu+\sigma i:\mu\in\mathbb{R},\sigma>0\}\), considered by McCullagh [9]. For example, we obtain the JSD if we let \(f(u)=f_{JS}(u):=(u\log\frac{2u}{1+u}-\log\frac{1+u}{2})/2\).

Theorem 4.2: _Let \(f\) be a convex function on \((0,\infty)\). Assume that \(f(1)=0\), \(f\) is in \(C^{2}\) on an open neighborhood of \(1\), and \(f^{\prime\prime}(1)>0\). Let \(\alpha>1/2\). Then \(D_{f}(p_{0,\sigma_{1}}:p_{0,\sigma_{2}})^{\alpha}\) is not a metric on \((0,\infty)\)._

This result applies to a large class of \(f\)-divergences, including the KLD and the JSD. However, the regularity assumption on \(f\) is crucial: the conclusion fails for the TVD, which is obtained by \(f(u)=f_{TV}(u):=|u-1|/2\).

Proof: We will show that \[D_{f}(p_{0,\sigma_{1}}:p_{0,\sigma_{2}})^{\alpha}+D_{f}(p_{0,\sigma_{2}}:p_{0,\sigma_{3}})^{\alpha}<D_{f}(p_{0,\sigma_{1}}:p_{0,\sigma_{3}})^{\alpha}\] where \((\sigma_{1},\sigma_{2},\sigma_{3})=(e^{-t},1,e^{t})\) for sufficiently small \(t>0\). For \(t>0\), let \[h(t):=\int_{0}^{\pi}f\left(\frac{1}{\cosh(t)+\sinh(t)\cos\theta}\right)\frac{d\theta}{\pi}.\] Then, by Theorem 4.1 (for these parameters, \(\zeta=\cosh(t)\) and \(\zeta=\cosh(2t)\), respectively), \(h(t)=D_{f}(p_{0,\sigma_{1}}:p_{0,\sigma_{2}})=D_{f}(p_{0,\sigma_{2}}:p_{0,\sigma_{3}})\) and \(h(2t)=D_{f}(p_{0,\sigma_{1}}:p_{0,\sigma_{3}})\). Hence, it suffices to show that \(2h(t)^{\alpha}<h(2t)^{\alpha}\) for some \(t>0\). We remark that \[\lim_{t\rightarrow+0}\cosh(t)+\sinh(t)\cos\theta=1 \tag{2}\] and \[\lim_{t\rightarrow+0}\sinh(t)+\cosh(t)\cos\theta=\cos\theta\in[-1,1]. \tag{3}\]
Under the assumptions on \(f\), we can exchange the derivative with respect to \(t\) and the integral with respect to \(\theta\), so we obtain that there exists a sufficiently small \(\delta_{0}>0\) such that for every \(0<t<\delta_{0}\), \[h^{\prime}(t)=\int_{0}^{\pi}-\frac{\sinh(t)+\cosh(t)\cos\theta}{(\cosh(t)+\sinh(t)\cos\theta)^{2}}f^{\prime}\left(\frac{1}{\cosh(t)+\sinh(t)\cos\theta}\right)\frac{d\theta}{\pi},\] and \[h^{\prime\prime}(t)=\int_{0}^{\pi}\frac{(\sinh(t)+\cosh(t)\cos\theta)^{2}}{(\cosh(t)+\sinh(t)\cos\theta)^{4}}f^{\prime\prime}\left(\frac{1}{\cosh(t)+\sinh(t)\cos\theta}\right)\frac{d\theta}{\pi}\] \[+\int_{0}^{\pi}\frac{2(\sinh(t)+\cosh(t)\cos\theta)^{2}-(\cosh(t)+\sinh(t)\cos\theta)^{2}}{(\cosh(t)+\sinh(t)\cos\theta)^{3}}f^{\prime}\left(\frac{1}{\cosh(t)+\sinh(t)\cos\theta}\right)\frac{d\theta}{\pi}.\] We recall that \(\int_{0}^{\pi}\cos\theta\,d\theta=\int_{0}^{\pi}\cos(2\theta)\,d\theta=0\) and \(\int_{0}^{\pi}\cos^{2}\theta\,d\theta=\pi/2\). By this, (2), and (3), we see that \[\lim_{t\rightarrow+0}h(t)=\lim_{t\rightarrow+0}h^{\prime}(t)=0\] and \[\lim_{t\rightarrow+0}h^{\prime\prime}(t)=f^{\prime\prime}(1)/2>0.\] By l'Hospital's rule, \[\lim_{t\rightarrow+0}\frac{h(2t)}{h(t)}=\lim_{t\rightarrow+0}\frac{2h^{\prime}(2t)}{h^{\prime}(t)}=\lim_{t\rightarrow+0}\frac{4h^{\prime\prime}(2t)}{h^{\prime\prime}(t)}=4.\] Since \(\alpha>1/2\), we have \((h(2t)/h(t))^{\alpha}\to 4^{\alpha}>2\), and hence \(2h(t)^{\alpha}<h(2t)^{\alpha}\) for sufficiently small \(t>0\). This completes the proof.

Remark 2: (i) In the case of the TVD, \(\lim_{t\rightarrow+0}h^{\prime}(t)=\frac{1}{\pi}>0\), and hence, by l'Hospital's rule, we have that \(\lim_{t\rightarrow+0}\frac{h(2t)}{h(t)}=2\). (ii) In [17, Theorem 10], it is assumed that \(f\) is convex and right-continuous at \(0\). However, for every \((\mu_{1},\sigma_{1})\) and \((\mu_{2},\sigma_{2})\), \(0<\inf_{x\in\mathbb{R}}\frac{p_{\mu_{2},\sigma_{2}}(x)}{p_{\mu_{1},\sigma_{1}}(x)}\leq\sup_{x\in\mathbb{R}}\frac{p_{\mu_{2},\sigma_{2}}(x)}{p_{\mu_{1},\sigma_{1}}(x)}<+\infty\), so we do not need to assume that \(f\) is defined at \(0\). This property does not hold for normal distributions.

###### Acknowledgements.

The author thanks Prof. Frank Nielsen for bringing the conjecture of Osan, Bussandri, and Lamberti to his attention.
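As a sanity check on both theorems (the proofs above are complete; this is only illustrative), the following hypothetical Python sketch verifies the two counterexamples numerically: the triple \(P_{1/2-t},P_{1/2},P_{1/2+t}\) for Theorem 1, and the scale family \((e^{-t},1,e^{t})\) with \(f=f_{JS}\) for Theorem 4.2, the latter via a midpoint-rule discretization of Verdu's integral (Theorem 4.1). All function names and step counts are our own choices.

```python
import math

def js_binary(s, t):
    """D_JS(P_s : P_t) for binary distributions P_u = (u, 1-u), base-2 logs."""
    def H(u):
        return 0.0 if u in (0.0, 1.0) else -u * math.log2(u) - (1 - u) * math.log2(1 - u)
    return H((s + t) / 2.0) - (H(s) + H(t)) / 2.0

alpha = 0.6                       # any exponent > 1/2 exhibits the violation
t = 1e-3
lhs = js_binary(0.5 - t, 0.5 + t) ** alpha
rhs = js_binary(0.5 - t, 0.5) ** alpha + js_binary(0.5, 0.5 + t) ** alpha
print(lhs > rhs)                  # True: the triangle inequality fails

# Cauchy case: h(t) from the proof of Theorem 4.2, with f = f_JS.
def f_js(u):
    return (u * math.log2(2 * u / (1 + u)) - math.log2((1 + u) / 2)) / 2.0

def h(t, steps=20000):
    # Midpoint rule for int_0^pi f(1/(cosh t + sinh t cos th)) dth / pi.
    acc = 0.0
    for k in range(steps):
        th = (k + 0.5) * math.pi / steps
        acc += f_js(1.0 / (math.cosh(t) + math.sinh(t) * math.cos(th)))
    return acc / steps

t = 1e-2
print(2 * h(t) ** alpha < h(2 * t) ** alpha)   # True, since h(2t)/h(t) -> 4
```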
2303.08270
Meta-Diagrams for 2-Parameter Persistence
We first introduce the notion of meta-rank for a 2-parameter persistence module, an invariant that captures the information behind images of morphisms between 1D slices of the module. We then define the meta-diagram of a 2-parameter persistence module to be the M\"{o}bius inversion of the meta-rank, resulting in a function that takes values from signed 1-parameter persistence modules. We show that the meta-rank and meta-diagram contain information equivalent to the rank invariant and the signed barcode. This equivalence leads to computational benefits, as we introduce an algorithm for computing the meta-rank and meta-diagram of a 2-parameter module $M$ indexed by a bifiltration of $n$ simplices in $O(n^3)$ time. This implies an improvement upon the existing algorithm for computing the signed barcode, which has $O(n^4)$ runtime. This also allows us to improve the existing upper bound on the number of rectangles in the rank decomposition of $M$ from $O(n^4)$ to $O(n^3)$. In addition, we define notions of erosion distance between meta-ranks and between meta-diagrams, and show that under these distances, meta-ranks and meta-diagrams are stable with respect to the interleaving distance. Lastly, the meta-diagram can be visualized in an intuitive fashion as a persistence diagram of diagrams, which generalizes the well-understood persistence diagram in the 1-parameter setting.
Nate Clause, Tamal K. Dey, Facundo Mémoli, Bei Wang
2023-03-14T23:16:45Z
http://arxiv.org/abs/2303.08270v1
# Meta-Diagrams for 2-Parameter Persistence

###### Abstract

We first introduce the notion of meta-rank for a 2-parameter persistence module, an invariant that captures the information behind images of morphisms between 1D slices of the module. We then define the meta-diagram of a 2-parameter persistence module to be the Mobius inversion of the meta-rank, resulting in a function that takes values from signed 1-parameter persistence modules. We show that the meta-rank and meta-diagram contain information equivalent to the rank invariant and the signed barcode. This equivalence leads to computational benefits, as we introduce an algorithm for computing the meta-rank and meta-diagram of a 2-parameter module \(M\) indexed by a bifiltration of \(n\) simplices in \(O(n^{3})\) time. This implies an improvement upon the existing algorithm for computing the signed barcode, which has \(O(n^{4})\) runtime. This also allows us to improve the existing upper bound on the number of rectangles in the rank decomposition of \(M\) from \(O(n^{4})\) to \(O(n^{3})\). In addition, we define notions of erosion distance between meta-ranks and between meta-diagrams, and show that under these distances, meta-ranks and meta-diagrams are stable with respect to the interleaving distance. Lastly, the meta-diagram can be visualized in an intuitive fashion as a persistence diagram of diagrams, which generalizes the well-understood persistence diagram in the 1-parameter setting.

Keywords: multiparameter persistence modules, persistent homology, Mobius inversion, barcodes, computational topology, topological data analysis. DOI: 10.4230/LIPIcs.SoCG.2023.275. Funding: Nate Clause: NC is partially supported by NSF CCF 1839356 and NSF DMS 1547357. Tamal K. Dey: TD is partially supported by NSF CCF 2049010. Facundo Memoli: FM is partially supported by BSF 2020124, NSF CCF 1740761, NSF CCF 1839358, and NSF IIS 1901360. Bei Wang: BW is partially supported by NSF IIS 2145499, NSF IIS 1910733, and DOE DE SC0021015.

## 1 Introduction

In the case of a 1-parameter persistence module, the persistence diagram (or barcode) captures its complete information up to isomorphism via a collection of intervals. The persistence diagram is represented as a multi-set of points in the plane, whose coordinates are the birth and death times of intervals, each of which encodes the lifetime of a topological feature. This compact representation of a persistence module enables its interpretability and facilitates its visualization. When moving to the multiparameter setting, the situation becomes much more complex, as a multiparameter persistence module may contain indecomposable pieces that are not entirely determined by intervals or do not admit a finite discrete description [10]. Such an increased complexity has led to the study of other invariants for multiparameter persistence modules. One of the first such invariants is the _rank invariant_ [10], which captures the information from the images of internal linear maps in a persistence module across all dimensions. Patel noticed that the persistence diagram in the 1-parameter setting is equivalent to the _Mobius inversion_ [25] of the rank function [24]. He then defined the generalized persistence diagram as the Mobius inversion of a function defined on a subset of intervals of \(\mathbb{R}\), denoted Dgm, with values in some abelian group. The idea of Mobius inversion has been extended in many directions. Kim and Memoli defined generalized persistence diagrams for modules on posets [12, 17].
Patel and McCleary extended Patel's generalized persistence diagrams to work for persistence modules indexed over finite lattices [22]. Botnan et al. [7] implicitly studied the Mobius inversion of the rank function for 2-parameter modules, leading to a notion of a diagram with domain all rectangles in \(\mathbb{Z}^{2}\). Asashiba et al. used Mobius inversion on a finite 2D grid to define interval-decomposable approximations [1]. Morozov and Patel [23] defined a generalized persistence diagram in the 2-parameter setting via Mobius inversion of the birth-death function and provided an algorithm for its computation. Their algorithm bears some similarity to ours: it utilizes the vineyards algorithm [13] to study a 2-parameter persistence module by slicing it over 1D paths. Our work also involves the idea of slicing a 2-parameter module. This idea of slicing appears in the fibered barcode [11, 20], which is equivalent to the rank function. To obtain insight into the structure of a 2-parameter persistence module \(M\), Lesnick and Wright [20] explored the set of 1-parameter modules obtained by restricting \(M\) onto all possible lines of non-negative slope. Buchet and Escolar [9] showed that any 1-parameter persistence module with finite support can be realized as a restriction of some indecomposable 2-parameter persistence module with finite support. Furthermore, Dey et al. [15] showed that certain zigzag (sub)modules of a 2-parameter module can be used to compute the generalized rank invariant, whose Mobius inversion is the generalized persistence diagram defined by Kim and Memoli. Our work considers the images between slices of a 2-parameter module, which is related to the work by Bauer and Schmal [4]. In [8], Botnan et al. introduced the notion of _rank decomposition_, which is equivalent to the generalized persistence diagram formed by Mobius inversion of the rank function, under some additional conditions. Botnan et al. further demonstrated that the process of converting a module to a rank decomposition is stable with respect to the matching distance [18]. Additionally, they introduced a visualization of this rank decomposition via a _signed barcode_, which highlights the diagonals of the rectangles appearing in the rank decomposition, along with their multiplicities. They demonstrated the value of the signed barcode on a 2-parameter persistence module generated by clustering a point cloud with a scale and a density parameter. Unlike the previous results that perform Mobius inversion over a higher-dimensional poset such as \(\mathbb{Z}^{2}\), our work involves Mobius inversion over a finite subcollection of intervals of \(\mathbb{R}\), which leads to a simpler inversion formula. In this work, we introduce the notion of _meta-rank_ for a 2-parameter persistence module, which is a map from Dgm to isomorphism classes of persistence modules. Instead of looking at images of linear maps between vector spaces (as with the usual rank invariant), the meta-rank considers images of the maps between 1-parameter persistence modules formed by slicing a 2-parameter persistence module along vertical and horizontal lines; see Figure 1. We then define the meta-diagram as the Mobius inversion of the meta-rank, giving a map from Dgm to isomorphism classes of signed persistence modules. This contrasts with Botnan et al.'s approach [8] of using Mobius inversion in 2D, as our Mobius inversion formula over Dgm is simpler and consists of fewer terms.
**Contributions.** The meta-rank and meta-diagram turn out to contain information equivalent to the rank invariant (Proposition 12) and the signed barcode (Proposition 27), respectively. Therefore, both the meta-rank and the meta-diagram can be regarded as these known invariants seen from a different perspective. However, this different viewpoint brings several advantages, listed below, that make the meta-rank and meta-diagram stand out in their own right:

1. The meta-rank and meta-diagram of a 2-parameter persistence module \(M\) induced by a bifiltration of a simplicial complex with \(n\) simplices can be computed in \(O(n^{3})\) time.
2. This immediately implies an improvement of the \(O(n^{4})\) algorithm of Botnan et al. [8] for computing signed barcodes.
3. The \(O(n^{3})\) time algorithm for computing the meta-rank and meta-diagram also implicitly improves the bound on the number of signed bars in the rank decomposition of \(M\) to \(O(n^{3})\) from the currently known bound of \(O(n^{4})\). This addresses the open question of whether the size of the signed barcode is tightly bounded by the number of rectangles.
4. The meta-diagram can be viewed as a persistence diagram of signed diagrams, as illustrated in Figure 2. Such an intuitive visualization generalizes the classic persistence diagram, a standard summary of persistent homology in topological data analysis (TDA).
5. The meta-diagram also generalizes the concept of a sliced barcode, well-known in TDA [20]. It assembles sliced bars over a set of lines while not forgetting the maps between slices induced by the module \(M\) being sliced.

Figure 1: Slicing a 2-parameter module \(M\) along vertical lines yields 1-parameter modules, such as \(M_{x}^{a},M_{x}^{b}\), and \(M_{x}^{c}\). There are morphisms between these 1-parameter modules induced by the internal morphisms of \(M\), and the meta-rank captures the information about these morphisms. For example, if \(M\) is defined as the direct sum of the two interval modules given by the two shaded rectangles, then the meta-rank of \(M\) on \([a,b)\) is the image of \(\phi_{x}(a\leq b)\), which has a barcode consisting of the red interval. The meta-rank of \(M\) on \([b,c)\) has a barcode consisting of the blue interval, and the meta-rank of \(M\) on \([a,c)\) is 0, as \(\phi_{x}(a\leq c)=\phi_{x}(b\leq c)\circ\phi_{x}(a\leq b)=0\).

Figure 2: A meta-diagram viewed as a persistence diagram of signed diagrams (red and blue mean positive and negative signs respectively).

## 2 Preliminaries

We regard a poset \((P,\leq)\) as a category, with objects the elements \(p\in P\), and a unique morphism \(p\to q\) if and only if \(p\leq q\); this is referred to as the _poset category_ for \((P,\leq)\). When it is clear from the context, we will denote the poset category by \(P\). Fix a field \(\mathbf{k}\), and assume all vector spaces have coefficients in \(\mathbf{k}\) throughout this paper. Let \(\mathbf{vec}\) denote the category of finite-dimensional vector spaces with linear maps between them. A _persistence module_, or _module_ for short, is a functor \(M:P\to\mathbf{vec}\). For any \(p\in P\), we denote the vector space \(M_{p}:=M(p)\), and for any \(p\leq q\in P\), we denote the linear map \(\varphi_{M}(p\leq q):=M(p\leq q)\). When \(M\) is apparent, we drop the subscript from \(\varphi_{M}\). We call \(P\) the _indexing poset_ for \(M\). We focus on the cases when the indexing poset is \(\mathbb{R}\) or \(\mathbb{R}^{2}\), equipped with the usual order and product order, respectively.
Definitions and statements we make follow analogously when the indexing poset is \(\mathbb{Z}\) or \(\mathbb{Z}^{2}\), which we will cover briefly in Section 5. If the indexing poset for \(M\) is \(P\subseteq\mathbb{R}\), then \(M\) is a _1-parameter (or 1D) persistence module_. If the indexing poset for \(M\) is \(P\subseteq\mathbb{R}^{2}\), with \(P\) not totally-ordered, then \(M\) is a _2-parameter (or 2D) persistence module_, or a _bimodule_ for short. Following [21], we require that persistence modules be _constructible_: A module \(M:\mathbb{R}\to\mathbf{vec}\) is _constructible_ if there exists a finite set \(S:=\{s_{1}<\ldots<s_{n}\}\subset\mathbb{R}\) such that: * For \(a<s_{1}\), \(M(a)=0\); * For \(s_{i}\leq a\leq b<s_{i+1}\), \(\varphi_{M}(a\leq b)\) is an isomorphism; * For \(s_{n}\leq a\leq b\), \(\varphi_{M}(a\leq b)\) is an isomorphism. Similarly, a bimodule \(M:\mathbb{R}^{2}\to\mathbf{vec}\) is _constructible_ if there exists a finite set \(S:=\{s_{1}<\ldots<s_{n}\}\subset\mathbb{R}\) such that: * If \(x<s_{1}\) or \(y<s_{1}\), then \(M((x,y))=0\), * For \(s_{i}\leq x_{1}\leq x_{2}<s_{i+1}\) and \(s_{j}\leq y_{1}\leq y_{2}<s_{j+1}\), \(\varphi_{M}((x_{1},y_{1})\leq(x_{2},y_{2}))\) is an isomorphism, * If \(x_{1}\geq s_{n}\) or \(y_{1}\geq s_{n}\) and \((x_{1},y_{1})\leq(x_{2},y_{2})\), then \(\varphi_{M}((x_{1},y_{1})\leq(x_{2},y_{2}))\) is an isomorphism. In either case, such a module is \(S\)-constructible. If a module is \(S\)-constructible, unless otherwise stated, assume \(S=\{s_{1}<\ldots<s_{n}\}\). If \(M\) is \(S\)-constructible, then \(M\) is \(S^{\prime}\)-constructible for any \(S^{\prime}\supseteq S\). For the rest of the paper, we assume any given persistence module is constructible. Of particular importance in the study of 1- and 2-parameter persistence modules are the notions of interval modules and interval decomposable modules. We state the definitions: For a poset \((P,\leq)\), an _interval_ of \(P\) is a non-empty subset \(I\subset P\) such that: * (convexity) If \(p,r\in I\) and \(q\in P\) with \(p\leq q\leq r\), then \(q\in I\). * (connectivity) For any \(p,q\in I\), there is a sequence \(p=r_{0},r_{1},\ldots,r_{n}=q\) of elements of \(I\), where for all \(0\leq i\leq n-1\), either \(r_{i}\geq r_{i+1}\) or \(r_{i}\leq r_{i+1}\). We denote the collection of all intervals of \(P\) as \(\mathbf{Int}(P)\). For \(I\in\mathbf{Int}(P)\), the _interval module_\(\mathbf{k}^{I}\) is the persistence module indexed over \(P\), with: \[\mathbf{k}^{I}_{p}=\begin{cases}\mathbf{k}&\text{if}\,p\in I\\ 0&\text{otherwise}\end{cases},\qquad\quad\varphi_{\mathbf{k}^{I}}(p\leq q)= \begin{cases}\operatorname{id}_{\mathbf{k}}&\text{if}\,p\leq q\in I\\ 0&\text{otherwise}\end{cases}.\] Given any \(M,N:P\to\mathbf{vec}\), the direct sum \(M\oplus N\) is defined point-wise at each \(p\in P\). We say a nontrivial \(M:P\to\mathbf{vec}\) is _decomposable_ if \(M\) is isomorphic to \(N_{1}\oplus N_{2}\) for some non-trivial \(N_{1},N_{2}:P\to\mathbf{vec}\), which we denote by \(M\cong N_{1}\oplus N_{2}\). Otherwise, \(M\) is _indecomposable_. Interval modules are indecomposable [6]. A persistence module \(M:P\to\mathbf{vec}\) is _interval decomposable_ if it is isomorphic to a direct sum of interval modules. That is, if there is a multiset of intervals \(\operatorname{barc}(M)\), such that: \[M\cong\bigoplus_{I\in\operatorname{barc}(M)}\mathbf{k}^{I}\] If this multiset exists, we call it the _barcode_ of \(M\). 
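Because a 1-parameter module is determined by its barcode, its invariants reduce to interval bookkeeping. As a small illustration (our sketch, not code from the paper), the rank of the internal map \(\varphi_{M}(a\leq b)\) of an interval-decomposable module equals the number of bars containing both \(a\) and \(b\); this counting reappears later in Proposition 12.

```python
from typing import List, Tuple

Bar = Tuple[float, float]   # a half-open interval [birth, death)

def rank(barcode: List[Bar], a: float, b: float) -> int:
    """Rank of the structure map M(a) -> M(b), a <= b, for a 1-parameter
    module with the given barcode: count bars [s, t) with s <= a and b < t."""
    assert a <= b
    return sum(1 for (s, t) in barcode if s <= a and b < t)

# Example: two bars; only [0, 3) survives from time 1 to time 2.
print(rank([(0.0, 3.0), (1.5, 2.0)], 1.0, 2.0))   # -> 1
```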
If it exists, \(\operatorname{barc}(M)\) is well-defined as a result of the Azumaya-Krull-Remak-Schmidt theorem [2]. Thus, in the case where \(M\) is interval decomposable, \(\operatorname{barc}(M)\) is a complete descriptor of the isomorphism type of \(M\). Of particular importance in this work are _right-open rectangles_, which are intervals \(R\subset\mathbb{R}^{2}\) of the form \(R=[a_{1},b_{1})\times[a_{2},b_{2})\). If \(M\) can be decomposed as a direct sum of interval modules \(\mathbf{k}^{R}\) with \(R\) a right-open rectangle, then we say \(M\) is _rectangle decomposable_. 1-parameter persistence modules are particularly nice, as they are always interval decomposable [14]. As a result, the barcode is a complete invariant for 1-parameter persistence modules. On the other hand, bimodules do not necessarily decompose in this way. In fact, there is no complete discrete descriptor for bimodules [10]. A number of invariants have been proposed to study bimodules. One of the first and most notable invariants is the _rank invariant_ [10], recalled in Definition 3.

Definition 3 ([10]): For \(P\) a poset, define \(\mathbf{D}(P):=\{(a,b)\in P\times P\,|\,a\leq b\}\). For \(M:P\to\mathbf{vec}\), the _rank invariant_ of \(M\), \(\operatorname{rank}_{M}:\mathbf{D}(P)\to\mathbb{Z}_{\geq 0}\), is defined point-wise as: \[\operatorname{rank}_{M}(a,b):=\operatorname{rank}(\varphi_{M}(a\leq b)).\]

For a bimodule, the rank invariant is inherently a 4D object, making it difficult to visualize directly. RIVET [20] visualizes the rank invariant indirectly through the fibered barcode. In [8], Botnan et al. defined the _signed barcode_ based on the notion of a _rank decomposition_:

Definition ([8]): Let \(M:\mathbb{R}^{n}\to\mathbf{vec}\) be a persistence module with rank function \(\operatorname{rank}_{M}\). Suppose \(\mathscr{R},\mathscr{S}\) are multisets of intervals from \(\mathbb{R}^{n}\). Define \(\mathbf{k}_{\mathscr{R}}:=\oplus_{I\in\mathscr{R}}\mathbf{k}^{I}\), and similarly \(\mathbf{k}_{\mathscr{S}}\). Then \((\mathscr{R},\mathscr{S})\) is a _rank decomposition_ for \(\operatorname{rank}_{M}\) if, as integral functions, \[\operatorname{rank}_{M}=\operatorname{rank}_{\mathbf{k}_{\mathscr{R}}}-\operatorname{rank}_{\mathbf{k}_{\mathscr{S}}}.\] If \(\mathscr{R},\mathscr{S}\) consist of right-open rectangles, then the pair is a rank decomposition by rectangles.

We have: Theorem ([8, Theorem 3.3]): Every finitely presented \(M:\mathbb{R}^{2}\to\mathbf{vec}\) admits a unique minimal rank decomposition by rectangles.

Here minimality comes in the sense that \(\mathscr{R}\cap\mathscr{S}=\emptyset\). The signed barcode then visualizes the rank function in \(\mathbb{R}^{2}\) by showing the diagonals of the rectangles in \(\mathscr{R}\) and \(\mathscr{S}\).

## 3 Meta-Rank

In this section, we introduce the _meta-rank_. While the rank invariant captures the information of images between pairs of vector spaces in a persistence module, the meta-rank captures the information of images between two 1-parameter persistence modules obtained via slicing a bimodule. We describe the results for modules over \(\mathbb{R}^{2}\) and \(\mathbb{R}\), but they hold in direct analogue in the \(\mathbb{Z}^{2}\) and \(\mathbb{Z}\) setting, which is briefly covered in Section 5. For omitted proofs, see Appendix A. We begin with some preliminary definitions: Let \(M:\mathbb{R}^{2}\to\mathbf{vec}\) be a bimodule.
For \(s\in\mathbb{R}\), define the vertical slice \(M^{s}_{x}:\mathbb{R}\to\mathbf{vec}\) point-wise as \(M^{s}_{x}(a):=M(s,a)\), and with morphisms from \(a\) to \(b\) as \(\varphi^{s}_{x}(a\leq b):=\varphi((s,a)\leq(s,b))\). Analogously, define the horizontal slice \(M^{s}_{y}:\mathbb{R}\to\mathbf{vec}\) by setting \(M^{s}_{y}(a):=M(a,s)\) and \(\varphi^{s}_{y}(a\leq b):=\varphi((a,s)\leq(b,s))\) for all \(a\leq b\in\mathbb{R}\). Define a morphism of 1-parameter persistence modules \(\phi_{x}(s\leq t):M^{s}_{x}\to M^{t}_{x}\) for \(s\leq t\in\mathbb{R}\) by \(\phi_{x}(s\leq t)(a):=\varphi((s,a)\leq(t,a))\). Analogously, define \(\phi_{y}(s\leq t):M^{s}_{y}\to M^{t}_{y}\) for \(s\leq t\in\mathbb{R}\) by \(\phi_{y}(s\leq t)(a):=\varphi((a,s)\leq(a,t))\). Denote by \(\mathbf{Pvec}\) the isomorphism classes of persistence modules over \(\mathbb{R}\). Each element of \(\mathbf{Pvec}\) can be uniquely represented by its barcode, which is what we do in practice. We recall a definition from [24], which serves as the domain for the meta-rank:

Definition ([24]): Define \(\mathrm{Dgm}\) as the poset of all half-open intervals \([p,q)\subset\mathbb{R}\) for \(p<q\), and all half-infinite intervals \([p,\infty)\subset\mathbb{R}\). The poset relation is inclusion.

Suppose \(M:\mathbb{R}^{2}\to\mathbf{vec}\) is \(S\)-constructible. Define the _horizontal meta-rank_ \(\mathbf{mrk}_{M,x}:\mathrm{Dgm}\to\mathbf{Pvec}\) as follows:

* For \(I=[s,s_{i})\) with \(s_{i}\in S\), \(\mathbf{mrk}_{M,x}(I):=[\mathrm{im}(\phi_{x}(s\leq s_{i}-\delta))]\), for some \(\delta>0\) such that \(s_{i}-\delta\geq s\) and \(s_{i}-\delta\geq s_{i-1}\).
* For \(I=[s,\infty)\), \(\mathbf{mrk}_{M,x}(I):=[\mathrm{im}(\phi_{x}(s\leq s_{n}))]\).
* For all other \(I=[s,t)\), \(\mathbf{mrk}_{M,x}(I):=[\mathrm{im}(\phi_{x}(s\leq t))]\).

Analogously, define the _vertical meta-rank_ \(\mathbf{mrk}_{M,y}:\mathrm{Dgm}\to\mathbf{Pvec}\) by replacing each instance of \(x\) above with \(y\). The results in this paper are stated in terms of the horizontal meta-rank, but hold analogously for the vertical meta-rank. To simplify notation, we henceforth denote \(\mathbf{mrk}_{M,x}\) as \(\mathbf{mrk}_{M}\). When there is no confusion, we drop the subscript from \(\mathbf{mrk}_{M}\).

Example 9: As illustrated in Figure 3, let \(I\) be the single gray interval and define the bimodule \(M:=\mathbf{k}^{I}\). The barcodes for the 1-parameter modules \(M^{a}_{x},M^{b}_{x}\), and \(M^{c}_{x}\) are shown in red next to their corresponding vertical slices. The barcode for \(\mathbf{mrk}_{M}([a,b))\) consists of the blue interval, which is the overlap of the bars in \(M_{x}^{a}\) and \(M_{x}^{b}\), \(\mathrm{barc}(M_{x}^{a})\cap\mathrm{barc}(M_{x}^{b})\). Similarly, \(\mathbf{mrk}_{M}([b,c))\) has a barcode consisting of the purple interval, which is the overlap of the bars in \(M_{x}^{b}\) and \(M_{x}^{c}\). As the bars in the barcodes for \(M_{x}^{a}\) and \(M_{x}^{c}\) have no overlap, \(\mathrm{im}(\phi_{x}(a\leq c))=0\), and therefore \(\mathbf{mrk}_{M}([a,c))=0\).

Figure 3: An illustration of \(M\) and its barcode for some values of \(\mathbf{mrk}_{M}\) in Example 9.

In general, \(\mathbf{mrk}_{x}\neq\mathbf{mrk}_{y}\). For example, consider the right-open rectangle \(R\) with lower-left corner the origin and upper-right corner \((1,2)\), as in Figure 4. Let \(M:=\mathbf{k}^{R}\). As illustrated, \(\mathbf{mrk}_{M,x}([0,1))=[0,2)\neq[0,1)=\mathbf{mrk}_{M,y}([0,1))\).
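For a rectangle decomposable bimodule, the meta-rank can be tabulated directly from the rectangles, as Example 9 and the example above suggest: a summand \(\mathbf{k}^{R}\) with \(R=[x_{1},x_{2})\times[y_{1},y_{2})\) contributes the bar \([y_{1},y_{2})\) to \(\mathbf{mrk}_{M,x}([s,t))\) precisely when the slice positions \(s\) and \(t\) lie in the \(x\)-extent \([x_{1},x_{2})\), up to the \(\delta\)-convention at right endpoints. A hypothetical sketch (our code, not the authors'):

```python
from typing import List, Tuple

Rect = Tuple[float, float, float, float]   # (x1, x2, y1, y2) for [x1,x2) x [y1,y2)

def meta_rank_x(rects: List[Rect], s: float, t: float) -> List[Tuple[float, float]]:
    """Barcode of mrk_{M,x}([s,t)) for M a direct sum of rectangle modules:
    a rectangle contributes its y-extent [y1, y2) iff its x-extent contains
    [s, t) (right endpoints included via the delta-convention)."""
    return [(y1, y2) for (x1, x2, y1, y2) in rects if x1 <= s and t <= x2]

def meta_rank_y(rects: List[Rect], s: float, t: float) -> List[Tuple[float, float]]:
    # Transpose the rectangles and reuse the horizontal version.
    return meta_rank_x([(y1, y2, x1, x2) for (x1, x2, y1, y2) in rects], s, t)

# The rectangle of Figure 4: [0,1) x [0,2).
R = [(0.0, 1.0, 0.0, 2.0)]
print(meta_rank_x(R, 0.0, 1.0))   # [(0.0, 2.0)]  -- mrk_x([0,1)) = [0,2)
print(meta_rank_y(R, 0.0, 1.0))   # [(0.0, 1.0)]  -- mrk_y([0,1)) = [0,1)
```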
The following proposition allows us to compute the meta-rank of a bimodule via the meta-ranks of its indecomposable summands: For \(M,N:\mathbb{R}^{2}\to\mathbf{vec}\), we have: \[\mathbf{mrk}_{M}\oplus\mathbf{mrk}_{N}=\mathbf{mrk}_{M\oplus N},\] where \(\mathbf{mrk}_{M}\oplus\mathbf{mrk}_{N}:\mathrm{Dgm}\to\mathbf{Pvec}\) is defined as: \[(\mathbf{mrk}_{M}\oplus\mathbf{mrk}_{N})([s,t)):=[\mathbf{mrk}_{M}([s,t))\oplus\mathbf{mrk}_{N}([s,t))].\] For a finite \(S\subseteq\mathbb{R}\), let \(\overline{S}:=S\cup\{\infty\}\). Define \(\overline{S}_{>}:\mathbb{R}\cup\{\infty\}\to\overline{S}\) as \(\overline{S}_{>}(t):=\min\{s\in\overline{S}\,|\,s>t\}\). For \(M\in\mathbf{Pvec}\) and \([b,d)\in\mathrm{Dgm}\), let \(\#[b,d)\in M\) denote the multiplicity of \([b,d)\) in \(\mathrm{barc}(M)\). The rank invariant and the meta-rank contain equivalent information:

Proposition 12: For \(M:\mathbb{R}^{2}\to\mathbf{vec}\), one can compute \(\operatorname{rank}_{M}\) from \(\mathbf{mrk}_{M}\), and one can compute \(\mathbf{mrk}_{M}\) from \(\operatorname{rank}_{M}\). In particular, given \((s,y)\leq(t,y^{\prime})\in\mathbb{R}^{2}\), \[\mathrm{rank}_{M}((s,y),(t,y^{\prime}))=\#[b_{i},d_{i})\in\mathbf{mrk}_{M}([s,\overline{S}_{>}(t)))\text{ s.t. }b_{i}\leq y\leq y^{\prime}<d_{i}.\]

That is, the rank is the number of intervals in \(\mathrm{barc}(\mathbf{mrk}_{M}([s,\overline{S}_{>}(t))))\) containing \([y,y^{\prime}]\). The reason for needing \(\overline{S}_{>}(t)\) for the right endpoint is that if \(t\in S\), then \(\mathbf{mrk}_{M}([s,t))\) does not capture the information of the image of \(\phi_{x}(s\leq t)\), only the image of \(\phi_{x}(s\leq t-\delta)\). Finally, we discuss the stability of the meta-rank. The meta-rank is stable with respect to a notion of erosion distance, based on that of Patel [24]. We first introduce the truncated barcode:

Definition 13: For \(\epsilon\geq 0\) and \(I=[s,t)\in\mathrm{Dgm}\), define \(I[\epsilon:]:=[s+\epsilon,t)\). For \(M:\mathbb{R}\to\mathbf{vec}\), define \(\mathrm{barc}_{\epsilon}(M):=\{I[\epsilon:]\,|\,I\in\mathrm{barc}(M)\}\). If \(I=[s,t)\in\mathrm{barc}(M)\) has \(t-s\leq\epsilon\), then \(I\) has no corresponding interval in \(\mathrm{barc}_{\epsilon}(M)\).

**Definition 14**.: _For \(M,N:\mathbb{R}\to\mathbf{vec}\), we say \(M\preceq_{\epsilon}N\) if there exists an injective function on barcodes \(\iota:\mathrm{barc}_{\epsilon}(M)\hookrightarrow\mathrm{barc}(N)\) such that for all \(J\in\mathrm{barc}_{\epsilon}(M)\), \(J\subseteq\iota(J)\)._

For \(\epsilon\geq 0\) and \(M\in\mathbf{Pvec}\), let \(M^{\epsilon}\) refer to the \(\epsilon\)-shift of \(M\) [19], with \(M^{\epsilon}(a):=M(a+\epsilon)\) and \(\varphi_{M^{\epsilon}}(a\leq b):=\varphi_{M}(a+\epsilon\leq b+\epsilon)\). For \(I=[s,t)\in\mathrm{Dgm}\) and \(a,b\in\mathbb{R}\), let \(I^{b}_{a}:=[s+a,t+b)\), with the convention \(\infty+b:=\infty\) for any \(b\in\mathbb{R}\). We now define the erosion distance:

**Definition 15**.: _Let \(M,N:\mathbb{R}^{2}\to\mathbf{vec}\)._
Define the erosion distance as follows:_ \[\mathrm{d}_{\mathrm{E}}(\mathbf{mrk}_{M},\mathbf{mrk}_{N}):=\inf\{\epsilon>0\,|\,\forall I\in\mathrm{Dgm},\ \mathbf{mrk}_{M}(I^{\epsilon}_{-\epsilon})^{\epsilon}\preceq_{2\epsilon}\mathbf{mrk}_{N}(I)\,\mathrm{and}\] \[\mathbf{mrk}_{N}(I^{\epsilon}_{-\epsilon})^{\epsilon}\preceq_{2\epsilon}\mathbf{mrk}_{M}(I)\};\] _if the set we are infimizing over is empty, we set \(\mathrm{d}_{\mathrm{E}}(\mathbf{mrk}_{M},\mathbf{mrk}_{N}):=\infty\)._

**Proposition 16**.: \(\mathrm{d}_{\mathrm{E}}\) _as defined in Definition 15 is an extended pseudometric on the collection of meta-ranks of constructible bimodules \(M:\mathbb{R}^{2}\to\mathbf{vec}\)._

We compare bimodules \(M\) and \(N\) using the multiparameter interleaving distance [19]. The \(\epsilon\)-shift and the truncation of the barcode in Definition 13 are necessary for stability, due to the interleaving distance being based on diagonal shifts of bimodules, whereas the meta-rank is based on horizontal maps instead of diagonal ones. We have the following:

**Theorem 17**.: _For constructible \(M,N:\mathbb{R}^{2}\to\mathbf{vec}\), we have:_ \[\mathrm{d}_{\mathrm{E}}(\mathbf{mrk}_{M},\mathbf{mrk}_{N})\leq\mathrm{d}_{\mathrm{I}}(M,N)\]

## 4 Meta-Diagram

We use the Mobius inversion formula from Patel [24] on the meta-rank function to get a _meta-diagram_. This formula involves negative signs, so we need a notion of signed persistence modules. Our ideas are inspired by the work of Betthauser et al. [5], where we consider breaking a function into positive and negative parts. For omitted proofs, see Appendix B. A _signed 1-parameter persistence module_ is an ordered pair \((M,N)\), where \(M,N:\mathbb{R}\to\mathbf{vec}\) are 1-parameter persistence modules. \(M\) is the _positively signed_ module, and \(N\) is the _negatively signed_ module. View \(\mathbf{Pvec}\) as a commutative monoid with operation \(\oplus\) given by \([M]\oplus[N]:=[M\oplus N]\), and identity element \([0]\). Define \(\mathbf{SPvec}\) to be the Grothendieck group of \(\mathbf{Pvec}\). Each element of \(\mathbf{SPvec}\) is an isomorphism class of ordered pairs \([([M^{+}],[M^{-}])]\). From the completeness of barcodes for 1-parameter persistence modules, we assume without loss of generality that each element \(M^{+}\), \(M^{-}\) is given by \(*:=\oplus_{I\in\mathrm{barc}(*)}\mathbf{k}^{I}\) and drop the internal equivalence class notation to write an element of \(\mathbf{SPvec}\) as \([(M^{+},M^{-})]\). Proposition 20 allows us to make a canonical choice of representative for each element of \(\mathbf{SPvec}\):

**Proposition 20**.: _Let \(A\in\mathbf{SPvec}\). Then there is a unique representative \(A=[(M^{+},M^{-})]\) with \(\mathrm{barc}(M^{+})\cap\mathrm{barc}(M^{-})=\emptyset\)._

As a result of Proposition 20, when convenient, we represent an element of \(\mathbf{SPvec}\) uniquely by the sum of barcodes of this special representative, as in the following example:

**Example 21**.: Consider \([(N^{+},N^{-})]\in\mathbf{SPvec}\) where \(\text{barc}(N^{+})=\{[0,4],[1,3],[2,4]\}\) and \(\text{barc}(N^{-})=\{[1,3],[3,4]\}\). By Proposition 20, \([(N^{+},N^{-})]\) is uniquely represented by \([(M^{+},M^{-})]\) with \(\text{barc}(M^{+})=\{[0,4],[2,4]\}\) and \(\text{barc}(M^{-})=\{[3,4]\}\). In practice, we will denote this element of \(\mathbf{SPvec}\) as \([0,4]+[2,4]-[3,4]\in\mathbf{SPvec}\).
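Concretely, the canonical representative of Proposition 20 is obtained by cancelling bars common to the positive and negative multisets. A small sketch (hypothetical code, reproducing Example 21; bars are encoded as endpoint pairs):

```python
from collections import Counter

def canonical(pos, neg):
    """Cancel common bars so that barc(M+) and barc(M-) become disjoint
    multisets, as in Proposition 20."""
    p, n = Counter(pos), Counter(neg)
    common = p & n                      # multiset intersection
    return sorted((p - common).elements()), sorted((n - common).elements())

# Example 21: barc(N+) = {[0,4],[1,3],[2,4]}, barc(N-) = {[1,3],[3,4]}.
pos, neg = canonical([(0, 4), (1, 3), (2, 4)], [(1, 3), (3, 4)])
print(pos, neg)   # [(0, 4), (2, 4)] [(3, 4)]  -- i.e. [0,4] + [2,4] - [3,4]
```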
If \(M,N\in\mathbf{Pvec}\), denote by \(M+N\) the element \([(M\oplus N,0)]\in\mathbf{SPvec}\), and denote by \(M-N\) the element \([(M,N)]\in\mathbf{SPvec}\). For an illustration, see Figure 5. With this notion of signed persistence module in hand, we now use a modified version of the Mobius inversion formula from [24] to define a meta-diagram:

**Definition 22**.: _Let \(M:\mathbb{R}^{2}\to\mathbf{vec}\) be \(S\)-constructible. Define the horizontal meta-diagram to be the function \(\mathbf{mdgm}_{M}:\operatorname{Dgm}\to\mathbf{SPvec}\) via the Mobius inversion formula:_ \[\mathbf{mdgm}_{M,x}([s_{i},s_{j})):=\mathbf{mrk}_{M,x}([s_{i},s_{j}))-\mathbf{mrk}_{M,x}([s_{i},s_{j+1}))+\mathbf{mrk}_{M,x}([s_{i-1},s_{j+1}))-\mathbf{mrk}_{M,x}([s_{i-1},s_{j}))\] \[\mathbf{mdgm}_{M,x}([s_{i},\infty)):=\mathbf{mrk}_{M,x}([s_{i},\infty))-\mathbf{mrk}_{M,x}([s_{i-1},\infty))\] _where \(s_{0}\) is any value \(s_{0}<s_{1}\) and \(s_{n+1}\) is any value \(s_{n+1}>s_{n}\). For any other \([s,t)\in\operatorname{Dgm}\), set \(\mathbf{mdgm}_{M,x}([s,t)):=0\). Define the vertical meta-diagram by replacing each instance of \(x\) above with \(y\)._

We henceforth let \(\mathbf{mdgm}\) refer to the horizontal meta-diagram of \(M\), dropping the subscript when there is no confusion. The following Mobius inversion formula describes the relation between the meta-rank and meta-diagram. It is the direct analogue of [24, Theorem 4.1].

**Proposition 23**.: _For \([s,t)\in\operatorname{Dgm}\), we have:_ \[\mathbf{mrk}([s,t))=\sum_{\begin{subarray}{c}I\in\operatorname{Dgm}\\ I\supseteq[s,t)\end{subarray}}\mathbf{mdgm}(I)\]

**Proposition 24**.: _For \(M,N:\mathbb{R}^{2}\to\mathbf{vec}\), we have:_ \[\mathbf{mdgm}_{M}\oplus\mathbf{mdgm}_{N}=\mathbf{mdgm}_{M\oplus N},\] _where \(\mathbf{mdgm}_{M}\oplus\mathbf{mdgm}_{N}:\operatorname{Dgm}\to\mathbf{SPvec}\) is defined by_ \[(\mathbf{mdgm}_{M}\oplus\mathbf{mdgm}_{N})([s,t)):=[\mathbf{mdgm}_{M}([s,t))^{+}\oplus\mathbf{mdgm}_{N}([s,t))^{+},\mathbf{mdgm}_{M}([s,t))^{-}\oplus\mathbf{mdgm}_{N}([s,t))^{-}].\]

Figure 5: Illustration of the barcodes for \(M,N\in\mathbf{Pvec}\) and \(M+N,M-N\in\mathbf{SPvec}\). For \(M+N\) and \(M-N\), a red interval is positively signed and a blue interval is negatively signed.

Proposition 24 allows us to compute meta-diagrams straightforwardly if we have an indecomposable decomposition of a module. In particular, by Proposition 25, meta-diagrams are simply computable for rectangle decomposable modules.

Proposition 25: Suppose \(M=\mathbf{k}^{R}\) is an \(\mathbb{R}^{2}\)-indexed interval module supported on the right-open rectangle \(R\), with lower-left corner \((s,t)\) and upper-right corner \((s^{\prime},t^{\prime})\). We have: \[\mathbf{mdgm}_{M}([a,b))=\begin{cases}[t,t^{\prime})&\text{if }a=s\text{ and }b=s^{\prime};\\ 0&\text{otherwise.}\end{cases}\]

Corollary 26: Let \(M=\oplus_{R\in\operatorname{barc}(M)}\mathbf{k}^{R}\) be rectangle decomposable. Then the interval \([t,t^{\prime})\) appears in \(\mathbf{mdgm}([s,s^{\prime}))\) with multiplicity \(n\) if and only if the right-open rectangle with lower-left corner \((s,t)\) and upper-right corner \((s^{\prime},t^{\prime})\) appears in \(\operatorname{barc}(M)\) with multiplicity \(n\).

### Equivalence With Rank Decomposition via Rectangles

For \(M:\mathbb{R}^{2}\to\mathbf{vec}\), the rank decomposition by rectangles contains the same information as the rank invariant, which by Proposition 12 contains the same information as the meta-rank.
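Before turning to rank decompositions, it may help to see Definition 22 operationally: the meta-diagram is an alternating sum of four neighboring meta-rank values, with common bars cancelling. The following hypothetical sketch (ours, not the authors' code) performs this inversion over a finite grid, representing barcodes as multisets (`Counter`s) and SPvec values as pairs of disjoint multisets:

```python
from collections import Counter

def mobius_invert(mrk, n):
    """Mobius inversion of Definition 22 on a grid s_1 < ... < s_n.
    mrk maps index pairs (i, j), 0 <= i < j <= n + 1, to a Counter of
    bars encoding mrk([s_i, s_j)); j = n + 1 plays the role of infinity
    (by constructibility) and i = 0 is the sentinel s_0 < s_1, where mrk
    vanishes since an S-constructible module is 0 below s_1.  Returns
    mdgm as a map (i, j) -> (positive Counter, negative Counter)."""
    empty = Counter()
    m = lambda i, j: mrk.get((i, j), empty)
    mdgm = {}
    for i in range(1, n + 1):
        for j in range(i + 1, n + 2):
            if j <= n:   # mdgm([s_i, s_j)): the four-term alternating sum
                plus, minus = m(i, j) + m(i - 1, j + 1), m(i, j + 1) + m(i - 1, j)
            else:        # mdgm([s_i, infty)): the two-term difference
                plus, minus = m(i, j), m(i - 1, j)
            mdgm[(i, j)] = (plus - minus, minus - plus)  # cancel common bars
    return mdgm

# One rectangle k^{[1,3) x [1,2)} on the grid 1 < 2 < 3 (n = 3): the bar
# (1, 2) appears in mrk on [s_1,s_2), [s_1,s_3), and [s_2,s_3).
bar = Counter({(1, 2): 1})
mrk = {(1, 2): bar, (1, 3): bar, (2, 3): bar}
print(mobius_invert(mrk, 3)[(1, 3)])   # positive bar (1, 2), as Proposition 25 predicts
```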
We now show one can directly go from the meta-diagram to the rank decomposition:

Proposition 27: Let \(M:\mathbb{R}^{2}\to\mathbf{vec}\) be constructible. Define: \[\mathscr{R}:=\bigcup_{I\in\operatorname{Dgm}}\left(\bigcup_{[a,b)\in\mathbf{mdgm}_{M}(I)}I\times[a,b)\right),\] \[\mathscr{S}:=\bigcup_{I\in\operatorname{Dgm}}\left(\bigcup_{-[a,b)\in\mathbf{mdgm}_{M}(I)}I\times[a,b)\right),\] where all unions are multiset unions. Then \((\mathscr{R},\mathscr{S})\) is a rank decomposition for \(M\).

Proof.: It suffices to show that for all \(w_{1}:=(x_{1},y_{1})\leq w_{2}:=(x_{2},y_{2})\in\mathbb{R}^{2}\), \(\operatorname{rank}_{M}(w_{1},w_{2})=\operatorname{rank}_{\mathbf{k}_{\mathscr{R}}}(w_{1},w_{2})-\operatorname{rank}_{\mathbf{k}_{\mathscr{S}}}(w_{1},w_{2})\). Suppose \(w_{1}\leq w_{2}\in\mathbb{R}^{2}\) as above. By Proposition 12, \[\operatorname{rank}_{M}(w_{1},w_{2})=\#[b_{i},d_{i})\in\mathbf{mrk}_{M}([x_{1},x_{2}^{\prime}))\,\,\text{ s.t. }\,\,b_{i}\leq y_{1}\leq y_{2}<d_{i},\] where for notational simplicity, \(x_{2}^{\prime}:=\overline{S}_{>}(x_{2})\). Now fix \([b,d)\) such that \(b\leq y_{1}\leq y_{2}<d\). By Proposition 23, we have: \[\#[b,d)\in\mathbf{mrk}_{M}([x_{1},x_{2}^{\prime}))=\#[b,d)\in\sum_{\begin{subarray}{c}I\in\operatorname{Dgm}\\ I\supseteq[x_{1},x_{2}^{\prime})\end{subarray}}\mathbf{mdgm}_{M}(I)\] \[=\left(\#[b,d)\in\sum_{\begin{subarray}{c}I\in\operatorname{Dgm}\\ I\supseteq[x_{1},x_{2}^{\prime})\end{subarray}}\mathbf{mdgm}_{M}^{+}(I)\right)-\left(\#[b,d)\in\sum_{\begin{subarray}{c}I\in\operatorname{Dgm}\\ I\supseteq[x_{1},x_{2}^{\prime})\end{subarray}}\mathbf{mdgm}_{M}^{-}(I)\right)\] By Proposition 25 and Corollary 26, the term \(\#[b,d)\in\sum\limits_{\begin{subarray}{c}I\in\operatorname{Dgm}\\ I\supseteq[x_{1},x_{2}^{\prime})\end{subarray}}\mathbf{mdgm}^{+}(I)\) is the number of times \(I\times[b,d)\) appears in \(\mathscr{R}\) across all \(I\supseteq[x_{1},x_{2}^{\prime})\), and the term \(\#[b,d)\in\sum\limits_{\begin{subarray}{c}I\in\operatorname{Dgm}\\ I\supseteq[x_{1},x_{2}^{\prime})\end{subarray}}\mathbf{mdgm}^{-}(I)\) is the number of times \(I\times[b,d)\) appears in \(\mathscr{S}\) across all \(I\supseteq[x_{1},x_{2}^{\prime})\). Thus, we see that \(\operatorname{rank}_{M}(w_{1},w_{2})\) is equal to the number of rectangles in \(\mathscr{R}\) containing \(w_{1}\) and \(w_{2}\) minus the number of rectangles in \(\mathscr{S}\) containing \(w_{1}\) and \(w_{2}\). From the definition of rectangle module and the fact that rank commutes with direct sums, the first term is \(\operatorname{rank}_{\mathbf{k}_{\mathscr{R}}}(w_{1},w_{2})\) and the second term is \(\operatorname{rank}_{\mathbf{k}_{\mathscr{S}}}(w_{1},w_{2})\), and so we get: \[\operatorname{rank}_{M}(w_{1},w_{2})=\operatorname{rank}_{\mathbf{k}_{\mathscr{R}}}(w_{1},w_{2})-\operatorname{rank}_{\mathbf{k}_{\mathscr{S}}}(w_{1},w_{2})\qed\]

### Stability of Meta-Diagrams

We now show a stability result for meta-diagrams. We need to modify the notion of erosion distance to do so, as meta-diagrams have negatively signed parts. We proceed by adding the positive part of one meta-diagram to the negative part of the other. This idea stems from Betthauser et al.'s work [5], and was also used in the stability of rank decompositions in [8].
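For reference, the relation \(M\preceq_{\epsilon}N\) of Definition 14, which the modified erosion distance below reuses, can be checked by brute force on small barcodes. The following hypothetical sketch tries every injection, so it is exponential and intended only as an executable restatement of the definition:

```python
from itertools import permutations

def truncate(bars, eps):
    """barc_eps(M): raise left endpoints by eps, dropping bars of
    length <= eps (Definition 13); bars are pairs (s, t) for [s, t)."""
    return [(s + eps, t) for (s, t) in bars if t - s > eps]

def precedes(bars_m, bars_n, eps):
    """Brute-force test of M <=_eps N (Definition 14): does some
    injection iota of barc_eps(M) into barc(N) satisfy J <= iota(J)?"""
    need = truncate(bars_m, eps)
    for perm in permutations(range(len(bars_n)), len(need)):
        if all(bars_n[p][0] <= need[j][0] and need[j][1] <= bars_n[p][1]
               for j, p in enumerate(perm)):
            return True
    return False

print(precedes([(0.0, 3.0)], [(0.5, 3.0)], 1.0))   # True: [1,3) embeds in [0.5,3)
```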
For \(M,N:\mathbb{R}^{2}\to\mathbf{vec}\), define \(\operatorname{PN}(M,N):\operatorname{Dgm}\to\mathbf{Pvec}\) as \[\operatorname{PN}(M,N)([s,t)):=\mathbf{mdgm}_{M}^{+}([s,t))+\mathbf{mdgm}_{N}^{-}([s,t))\] The value \(\operatorname{PN}(M,N)([s,t))\) is a non-negatively signed 1-parameter persistence module for all \([s,t)\in\operatorname{Dgm}\), allowing us to make use of the previous notion of \(\preceq_{\epsilon}\) (Definition 14) to define an erosion distance for meta-diagrams. Unlike meta-ranks, which have continuous support, a meta-diagram is only supported on \((\overline{S})^{2}\) for some finite \(S\subset\mathbb{R}\). As a result, we first modify the notion of erosion distance to fit the discrete setting. Define maps \(\overline{S}_{\geq},\overline{S}_{\leq}:\mathbb{R}\cup\{\infty\}\to\overline{S}\) by \(\overline{S}_{\geq}(x):=\min\{s\in\overline{S}\,|\,s\geq x\}\) and \(\overline{S}_{\leq}(x):=\max\{s\in\overline{S}\,|\,s\leq x\}\), or some value less than \(s_{1}\) if this set is empty. We say \(S\) is _evenly-spaced_ if there exists \(c\in\mathbb{R}\) such that \(s_{i+1}-s_{i}=c\) for all \(1\leq i\leq n-1\). In the following, fix an evenly-spaced finite \(S\subset\mathbb{R}\). For \(S\)-constructible \(M,N:\mathbb{R}^{2}\to\mathbf{vec}\), define the erosion distance: \[\operatorname{d}_{\mathrm{E}}^{S}(\mathbf{mdgm}_{M},\mathbf{mdgm}_{N}):=\inf\{\epsilon\geq 0\,|\,\forall s\leq t\in\overline{S},\] \[\operatorname{PN}(M,N)([\overline{S}_{\leq}(s-\epsilon),\overline{S}_{\geq}(t+\epsilon)))^{\epsilon}\preceq_{2\epsilon}\operatorname{PN}(N,M)([s,t))\,\text{and}\] \[\operatorname{PN}(N,M)([\overline{S}_{\leq}(s-\epsilon),\overline{S}_{\geq}(t+\epsilon)))^{\epsilon}\preceq_{2\epsilon}\operatorname{PN}(M,N)([s,t))\}\] We have the following stability result for meta-diagrams: For \(S\)-constructible \(M,N:\mathbb{R}^{2}\to\mathbf{vec}\), with \(S\) evenly-spaced, we have \[\operatorname{d}_{\mathrm{E}}^{S}(\mathbf{mdgm}_{M},\mathbf{mdgm}_{N})\leq\operatorname{d}_{\mathrm{I}}(M,N).\] For details and a stability result when \(S\) is not evenly-spaced, see Appendix B.

## 5 Algorithms

In this section, we provide algorithms for computing meta-ranks and meta-diagrams. The input to these algorithms is a simplex-wise bifiltration: Let \(n\in\mathbb{Z}\), and let \([n]\) denote the poset \(\{1,\dots,n\}\) with the usual order. Let \(K\) be a simplicial complex, and let \(\operatorname{sub}(K)\) denote all subsets of \(K\) which are themselves simplicial complexes. A filtration is a function \(F:[n]\to\operatorname{sub}(K)\) such that for \(a\leq b\), \(F(a)\subseteq F(b)\). We say a filtration is simplex-wise if for all \(1\leq a\leq n-1\), either \(F(a+1)=F(a)\) or \(F(a+1)=F(a)\cup\{\sigma\}\) for some \(\sigma\in K\setminus F(a)\). In the latter case, we denote this with \(F(a)\xrightarrow{\sigma}F(a+1)\). We say a simplex \(\sigma\in K\) arrives at \(a\) if \(\sigma\in F(a)\) and \(\sigma\notin F(a-1)\).

_Define \(P_{n}:=[n]\times[n]\) equipped with the product order. A bifiltration is a function \(F:P_{n}\to\operatorname{sub}(K)\) such that \(F((a,b))\subseteq F((c,d))\) whenever \((a,b)\leq(c,d)\). We say a bifiltration is simplex-wise if for all \((a,b)\in P_{n}\), for \((x,y)=(a+1,b)\) or \((a,b+1)\), if \((x,y)\in P_{n}\), then either \(F((x,y))=F((a,b))\), or \(F((a,b))\xrightarrow{\sigma}F((x,y))\) for some \(\sigma\notin F((a,b))\)._

Applying homology to a bifiltration yields a bimodule defined on \(P_{n}\). Our theoretical background in previous sections focused on the case of bimodules defined over \(\mathbb{R}^{2}\).
The same ideas and major results follow similarly for a module defined over \(P_{n}\). We quickly highlight the differences in definitions when working with modules defined on \(P_{n}\). The following definitions are re-phrasings of the horizontal meta-rank and horizontal meta-diagram for modules indexed over \(P_{n}\), but as before, the statements are directly analogous in the vertical setting. Let \(\mathbf{Int}([n])\) denote the set of all intervals of \([n]\), namely \(\{[a,b]\,|\,a\leq b,\,a,b\in[n]\}\).

**Definition 32**.: _For \(M:P_{n}\to\mathbf{vec}\), define the meta-rank, \(\mathbf{mrk}_{M}:\mathbf{Int}([n])\to\mathbf{Pvec}\), by_ \[\mathbf{mrk}_{M}([s,t]):=[\operatorname{im}(\phi_{x}(s\leq t))]\]

**Definition 33**.: _For \(M:P_{n}\to\mathbf{vec}\), define the meta-diagram, \(\mathbf{mdgm}_{M}:\mathbf{Int}([n])\to\mathbf{SPvec}\), as follows: if \(1<s\leq t<n\), define:_ \[\mathbf{mdgm}_{M}([s,t]):=\mathbf{mrk}_{M}([s,t])-\mathbf{mrk}_{M}([s,t+1])+\mathbf{mrk}_{M}([s-1,t+1])-\mathbf{mrk}_{M}([s-1,t]),\] \[\mathbf{mdgm}_{M}([s,n]):=\mathbf{mrk}_{M}([s,n])-\mathbf{mrk}_{M}([s-1,n]),\] \[\mathbf{mdgm}_{M}([1,t]):=\mathbf{mrk}_{M}([1,t])-\mathbf{mrk}_{M}([1,t+1]),\text{ and}\] \[\mathbf{mdgm}_{M}([1,n]):=\mathbf{mrk}_{M}([1,n]).\]

### Overview of the Algorithm

Henceforth, assume \(F:P_{n}\to\operatorname{sub}(K)\) is a simplex-wise bifiltration, and \(M\) is the result of applying homology to \(F\). Our algorithm to compute the meta-rank relies on the vineyards algorithm from [13]. The algorithm starts with \(F\) as the input. Define \(\gamma_{1}\) to be the path in \(P_{n}\) going from \((1,1)\to(1,n)\to(n,n)\), i.e., the path along the top-left boundary of \(P_{n}\). We compute the \(D=RU\) decomposition for the interval decomposition of the persistence module given by the 1-parameter filtration found by slicing \(F\) over \(\gamma_{1}\), which we denote \(F_{\gamma_{1}}\). This decomposition gives us all the persistence intervals, together with the persistence pair \((\sigma_{i},\sigma_{j})\) or unpaired simplex corresponding to each interval; the former corresponds to a finite interval and the latter to an infinite interval. To simplify notation, for every unpaired simplex corresponding to an infinite interval, we pair it with an implicit simplex arriving in an extended \(F\) at \((n+1,n+1)\). We store the persistence intervals in an ordered list, which we denote intervals. Restricted to \([1,n]\), the intervals in intervals together constitute the 1-parameter persistence module \(M^{1}_{x}\), which is precisely \(\mathbf{mrk}_{M}([1,1])\). We then store \(\mathbf{mrk}_{M}([1,1])\) as a list, with the same ordering as intervals, leaving an empty placeholder whenever an interval does not intersect \([1,n]\). We sweep \(\gamma\) through \(P_{n}\), over one square at a time, going down through the first column, until we reach \(\gamma_{2}\), the path \((1,1)\to(2,1)\to(2,n)\to(n,n)\). From there, we repeat the process column-by-column until we reach \(\gamma_{n}\), the path \((1,1)\to(n,1)\to(n,n)\); see Figure 6. After each change of a single vertex in our intermediary paths \(\gamma\), stemming from swapping the upper-left boundary of a single square to the lower-right one, the resulting filtration \(F_{\gamma}\) either remains the same, or changes in one of the ways illustrated in Figure 7. After passing through each square, we update each interval in intervals in-place. If \(F_{\gamma}\) remains the same, then there is no change to intervals.
If \(F_{\gamma}\) changes by altering the arrival time of a single simplex, then the pairings do not change, and the interval corresponding to the shifted simplex either extends by one or shrinks by one. If a transposition occurs, see Figure 7 (left), then we use the transposition update process from the vineyards algorithm. If we start at \(\gamma_{1}\), then when we reach \(F_{\gamma_{2}}\), we can restrict each interval in intervals to \([2,n+1]\) and shift it back down one, and this corresponds to \(\mathbf{mrk}_{M}([2,2])\), which we store using the same rules as we did with \(\mathbf{mrk}_{M}([1,1])\). Since we are storing all intervals in meta-ranks in this ordered fashion, we can take any interval in \(\mathbf{mrk}_{M}([2,2])\), and see where it came from in \(\mathbf{mrk}_{M}([1,1])\), which would be the interval stored at the same index in both lists. By taking the intersection, we get the corresponding interval which we put into this location in the list \(\mathbf{mrk}_{M}([1,2])\). We repeat the process of modifying \(\gamma\) one vertex at a time to get the paths \(\gamma_{i}\) from \((1,1)\to(i,1)\to(i,n)\to(n,n)\) as above, updating intervals and getting \(\mathbf{mrk}_{M}([i,i])\) by taking appropriate intersections and shifts. Since every list of intervals we store maintains this ordering, we can take any interval in \(\mathbf{mrk}_{M}([i,i])\), and see the corresponding interval it was previously (if any) in \(\mathbf{mrk}_{M}([k,i-1])\) for all \(1\leq k\leq i-1\). Then by intersecting the interval in \(\mathbf{mrk}_{M}([i,i])\) with its corresponding interval in \(\mathbf{mrk}_{M}([k,i-1])\), we get a new corresponding interval in \(\mathbf{mrk}_{M}([k,i])\). We repeat this process iteratively with \(i\) going from \(1\) to \(n\), which at the end computes all of \(\mathbf{mrk}_{M}:\mathbf{Int}([n])\to\mathbf{Pvec}\). We now describe what can happen to the intervals as we pass over a single square in which a transposition occurs, swapping \(\sigma_{i}\) and \(\sigma_{i+1}\). From the analysis in [13, Section 3], if the pairing function changes, then the intervals themselves do not change. If the pairing function remains the same, then two of the persistence intervals will change. Suppose \(\sigma_{i}\) is paired with \(\tau_{i}\) and \(\sigma_{i+1}\) is paired with \(\tau_{i+1}\). There are four possibilities; see Figure 8. We describe the algorithm in Algorithm 1. The output of Algorithm 1 will be \(\mathbf{mrk}_{M}\), stored as a collection of lists of the barcodes \(\mathbf{mrk}_{M}([s,t])\) for all \(s\leq t\in[n]\). We now prove the correctness of Algorithm 1.

Figure 6: We start with \(\gamma_{1}\) on the left, and then push \(\gamma_{1}\) through the square to track along the lower-right corner of the square (in blue). We repeat this process, descending down each square in the first column until we reach \(\gamma_{2}\) (middle). Then we repeat this process column-by-column until we've reached \(\gamma_{n}\) (right).

Figure 7: Three possible ways in which \(F_{\gamma}\) can change via being pushed through a one-by-one square. In our algorithm, \(\gamma\) always starts along the red path, then shifts to the blue path.

* Step 1. Compute \(D=RU\) for \(F_{\gamma_{1}}\), getting the ordered list intervals and the pairing for each interval.
* Step 2. For each interval in intervals, intersect the interval with \([1,n]\), and store the result in the ordered list \(\mathbf{mrk}([1,1])\).
* Step 3. For \(i:=2\) to \(n\), do
* Step 3.1.
For \(j:=n\) down to \(2\), do * update \(D\), \(R\), \(U\), and intervals via the vineyards algorithm, as \(\gamma\) sweeps through the square with upper-left corner \((i-1,j)\) and lower-right corner \((i,j-1)\). * Step 3.2. For each interval in intervals, shift the interval down by \(i-1\), and intersect the interval with \([1,n]\), storing the result in the ordered list \(\mathbf{mrk}([i,i])\). * Step 3.3. For \(k:=1\) to \(i-1\), do * For each interval in \(\mathbf{mrk}([i,i])\), intersect with the corresponding interval in \(\mathbf{mrk}([k,i-1])\). Store this intersection in the ordered list \(\mathbf{mrk}([k,i])\). **Proposition 34**.: _For \(i\in[n]\), \(\mathbf{mrk}_{M}([i,i])\) is found by taking each interval in the barcode for \(F_{\gamma_{i}}\), shifting it down by \(i-1\), and then taking the intersection with \([1,n]\)._ **Proposition 35**.: _Let \(1<i\leq n\), and suppose we know \(\mathbf{mrk}_{M}([i,i])\) and \(\mathbf{mrk}_{M}([k,i-1])\) for all \(1\leq k\leq i-1\), and that these lists of intervals are stored in the ordered fashion previously described. From this information, we can compute \(\mathbf{mrk}_{M}([k,i])\)._ **Theorem 36**.: _Algorithm 1 correctly computes the meta-rank for the bimodule \(M\) induced by homology of the input bifiltration \(F\), and runs in time \(O(n^{3})\). As a result, the number of rectangles in the rank decomposition for \(M\) is also \(O(n^{3})\)._ Proof. By Proposition 34, we can compute \(\mathbf{mrk}_{M}([1,1])\), and further \(\mathbf{mrk}_{M}([i,i])\) for all \(i\in[n]\). Then we can use Proposition 35 iteratively to fill in \(\mathbf{mrk}_{M}([k,i])\) for all \(1\leq k<i\leq n\), and we are done. For the runtime analysis, first observe that the initial \(D=RU\) computation in Step 1 takes \(O(n^{3})\) time, and intervals can be computed from the decomposition in linear time. The loop in Step 2 also takes linear time, as the size of intervals is \(O(n)\), which is fixed throughout. Step 3 consists of a for loop with \(O(n)\) iterations. Step 3.1 consists of a for loop with \(O(n)\) iterations, and each iteration performs an update over a square using the vineyards approach. A single update takes \(O(n)\) time in the worst case, so Step 3.1 takes \(O(n^{2})\) time. Step 3.2 runs in linear time for the same reason as Step 2. Step 3.3 consists of a for loop with \(O(n)\) iterations, with each iteration taking \(O(n)\) operations, as the size of each \(\mathbf{mrk}_{M}([k,i])\) is the same as intervals. Hence, Step 3.3 has total runtime \(O(n^{2})\). Thus, each loop in Step 3 consists of substeps that run in \(O(n^{2})\) time, \(O(n)\) time, and \(O(n^{2})\) time respectively, incurring a total cost of \(O(n^{3})\) over \(O(n)\) iterations. To summarize, we have a step with \(O(n^{3})\) cost, followed by a step with \(O(n)\) cost, followed by a step with \(O(n^{3})\) cost, so the algorithm runs in \(O(n^{3})\) time. Figure 8: Four cases in which intervals change after a transposition. Observe that in each case, both intervals change, and this change is in exactly one coordinate. By Definition 33, we can compute \(\mathbf{mdgm}_{M}\) from \(\mathbf{mrk}_{M}\) in \(O(n^{3})\) time, implying the number of non-zero intervals in \(\mathbf{mdgm}_{M}\) is \(O(n^{3})\). By Proposition 27, each non-zero interval in \(\mathbf{mdgm}_{M}\) corresponds uniquely to a single rectangle in the rank decomposition of \(M\), and so the number of such rectangles is likewise \(O(n^{3})\). ## 6 Discussion We conclude with some open questions. First, we would like to extend our approach to the \(d\)-parameter setting.
We expect that a proper extension would satisfy relationships with the rank invariant and rank decompositions similar to Proposition 12 and Proposition 27. Such an extension would also lead to a "recursive" formulation of the persistence diagram of diagrams illustrated in Figure 2. Next, Theorem 36 implies that the number of rectangles needed in a rank decomposition for a bimodule is bounded above by \(O(n^{3})\). It is not known whether this bound is tight. Lastly, there have been multiple recent works that use algorithmic ideas from 1-parameter persistence to compute invariants in the multiparameter setting [15, 16, 23]. We wish to explore in what ways these approaches can create new algorithms or improve upon existing ones for computing the invariants of multi-parameter persistence modules.
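To make the interval bookkeeping of Steps 3.2 and 3.3 concrete, the following is a minimal Python sketch. It assumes the vineyards updates of Step 3.1 have already produced, for each \(i\), the ordered barcode of \(F_{\gamma_{i}}\), shifted and restricted to \([1,n]\) (that is, \(\mathbf{mrk}_{M}([i,i])\)), with the index-wise correspondence across columns described above; intervals are integer pairs and `None` marks the empty placeholder. The function names are ours, not from [13].

```python
def intersect(a, b):
    """Intersection of two integer intervals (lo, hi); None if either is empty."""
    if a is None or b is None:
        return None
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def meta_rank_from_diagonals(diag):
    """diag[i-1] is the ordered interval list for mrk([i, i]), i = 1..n.
    Fills in mrk([k, i]) for all 1 <= k <= i <= n by index-wise intersection,
    mirroring Steps 3.2 and 3.3 of Algorithm 1."""
    n = len(diag)
    mrk = {(i, i): list(diag[i - 1]) for i in range(1, n + 1)}
    for i in range(2, n + 1):
        for k in range(1, i):
            mrk[(k, i)] = [intersect(a, b)
                           for a, b in zip(mrk[(i, i)], mrk[(k, i - 1)])]
    return mrk

# toy run with n = 3 and two corresponding intervals per diagonal:
mrk = meta_rank_from_diagonals([[(1, 3), (2, 2)], [(1, 3), None], [(2, 3), None]])
print(mrk[(1, 3)])  # [(2, 3), None]
```

Each stored list keeps the same ordering throughout, so corresponding intervals are always found at the same index; this is exactly the invariant that makes the iterative intersection in Proposition 35 well-defined.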
2310.08348
LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios
Building agents based on tree-search planning capabilities with learned models has achieved remarkable success in classic decision-making problems, such as Go and Atari. However, it has been deemed challenging or even infeasible to extend Monte Carlo Tree Search (MCTS) based algorithms to diverse real-world applications, especially when these environments involve complex action spaces and significant simulation costs, or inherent stochasticity. In this work, we introduce LightZero, the first unified benchmark for deploying MCTS/MuZero in general sequential decision scenarios. Specifically, we summarize the most critical challenges in designing a general MCTS-style decision-making solver, then decompose the tightly-coupled algorithm and system design of tree-search RL methods into distinct sub-modules. By incorporating more appropriate exploration and optimization strategies, we can significantly enhance these sub-modules and construct powerful LightZero agents to tackle tasks across a wide range of domains, such as board games, Atari, MuJoCo, MiniGrid and GoBigger. Detailed benchmark results reveal the significant potential of such methods in building scalable and efficient decision intelligence. The code is available as part of OpenDILab at https://github.com/opendilab/LightZero.
Yazhe Niu, Yuan Pu, Zhenjie Yang, Xueyan Li, Tong Zhou, Jiyuan Ren, Shuai Hu, Hongsheng Li, Yu Liu
2023-10-12T14:18:09Z
http://arxiv.org/abs/2310.08348v1
# LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios ###### Abstract Building agents based on tree-search planning capabilities with learned models has achieved remarkable success in classic decision-making problems, such as _Go_ and _Atari_. However, it has been deemed challenging or even infeasible to extend Monte Carlo Tree Search (MCTS) based algorithms to diverse real-world applications, especially when these environments involve complex action spaces and significant simulation costs, or inherent stochasticity. In this work, we introduce _LightZero_, the first unified benchmark for deploying _MCTS/MuZero_ in general sequential decision scenarios. Specifically, we summarize the most critical challenges in designing a general MCTS-style decision-making solver, then decompose the tightly-coupled algorithm and system design of tree-search RL methods into distinct sub-modules. By incorporating more appropriate exploration and optimization strategies, we can significantly enhance these sub-modules and construct powerful LightZero agents to tackle tasks across a wide range of domains, such as board games, _Atari_, _MuJoCo_, _MiniGrid_ and _GoBigger_. Detailed benchmark results reveal the significant potential of such methods in building scalable and efficient decision intelligence. The code is available as part of OpenDILab at [https://github.com/opendilab/LightZero](https://github.com/opendilab/LightZero). ## 1 Introduction General decision intelligence needs to solve tasks in many distinct domains. Recent advances in reinforcement learning (RL) algorithms have addressed several challenging decision-making problems [1; 2] and even surpassed top-level human experts in performance [3]. However, these state-of-the-art RL agents often exhibit poor data efficiency and face significant challenges when handling a wide range of diverse problems. Different environments present specific learning requirements and difficulties, which have prompted a variety of algorithms (e.g. DQN [4], PPO [5], R2D2 [6], SAC [7]) and system architectures such as IMPALA [8] and others [9; 10; 11]. Designing a general and data-efficient decision solver requires tackling various challenges, while ensuring that the proposed algorithm can be deployed universally without domain-specific knowledge requirements. Monte Carlo Tree Search (MCTS) is a powerful approach that utilizes a search tree with simulation and backpropagation mechanisms to train agents with a small data budget [12]. To model high-dimensional observation spaces and complex policy behaviour, AlphaGo [13] enhances MCTS with deep neural networks, designing policy and value networks that identify optimal actions and estimate winning rates respectively; it was the first program to defeat the strongest professional human players in Go. Despite the impressive results, MCTS-style algorithms rely on a series of necessary conditions, such as knowledge of game rules and simulators, discrete action spaces and deterministic state transitions, which severely restrict the application scope of these methods. In recent years, several successors to AlphaGo have attempted to extend its capabilities in various directions. MuZero [14] relaxes the requirements for prior knowledge of environments by training a set of neural networks to reconstruct reward, value and policy. Sampled MuZero [15] successfully applies MCTS to various complex action spaces with a novel planning mechanism based on sampled actions.
[16; 17; 18] improve MuZero in terms of planning stochasticity, representation learning effectiveness and simulation efficiency, respectively. These emerging algorithm insights and techniques have contributed to the development of more general MCTS algorithms and toolchains. In this paper, we present a unified algorithm benchmark named _LightZero_ that is the first to comprehensively integrate different MCTS/MuZero algorithm branches, including 9 algorithms and more than 20 decision environments with detailed evaluation. To better understand the potential of MCTS as an efficient general-purpose sequential decision solver, we revisit the development history of MCTS methods [19] and the diverse criteria of newly proposed RL environments [20; 21; 22]. As shown in Figure 2, we outline the six most challenging dimensions in developing LightZero as a general method, including multi-modal and high-dimensional observation spaces [23], complex action spaces, reliance on prior knowledge, inherent stochasticity, simulation cost, and hard exploration. Furthermore, highly coupled algorithm and system architectures greatly increase the cost and barriers of migrating and improving MCTS-style methods. Some special mechanisms like tree search and data reanalyze [24] seriously hinder the simplification and parallel acceleration of code implementations. To overcome these difficulties, LightZero designs a modular pipeline to enable distinct algorithm components as plug-ins. For example, the chance node planning for modelling stochasticity can also be used in continuous control or hybrid action environments. From the unified viewpoint provided by LightZero, we can systematically divide the whole training scheme of MCTS-style methods into four sub-modules: data collector, data arranger, agent learner, and agent evaluator. LightZero's decoupled architecture empowers developers to focus intensively on the customization of environments and algorithms. Meanwhile, some techniques like off-policy correction and a data throughput limiter can ensure the steady convergence of the algorithm while achieving runtime speedups. Based on these supports, LightZero also explores the advantages of combining some novel insights from model-based RL with MCTS approaches. In particular, the misalignment problem [25] between state representation learning and dynamics learning can result in problematic optimization for MuZero; thus, a simple self-consistency loss can significantly speed up convergence without special tuning. Besides, intrinsic reward mechanisms [26; 27; 28] can address the exploration deficiency of tree-search methods that rely on hand-crafted noise. Subsequently, we evaluate the ability of LightZero as a general solver for various decision problems. Experiments on different types of environments demonstrate LightZero's broad application range and data efficiency with few hyper-parameter adjustments. Finally, we discuss future optimization directions for each sub-module. In general, we summarize the three key contributions of this paper as follows: * We present LightZero, the first general MCTS/MuZero algorithm benchmark that systematically evaluates related algorithms and system designs. * We outline the most critical challenges of real-world decision applications. To address these issues, we decouple the algorithm and system design of MCTS methods and design a modular training pipeline, which can easily integrate novel insights for better scalability.
* We demonstrate the capability and future potential of LightZero as a general sequential decision solver, which can be trained and deployed across diverse domains. ## 2 Background **Reinforcement Learning** models a decision-making problem as a Markov Decision Process (MDP) \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma,\rho_{0})\), where \(\mathcal{S}\) and \(\mathcal{A}\) denote the state space and action space, respectively. The transition function \(\mathcal{P}\) maps \(\mathcal{S}\times\mathcal{A}\) to \(\mathcal{S}\), while the expected reward function \(\mathcal{R}\) maps \(\mathcal{S}\times\mathcal{A}\) to \(\mathbb{R}\). The discount factor \(\gamma\in[0,1)\) determines the importance of future rewards, and \(\rho_{0}\) represents the initial state distribution. The goal of RL is to learn a policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\) that maximizes the expected discounted return over the trajectory distribution \(J(\pi)=\mathbb{E}_{\pi,\rho_{0},\mathcal{P},\mathcal{R}}[\sum_{t=0}^{\infty} \gamma^{t}r_{t}]\). **AlphaZero**[29] is a generalized version of AlphaGo [13], eliminating the reliance on supervised learning from game records. It is trained entirely through unsupervised self-play and achieves superhuman performance in various board games, such as chess, shogi, and Go. This approach replaces the handcrafted features and heuristic priors commonly used in traditional intelligent programs. Specifically, AlphaZero employs a deep neural network parameterized by \(\theta\), represented as \((\mathbf{p},v)=f_{\theta}(s)\). Given a board position \(s\), the network produces an action probability \(p_{a}=P_{r}(a|s)\) for each action \(a\) and a scalar value \(v\) to predict the expected return \(z\), i.e. \(v\to z\). **MuZero**[14] achieves superhuman performance in more complex domains with visual input [30], without knowledge of the environment's transition rules. It combines tree search with a learned model, using three networks: (1) Representation Network: \(s^{0}=h_{\theta}(o_{1},\dots,o_{t})\). This network represents the root node (at time \(t\)) as a latent state, obtained by processing past observations \(o_{1},\dots,o_{t}\). (2) Dynamics Network: \(r^{k},s^{k}=g_{\theta}(s^{k-1},a^{k})\). This network simulates the dynamics of the environment. Given a state and selected action, it outputs the transitioned next state and corresponding reward. (3) Prediction Network: \(\mathbf{p}^{k},v^{k}=f_{\theta}(s^{k})\). Given a latent state, this network predicts the action probability and value. Notably, MuZero searches within the learned latent space. For the MCTS process in MuZero, assume the initial root node \(s_{0}\) is generated from the original observations through the representation network; each edge stores the following information: \(N(s,a),P(s,a),Q(s,a),R(s,a),S(s,a)\), respectively representing visit counts, policy, mean value, reward, and state transition. The MCTS process in the latent space can be divided into three phases: * **Selection**: Actions are chosen according to the Upper Confidence Bound (UCB) [31] formula: \[a^{*}=\operatorname*{arg\,max}_{a}Q(s,a)+P(s,a)\frac{\sqrt{\sum_{b}N(s,b)}}{1+N(s,a)}[c_{1}+\log(\frac{\sum_{b}N(s,b)+c_{2}+1}{c_{2}})]\] where \(N\) represents the visit count, \(Q\) is the estimated average value, and \(P\) is the policy's prior probability. \(c_{1}\) and \(c_{2}\) are constants that control the relative weight of \(P\) and \(Q\).
* **Expansion**: The selected action is executed in the learned model, continuing until a leaf node is encountered. At this point, a new state node \(s^{l}\) is generated, and its associated predicted reward \(r^{l}\) is determined. Utilizing the prediction function, we obtain the predicted values \(p^{l}\) and \(v^{l}\). Subsequently, this node is incorporated into the search tree. * **Backup**: The estimated cumulative reward at step \(k\) is calculated based on \(v^{l}\), denoted as: \(G^{k}=\sum\limits_{\tau=0}^{l-1-k}\gamma^{\tau}r_{k+1+\tau}+\gamma^{l-k}v^{l}\). Subsequently, \(Q\) and \(N\) are updated along the search path. After the search is completed, the visit count set \(N(s,a)\) is returned at the root node \(s_{0}\). These visit counts are normalized to obtain the improved policy: \[\mathcal{I}_{\pi}(a|s)=N(s,a)^{1/T}/\sum_{b}N(s,b)^{1/T}\] where \(T\) is the temperature coefficient controlling the exploration degree. Finally, an action is sampled from this distribution for interaction with the environment or self-play. Figure 1: Overview of LightZero. The left side depicts the development of MCTS, while the right side showcases various RL environments. LightZero incorporates and extends recent advances within the MCTS/MuZero sub-domain and effectively applies them across diverse environments. During the learning phase, MuZero performs end-to-end training with the following loss function, where \(l^{p}\), \(l^{v}\) and \(l^{r}\) are loss functions for policy, value and reward respectively, and the final term is weight decay. \[l_{t}(\theta)=\sum_{k=0}^{K}l^{p}(\pi_{t+k},p_{t}^{k})+l^{v}(z_{t+k},v_{t}^{k})+ l^{r}(u_{t+k},r_{t}^{k})+c||\theta||^{2}\] ## 3 LightZero In this section, we will first introduce the overview of LightZero, followed by a comprehensive analysis of challenges in various decision environments. Additionally, we propose a specific training pipeline design for a modular and scalable MCTS toolchain. We will conclude this section with two algorithm insights inspired by the decoupled design of LightZero. ### Overview As is shown in Figure 1, LightZero is the first benchmark that integrates almost all recent advances in the MCTS/MuZero sub-domain. Specifically, LightZero incorporates nine key algorithms derived from the original AlphaZero [29], establishing a standardized interface for training and deployment across diverse decision environments. Unlike the original versions of these derived algorithms, which focused on specific avenues of improvement, LightZero provides a unified viewpoint and interface. This unique feature enables exploration and comparison of all possible combinations of these techniques, offering a comprehensive baseline for reproducible and accessible research. The concrete experimental results are thoroughly described in Section 4 and Appendix B. ### How to Evaluate A General MCTS Algorithm: 6 Environment Challenges The algorithm extensions integrated in LightZero have greatly relaxed the constraints and broadened the applicability of MCTS-style methods. In the following part, we hope to delve deeper into the key issues in the design of general and efficient MCTS algorithms. Figure 2: Radar chart comparison of MCTS-style methods and model-free RL (e.g. PPO) on six environment challenges and an additional data efficiency dimension. We categorize the critical capabilities of general decision solvers as follows: multi-modal observation space, complex action space, inherent stochasticity, reliance on prior knowledge, simulation cost, hard exploration and data efficiency.
Each curve in the graph represents the score of an algorithm across these six categories. A score of 1 indicates that the algorithm performs poorly in this dimension and is only applicable to limited scenarios, while a higher score means a larger application scope and better performance. In particular, model-free RL methods need no simulation and have little dependence on priors, so they achieve high scores in the corresponding dimensions. Please note that within this context, the term _LightZero_ refers to the special algorithm that embodies the optimal combination of techniques and hyperparameter settings within our framework. Details about qualitative score rules can be found in Appendix D. In order to systematically complement this endeavor, we conducted an analysis of a set of classic and newly proposed RL environments to identify common characteristics. Based on this analysis, we have summarized six core challenging dimensions, which are presented in a radar plot depicted in Figure 2. Concretely, the intentions and goals of the six types of environmental capabilities are: _1) Multi-modal observation spaces_ pose a challenge for agents as they must be able to extract different representation modalities (e.g., low-dimensional vectors, visual images, and complex relationships) while effectively fusing distinct embeddings. _2) Complex action space_ necessitates the agent's proficiency in generating diverse decision signals, encompassing discrete action selection, continuous control, and hybrid structured action spaces. _3) Reliance on prior knowledge_ is a major drawback of methods like AlphaZero. These approaches inherently require accessibility to a perfect simulator and specific rules of the environment. In contrast, MuZero and its derived methods address this limitation by learning an environment model to substitute the simulator and related priors. _4) Inherent stochasticity_ presents a fundamental challenge in tree-search-based planning methods. The uncertainty of environment dynamics and partially observable state spaces can both lead to misalignment of planning trajectories, resulting in a large number of useless or conflicting search results. _5) Simulation cost_ stands as the primary contributor to wall-time consumption for MCTS-style methods. At the same time, the algorithm performance will degrade considerably if the algorithm fails to visit all the necessary actions during the simulation process. _6) Hard exploration_ represents a crucial challenge that is often overlooked. While search trees can enhance efficiency by reducing the scope of exploration, MCTS-style methods are susceptible to difficulties in environments with numerous non-terminating cases, such as mazes. ### How to Simplify A General MCTS Algorithm: Decouple Pipeline into 4 Sub-Modules The impressive performance of MCTS-style methods is often accompanied by a notable drawback: the complexity of implementations, which greatly restricts their applicability. In contrast to some classic model-free RL algorithms like DQN [32] and PPO [5], MCTS-style methods require multi-step simulations using search trees at each agent-environment interaction. Also, to improve the quality of training data, MuZero Unplugged [24] introduces a data reanalyze mechanism that uses the newly obtained model to compute improved training targets on old data.
However, both of these techniques require multiple calls to simulators or neural networks, increasing the complexity across various aspects of the overall system, including code, distributed training, and communication topology. Figure 3: Four core sub-modules of the training pipeline in LightZero. _Context Exchanger_ is responsible for transporting configurations, models and trajectories among different sub-modules. Therefore, it is necessary to simplify the whole framework based on the integration of algorithms. Figure 3 presents a depiction of the complete pipeline of LightZero with four core sub-modules. Firstly, LightZero offers support for both online and offline RL [33] training schemes. The distinction between them lies in the utilization of either an online interaction data collector or direct usage of an offline dataset. Secondly, LightZero restructures its components and organizes them into four main sub-modules, based on the principle of _high cohesion and low coupling_. **Data collector** is responsible for efficient action selection using the policy network and search tree. It also contains various exploration strategies, data pre-processing and packaging operations. **Data arranger** plays a unique role in MCTS by effectively storing and preparing valuable data for training purposes. This sub-module involves the data reanalyze technique [24] to correct off-policy and even offline data. Furthermore, the modified priority sampling [34] ensures training mini-batches have both sufficient variety and high learning potential. To balance these tricks with efficiency, the throughput limiter controls the ratio of adding and sampling data to ensure optimal data utilization within a fixed communication bandwidth. **Agent learner** is responsible for training multiple networks. It can be enhanced through various optimization techniques, such as self-supervised representation learning [35; 36], model-based rollout [37; 38], distributional prediction [39] and normalization [40; 41]. These techniques contribute to policy improvement and further enhance the overall performance of the agent. **Agent evaluator** periodically provides diverse evaluation metrics [42] to monitor the training procedure and assess policy behaviour. It also integrates some inference-time tricks like beam search [43] to enhance test performance. We provide a detailed analysis of how these sub-modules are implemented in specific algorithms in Appendix F. Built upon these abstractions, LightZero serves as a valuable toolkit, enabling researchers and engineers to develop enhanced algorithms and optimize systems effectively. For example, exploration strategies and ensuring the alignment of a learned model in MCTS are crucial, and these will be discussed in the subsequent sub-section. In addition, exploring parallel schemes for multiple vectorized environments and search trees can be an insightful topic for machine learning systems. The associated dataflow and overhead analysis will be presented in Appendix E. ### How to Improve A General MCTS Algorithm: 2 Examples In this section, we present two algorithm improvement examples inspired by LightZero. The dimensions below pose essential challenges in designing a comprehensive MCTS solver. LightZero addresses these challenges through various improvements, resulting in superior performance compared to individual algorithm variants across different domains (Sections 4 and 5).
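Both of the following examples manipulate the search statistics introduced in Section 2. As a concrete reference point, here is a minimal Python sketch (with illustrative names only, not LightZero's actual API) of the pUCT selection score and the temperature-controlled visit-count policy; the temperature \(T\) is exactly the knob that several of the exploration strategies studied in Section 5.1 adjust, and \(c_{1},c_{2}\) take the default values reported in the MuZero paper.

```python
import numpy as np

def puct_scores(Q, P, N, c1=1.25, c2=19652.0):
    """Per-action pUCT score from Section 2:
    Q(s,a) + P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a))
           * (c1 + log((sum_b N(s,b) + c2 + 1) / c2))."""
    n_total = N.sum()
    return Q + P * np.sqrt(n_total) / (1.0 + N) * (c1 + np.log((n_total + c2 + 1.0) / c2))

def visit_count_policy(N, temperature=1.0):
    """Improved policy pi(a|s) proportional to N(s,a)^(1/T): a low temperature
    sharpens toward the most-visited action, a high one flattens the distribution."""
    weights = np.power(N.astype(float), 1.0 / temperature)
    return weights / weights.sum()

# toy search statistics for four actions at one node
Q = np.array([0.1, 0.5, 0.3, 0.0])       # mean values
P = np.array([0.25, 0.25, 0.25, 0.25])   # prior policy
N = np.array([10, 40, 25, 5])            # visit counts
print(int(np.argmax(puct_scores(Q, P, N))))      # action chosen during selection
print(visit_count_policy(N, temperature=1.0))    # sampling policy after search
print(visit_count_policy(N, temperature=0.25))   # sharper with a lower temperature
```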
**Intrinsic Exploration** While tree-search-based methods perform well in board games with only an eventual reward, they may encounter challenges or perform poorly in other environments with sparse rewards, such as _MiniGrid_[44]. One crucial distinction between these two problems is that in the former, the search tree can always reach several deterministic final states, whereas in the latter, it may encounter various non-termination states due to the limitation of maximum episode length. To address this issue, LightZero incorporates the idea of intrinsic reward methods [28] and implements it efficiently within MuZero's learned models. Further details can be found in Section 5.1. **Alignment in Environment Model Learning** MuZero employs a representation network to generate latent states and a dynamics network to predict next latent states. However, there is no explicit supervision guiding the desired properties of the latent space. Traditional self-supervised representation learning methods often fail to align these proxy tasks with RL objectives. The difference of rollouts between the perfect simulator and the learned model is also a problem that cannot be ignored. Further exploration of misalignments across different environments is discussed in Section 5.2. ## 4 Experiments In Section 4.1, we initially present some representative experiments of LightZero, with detailed experimental settings and more comprehensive results outlined in Appendix B. Subsequently, in Section 4.2, we delve into key observations and reflections based on these benchmark results, introducing some critical insights. Particularly regarding the exploration and alignment issues of environment model learning, we conduct an in-depth experimental analysis in Section 5. ### Benchmark Results To benchmark the differences among distinct algorithms and the capability of LightZero as a general decision solver, we conduct extensive comparisons across a diverse range of RL environments. The list of algorithm variants contains AlphaZero [29], MuZero [14], EfficientZero [17], Sampled MuZero [15], Stochastic MuZero [16], Gumbel MuZero [18] and other improved versions in LightZero. For each scenario, we evaluate all the possible variants on corresponding environments. In Figure 4, we show some selected results as examples. For detailed settings, metrics, comprehensive benchmark results and related analysis, please refer to Appendix B. ### Key Observations and Insights Building on the unified design of LightZero and the benchmark results, we have derived some key insights about the strengths and weaknesses of each algorithm, providing a comprehensive understanding of these algorithms' performance and potential applications. **O1**: In board game environments, AlphaZero's sample efficiency greatly exceeds that of MuZero. This suggests that employing AlphaZero directly is advantageous when an environment simulator is available; however, MuZero can still achieve satisfactory results even in the absence of a simulator. **O2**: Self-supervised loss substantially enhances performance in most Atari environments with image inputs. Figure 7 demonstrates that MuZero with SSL performs similarly to MuZero in _MsPacman_, while outperforming it in the other five environments. This result highlights the importance of SSL for aligning the model and accelerating the learning process in image input environments. **O3**: Predicting _value_prefix_ instead of reward does not guarantee performance enhancement.
For example, in Figure 7, EfficientZero outperforms MuZero with SSL only in the _MsPacman_ and _Breakout_ environments, while showing similar performance in the other environments. In certain specific scenarios, such as the sparse reward environments depicted in Figure 12, EfficientZero's performance is significantly inferior to that of MuZero with SSL. Therefore, we should prudently decide whether to predict _value_prefix_, taking into account the attributes of the environment. **O4**: MuZero with SSL and EfficientZero demonstrate similar performance across most _Atari_ environments and in complex structured observation settings, such as _GoBigger_. This observation suggests that environments with complex structured observations can benefit from representation learning and contrastive learning techniques [35] to enhance sample efficiency and robustness. Figure 4: Comparisons of mean episode return for algorithm variants in LightZero across diverse environments: _Atari_ with discrete actions and partially-observable states (_Qbert_, _Breakout_, _MsPacman_), _GoBigger_[23] with complex observations and multi-agent cooperation, continuous control with environment stochasticity (_Bipedalwalker_), and _Gomoku_ with varying accessibility to the simulator. **O5**: In discrete action spaces, Sampled EfficientZero's performance is correlated with action space dimensions. For instance, Sampled EfficientZero performs on par with EfficientZero in _Breakout_ (action space dimension of 4), but its performance decreases in _MsPacman_ (dimension of 9). **O6**: Sampled EfficientZero with _Gaussian policy representation_ is more scalable in continuous action spaces. The Gaussian version performs well in traditional continuous control and _MuJoCo_ environments, while _factored discretization_[45] is limited to low-dimensional actions. **O7**: Gumbel MuZero achieves notably better performance than MuZero when the number of simulations is limited, which exhibits its potential for designing low time-cost MCTS agents. **O8**: In environments with stochastic state transitions or partially observable states (such as Atari without stacked frames), Stochastic MuZero can obtain slightly better performance than MuZero. **O9**: The self-supervised loss proposed in [17], sampling-related techniques in Sampled MuZero [45], computational improvements in Gumbel MuZero [18] for utilizing MCTS searched information, and environment stochasticity modeling in Stochastic MuZero [16] are orthogonal to each other, exhibiting minimal interference. LightZero is exploring and developing ways to seamlessly integrate these characteristics to design a universal decision-making algorithm. ## 5 Two Algorithm Case Studies for LightZero ### Exploration Strategies in MCTS **Motivation** Finding the optimal trade-off between exploration and exploitation is a fundamental challenge in RL. It is well known that MCTS can reduce the policy search space and facilitate exploration. However, there is limited research on the performance of MCTS algorithms in hard-exploration environments. Based on the above benchmark results, we conduct a detailed analysis of algorithm behaviours in challenging sparse-reward environments versus board games, as well as the insights behind the selection of exploration mechanisms, in this section and Appendix C.1. **Settings** We performed experiments in the _MiniGrid_ environment, mainly on the KeyCorridorS3R3 and FourRooms scenarios.
Expanding upon the naive setting (handcrafted temperature decay), we conducted a comprehensive investigation of six distinct exploration strategies in LightZero. A detailed description of each exploration mechanism is provided in Appendix C.1. **Analysis** Figure 5 indicates that simply increasing search budgets does not yield improved performance in challenging exploration environments. Instead, implementing a larger temperature and incorporating a policy entropy bonus can enhance action diversity during data collection, albeit at the cost of increased variance. Figure 5: Performance of various MCTS exploration mechanisms in the _MiniGrid_ environment (_Return_ during the collection phase). Under the naive setting, the agent fails due to inadequate exploration. Merely increasing search budgets with the _NaiveDoubleSimulation_ approach does not yield any significant improvement. _EpsGreedy_, _FixedTemperature_ and _PolicyEntropyRegularization-x_ display higher variance as they cannot guarantee enough exploration. _IntrinsicExploration_ effectively explores the state space by leveraging curiosity mechanisms, resulting in the highest sample efficiency. However, theoretically, these strategies cannot guarantee sufficient exploration, often resulting in mediocre performance and a higher likelihood of falling into local optima due to policy collapse. Epsilon-greedy exploration ensures a small probability of uniform sampling, which aids in exploring areas with potentially high returns. _EpsGreedy_ has varying effects in different environments in early stages, but theoretically, due to its ability to ensure sufficient exploration, it may achieve good results in the long run. A more effective strategy involves curiosity-driven techniques, such as RND [27], which assigns higher intrinsic rewards to novel state-action pairs, bolstering the efficiency of exploration. The performance of the _IntrinsicExploration_ method supports this assertion, and it can be integrated into MuZero with minimal overhead (Appendix C.1.3). ### Alignment in Environment Model Learning **Motivation** Aligned and scalable [25] environment models are vital for MuZero-style algorithms, with factors such as model structure, objective functions, and optimization techniques contributing to their success. The consistency loss proposed in [17] could serve as an approach for aligning the latent state generated by the dynamics model with the state obtained from the observation. In this section, we investigate the impact of the consistency loss on learning dynamics models and final performance in environments with diverse observations (vectors, standard images, special checkerboard images). **Settings** To study the impact of the consistency loss on various types of observation data, we employ the MuZero algorithm as our baseline. To ensure the reliability of our experimental results, we maintain the same configurations across other settings, with additional experimental details provided in Appendix C.2. In the experiments, we use _Pong_ as the environment for image input, _LunarLander_ for continuous vector input, and _TicTacToe_ for special image input (checkerboard) environments. **Analysis** As shown in Figure 6, the consistency loss is critical for standard image input. Removing the consistency loss results in a significant decline in performance, indicating the challenge of learning a dynamics model for high-dimensional inputs.
For vector input environments like _LunarLander_, the consistency loss provides a minor advantage, suggesting that learning a dynamics model is relatively easier on compact vector observations. In special two-dimensional input environments like _TicTacToe_, the consistency loss remains large, highlighting the difficulty of achieving consistency between latent state outputs. Additionally, adding the consistency loss with inappropriate hyper-parameters may lead to non-convergence (Appendix C.2). In conclusion, our experiments demonstrate that the effectiveness of the consistency loss depends on the specific observation attributes. For board games, a future research direction involves investigating suitable loss functions to ensure alignment during training. ## 6 Related Work **Sequential Decision-making Problems** In the domain of sequential decision-making problems, intelligent agents aim to make optimal decisions over time, taking into account observed states and prior actions [46]. However, these problems are often compounded by the presence of continuous action spaces, dynamic transitions, and exploration difficulties. To address such problems, model-free RL methods [5; 7; 32] focus on learning expected state rewards, optimizing actions, or combining both strategies to achieve optimal policy learning. Figure 6: Impact of self-supervised consistency loss across different environments with various types of observations. From left to right, performance comparisons involve standard image input, compact vector input, and unique board image input, considering cases with and without consistency loss. Experiments show that the consistency loss proves to be critical only for standard image input. Conversely, model-based RL [25] incorporates the environment's transition into its optimization objective, aiming to maximize the expected return on the trajectory distribution. MCTS [19] is a modeling approach derived from search planning algorithms such as minimax [47] and alpha-beta pruning [48]. Unlike these algorithms, which recursively search decision paths and evaluate their returns, MCTS employs a heuristic search on prior-guided simulations, effectively addressing excessive search consumption in complex decision spaces. **MCTS Algorithms and Toolkits** Despite the impressive performance and efficiency of the MCTS+RL approach, constructing the training system and dealing with intricate algorithmic details pose significant challenges when applying these algorithms to diverse decision intelligence domains. Recent research has made progress in this direction. MuZero Unplugged [24] introduced the Reanalyze technique, a simple and efficient enhancement that achieves good performance both online and offline. ROSMO [49] investigated potential issues with MuZero in offline RL and suggested a regularized one-step lookahead approach. The lack of comprehensive open-source implementations of various algorithms remains a challenge within the research community. For example, Sampled MuZero [15] lacks a public implementation. AlphaZero-General [50] and MuZero-General [51] each support only a single algorithm, and neither offers distributed implementations. Although EfficientZero [17] does support a multi-GPU implementation, it is limited to a single algorithm. KataGo [52], while primarily focused on AlphaZero and the game of Go, requires substantial computational resources during training, potentially posing hardware barriers for ordinary users.
As a result, the research community continues to seek more efficient and enhanced open-source tools. **Standardization and Reproducibility** In the realm of Deep RL, the quest for standardizing algorithms, coupled with the creation of unified benchmarks, has ascended as a focal point of growing significance. [53] emphasize the critical necessity of not only replicating existing work but also accurately assessing the advancements brought about by new methodologies. However, the process of reproducing extant Deep RL methods is far from straightforward, largely due to the non-determinism inherent in environments and the variability innate to the methods themselves, which can render reported results challenging to interpret. [54] proposed a set of metrics for quantitatively measuring the reliability of RL algorithms. These metrics, focusing on variability and risk both during and after training, are intended to equip researchers and production users with tools to evaluate and enhance the reliability of RL algorithms. In [55], a large-scale empirical study was undertaken to identify the factors that significantly influence the performance of on-policy RL algorithms within continuous control tasks. The insights garnered from this research offer valuable, practical suggestions for the training of on-policy RL algorithms. Despite these advancements, there remains a noticeable dearth of work specifically investigating benchmarks and the details of reproducing existing studies in the domain of MCTS. ## 7 Conclusion In this paper, we introduce LightZero, the first unified algorithm benchmark to modularly integrate various MCTS-style RL methods, and we systematically analyze and address the challenges and opportunities of deploying MCTS as a general and efficient decision solver. Through the incorporation of decoupled system design and novel algorithm insights, we conduct detailed benchmarks and demonstrate the potential of LightZero as a scalable and efficient toolchain for decision-making problems in the research community. Besides, based on this benchmark and related case studies, we also discuss existing limitations and valuable topics for future work in Appendix I. ## 8 Acknowledgements This project is funded in part by National Key R&D Program of China Project 2022ZD0161100, by the Centre for Perceptual and Interactive Intelligence (CPII) Ltd under the Innovation and Technology Commission (ITC)'s InnoHK, and by General Research Fund of Hong Kong RGC Project 14204021. Hongsheng Li is a PI of CPII under the InnoHK. We thank several members of SenseTime and Shanghai AI Laboratory for their help, support and feedback on this paper and the related codebase. We especially thank Shenghan Zhang for informative and inspiring discussions at the beginning of this project. We are grateful for the assistance of Qingzi Zhu with many cute visual materials of LightZero.
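As a concrete footnote to the alignment study in Section 5.2, the following is a minimal numpy sketch of a self-supervised consistency term of the kind studied there: the negative cosine similarity between the dynamics network's predicted next latent state and the representation network's encoding of the actually observed next frame. In EfficientZero-style training this term sits inside the full MuZero loss together with SimSiam-style projection heads and a stop-gradient on the target branch, both omitted here; the function name is illustrative.

```python
import numpy as np

def cosine_consistency_loss(pred_next_latent, target_next_latent):
    """Negative cosine similarity between the dynamics network's predicted next
    latent g_theta(s_k, a) and the representation network's encoding
    h_theta(o_{t+k+1}) of the observed next frame; lower means better aligned."""
    p = pred_next_latent / (np.linalg.norm(pred_next_latent, axis=-1, keepdims=True) + 1e-8)
    t = target_next_latent / (np.linalg.norm(target_next_latent, axis=-1, keepdims=True) + 1e-8)
    return -np.mean(np.sum(p * t, axis=-1))

rng = np.random.default_rng(0)
z_pred = rng.normal(size=(32, 64))                # batch of predicted next latents
z_obs = z_pred + 0.1 * rng.normal(size=(32, 64))  # encodings of observed next frames
print(cosine_consistency_loss(z_pred, z_obs))     # close to -1 when well aligned
```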
2306.15399
Quality Estimation of Machine Translated Texts based on Direct Evidence from Training Data
Current Machine Translation systems achieve very good results on a growing variety of language pairs and data sets. However, it is now well known that they produce fluent translation outputs that can often contain important meaning errors. The Quality Estimation task deals with the estimation of the quality of translations produced by a Machine Translation system without depending on Reference Translations. A number of approaches have been suggested over the years. In this paper we show that the parallel corpus used as training data for training the MT system holds direct clues for estimating the quality of translations produced by the MT system. Our experiments show that this simple and direct method holds promise for quality estimation of translations produced by any purely data driven machine translation system.
Vibhuti Kumari, Narayana Murthy Kavi
2023-06-27T11:52:28Z
http://arxiv.org/abs/2306.15399v1
# Quality Estimation of Machine Translated Texts based on Direct Evidence from Training Data ###### Abstract Current Machine Translation systems achieve very good results on a growing variety of language pairs and data sets. However, it is now well known that they produce fluent translation outputs that can often contain important meaning errors. The Quality Estimation task deals with the estimation of the quality of translations produced by a Machine Translation system without depending on Reference Translations. A number of approaches have been suggested over the years. In this paper we show that the parallel corpus used as training data for training the MT system holds direct clues for estimating the quality of translations produced by the MT system. Our experiments show that this simple and direct method holds promise for quality estimation of translations produced by any purely data driven machine translation system. ## 1 Introduction The performance of Machine Translation (MT) systems is measured either manually, using metrics such as Adequacy and Comprehensibility, or automatically, using metrics such as BLEU and TER that compare against Reference Translations [6]. Quality Estimation (QE), on the other hand, deals with the automatic estimation of the quality of translations produced by an MT system without using Reference Translations. QE of MT outputs has several benefits. Good translations can be selected, post-edited as required and added to the training data. Poor quality cases can be removed from the training data to reduce noise. QE helps in more accurate estimation of post-editing time and effort and in taking associated decisions in commercial translation. A large number of techniques have been proposed for quality estimation. The annual Workshop on Machine Translation (WMT) has included a shared task on quality estimation for many years now. Chrysoula Zerva et al [9] describe the findings of the 11th edition of this shared task held as part of WMT-2022. Participants from 11 different teams submitted altogether 991 systems to different task variants and language pairs in WMT-2022. Machine translation generally works sentence by sentence, and the primary goal of the Quality Estimation (QE) task is also to measure the quality of translations at the sentence level. Several sub-tasks and related tasks are also taken up in the WMT workshops. Word-level QE deals with marking words as OK or BAD. In fact, sentence-level scores are often computed or estimated using these word-level scores. Scoring entire documents is another task. Identifying SL words that cause quality issues is also looked at. The Explainable QE task and the Critical Error Detection task were included in WMT-2022. Both Direct Assessment on post-edit data (called MLQE-PE) and Multidimensional Quality Metrics (MQM) were included. In the current evaluation practices, QE systems are assessed mainly in terms of their correlation with human judgements. Anna Zaretskaya et al [8] ask whether the current QE systems are useful for MT model selection. Serge Gladkoff et al [3] focus on the amount of data that is required to reliably estimate the quality of MT outputs. They use Bernoulli Statistical Distribution Modeling and Monte Carlo Sampling Analysis towards this end. Shachar Don-Yehiya et al [1] focus on quality estimation of machine translation outputs in advance. They present a new task named PreQuEL, the task of predicting the quality of the output of MT systems based on the source sentence only.
Some have focused on data set generation; others have developed tools for QE. While the research in MT QE is rich in terms of ideas, techniques, tools, and resources, it appears that none of these efforts look at the parallel corpus that is used for building MT systems for clues about the quality of translations. In this paper we propose what we call the Direct Evidence approach, which is based solely on the training data that is used to build MT systems. ## 2 Direct Evidence Approach Translation is a meaning preserving transformation of texts from a Source Language (SL) to a Target Language (TL). This is generally done sentence by sentence, or more generally, segment by segment. In order to preserve the meaning of the SL sentence, words and phrases in SL sentences need to be mapped to equivalent words and phrases in the TL. Other aspects of syntax and semantics, such as agreement, word order, and semantic compatibility, will also need to be addressed. Modern purely data driven approaches such as Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) are based on the view that all linguistic regularities and idiosyncrasies are indirectly present in the parallel corpus and that the parallel corpus alone is sufficient; no other data or linguistic resource is needed. A Machine Translation (MT) system can be obtained by training on data consisting of a parallel corpus alone. We believe that the training data also holds clues useful for estimating the quality of translations produced by the MT system. In particular, here we focus on lexical transfer. We show that the Word Co-occurrence Matrix (WCM) holds direct clues for estimating the quality of lexical transfer and hence the quality of translation as a whole. The statistical basis for performing lexical transfer comes mainly from word co-occurrence statistics. Let SL-TL be a parallel corpus consisting of n Source Language segments \(S_{1},S_{2},S_{3},...,S_{n}\), paired with their translational equivalents \(T_{1},T_{2},T_{3},...,T_{n}\) in the Target Language. We say SL word i co-occurs with TL word j if the TL word j occurs anywhere in the translational equivalent of an SL sentence in which the word i occurs. Let \(V_{s}\) be the Vocabulary of the Source Language (the total number of distinct word forms occurring in any of the SL segments) and \(V_{t}\) be the Vocabulary of the Target Language. Then the Word Co-Occurrence Matrix WCM is a \(V_{s}\times V_{t}\) matrix of non-negative integers where \(WCM_{i,j}\) indicates the total number of times the Source Language word i has co-occurred with the Target Language word j in the entire training data set. Clearly, WCM will be a very large and very sparse matrix. A large \(WCM_{i,j}\) value indicates a strong correlation between the SL word i and TL word j in the training corpus. If an SL word i co-occurs with a TL word j a large number of times, if i does not occur with too many other TL words with high frequency, and if the WCM counts for other possible mappings in TL are significantly lower, all these indicate that the lexical transfer of i to j during translation can be done with high confidence. When the evidence in the form of co-occurrence counts coming from the training data is weak, the MT system may still go ahead and substitute the word j for word i based on the combined evidence coming from other parts of the sentence, the language model, etc. This may be an optimal decision taken by the MT system with regard to some specified loss function.
An optimal choice in some probabilistic sense may not be the correct choice; it may just be the best of several possible choices, none of which may be correct. MT systems generally go ahead and produce translations, whether they are sure or not-so-sure or not-at-all-sure. Here we hypothesize that the fraction of words in an SL sentence that have strong co-occurrence relations with any of the words in the TL sentence produced by the MT system is a direct indicator of the quality of translation. ## 3 Experiments and Results In our first experiment we use an English-Kannada parallel corpus consisting of 4,004,894 segments (that is, approximately 4 Million segments) [7]. There are about 36M tokens in English and 27M tokens in Kannada. The Vocabulary size for English is 281,881. Only 42,222 (less than 15%) occur at least 20 times. 78.5% of words occur less than 10 times, 69% of words occur less than 5 times, and 44.47% of words occur only once. This highly skewed distribution of words in all human languages is very well understood and is expressed through laws such as Zipf's law [10] and Mandelbrot's law [5]. The Vocabulary of the Kannada part is 1,253,589. This number is larger due to the much more complex morphology we see in Dravidian languages such as Kannada. Only 82,227 (6.5%) occur at least 20 times. 89.2% of words occur less than 10 times, 81.8% of words occur less than 5 times, and 57.8% of words occur only once. The general picture will be similar for any pair of languages in the world. If an SL word i occurs only once and the translation of the sentence in which it occurs has n words, then i can be mapped to any one of these n TL words with equal probability. While an MT system may use other clues such as the mappings of other words in the SL sentence and language model probabilities, it will still be a decision that is not based on very strong evidence. Low frequency words show poor co-occurrence relations and hence less statistical evidence for lexical transfer. Low frequency words are large in number in any language, and this is a big issue for any purely data driven model. Larger data is better, but whatever the size of the data, the problem remains pertinent. Very high frequency words can also pose challenges. They usually include determiners, prepositions and other function words. Words such as 'the', 'of', 'by' occur with very high frequency in English; none of them map to any word in Kannada. WCM will show a large number of possible mappings, all (or almost all) of them wrong. This is again a hopeless situation. Phrase based approaches and sub-word models attempt to address these problems and are successful to some extent. Keeping these ideas in mind, we build the WCM for words that co-occur at least 20 times in the training set, and we exclude words which occur more than 10,000 times in the corpus. Under these assumptions, the WCM matrix can be built very fast (it took less than 4 minutes on a server with a 40-core Intel Xeon Silver 4114 CPU at 800 MHz) and the size of the uncompressed WCM file is only 44 MB. There are 1,474,792 entries in the WCM matrix, and there are only 38,502 English words in this matrix. We divide the corpus into training, development and test sets with 4,004,894, 5000 and 5037 segments respectively and train an SMT system using MOSES [4]. The WCM is computed for the training set. Then for each segment in the test set, we check the number of words (excluding very high frequency words) for which there is strong evidence in the training data.
This we do by checking if the SL word co-occurs at least 20 times with any of the TL words in the translated text. We take the percentage of words with strong evidence as a score for ranking the translations. We call these scores Direct Evidence (DE) Scores. DE Scores range from 0 to 100. We run the trained SMT system on the test data. We compute the DE Scores as described above for each segment. We pick out SL-TL pairs from the test data as well as from the generated MT outputs based on selected ranges of DE Scores. Taking the TL part of the test data as Reference, we compute BLEU scores: see Table 1. \begin{table} \begin{tabular}{|l|r|r|} \hline DE Score & No. of Segments & BLEU Score \\ \hline \(<\) 20 & 847 & 6.33 \\ \(<\) 30 & 2082 & 6.78 \\ \(<\) 40 & 3036 & 7.06 \\ \(<\) 50 & 3588 & 7.46 \\ \(\geq\) 50 & 1449 & 9.16 \\ \(\geq\) 60 & 669 & 10.49 \\ \(\geq\) 70 & 327 & 11.34 \\ \(\geq\) 80 & 237 & 10.80 \\ \hline \end{tabular} \end{table} Table 1: DE Scores vs. BLEU Scores for English-Kannada We can clearly see a positive correlation between the DE Scores we obtained and the BLEU scores, up to a threshold of 70. Manual observations also clearly showed the gradation in quality of translations correlating with the DE Scores we compute. Sentences which got high DE Scores were generally of much better translation quality compared to sentences which got a poor DE Score. Next we compute sentence-level BLEU scores and look for correlation between these BLEU scores and the DE Scores. Over 5037 segments of test data, we get a Pearson Correlation Coefficient of 0.209405. The p-value is \(<0.00001\); hence the result is significant at the typical \(p<0.05\) level. Training corpora used for building MT systems are often not available for us to experiment with. Here we take up one case where we could locate the training data as well as the MT outputs and Reference Translations. This relates to the English-Hindi SMT system developed by Piyush Dungarwal et al from IIT Bombay [2] in the Ninth Workshop on SMT, WMT-2014. The training data consists of 273,885 segment pairs, including 3,378,341 tokens in English and 3,659,840 tokens in Hindi. There are 129,909 unique word forms in English, of which only 19,100 occur 10 times or more. The total number of unique word forms in Hindi is 137,089, of which only 18,587 occur 10 times or more. In English, 30 words occur with frequency more than 10,000 and are taken as frequent words in our experiments. In Hindi, there are 33 very high frequency words. These high frequency words are excluded from WCM computations. This makes the WCM matrix smaller and saves time too. Also, very high frequency words co-occur with too many words in TL and the evidence for proper lexical transfer becomes blurred. The WCM matrix could be computed in a minute or so on an ordinary desktop computer. The WCM has 642,341 entries. This includes 242,477 pairs that co-occur 20 times or more. There are 2507 segments in each of the test set source, MT system output, and Reference Translations. We compute the DE Scores based purely on the WCM matrix, which is based only on the training corpus. We extract subsets of the MT outputs and corresponding reference translations based on the DE Score ranges. The BLEU scores are as shown in Table 2. \begin{table} \begin{tabular}{|l|r|r|} \hline DE Score & No. of Segments & BLEU Score \\ \hline \(<\) 50 & 133 & 6.00 \\ \(\geq\) 50 & 2374 & 10.36 \\ \(\geq\) 60 & 2154 & 10.61 \\ \(\geq\) 70 & 1733 & 10.93 \\ \(\geq\) 80 & 1120 & 11.69 \\ \(\geq\) 90 & 463 & 12.47 \\ \hline \end{tabular} \end{table} Table 2: DE Scores and BLEU Scores for English-Hindi Here again we see a clear gradation in BLEU scores correlating with the DE Scores: the higher the DE Score, the higher the BLEU.
The results of these preliminary experiments support our claim that the clues needed for MT QE are present in the training data itself; nothing else may be necessary. We do not even need an MT system to predict the quality of translations it will produce; just the training data is sufficient. We then calculated the DE Scores for the 4 Million segment Training Data used for building our English-Kannada SMT system. Figure 1 shows the histogram plot of DE Scores obtained. It can be observed that a significant part of the training data got DE Scores less than 50, many cases even less than 10. This can be useful in locating and reducing noise in the training data. \begin{table} \begin{tabular}{|l|r|r|} \hline DE Score & No. of Segments & BLEU Score \\ \hline \(<\) 50 & 133 & 6.00 \\ \(\geq\) 50 & 2374 & 10.36 \\ \(\geq\) 60 & 2154 & 10.61 \\ \(\geq\) 70 & 1733 & 10.93 \\ \(\geq\) 80 & 1120 & 11.69 \\ \(\geq\) 90 & 463 & 12.47 \\ \hline \end{tabular} \end{table} Table 2: DE Scores and BLEU Scores for English-Hindi Figure 1: DE-Scores for English-Kannada Training Data Set ## 4 Conclusions In this paper we hypothesize that the Parallel Corpus used for training an MT system holds clues about the quality of translations the MT system can produce. We propose a simple and direct approach to quality estimation based solely on the training data. A word co-occurrence matrix is constructed from the training corpus and used to estimate the sentence-by-sentence quality of translations. Each sentence gets a score called the DE Score, which is indicative of the quality of translation. Manual observations show that good quality translations generally tend to get higher DE Scores and poor quality translations tend to get lower scores. Our experiments reconfirm this. This simple, direct-evidence approach to MT Quality Estimation appears to hold promise. We can estimate the quality of translations even without, or before, running the MT system. We do not need any other data or resource; we only need the training corpus. We have used raw frequency counts and manually selected thresholds to decide which SL words have sufficient evidence in the training corpus for reliable lexical transfer. Instead of counting the percentage of words in the SL sentence which have enough evidence (as indicated by the co-occurrence counts), we could use the actual counts themselves to get a more fine-grained picture. We could check how many and which words in the TL sentence co-occurred how many times with each of the words in the SL sentence. We could look at the frequency counts for all the TL words that co-occur with a given SL word: which particular TL word has contributed in the given sentence pair, which other TL words have higher or lower frequency counts, how far the next more frequent or next less frequent TL word is in the WCM matrix, and so on. Co-occurrence in shorter sentences is more significant than co-occurrence in longer sentences, and this could also be factored into the score computation. DE Scores provide us a spectrum of quality grades, and since they are based on co-occurrence counts, Out of Vocabulary (OOV) words are simply cases that lie just outside the low end of this spectrum. Missing words automatically get reflected in poor DE Scores, but extra words in TL can be detected by performing a TL-to-SL WCM check. If large scale manual post-edit data such as HTER scores are available, then we can estimate the various thresholds using machine learning techniques instead of using human judgement as we have done here.
In this work we have focused on only one aspect, namely the quality of lexical transfer, to judge the quality of translations. It remains to be explored whether other aspects, such as agreement, syntactic completeness, correctness of dependency relations, and semantic coherence, can also be estimated from the training corpus. For example, a word co-occurrence matrix built only on the TL segments (where co-occurrence is defined as appearing in the same segment) may be useful in dealing with agreement and coherence issues. Sub-word models may also be incorporated.
2302.10716
Cosmologies with turning points
We explore singularity-free and geodesically-complete cosmologies based on manifolds that are not quite Lorentzian. The metric can be either smooth everywhere or non-degenerate everywhere, but not both, depending on the coordinate system. The smooth metric gives an Einstein tensor that is first order in derivatives while the non-degenerate metric has a piecewise FLRW form. On such a manifold the universe can transition from expanding to contracting, or vice versa, with the Einstein equations satisfied everywhere and without violation of standard energy conditions. We also obtain a corresponding extension of the Kasner vacuum solutions on such manifolds.
Bob Holdom
2023-02-21T15:09:42Z
http://arxiv.org/abs/2302.10716v1
# Cosmologies with turning points ###### Abstract We explore singularity-free and geodesically-complete cosmologies based on manifolds that are not quite Lorentzian. The metric can be either smooth everywhere or non-degenerate everywhere, but not both, depending on the coordinate system. The smooth metric gives an Einstein tensor that is first order in derivatives while the non-degenerate metric has a piecewise FLRW form. On such a manifold the universe can transition from expanding to contracting, or vice versa, with the Einstein equations satisfied everywhere and without violation of standard energy conditions. We also obtain a corresponding extension of the Kasner vacuum solutions on such manifolds. ## I The basic picture A fundamental premise of general relativity is that spacetime can be modelled as a 4-dimensional Lorentzian manifold. Among the defining characteristics of a Lorentzian manifold is the requirement that its metric be simultaneously smooth and non-degenerate everywhere. But the leading cosmological model based on general relativity predicts the big bang singularity, where not only the metric is degenerate, but the curvature invariants are singular. In the interest of avoiding such a curvature singularity, we shall consider relaxing the requirement that the metric be simultaneously smooth and non-degenerate. We shall make use of manifolds that are not quite Lorentzian, defined by being _either_ smooth everywhere _or_ non-degenerate everywhere, but not both simultaneously. Which of these two possibilities is realized depends on the coordinate system. Considering such manifolds from the start yields an enlarged solution space, and we shall find spatially homogeneous and isotropic solutions to the Einstein equations that avoid curvature singularities. All curvature invariants, and the components of the Einstein tensor \(G_{\mu\nu}\) as well, are nonsingular for all times in both coordinate systems. At a turning point in the evolution of the scale factor, the derivative of the scale factor vanishes. At these isolated times certain coordinate-dependent artifacts show up. When the metric at the turning point is smooth but degenerate, the degeneracy is due to the vanishing of \(g_{tt}\) and the inverse metric is singular. Components of the various curvature tensors with enough indices raised can then become singular even though the curvature invariants remain finite. When the metric at a turning point is non-degenerate but nonsmooth, due to a nonsmooth scale factor, all components of all curvature tensors remain finite. Instead, both the tensor components and the invariants can be nonsmooth at the turning point. 'Nonsmooth' shall always mean continuous with a noncontinuous first derivative. The Einstein equations are identically solved at all times in both coordinate systems. The full not-quite-Lorentzian manifold may be considered to be a collection of Lorentzian manifolds, each bounded by, but not including, the time slices where the metric is either degenerate or nonsmooth. But the proper time along a timelike geodesic, between any point and a turning point, is finite, as are the corresponding coordinate times for both coordinate systems. Thus any of these Lorentzian manifolds is geodesically incomplete. Only the full not-quite-Lorentzian manifold, where \(-\infty<t<\infty\), is geodesically complete. (We shall discuss the geodesics below.) This is another physically desirable feature that emerges for these manifolds.
The authors of [1] emphasized the possible geodesic completeness of a bouncing cosmology versus its lack thereof in inflationary cosmology [2]. They considered evolution through the singularity itself, where the scale factor vanishes, and in this way connect a contracting phase to an expanding phase. They argued that this could happen with a Weyl-invariant matter sector. In our case the turning points occur at finite values of the scale factor, the singularities are avoided altogether, and matter can be standard. We introduce the not-quite-Lorentzian spacetimes via the smooth metric \[ds^{2}=-Dd^{\prime}(\bar{t})^{2}d(\bar{t})^{u}dt^{2}+d(\bar{t})(dx^{2}+dy^{2}+dz^{2}). \tag{1}\] \(\bar{t}=t/\ell\) is a dimensionless time and the nonconstant function \(d(\bar{t})\), its derivative \(d^{\prime}(\bar{t})\equiv\partial d(\bar{t})/\partial\bar{t}\), and the positive constant \(D\) are also all dimensionless. The exponent \(u\) is another adjustable constant. This metric turns out to generate nonsingular curvature invariants for \(u>-2\) as long as \(d(\bar{t})>0\) for all \(\bar{t}\). Turning points occur at \(d^{\prime}(\bar{t})=0\) where \(g_{tt}\) vanishes and the metric becomes degenerate. The violation of a basic premise of general relativity via a degenerate metric has also been considered by other authors, as we detail in Section VI. Novel to our approach is the study of metric (1). This metric has some surprising properties that show up when solving the Einstein equations. First, the Einstein tensor turns out to involve no more than first derivatives. And second, a solution amounts to finding the value of the exponent \(u\) that is appropriate for a given equation of state of a perfect-fluid matter source. After that, the function \(d(\bar{t})\) is still free to choose, with or without turning points. After displaying these features of metric (1) in the next section, we go on to show how coordinate transformations can transform the metric into Friedmann-Lemaitre-Robertson-Walker (FLRW) form, that is, where \(g_{tt}=-1\). In Section III we give two examples of new solutions that describe nonsingular bounce and oscillating universes with a normal equation of state for matter. The transformation into FLRW form yields a representation with a piecewise set of expanding and contracting FLRW cosmologies. Propagation on the spacetime is explored via geodesics and a wave equation. In Section IV we consider the addition of spatial curvature and a more general matter content. In Section V we deviate from the main line of development to extend the Kasner vacuum solutions to those with turning points. We conclude with comments in Section VI. ## II Solutions and transformations We wish to solve the Einstein equations \(G_{\mu\nu}=8\pi GT_{\mu\nu}\), where \(T_{\mu\nu}\) is that for a perfect fluid, \[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}=\text{diag}((-g_{tt})\rho,\,g_{xx}p,\,g_{yy}p,\,g_{zz}p), \tag{2}\] where in the last equality we have gone to the cosmic rest frame. The equation of state parameter \(w\) is defined by \(p=w\rho\). The nonzero components of \(G_{\mu\nu}\) for metric (1) are \[G_{tt}=\frac{3}{4\ell^{2}}\frac{d^{\prime}(\bar{t})^{2}}{d(\bar{t})^{2}},\quad G_{xx}=G_{yy}=G_{zz}=\frac{(1+2u)}{4D\ell^{2}}\frac{1}{d(\bar{t})^{1+u}}. \tag{3}\] The components are finite and smooth everywhere, including at any turning point. There are no second derivatives and there are no differential equations to be solved.
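As a sanity check on Eq. (3), the following symbolic sketch computes the Einstein tensor of metric (1) from scratch (setting \(\ell=1\), so that \(t\) plays the role of \(\bar{t}\)); the generic Christoffel/Ricci helpers are our own and not part of the paper.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
D, u = sp.symbols('D u', positive=True)
d = sp.Function('d', positive=True)(t)

coords = [t, x, y, z]
g = sp.diag(-D * sp.diff(d, t)**2 * d**u, d, d, d)   # metric (1) with ell = 1
ginv = g.inv()

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{as} (d_b g_{sc} + d_c g_{sb} - d_s g_{bc})
Gamma = [[[sum(ginv[a, s] * (sp.diff(g[s, c], coords[b]) + sp.diff(g[s, b], coords[c])
                             - sp.diff(g[b, c], coords[s])) / 2 for s in range(4))
           for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor: R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                        + Gamma^a_{as} Gamma^s_{bc} - Gamma^a_{cs} Gamma^s_{ba}
def ricci(b, c):
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][b][a], coords[c])
        + sum(Gamma[a][a][s] * Gamma[s][b][c] - Gamma[a][c][s] * Gamma[s][b][a]
              for s in range(4))
        for a in range(4)))

Ric = sp.Matrix(4, 4, lambda b, c: ricci(b, c))
R = sp.simplify(sum(ginv[a, b] * Ric[a, b] for a in range(4) for b in range(4)))
G = (Ric - R / 2 * g).applyfunc(sp.simplify)

print(G[0, 0])   # should reduce to 3*d'(t)**2/(4*d(t)**2): first order only
print(G[1, 1])   # should reduce to (1 + 2*u)/(4*D*d(t)**(1 + u))
```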
The choice \(u=(3w-1)/2\) is sufficient for a solution, for then \(G_{\mu\nu}=8\pi GT_{\mu\nu}\) with (2) immediately gives \[\rho(\bar{t})=\frac{1}{8\pi G\ell^{2}}\frac{3}{4D}\frac{1}{d(\bar{t})^{\frac{3}{2}(1+w)}},\quad p(\bar{t})=w\rho(\bar{t}). \tag{4}\] Note that the particular power of \(d(\bar{t})\) in (4) is the standard result that also follows from energy conservation. We see that the Einstein equations are solved for metric (1) without the occurrence of the Friedmann equations and that \(d(\bar{t})\) is still free to choose. Curvature invariants also have a simple, non-derivative dependence on \(d(\bar{t})\), as illustrated by these two, \[R=\frac{1}{\ell^{2}}\frac{3}{4D}(1-3w)\frac{1}{d(\bar{t})^{\frac{3}{2}(1+w)}},\qquad R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}=\frac{1}{\ell^{4}}\frac{3}{16D^{2}}(9w^{2}+6w+5)\frac{1}{d(\bar{t})^{3(1+w)}}. \tag{5}\] Thus the physical quantities in (4) and (5) and others are well behaved as long as \(d(\bar{t})>0\), whether or not it has turning points. We now consider the behaviour of metric (1) under a coordinate transformation of the form \(\bar{t}=f(\bar{t}_{\rm new})\), where the function \(f\) is one-to-one and \(\bar{t}_{\rm new}=t_{\rm new}/\ell\). Defining \(d_{\rm new}(\bar{t}_{\rm new})=d(f(\bar{t}_{\rm new}))\), the new metric is \[ds^{2}=-Dd^{\prime}_{\rm new}(\bar{t}_{\rm new})^{2}d_{\rm new}(\bar{t}_{\rm new})^{u}dt_{\rm new}^{2}+d_{\rm new}(\bar{t}_{\rm new})(dx^{2}+dy^{2}+dz^{2}). \tag{6}\] The derivative is now with respect to \(\bar{t}_{\rm new}\), and so we can say that the metric is form invariant under such transformations. Note that the factors of \(\partial\bar{t}/\partial\bar{t}_{\rm new}\) in the transformation of the metric are absorbed by the transformation of the square of the derivative in the metric. If the function \(d\) is one-to-one then we can go further and choose the transformation \(\bar{t}=d^{-1}(d_{\rm new}(\bar{t}_{\rm new}))\). We specify that \(d_{\rm new}(\bar{t}_{\rm new})\) is one-to-one and has the same range as \(d\), but otherwise it is free to choose. If the function \(d\) has turning points and is thus many-to-one, we can make a transformation as just described over each range of \(\bar{t}\) where \(d\) is one-to-one, that is, in a piecewise fashion. Combining these transformations yields a full transformation \(\bar{t}=f(\bar{t}_{\rm new})\) that is one-to-one and continuous as both \(\bar{t}\) and \(\bar{t}_{\rm new}\) range from \(-\infty\) to \(\infty\). We shall be interested in making such a coordinate transformation that takes a solution in the form of metric (1) with \(u=(3w-1)/2\) to FLRW form. We just need to find a solution to \(g_{tt}=-1\), that is, \[Dd^{\prime}_{\rm new}(\bar{t}_{\rm new})^{2}d_{\rm new}(\bar{t}_{\rm new})^{(3w-1)/2}=1, \tag{7}\] subject to the constraints on \(d_{\rm new}\) as we have specified. We can label such a solution by \(a(\bar{t}_{\rm new})^{2}\equiv d_{\rm new}(\bar{t}_{\rm new})\), thus giving the metric \[ds^{2}=-dt_{\rm new}^{2}+a(\bar{t}_{\rm new})^{2}(dx^{2}+dy^{2}+dz^{2}). \tag{8}\] \(a(\bar{t}_{\rm new})\) is then the standard FLRW scale factor and it satisfies the standard Friedmann equations. We have transformed from a degenerate metric to a non-degenerate metric, and this requires a transformation \(\bar{t}=f(\bar{t}_{\rm new})\) such that \(\partial\bar{t}/\partial\bar{t}_{\rm new}\to\infty\) at each turning point. As we shall see, the result is that \(a(\bar{t}_{\rm new})\) is nonsmooth at the turning points.
## III Cosmology and propagation We now consider an explicit example of a solution where \(d(\bar{t})\) has a turning point. We take \[d(\bar{t})=s+\bar{t}^{2},\quad s>0, \tag{9}\] for \(-\infty<\bar{t}<\infty\). This gives a universe that is contracting for negative times, expanding for positive times, and that bounces off its minimum finite size at \(\bar{t}=0\). The bounce is smooth and nonsingular. To go over to FLRW form we solve (7) with the conditions \(d_{\rm new}(\pm\infty)=\infty\) and \(d_{\rm new}(0)=s\). This results in \[a(\bar{t}_{\rm new})^{2}=\left(s^{v}+\tfrac{v}{\sqrt{D}}|\bar{t}_{\rm new}|\right)^{\tfrac{1}{v}},\quad v=\frac{3w+3}{4}. \tag{10}\] We could have started with any \(d(\bar{t})\) with a single turning point at \(\bar{t}=0\) with \(d(0)=s\) and \(d(\pm\infty)=\infty\), since they are all solutions; and they all yield (10) in FLRW coordinates. The value of \(v\) here is determined by the desired value of the equation of state parameter \(w\), for example \(w=1/3\) or \(w=0\) for radiation or matter domination, and where any \(w>-1\) satisfies the null energy condition. We thus have a standard FLRW expanding phase that starts at \(t_{\rm new}=0\) when the universe has finite size. This is preceded at negative times by a FLRW contracting phase. The second example is oscillatory, \[d(\bar{t})=s+1-\cos(\bar{t}),\quad s>0. \tag{11}\] To transform to FLRW form we solve (7), again with \(d_{\rm new}(0)=s\), but now with the constraint that \(d_{\rm new}(\bar{t}_{\rm new})\) only increases to \(2+s\) before turning back down. The result for one period of the oscillation is \[a(\bar{t}_{\rm new})^{2}=\begin{cases}\left(s^{v}+\tfrac{v}{\sqrt{D}}\bar{t}_{\rm new}\right)^{\tfrac{1}{v}}&0<\bar{t}_{\rm new}<\bar{t}_{\rm half}\\ \left((2+s)^{v}-\tfrac{v}{\sqrt{D}}(\bar{t}_{\rm new}-\bar{t}_{\rm half})\right)^{\tfrac{1}{v}}&\bar{t}_{\rm half}<\bar{t}_{\rm new}<2\bar{t}_{\rm half}\end{cases} \tag{12}\] \(\bar{t}_{\rm half}\) is the value of \(\bar{t}_{\rm new}\) at half the period, \[\bar{t}_{\rm half}=\frac{\sqrt{D}}{v}\left((2+s)^{v}-s^{v}\right). \tag{13}\] Each period of the original oscillation is now represented by a standard FLRW expanding phase followed by a FLRW contracting phase. The universe alternates between growing up to size \(a=\sqrt{2+s}\) and then shrinking down to size \(a=\sqrt{s}\). Both phases last the same amount of time, which is \(\bar{t}_{\rm half}\ell\). For both examples the expanding and contracting FLRW solutions meet at turning points where \(a^{\prime}(\bar{t}_{\rm new})\) is not continuous. Nevertheless, FLRW time is the standard cosmic time, and so when the turning point arrives, a cosmic expansion will seem to instantaneously change to a cosmic contraction, or vice versa. The acceleration \(a^{\prime\prime}(\bar{t}_{\rm new})\) (which as usual is negative when \(w>-\tfrac{1}{3}\)) remains continuous, as does \(a^{\prime}(\bar{t}_{\rm new})^{2}\). Thus the substitution of these solutions into the standard Friedmann equations introduces no discontinuity, that is, the Einstein tensor \(G_{\mu\nu}\) remains continuous. The matter energy density and pressure are similarly nonsmooth at turning points (unlike in (4) where they were smooth) in such a way that the equation of state \(p(\bar{t}_{\rm new})=w\rho(\bar{t}_{\rm new})\) continues to be satisfied at all times.
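A small numerical sketch of the transformation to FLRW form for the bounce example: on the expanding branch, Eq. (7) reduces to \(d^{\prime}_{\rm new}=d_{\rm new}^{1-v}/\sqrt{D}\), which we integrate and compare with the closed form (10). The parameter values are illustrative choices only.

```python
import numpy as np
from scipy.integrate import solve_ivp

w, D, s = 1/3, 1.0, 0.5          # radiation domination, arbitrary D, bounce size s
v = (3*w + 3) / 4                # exponent v from Eq. (10)

# Eq. (7) on the expanding branch: d'_new = d_new**(1 - v) / sqrt(D), d_new(0) = s
ts = np.linspace(0.0, 10.0, 201)
sol = solve_ivp(lambda t, d: d**(1 - v) / np.sqrt(D), (0.0, 10.0), [s],
                t_eval=ts, rtol=1e-10, atol=1e-12)

a_squared = (s**v + (v / np.sqrt(D)) * ts)**(1 / v)   # Eq. (10) for t_new >= 0
print(np.max(np.abs(sol.y[0] - a_squared)))           # agreement to ~1e-8
```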
We now consider the propagation of particles in these spacetimes. The particle energy is defined by \(E=-u_{\mu}(dx^{\mu}/d\lambda)\) where \(\lambda\) is an affine parameter in the case of massless particles, or the proper time in the case of massive particles (in which case \(E\) is replaced by \(E/m\)). We observe the particle in the cosmic rest frame where only \(u_{0}=-\sqrt{-g_{tt}}\) is nonvanishing. If the particle travels in the \(x\) direction we make use of the Killing vector \(K=\partial_{x}\), or \(K_{\mu}=(0,g_{xx},0,0)\), and the relations \[K_{\mu}\frac{dx^{\mu}}{d\lambda}=\kappa,\quad g_{\mu\nu}\frac{dx^{\mu}}{d\lambda}\frac{dx^{\nu}}{d\lambda}=-\epsilon, \tag{14}\] to obtain the result \[\frac{dt}{d\lambda}=\sqrt{\frac{\kappa^{2}+\epsilon g_{xx}}{(-g_{tt})g_{xx}}}, \tag{15}\] where \(\epsilon=0\) or \(1\) and the constant \(\kappa=p\) or \(p/m\) for massless or massive particles respectively. This relation can be used in either coordinate system; in FLRW coordinates \(t\to t_{\rm new}\). The dependence on \(g_{tt}\) means that \(dt/d\lambda\) diverges at a turning point for metric (1), while it remains finite for FLRW coordinates. But the dependence on \(g_{tt}\) cancels in the definition of energy, and in particular for a massless particle we have \(E\propto d(\bar{t})^{-\frac{1}{2}}\) and \(E\propto a(\bar{t}_{\rm new})^{-1}\) for the two coordinate systems respectively. The difference is that \(d(\bar{t})\) is smooth, and so the transition between redshifting and blueshifting at a turning point is smooth for metric (1), while it is nonsmooth in FLRW coordinates. As we have seen, this mirrors the behaviour of other physical quantities. The apparent coordinate velocities \(dx/dt\) and \(dx_{\rm new}/dt_{\rm new}\) can also be obtained from the above. For \(dx/dt\), the velocity increases close to the turning point but then drops to be instantaneously zero at the turning point. \(dx/dt\) is symmetric around the turning point and is nonsmooth at the turning point. For \(dx_{\rm new}/dt_{\rm new}\), the velocity picks up a small nonsmooth component proportional to \(|t_{\rm new}-t_{\rm new}^{\rm tp}|/\ell\) close to the turning point, where we have reinstated \(\ell\). This nonsmooth behaviour can be negligible since \(\ell\) is related to the size of the universe. We may also consider the scalar wave equation \(\Box\phi(\bar{t},X)=0\) where \(X=(x,y,z)\). This equation with metric (1) implies that at the time of a turning point \(\bar{t}^{\rm tp}\) we must have \(\partial_{\bar{t}}\phi(\bar{t},X)|_{\bar{t}=\bar{t}^{\rm tp}}=0\), and as a result of this, the scalar quantity \(g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\) remains finite. In FLRW coordinates this scalar is obviously finite, and instead, like other invariants, it is nonsmooth. In these coordinates the wave itself will have a nonsmooth component that is proportional to \(|t_{\rm new}-t_{\rm new}^{\rm tp}|/\ell\) close to the turning point. Again this is typically negligible compared to the normal time variation of the wave. ## IV Generalization Thus far we have described cosmologies where \(T_{\mu\nu}\) has a single component described by \(p=w\rho\), for some \(w\). We shall generalize metric (1) in this section, so as to accommodate the more realistic situation of having several components contribute to \(T_{\mu\nu}\). First let us show another need for this generalization by modifying metric (1) to include a spatial curvature parameterized by \(k\).
This is done in the usual way via spherical coordinates as follows, \[ds^{2}=-Dd^{\prime}(\bar{t})^{2}d(\bar{t})^{u}dt^{2}+d(\bar{t})\left(\frac{1}{1-kr^{2}}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin(\theta)^{2}d\phi^{2}\right). \tag{16}\] Now when we take \(u=(3w-1)/2\) and calculate \(-G_{tt}/g_{tt}\) and \(G_{rr}/g_{rr}\) we find two contributions to each. One corresponds to the original equation of state \(p=w\rho\) as before. The other is proportional to \(k\) and corresponds to an equation of state \(p=-\frac{1}{3}\rho\), and thus \(\rho(\bar{t})\propto 1/d(\bar{t})\) for this component. This corresponds to the spatial curvature term that appears in the Einstein equations when the FLRW metric is used. In our case the Einstein equations are algebraic and they are no longer solved (that is, for arbitrary \(d(\bar{t})\) and assuming that the actual matter has \(p=w\rho\) with \(w\neq-\frac{1}{3}\)). In other words the additional effective contribution to \(T_{\mu\nu}\) spoils the solution. Before turning to the required generalization of metric (1), let us briefly consider another effective contribution to \(T_{\mu\nu}\). This is due to corrections to the field equations when curvature-squared terms are added to the action. Metric (1) has a vanishing Weyl tensor and a vanishing Bach tensor, and so a Weyl-squared term gives no corrections. An \(R^{2}\) term corrects the field equations with a term proportional to \[\left(R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R+g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right)R. \tag{17}\] By evaluating this term with metric (1) with \(u=(3w-1)/2\), we find that it corresponds to a new effective contribution to \(T_{\mu\nu}\) with \[\rho(\bar{t})\propto\frac{1}{\ell^{4}}\frac{5+2\tilde{w}-3\tilde{w}^{2}}{d(\bar{t})^{\frac{3}{2}(1+\tilde{w})}},\quad p(\bar{t})=\tilde{w}\rho(\bar{t}), \tag{18}\] where \(\tilde{w}=1+2w\). This contribution has \(\rho(\bar{t})\propto 1/d(\bar{t})^{3(1+w)}\) as compared to (4), and there is a suppression factor of order \(G/\ell^{2}\). So let us turn to the generalization needed to deal with a \(T_{\mu\nu}\) having several components, each with its own equation of state. If we consider an effective equation of state for the complete \(T_{\mu\nu}\), it is sufficient to allow this to depend on the size of the universe. We use \[p(\bar{t})=w(d(\bar{t}))\rho(\bar{t}), \tag{19}\] where \(p\) and \(\rho\) now include all contributions. We generalize metric (1) to \[ds^{2}=-Dd^{\prime}(\bar{t})^{2}F(d(\bar{t}))dt^{2}+d(\bar{t})(dx^{2}+dy^{2}+dz^{2}), \tag{20}\] where we now have a function \(F\) instead of a power of \(d(\bar{t})\). It is useful to define \[\mathcal{D}(\bar{t})=\exp\left(\int^{\bar{t}}w(d(\bar{t}))\frac{d^{\prime}(\bar{t})}{d(\bar{t})}\,d\bar{t}\right)=\exp\left(\int^{d(\bar{t})}w(\bar{d})\,d\ln(\bar{d})\right). \tag{21}\] \(\mathcal{D}(\bar{t})\to d(\bar{t})^{w}\) when \(w(d(\bar{t}))\) is simply a constant \(w\). We find that \(G_{\mu\nu}=8\pi GT_{\mu\nu}\) is solved with \(T_{\mu\nu}\) incorporating the equation of state in (19) when \[F(d(\bar{t}))=\mathcal{D}(\bar{t})^{\frac{3}{2}}d(\bar{t})^{-\frac{1}{2}}. \tag{22}\] \(G_{\mu\nu}\) is still first order in derivatives and these solutions can once again be such that \(d(\bar{t})\) has turning points.
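A quick symbolic consistency check that the generalized lapse (22) reduces to the power \(d^{(3w-1)/2}\) of metric (1) when \(w\) is constant; the variable names are our own.

```python
import sympy as sp

w, dbar = sp.symbols('w dbar', positive=True)
Dcal = sp.exp(sp.integrate(w / dbar, dbar))   # Eq. (21) for constant w -> dbar**w
F = Dcal**sp.Rational(3, 2) / sp.sqrt(dbar)   # Eq. (22)
print(sp.simplify(F))                          # -> dbar**(3*w/2 - 1/2) = d**((3w-1)/2)
```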
The previous results in (4-5) can be updated for the generalized metric (20) with (22), \[\rho(\bar{t})=\frac{1}{8\pi G\ell^{2}}\frac{3}{4D}\frac{1}{d(\bar{t})^{\frac{3}{2}}}\frac{1}{\mathcal{D}(\bar{t})^{\frac{3}{2}}},\qquad R=\frac{1}{\ell^{2}}\frac{3}{4D}(1-3w(d(\bar{t})))\frac{1}{d(\bar{t})^{\frac{3}{2}}}\frac{1}{\mathcal{D}(\bar{t})^{\frac{3}{2}}},\qquad R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}=\frac{1}{\ell^{4}}\frac{3}{16D^{2}}(9w(d(\bar{t}))^{2}+6w(d(\bar{t}))+5)\frac{1}{d(\bar{t})^{3}}\frac{1}{\mathcal{D}(\bar{t})^{3}}. \tag{23}\] The metric (20) continues to be form invariant under the coordinate transformations we have discussed. The transformation to FLRW form is once again of the form \(\bar{t}=d^{-1}(a(\bar{t}_{\rm new})^{2})\) for the finite ranges of \(t\) where \(d(\bar{t})\) is one-to-one. Explicitly finding \(a(\bar{t}_{\rm new})^{2}\) amounts to finding a solution to \(g_{tt}=-1\) using (20) and (22). Finding the scale factor this way is entirely equivalent to starting with the FLRW metric and using the Einstein equations to solve for the scale factor using the equation of state in (19). ## V Extending Kasner Similar to the way not-quite-Lorentzian manifolds extend the solution space for homogeneous and isotropic metrics, they can also extend the solution space for metrics that are homogeneous but not isotropic. The relevant known solutions of this type are the Kasner solutions of the vacuum Einstein equations. Let us give the new solutions first; the following smooth metric solves \(G_{\mu\nu}=0\) for arbitrary choice of the function \(d(\bar{t})\) and the constants \(u_{1}\) and \(u_{2}\), \[ds^{2}=-Dd^{\prime}(\bar{t})^{2}d(\bar{t})^{2\nu-2}dt^{2}+d(\bar{t})^{-\frac{u_{2}u_{1}}{u_{1}+u_{2}}}dx^{2}+d(\bar{t})^{u_{1}}dy^{2}+d(\bar{t})^{u_{2}}dz^{2},\quad\nu=\frac{u_{1}^{2}+u_{2}u_{1}+u_{2}^{2}}{2(u_{1}+u_{2})}. \tag{24}\] At turning points of \(d(\bar{t})\), the metric becomes degenerate. In the special case of a power law \(d(\bar{t})=\bar{t}^{\alpha}\) and \(D=1/\alpha^{2}\) we recover the original form of the Kasner metric (see eq. (13.51) in [3]), \[ds^{2}=-\bar{t}^{2a_{4}}dt^{2}+\bar{t}^{2a_{1}}dx^{2}+\bar{t}^{2a_{2}}dy^{2}+\bar{t}^{2a_{3}}dz^{2},\qquad a_{1}=-\frac{\alpha}{2}\frac{u_{2}u_{1}}{u_{1}+u_{2}},\quad a_{2}=\frac{\alpha}{2}u_{1},\quad a_{3}=\frac{\alpha}{2}u_{2},\quad a_{4}=\alpha\nu-1.\] These exponents satisfy the Kasner conditions, \[a_{1}+a_{2}+a_{3}=a_{4}+1,\quad a_{1}^{2}+a_{2}^{2}+a_{3}^{2}=(a_{4}+1)^{2}. \tag{25}\] The quadratic curvature invariants from metric (24) are \[R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}=C_{\mu\nu\rho\sigma}C^{\mu\nu\rho\sigma}=\frac{1}{\ell^{4}}\frac{2\nu}{D^{2}}\frac{u_{1}^{2}u_{2}^{2}}{u_{1}+u_{2}}d(\bar{t})^{-4\nu}. \tag{26}\] These curvatures remain bounded as long as \(d(\bar{t})\) is positive and bounded, for either sign of \(\nu\). Since we are free to choose \(d(\bar{t})\) in metric (24) and still have a solution, we can thus extend the Kasner solutions to solutions that are everywhere nonsingular. For example we can again choose a bounce solution as in (9) (here we need \(\nu>0\) since \(d(\bar{t})\) is unbounded from above in this case) or an oscillating solution as in (11). Metric (24) is again form invariant under transformations of the form \(\bar{t}=f(\bar{t}_{\rm new})\), such that \(d(\bar{t})\) is simply replaced by \(d_{\rm new}(\bar{t}_{\rm new})=d(f(\bar{t}_{\rm new}))\) in the transformed metric.
It is standard practice to transform the Kasner metric into a form where \(g_{tt}=-1\), and we can do the same for metric (24). The new function \(d_{\rm new}(\bar{t}_{\rm new})\) in this case must be a solution of \[Dd^{\prime}_{\rm new}(\bar{t}_{\rm new})^{2}d_{\rm new}(\bar{t}_{\rm new})^{2\nu-2}=1. \tag{27}\] The solution is \[d_{\rm new}(\bar{t}_{\rm new})=\left(c\pm\tfrac{\nu}{\sqrt{D}}\bar{t}_{\rm new}\right)^{\frac{1}{\nu}}, \tag{28}\] for an arbitrary constant \(c\), and so the transformed metric is \[ds^{2}=-dt_{\rm new}^{2}+(c\pm\tfrac{\nu}{\sqrt{D}}\bar{t}_{\rm new})^{2p_{1}}dx^{2}+(c\pm\tfrac{\nu}{\sqrt{D}}\bar{t}_{\rm new})^{2p_{2}}dy^{2}+(c\pm\tfrac{\nu}{\sqrt{D}}\bar{t}_{\rm new})^{2p_{3}}dz^{2}, \tag{29}\] \[p_{1}=-\frac{u_{1}u_{2}}{u_{1}+u_{2}}\frac{1}{2\nu},\quad p_{2}=\frac{u_{1}}{2\nu},\quad p_{3}=\frac{u_{2}}{2\nu}.\] These exponents satisfy the Kasner relations, \[p_{1}+p_{2}+p_{3}=1,\quad p_{1}^{2}+p_{2}^{2}+p_{3}^{2}=1, \tag{30}\] and thus we have ended up with the common form of the Kasner metric. The result in (28) assumes that the original \(d(\bar{t})\) is one-to-one. If it is many-to-one, then as before, the transformation must be done in a piecewise fashion over every subrange of \(\bar{t}\) where \(d(\bar{t})\) is one-to-one. This will result in a set of expansions and contractions, each of the form in (28), that meet at turning points where the evolution, as described by the complete \(d_{\rm new}(\bar{t}_{\rm new})\), is nonsmooth. In this section we are dealing with the vacuum Einstein equations, and these equations are identically solved by the nonsmooth metric at all times, just as they are for (24). Thus we have found the analog of Kasner solutions on a not-quite-Lorentzian manifold, where they can be singularity free and geodesically complete. The Kasner solutions are such that the dependence on \(t\) can be traded for any other coordinate, and the sign of each component of the metric is free to choose. This is also true for our extension of the Kasner metric, and so analogous alterations to the metric (24) will give solutions such as \[ds^{2}=-a(\bar{x})^{-\frac{u_{2}u_{1}}{u_{1}+u_{2}}}dt^{2}+Aa^{\prime}(\bar{x})^{2}a(\bar{x})^{2\nu-2}dx^{2}+a(\bar{x})^{u_{1}}dy^{2}+a(\bar{x})^{u_{2}}dz^{2}. \tag{31}\]
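The Kasner relations (30) can be confirmed symbolically for arbitrary \(u_{1},u_{2}\); a quick sketch:

```python
import sympy as sp

u1, u2 = sp.symbols('u1 u2', positive=True)
nu = (u1**2 + u1*u2 + u2**2) / (2*(u1 + u2))   # from Eq. (24)
p1 = -u1*u2 / ((u1 + u2) * 2 * nu)             # exponents of Eq. (29)
p2 = u1 / (2*nu)
p3 = u2 / (2*nu)
print(sp.simplify(p1 + p2 + p3 - 1))           # -> 0
print(sp.simplify(p1**2 + p2**2 + p3**2 - 1))  # -> 0
```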
## VI Comments We return to a universe that is close to being homogeneous and isotropic. Depending on how small the universe was at the last turning point, the universe could contain structure that is older than that time. Also, entropy tends to increase from one phase to the next [4]. These complications were not present in our discussion of Section III, where we had a single-component perfect fluid and a time evolution that was symmetric around each turning point. We generalized the equation of state parameter to be dependent on the size of the universe in Section IV, and a further generalization would be to assume that this function of size is different for each phase, and in particular different between expanding and contracting phases. Work on the bounce and oscillating universes has a long history; see [5] for a historical review and [6] for some modern motivation. We have not discussed the stability of our solutions. But when viewed in FLRW coordinates, our solutions are locally just the standard FLRW solutions, up to a set of times of measure zero. The matter at all times satisfies standard energy conditions. It is thus difficult to see how instabilities can occur. Stability has been more of a concern for attempts to describe a nonsingular bounce cosmology on a Lorentzian manifold, where some violation of the null energy condition during the bounce seems necessary. Constructions involving nontrivial scalar field dynamics have been proposed as a way to solve that stability problem [7]. Something similar to our bounce cosmology, also based on a metric degenerate at \(t=0\), has been studied in [8] (and in references therein). In that approach, the FLRW metric is modified by replacing \(g_{tt}=-1\) by the ansatz \(g_{tt}=-t^{2}/(t^{2}+b^{2})\) for some constant \(b\), and then obtaining the scale factor \(a(t)\) via the Einstein equations that now depend on \(b\). The transformation of the metric to standard FLRW form was also obtained; the new time coordinate is not continuous, jumping from a value of \(-b\) to a value of \(b\) as the turning point is traversed. These two times are then identified. Perturbations around the metric in the modified FLRW form were studied in [9] with apparently acceptable results. Rather than stability, our solutions raise the issue of determinism. We have a metric that provides cosmological solutions to the Einstein equations for any smooth and positive \(d(\bar{t})\). Some such function could in principle have a set of turning points that occur randomly as a function of time, and such a cosmology would randomly switch between expansion and contraction. This could be viewed as an indeterminism in the cosmic evolution as described by the Einstein equations, when spacetime is a not-quite-Lorentzian manifold. There may be implications for a quantum theory. It was argued in [10] that the path integral for quantum gravity should at least include all metrics with a finite action, whether or not they are degenerate at isolated points. This reference also observed what we have noted, that degeneracy at isolated points still allows the Einstein equations to be solved everywhere and it still allows curvature invariants and other scalars to be finite everywhere. The focus of that work was on degenerate metrics that could lead to topology change, but the metric we have investigated does not appear to be of that type.
2301.09717
Reconfigurable Intelligent Surface Aided Amplitude- and Phase-Modulated Downlink Transmission
New reconfigurable intelligent surface (RIS) based amplitude and phase modulation schemes are proposed as an evolution of the phase-only modulation schemes available in the literature. Explicitly, both the amplitude-phase shift keying (A-PSK) and quadrature amplitude-phase shift keying (QA-PSK) are conceived, where the RIS is assumed to be part of a transmitter to deliver information to the multi-antenna aided downlink receiver. In the proposed design, the RIS is partitioned into multiple blocks, and the information bits are conveyed by controlling both the ON-OFF state and the phase shift of the RIS elements in each block. Since the propagation paths spanning from each RIS block to the receiver can be coherently combined as a benefit of appropriately configuring the phase of the RIS elements, the received signal constellations can be designed by controlling both the ON-OFF pattern of the RIS blocks as well as the phase shift of the RIS elements. Both the theoretical analysis and the simulation results show that our proposed RIS-aided modulation schemes outperform the state-of-the-art RIS-based PSK modulation both in terms of its discrete-input-continuous-output memoryless channel (DCMC) capacity and its symbol error probability, especially in the high signal-to-noise-ratio (SNR) region, when considering realistic finite resolution RIS phase shifts.
Qingchao Li, Mohammed El-Hajjar, Ibrahim Hemadeh, Arman Shojaeifard, Alain A. M. Mourad, Lajos Hanzo
2023-01-23T20:47:06Z
http://arxiv.org/abs/2301.09717v1
# Reconfigurable Intelligent Surface Aided Amplitude- and Phase-Modulated Downlink Transmission ###### Abstract New reconfigurable intelligent surface (RIS) based amplitude and phase modulation schemes are proposed as an evolution of the phase-only modulation schemes available in the literature. Explicitly, both the amplitude-phase shift keying (A-PSK) and quadrature amplitude-phase shift keying (QA-PSK) are conceived, where the RIS is assumed to be part of a transmitter to deliver information to the multi-antenna aided downlink receiver. In the proposed design, the RIS is partitioned into multiple blocks, and the information bits are conveyed by controlling both the ON-OFF state and the phase shift of the RIS elements in each block. Since the propagation paths spanning from each RIS block to the receiver can be coherently combined as a benefit of appropriately configuring the phase of the RIS elements, the received signal constellations can be designed by controlling both the ON-OFF pattern of the RIS blocks as well as the phase shift of the RIS elements. Both the theoretical analysis and the simulation results show that our proposed RIS-aided modulation schemes outperform the state-of-the-art RIS-based PSK modulation both in terms of its discrete-input-continuous-output memoryless channel (DCMC) capacity and its symbol error probability, especially in the high signal-to-noise-ratio (SNR) region, when considering realistic finite resolution RIS phase shifts. Reconfigurable intelligent surfaces (RIS), amplitude-phase modulation, channel capacity, symbol error probability. ## I Introduction Reconfigurable intelligent surfaces (RIS) are capable of beneficially reconfiguring the wireless environment by deploying a large number of passive reflecting elements for suitably adjusting the phase shift and even potentially the amplitude of the impinging signals [1, 2, 3]. Furthermore, RISs may also act as a transmitter relying on a single RF chain, where the information is conveyed by appropriately configuring the reflection coefficients of the passive RIS elements. This has promising applications in wireless communications as a benefit of its extremely low hardware complexity compared to conventional MIMO systems [4, 5, 6, 7, 8, 9, 10]. For the sake of reducing the RIS configuration complexity, most published work considers the amplitudes of RIS elements to be fixed and the signals are only manipulated by controlling the RIS phase shifts [4, 5, 6, 7, 8, 9, 10]. Therefore, in this case, phase shift keying (PSK) modulation can be readily realized using RISs, since the PSK signals have a constant envelope. Specifically, the phase shift of each RIS element is adjusted by taking into account the corresponding channel phase of the link spanning from the RIS to the receiver for maximizing the channel gain, where additionally an \(M\)-level phase shift may be imposed on the signals reflected from all RIS elements for creating a virtual \(M\)-ary PSK signal constellation [4]. In [5], Basar _et al._ proposed an amalgamated blind access point and RIS modulation scheme capable of operating without channel state information (CSI), where a binary phase shift of 0 and \(\pi\) is imposed on all RIS elements to create a binary phase shift keying constellation. However, this was attained at the cost of a certain performance loss.
In [6], the RIS was partitioned into two blocks, and the classic Alamouti scheme was employed based on configuring the phase shift of the RIS elements, with the information mapped to the virtual \(M\)-PSK symbols. Further improvements can be achieved by exploiting the fact that in quadrature amplitude modulation (QAM) the amplitudes of the signals convey extra information, but it is not intuitive at all how we can intrinsically amalgamate QAM with a RIS-based transmitter via the above methods relying on the constant envelope constraint. In [7], Tang _et al._ conceived a high-order QAM constellation based on independently controlling the amplitude and phase shift of each RIS element by introducing a non-linear modulation technique under the constraint of a constant envelope, which was however quite complex. In [8], Basar constructed a RIS-based single-RF transmitter relying either on space shift keying or on spatial modulation (SM). Explicitly, the signals radiated from the RF-chain are unmodulated and information is only conveyed to the specific receiver antenna (RA). The phase shift of each RIS element is configured to design the passive beamforming from the RIS to the selected RA. To further increase the throughput, Yuan _et al._ [9] proposed a RIS-aided receiver-side quadrature reflecting modulation scheme, where the RIS is partitioned into two halves associated with the in-phase and quadrature components. Then the information is conveyed via each half of the RIS to form a beam focussed on a specific antenna at the receiver. However, the spatial modulation philosophy was applied at the user equipment side, which increased their receiver complexity. In our context, the RIS is deployed as a transmitter, and we propose a pair of new RIS-based amplitude-phase modulation schemes, namely the amplitude-phase shift keying (A-PSK) and quadrature amplitude-phase shift keying (QA-PSK). Explicitly, our contributions are as follows: * We partition the RIS into multiple blocks, where the information is conveyed based on both the ON-OFF state and on the phase shift of each block, which is similar to the concept of SM for MIMO systems in [11]. Furthermore, since the phase of the RIS elements in each block can be beneficially configured for coherently combining the fading channels, the received signal constellation can be conveniently controlled, which is different from conventional SM, where the received signal constellation cannot be controlled owing to the random fast fading. Furthermore, the maximum likelihood (ML) detector is derived for our proposed schemes. * Both the discrete-input-continuous-output memoryless channel (DCMC) capacity and the symbol error probability (SEP) of our proposed schemes are derived. Our simulation results reveal that our arrangement outperforms the state-of-the-art (SoA) RIS-based PSK, especially in high rate transmission and for realistic finite RIS phase shift resolution.
_Notations:_\((\cdot)^{\mathrm{T}}\) and \((\cdot)^{\mathrm{H}}\) represent the transpose and Hermitian transpose operation, respectively, \(\mathbb{C}^{m\times n}\) denotes the space of \(m\times n\) complex-valued matrices, \(\mathbf{I}_{n}\) represents the \(n\times n\) identity matrix, \(\mathbf{0}_{n}\) and \(\mathbf{1}_{n}\) are the \(n\times 1\) vectors with all elements being 0 and 1, respectively, \(\mathcal{R}(\mathbf{a})\) and \(\mathcal{I}(\mathbf{a})\) represent the real and imaginary parts of the complex vector \(\mathbf{a}\), respectively, \(f_{X}(x)\) is the probability density function (PDF) of a random variable \(X\), a complex Gaussian random vector with mean \(\mathbf{a}\) and covariance matrix \(\boldsymbol{\Sigma}\) is denoted as \(\mathcal{CN}(\mathbf{a},\boldsymbol{\Sigma})\), and \(\mathbb{E}(X)\) represents the mean of the random variable \(X\). ## II System Model As in [4, 5, 6, 7, 8, 9, 10], the RIS is configured as the low-complexity transmitter shown in Fig. 1, where a single RF chain generates the unmodulated carrier of wavelength \(\lambda\) impinging on the \(N\)-element passive RIS. The RIS controller adjusts the phase shift of the RIS elements according to the baseband information and the CSI, where the information is conveyed by the specific RIS phase pattern configuration. The carrier wave impinging on the RIS is then reflected to the \(K\)-antenna single-user receiver. Since the transmitter RF generator of Fig. 1 is close to the RIS, the RIS can be viewed as part of the transmitter, and thus the fading effects between the RF generator and the RIS can be ignored [4, 5, 6, 7, 8, 9, 10]. We denote the channel between the RIS and the receiver as \(\mathbf{H}\in\mathbb{C}^{K\times N}\), and \(\mathbf{H}=[\mathbf{h}_{1}^{\mathrm{H}},\mathbf{h}_{2}^{\mathrm{H}},\cdots,\mathbf{h}_{K}^{\mathrm{H}}]^{\mathrm{H}}\), where \(\mathbf{h}_{k}\in\mathbb{C}^{1\times N}\) represents the specific link between the \(N\)-element RIS and the \(k\)th antenna at the receiver. We assume that all the links are independent and experience flat Rician fading [9]. Thus, \(\mathbf{h}_{k}\) can be expressed as \[\mathbf{h}_{k}=\sqrt{\frac{\kappa}{1+\kappa}}\overline{\mathbf{h}}_{k}+\sqrt{\frac{1}{1+\kappa}}\widetilde{\mathbf{h}}_{k}, \tag{1}\] where \(\kappa\) is the Rician factor and \(\overline{\mathbf{h}}_{k}\) denotes the line-of-sight (LoS) component, satisfying \(|\overline{\mathbf{h}}_{k}|=\mathbf{1}_{N}\), while \(\widetilde{\mathbf{h}}_{k}\) denotes the non-line-of-sight (NLoS) component obeying \(\widetilde{\mathbf{h}}_{k}\sim\mathcal{CN}(\mathbf{0}_{N},\mathbf{I}_{N})\). We also assume that instantaneous CSI can be attained at the transmitter, which may be estimated as in [12], for example. The receiver combining vector \(\mathbf{w}\in\mathbb{C}^{1\times K}\) of the user relies on statistical CSI, namely on the angle of arrival (AoA) \(\phi\) at the receiver, as follows \[\mathbf{w}=[1,e^{j\frac{2\pi}{\lambda}d\sin\phi},\cdots,e^{j\frac{2\pi}{\lambda}d(K-1)\sin\phi}], \tag{2}\] where \(d\) is the distance between the adjacent RAs. Therefore, the equivalent channel vector of the link is given by \[\mathbf{g}=\mathbf{w}\mathbf{H}=\sqrt{\frac{\kappa}{1+\kappa}}K\overline{\mathbf{h}}_{1}+\sqrt{\frac{1}{1+\kappa}}\sum_{k=1}^{K}\widetilde{\mathbf{h}}_{k}e^{j\frac{2\pi}{\lambda}d(k-1)\sin\phi}.
\tag{3}\] Since the links \(\widetilde{\mathbf{h}}_{k}\) (\(k=1,\cdots,K\)) in (3) are independently and identically distributed obeying \(\mathcal{CN}(\mathbf{0}_{N},\mathbf{I}_{N})\), the distribution of \(\sum_{k=1}^{K}\widetilde{\mathbf{h}}_{k}e^{j\frac{2\pi}{\lambda}d(k-1)\sin\phi}\) is given by \(\mathcal{CN}(\mathbf{0}_{N},K\mathbf{I}_{N})\). We denote the LoS component of \(\mathbf{g}\) as \(K\overline{\mathbf{h}}_{1}\), and the NLoS component of \(\mathbf{g}\) as \(\sum_{k=1}^{K}\widetilde{\mathbf{h}}_{k}e^{j\frac{2\pi}{\lambda}d(k-1)\sin\phi}\), which follows \(\mathcal{CN}(\mathbf{0}_{N},K\mathbf{I}_{N})\). Therefore, the \(K\)-antenna receiver relying on statistical CSI, experiencing the received SNR \(\rho\) and the Rician factor \(\kappa\), may be declared equivalent to a single-antenna receiver having the received SNR \(\rho^{\prime}=(\frac{\kappa}{1+\kappa}K+\frac{1}{1+\kappa})\rho\) and the Rician factor \(\kappa^{\prime}=K\kappa\). We denote the phase shift of the RIS elements as \(\mathbf{\Theta}=[\theta_{1},\cdots,\theta_{N}]^{\mathrm{T}}\), where we assume that the phase shift of each RIS element has a \(B\)-bit resolution, i.e. the phase shift of each RIS element belongs to the set \(\{0,\frac{2\pi}{2^{B}},\cdots,(2^{B}-1)\cdot\frac{2\pi}{2^{B}}\}\) [7]. In the following, firstly the SoA RIS-based modulation is presented, and then we propose a pair of new RIS-based amplitude-phase modulation schemes. ### _State-of-the-art RIS-based Modulation_ In the SoA RIS-based \(M\)-PSK modulation [4, 5, 6], the channel fading experienced by all RIS elements is coherently combined, and the transmitted information symbol \(m\) (\(m=0,1,\cdots,M-1\)) is conveyed by a common phase rotation, where the phase shift of the RIS elements is designed as \[\mathbf{\Theta}=\angle_{B}\big(e^{j\frac{2\pi m}{M}}\mathbf{g}^{\mathrm{H}}\big), \tag{4}\] with \(\angle_{B}(\cdot)\) representing the phase calculation using \(B\)-bit quantization. Thus, the signal set of \(M\)-PSK modulation is given by: \[\mathbb{S}_{M\text{-PSK}}=\{\mathbf{g}\cdot\angle_{B}(e^{j\frac{2\pi m}{M}}\mathbf{g}^{\mathrm{H}})\,|\,m=0,1,\cdots,M-1\}. \tag{5}\] From (5), we can find that when \(M\leq 2^{B}\), the received signals have a unique envelope, and \(\mathbb{S}_{M\text{-PSK}}\) can be simplified as \(\{e^{j\frac{2\pi m}{M}}X\,|\,m=0,\cdots,M-1\}\), where \(X=\mathbf{g}\cdot\angle_{B}(\mathbf{g}^{\mathrm{H}})\) is the constant envelope of the received signals. However, when \(M>2^{B}\), the envelope of the received signals in \(\mathbb{S}_{M\text{-PSK}}\) is not necessarily unique due to the \(B\)-bit phase-quantization. Fig. 2 (a) shows an example of the statistical CSI-based received signal constellation of 128-PSK modulation. ### _Proposed RIS-based Modulation_ The SoA RIS-based modulation relies on pure phase shift control, while higher order modulation schemes can be realized by conveying the information both on the amplitude and the phase shift of the modulated signals. However, the RIS requires the employment of active reflection type amplifiers in order to control the amplitude of each RIS element [13], which requires high hardware cost and complexity. In this section we present our proposed designs capable of both amplitude and phase modulation based on controlling the ON-OFF state and the phase shift of RIS elements without requiring any additional hardware components.
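Before turning to the proposed schemes, here is a small simulation sketch of the SoA \(M\)-PSK signal set of Eq. (5). The channel realisation, the `angle_B` helper and all parameter values are illustrative assumptions, not part of the system model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, B, M = 64, 3, 8                         # illustrative RIS size, resolution, order
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def angle_B(z, B):
    """B-bit quantized phase of each entry of z (the angle_B(.) operator)."""
    step = 2 * np.pi / 2**B
    return np.round(np.angle(z) / step) * step

# Received signal for symbol m: sum_n g_n * exp(j * theta_n), theta per Eq. (4).
S = np.array([g @ np.exp(1j * angle_B(np.exp(2j*np.pi*m/M) * g.conj(), B))
              for m in range(M)])
print(np.round(np.abs(S), 3))   # near-constant envelope since M <= 2**B
```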
#### II-B1 RIS-based Amplitude-Phase Shift Keying Modulation Inspired by the conventional amplitude-phase shift keying (A-PSK), where the information is conveyed both by the amplitude and the phase shift of modulated signals, we propose a RIS-based A-PSK scheme. Firstly, we partition the RIS into \(\frac{M}{V}\) blocks \(\mathcal{B}_{1},\cdots,\mathcal{B}_{\frac{M}{V}}\), each of which contains \(\frac{NV}{M}\) elements. We denote the channel spanning from each block to the receiver as \(\mathbf{g}_{1},\cdots,\mathbf{g}_{\frac{M}{V}}\), respectively. To realize \(M\)-ary modulation by the \(N\)-element RIS, we partition the \(M\)-ary information into two parts, i.e. \(\frac{M}{V}\)-ary information, denoted as \(l\) (\(l=1,\cdots,\frac{M}{V}\)), conveyed by the ON-OFF state of the RIS blocks, and \(V\)-ary information, denoted as \(v\) (\(v=0,\cdots,V-1\)), conveyed by the phase shift of the ON-state RIS blocks. Specifically, in each information transmission slot, the blocks \(\mathcal{B}_{1},\cdots,\mathcal{B}_{l}\) are turned on, i.e. the amplitudes of the RIS elements in these blocks are set to 1, while the blocks \(\mathcal{B}_{l+1},\cdots,\mathcal{B}_{\frac{M}{V}}\) are turned off with the amplitudes of the RIS elements in these blocks set to 0. To avoid any RIS phase matching problem, \(V\) is chosen as a divisor of \(2^{B}\), hence \(\log_{2}V\) is an integer not larger than \(B\). Furthermore, in all ON-state RIS blocks, the \(V\)-PSK modulation scheme of (4) is employed. Therefore, the RIS phase shift of the ON-state blocks \(\mathcal{B}_{1},\cdots,\mathcal{B}_{l}\), denoted as \(\mathbf{\Theta}_{1},\cdots,\mathbf{\Theta}_{l}\), is given by \[\mathbf{\Theta}_{(1:l)}^{\mathrm{T}}=\angle_{B}\big(e^{j\frac{2\pi v}{V}}\mathbf{g}_{(1:l)}^{\mathrm{H}}\big)=e^{j\frac{2\pi v}{V}}\angle_{B}\big(\mathbf{g}_{(1:l)}^{\mathrm{H}}\big), \tag{6}\] where \(\mathbf{\Theta}_{(1:l)}=[\mathbf{\Theta}_{1},\cdots,\mathbf{\Theta}_{l}]\), and \(\mathbf{g}_{(1:l)}=[\mathbf{g}_{1},\cdots,\mathbf{g}_{l}]\); since \(V\) divides \(2^{B}\), the common rotation \(e^{j\frac{2\pi v}{V}}\) commutes with the \(B\)-bit quantization, as indicated by the second equality. Thus, an \(M\)-ary information symbol can be transmitted per each channel use upon appropriately controlling \(l\) and \(v\) in (6). Fig. 2 (b) shows an example of the statistical CSI based constellation of the received signals using this modulation scheme, where we have \(M=128\) and \(V=8\). Again, the information is jointly conveyed both by the \(\frac{M}{V}\)-level amplitude and the \(V\)-level phase shift. Hence we term this scheme the \((\frac{M}{V},V)\) A-PSK modulation, denoted as \(\mathcal{A}^{V}_{\frac{M}{V}}\) A-PSK, where the set of received signals is given by: \[\mathbb{S}_{\mathcal{A}^{V}_{\frac{M}{V}}}=\Big\{\mathrm{e}^{j\frac{2\pi v}{V}}\sum_{l^{\prime}=1}^{l}X_{l^{\prime}}\,\Big|\,v=0,\cdots,V-1;\,l=1,\cdots,\frac{M}{V}\Big\}, \tag{7}\] and \(X_{l^{\prime}}=\mathbf{g}_{l^{\prime}}\mathbf{\Theta}_{l^{\prime}}\) is the channel gain of the block \(\mathcal{B}_{l^{\prime}}\). Fig. 1: System model of RIS-based single RF-chain transmitter. #### II-B2 RIS-based Quadrature Amplitude-Phase Shift Keying Modulation Inspired by the classic QAM scheme, we improve the above RIS-based A-PSK modulation as follows. Firstly, we partition the \(N\)-element RIS into two branches, namely the in-phase (I-) and quadrature (Q-) branch, with each containing \(\frac{N}{2}\) RIS elements.
Explicitly, in each branch, the RIS is divided into \(\sqrt{\frac{M}{V}}\) blocks, each of which contains \(\frac{N}{2}\sqrt{\frac{V}{M}}\) elements, and we denote these blocks as \(\mathcal{B}_{1}^{(\mathrm{I})},\cdots,\mathcal{B}_{\sqrt{M/V}}^{(\mathrm{I})}\) and \(\mathcal{B}_{1}^{(\mathrm{Q})},\cdots,\mathcal{B}_{\sqrt{M/V}}^{(\mathrm{Q})}\) in the I-branch and Q-branch, respectively. Furthermore, we denote the corresponding channels as \(\mathbf{g}_{1}^{(\mathrm{I})},\cdots,\mathbf{g}_{\sqrt{M/V}}^{(\mathrm{I})}\) and \(\mathbf{g}_{1}^{(\mathrm{Q})},\cdots,\mathbf{g}_{\sqrt{M/V}}^{(\mathrm{Q})}\) in the I-branch and Q-branch, respectively. To realize \(M\)-ary modulation, we partition the \(M\)-ary information into two parts. In the first part, \(\frac{M}{V}\)-ary information is conveyed, denoted as the pair \((l_{1},l_{2})\) (\(l_{1},l_{2}=1,\cdots,\sqrt{\frac{M}{V}}\)), which is carried by the ON-OFF state of the RIS blocks. The second part conveys \(V\)-ary information, denoted as \(v\) (\(v=0,\cdots,V-1\)), which is represented by the phase shift of the ON-state RIS blocks. Specifically, in each information transmission slot, the blocks \(\mathcal{B}_{1}^{(\mathrm{I})},\cdots,\mathcal{B}_{l_{1}}^{(\mathrm{I})}\) in the I-branch and the blocks \(\mathcal{B}_{1}^{(\mathrm{Q})},\cdots,\mathcal{B}_{l_{2}}^{(\mathrm{Q})}\) in the Q-branch are turned on, while all other blocks are turned off. Again, to avoid RIS phase matching problems, \(V\) is chosen as a divisor of \(2^{B}\), with \(\log_{2}V\) being an integer not larger than \(B\). Additionally, in the ON-state blocks of each branch, \(V\)-PSK modulation is employed. Note that, to form a two-dimensional amplitude, a phase shift of \(e^{j\frac{\pi}{2}}\) must be imposed between the I-branch and the Q-branch. Therefore, the RIS phase shifts of blocks \(\mathcal{B}_{1}^{(\mathrm{I})},\cdots,\mathcal{B}_{l_{1}}^{(\mathrm{I})}\) and \(\mathcal{B}_{1}^{(\mathrm{Q})},\cdots,\mathcal{B}_{l_{2}}^{(\mathrm{Q})}\), denoted as \(\mathbf{\Theta}_{1}^{(\mathrm{I})},\cdots,\mathbf{\Theta}_{l_{1}}^{(\mathrm{I})}\) and \(\mathbf{\Theta}_{1}^{(\mathrm{Q})},\cdots,\mathbf{\Theta}_{l_{2}}^{(\mathrm{Q})}\), are given by \[\mathbf{\Theta}_{(1:l_{1})}^{(\mathrm{I})}=\angle_{B}\big(e^{j\frac{2\pi v}{V}}\mathbf{g}_{(1:l_{1})}^{(\mathrm{I})\mathrm{H}}\big)=e^{j\frac{2\pi v}{V}}\angle_{B}\big(\mathbf{g}_{(1:l_{1})}^{(\mathrm{I})\mathrm{H}}\big), \tag{8}\] \[\mathbf{\Theta}_{(1:l_{2})}^{(\mathrm{Q})}=\angle_{B}\big(e^{j\frac{\pi}{2}}e^{j\frac{2\pi v}{V}}\mathbf{g}_{(1:l_{2})}^{(\mathrm{Q})\mathrm{H}}\big)=e^{j(\frac{2\pi v}{V}+\frac{\pi}{2})}\angle_{B}\big(\mathbf{g}_{(1:l_{2})}^{(\mathrm{Q})\mathrm{H}}\big), \tag{9}\] where \(\mathbf{\Theta}_{(1:l_{1})}^{(\mathrm{I})}=[\mathbf{\Theta}_{1}^{(\mathrm{I})},\cdots,\mathbf{\Theta}_{l_{1}}^{(\mathrm{I})}]\), \(\mathbf{\Theta}_{(1:l_{2})}^{(\mathrm{Q})}=[\mathbf{\Theta}_{1}^{(\mathrm{Q})},\cdots,\mathbf{\Theta}_{l_{2}}^{(\mathrm{Q})}]\), \(\mathbf{g}_{(1:l_{1})}^{(\mathrm{I})}=[\mathbf{g}_{1}^{(\mathrm{I})},\cdots,\mathbf{g}_{l_{1}}^{(\mathrm{I})}]\), and \(\mathbf{g}_{(1:l_{2})}^{(\mathrm{Q})}=[\mathbf{g}_{1}^{(\mathrm{Q})},\cdots,\mathbf{g}_{l_{2}}^{(\mathrm{Q})}]\). Thus, effectively an \(M\)-ary information symbol can be transmitted per each channel use by appropriately controlling \((l_{1},l_{2})\) and \(v\) in (8) and (9).
Fig. 2 (c) shows an example of the statistical CSI-based received signal constellation for \(M=128\) and \(V=8\), relying on a \(\frac{M}{V}\)-level two-dimensional amplitude and \(V\)-level phase shifts. Hence, we term this scheme the \((\frac{M}{V},V)\) quadrature amplitude-phase shift keying (QA-PSK), denoted as \(\mathcal{Q}^{V}_{\frac{M}{V}}\) QA-PSK. Note that our QA-PSK modulation requires \(B\geq 2\) bits, where the set of received signals is given by \[\mathbb{S}_{\mathcal{Q}^{V}_{\frac{M}{V}}}=\Big\{\mathrm{e}^{j\frac{2\pi v}{V}}\Big(\sum_{l^{\prime}_{1}=1}^{l_{1}}X^{(\mathrm{I})}_{l^{\prime}_{1}}+\mathrm{e}^{j\frac{\pi}{2}}\sum_{l^{\prime}_{2}=1}^{l_{2}}X^{(\mathrm{Q})}_{l^{\prime}_{2}}\Big)\,\Big|\,v=0,\cdots,V-1;\,l_{1},l_{2}=1,\cdots,\sqrt{\frac{M}{V}}\Big\}, \tag{10}\] with \(X^{(\mathrm{I})}_{l^{\prime}_{1}}=\mathbf{g}^{(\mathrm{I})}_{l^{\prime}_{1}}\mathbf{\Theta}^{(\mathrm{I})}_{l^{\prime}_{1}}\) and \(X^{(\mathrm{Q})}_{l^{\prime}_{2}}=\mathrm{e}^{-j\frac{\pi}{2}}\mathbf{g}^{(\mathrm{Q})}_{l^{\prime}_{2}}\mathbf{\Theta}^{(\mathrm{Q})}_{l^{\prime}_{2}}\) being the channel gains of block \(\mathcal{B}^{(\mathrm{I})}_{l^{\prime}_{1}}\) and block \(\mathcal{B}^{(\mathrm{Q})}_{l^{\prime}_{2}}\), respectively. ### _Receiver Design_ The ML detection method is employed at the receiver to recover the information. We denote the received signal as \(y\). Based on the ML criterion, the information recovered by the SoA RIS-based \(M\)-PSK, as well as by our proposed \(\mathcal{A}^{V}_{\frac{M}{V}}\) A-PSK and \(\mathcal{Q}^{V}_{\frac{M}{V}}\) QA-PSK schemes, is \(\hat{s}=\arg\min_{m\in\{0,\cdots,M-1\}}\|y-\sqrt{\rho}\,\mathbf{g}\cdot\angle_{B}(e^{j\frac{2\pi m}{M}}\mathbf{g}^{\mathrm{H}})\|\), \(\hat{s}=\arg\min_{l\in\{1,\cdots,\frac{M}{V}\},\,v\in\{0,\cdots,V-1\}}\|y-\sqrt{\rho}\,e^{j\frac{2\pi v}{V}}\sum_{l^{\prime}=1}^{l}X_{l^{\prime}}\|\), and \(\hat{s}=\arg\min_{l_{1},l_{2}\in\{1,\cdots,\sqrt{M/V}\},\,v\in\{0,\cdots,V-1\}}\|y-\sqrt{\rho}\,e^{j\frac{2\pi v}{V}}(\sum_{l^{\prime}_{1}=1}^{l_{1}}X^{(\mathrm{I})}_{l^{\prime}_{1}}+e^{j\frac{\pi}{2}}\sum_{l^{\prime}_{2}=1}^{l_{2}}X^{(\mathrm{Q})}_{l^{\prime}_{2}})\|\), respectively. A small end-to-end sketch of the A-PSK construction and ML detection follows.
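The following self-contained sketch ties the A-PSK signal set of Eq. (7) to the ML detection rule above, using a Rayleigh channel stand-in and illustrative sizes and SNR; the helper names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, B, M, V, rho = 64, 3, 32, 8, 0.02     # illustrative parameters; V divides 2**B
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def quant(z):
    """B-bit phase quantizer standing in for the angle_B(.) operator."""
    return np.round(np.angle(z) * 2**B / (2*np.pi)) * 2*np.pi / 2**B

blocks = np.array_split(np.arange(N), M // V)   # the M/V RIS blocks

def tx(l, v):
    on = np.concatenate(blocks[:l])             # blocks B_1..B_l are ON
    return g[on] @ np.exp(1j * quant(np.exp(2j*np.pi*v/V) * g[on].conj()))

# Noiseless constellation: M/V amplitude levels times V phases = M points.
S = np.array([tx(l, v) for l in range(1, M//V + 1) for v in range(V)])

# ML detection: pick the point minimising ||y - sqrt(rho) * s||.
errors, trials = 0, 2000
for _ in range(trials):
    m = rng.integers(M)
    noise = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    y = np.sqrt(rho) * S[m] + noise
    errors += int(np.argmin(np.abs(y - np.sqrt(rho) * S)) != m)
print(errors / trials)                          # empirical SEP at this SNR
```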
Then, the second moment of \(X_{l}\) is given by \[\mathbb{E}[X_{l}^{2}]=\mathbb{E}\Big{[}\big{(}\sum_{i=1}^{N_{\mathcal{A}}}\alpha_{i}\mathrm{e}^{j\psi_{i}}\big{)}^{2}\Big{]}=2\sum_{i_{1}=1}^{N_{\mathcal{A}}-1}\sum_{i_{2}=i_{1}+1}^{N_{\mathcal{A}}}\mathbb{E}[\alpha_{i_{1}}]\mathbb{E}[\alpha_{i_{2}}]\mathbb{E}[\cos(\psi_{i_{1}})]\mathbb{E}[\cos(\psi_{i_{2}})]+\sum_{i=1}^{N_{\mathcal{A}}}\mathbb{E}[\alpha_{i}^{2}]\mathbb{E}[(\cos\psi_{i})^{2}]=N_{\mathcal{A}}(N_{\mathcal{A}}-1)\Big{(}\frac{2^{B}}{\pi}\sin\frac{\pi}{2^{B}}\frac{\sqrt{\pi}}{2}L_{\frac{1}{2}}(-\kappa^{\prime})\Big{)}^{2}+N_{\mathcal{A}}\cdot\frac{1+\frac{2^{B}}{2\pi}\sin\frac{2\pi}{2^{B}}}{2}. \tag{13}\] According to (12) and (13), the channel gain \(X_{l}\) can be approximated by a Gamma distribution having the PDF of \[f_{X_{l}}(x)=\frac{1}{\Gamma(k_{\mathcal{A}})\theta_{\mathcal{A}}^{k_{\mathcal{A}}}}x^{k_{\mathcal{A}}-1}\mathrm{e}^{-\frac{x}{\theta_{\mathcal{A}}}}, \tag{14}\] where the shape parameter \(k_{\mathcal{A}}\) and the scale parameter \(\theta_{\mathcal{A}}\) are \[k_{\mathcal{A}}=\frac{(\mathbb{E}[X_{l}])^{2}}{\mathbb{E}[X_{l}^{2}]-(\mathbb{E}[X_{l}])^{2}},\quad\theta_{\mathcal{A}}=\frac{\mathbb{E}[X_{l}^{2}]-(\mathbb{E}[X_{l}])^{2}}{\mathbb{E}[X_{l}]}. \tag{15}\] In the \(\mathcal{Q}_{\frac{M}{V}}^{V}\) QA-PSK, the number of RIS elements in each block, denoted as \(N_{\mathcal{Q}}\), is \(N_{\mathcal{Q}}=\frac{N}{2}\sqrt{\frac{V}{M}}\). The first moment and second moment of the channel gains \(X_{l}^{(\mathrm{I})}\) and \(X_{l}^{(\mathrm{Q})}\) can be similarly derived as in the case of A-PSK, upon simply replacing \(N_{\mathcal{A}}\) by \(N_{\mathcal{Q}}\) in (12) and (13). The channel gains \(X_{l}^{(\mathrm{I})}\) and \(X_{l}^{(\mathrm{Q})}\) can be approximated by a Gamma distribution having the PDF of (14) upon simply replacing \(k_{\mathcal{A}}\) and \(\theta_{\mathcal{A}}\) by \(k_{\mathcal{Q}}\) and \(\theta_{\mathcal{Q}}\) in (15), respectively, where the shape parameter \(k_{\mathcal{Q}}\) and the scale parameter \(\theta_{\mathcal{Q}}\) are similarly given upon replacing \(\mathbb{E}[X_{l}]\) by \(\mathbb{E}[X_{l}^{(\mathrm{I})}]\) and \(\mathbb{E}[X_{l}^{(\mathrm{Q})}]\), as well as replacing \(\mathbb{E}[(X_{l})^{2}]\) by \(\mathbb{E}[(X_{l}^{(\mathrm{I})})^{2}]\) and \(\mathbb{E}[(X_{l}^{(\mathrm{Q})})^{2}]\) in (15), respectively. ### _DCMC Capacity_ The DCMC capacity is given by [15] \[R=\mathbb{E}\Bigg{[}\log_{2}(M)-\frac{1}{M\pi}\sum_{m_{1}=1}^{M}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp(-t_{1}^{2}-t_{2}^{2})\cdot\log_{2}\Big{(}\sum_{m_{2}=1}^{M}\exp\Big{(}-2[t_{1},t_{2}]\Big{[}\begin{array}{c}\sqrt{\rho^{\prime}}\mathcal{R}(\mathbf{z}_{m_{1}}-\mathbf{z}_{m_{2}})\\ \sqrt{\rho^{\prime}}\mathcal{I}(\mathbf{z}_{m_{1}}-\mathbf{z}_{m_{2}})\end{array}\Big{]}-\Big{\|}\begin{array}{c}\sqrt{\rho^{\prime}}\mathcal{R}(\mathbf{z}_{m_{1}}-\mathbf{z}_{m_{2}})\\ \sqrt{\rho^{\prime}}\mathcal{I}(\mathbf{z}_{m_{1}}-\mathbf{z}_{m_{2}})\end{array}\Big{\|}^{2}\Big{)}\Big{)}\,dt_{1}\,dt_{2}\Bigg{]}, \tag{16}\] where \(\mathcal{R}(\cdot)\) and \(\mathcal{I}(\cdot)\) denote the real and imaginary parts, and \(\mathbf{z}_{m}\) is the \(m\)-th element of the noiseless received signal set. 
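The moment matching behind (14)-(15) is mechanical to implement. The sketch below is ours, not the paper's code: it uses the Rician moments stated in the text, \(\mathbb{E}[\alpha_{i}]=\sigma\sqrt{\frac{\pi}{2}}L_{\frac{1}{2}}(-\kappa^{\prime})\) with \(L_{\frac{1}{2}}(x)={}_{1}F_{1}(-\frac{1}{2};1;x)\) and \(\mathbb{E}[\alpha_{i}^{2}]=2\sigma^{2}+\nu^{2}=1\), together with the phase-quantization factors appearing in (12)-(13).

```python
import numpy as np
from scipy.special import hyp1f1

# Moment-matched Gamma approximation (14)-(15) of a block gain X_l.
# N_blk: elements per block, B: phase resolution in bits, kappa_p: Rician
# factor kappa'.  L_{1/2}(x) is evaluated as 1F1(-1/2; 1; x).
def gamma_params(N_blk, B, kappa_p):
    sigma = np.sqrt(1.0 / (2.0 * (1.0 + kappa_p)))
    e_alpha = sigma * np.sqrt(np.pi / 2.0) * hyp1f1(-0.5, 1.0, -kappa_p)
    e_alpha2 = 1.0                                  # 2*sigma^2 + nu^2 = 1
    e_cos = (2.0**B / np.pi) * np.sin(np.pi / 2.0**B)
    e_cos2 = (1.0 + (2.0**B / (2.0 * np.pi)) * np.sin(2.0 * np.pi / 2.0**B)) / 2.0
    m1 = N_blk * e_cos * e_alpha                                       # cf. (12)
    m2 = N_blk * (N_blk - 1) * (e_cos * e_alpha) ** 2 \
         + N_blk * e_alpha2 * e_cos2                                   # cf. (13)
    var = m2 - m1**2
    return m1**2 / var, var / m1                    # shape k, scale theta, (15)

k_A, theta_A = gamma_params(N_blk=256, B=3, kappa_p=1.0)   # one A-PSK block
print(k_A, theta_A, k_A * theta_A)                         # k*theta == E[X_l]
```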
Fig. 3 illustrates the decision regions of ML detection for \(\mathcal{A}_{4}^{8}\) A-PSK. For example, when the symbol \(s_{0}\) in the 1st layer is transmitted, the receiver can correctly recover it when the received signal is located in the triangular region \(A_{0}\). Therefore, \(P_{e,1}^{(\mathcal{A})}\) is given by \[P_{e,1}^{(\mathcal{A})}=\frac{1}{2\pi}\sum_{k=0}^{2}\int_{0}^{\theta_{k}}\exp\Big{[}-\frac{\rho^{\prime}b_{k}^{2}\sin^{2}\psi_{k}}{\sin^{2}\big{(}\theta+\psi_{k}\big{)}}\Big{]}d\theta, \tag{21}\] where \(b_{0}=b_{2}=\sqrt{(X_{1}+\frac{1}{2}X_{2})^{2}\tan^{2}\frac{\pi}{V}+(\frac{1}{2}X_{2})^{2}}\), \(b_{1}=X_{1}\), \(\theta_{0}=\theta_{2}=\pi-\arctan[(1+\frac{2X_{1}}{X_{2}})\tan\frac{\pi}{V}]\), \(\theta_{1}=2\arctan[(1+\frac{2X_{1}}{X_{2}})\tan\frac{\pi}{V}]\), \(\psi_{0}=\arctan[(1+\frac{2X_{1}}{X_{2}})\tan\frac{\pi}{V}]-\frac{\pi}{V}\), \(\psi_{1}=\frac{\pi}{V}\), and \(\psi_{2}=\frac{\pi}{2}-\arctan[(1+\frac{2X_{1}}{X_{2}})\tan\frac{\pi}{V}]\). When \(l=2,3,\cdots,\frac{M}{V}-1\), \(P_{e,l}^{(\mathcal{A})}\) is given by \[P_{e,l}^{(\mathcal{A})}=\frac{1}{2\pi}\sum_{k=0}^{3}\int_{0}^{\theta_{k}}\exp\Big{[}-\frac{\rho^{\prime}b_{k}^{2}\sin^{2}\psi_{k}}{\sin^{2}(\theta+\psi_{k})}\Big{]}d\theta, \tag{22}\] where \(b_{0}=b_{3}=\sqrt{\big{(}X_{1}+\cdots+X_{l}+\frac{1}{2}X_{l+1}\big{)}^{2}\tan^{2}\frac{\pi}{V}+\big{(}\frac{1}{2}X_{l+1}\big{)}^{2}}\), \(b_{1}=b_{2}=\sqrt{(X_{1}+\cdots+X_{l-1}+\frac{1}{2}X_{l})^{2}\tan^{2}\frac{\pi}{V}+\big{(}\frac{1}{2}X_{l}\big{)}^{2}}\), \(\theta_{0}=\theta_{2}=\pi-\arctan[(1+\frac{2(X_{1}+\cdots+X_{l-1})}{X_{l}})\tan\frac{\pi}{V}]\), \(\theta_{1}=2\arctan[(1+\frac{2(X_{1}+\cdots+X_{l-1})}{X_{l}})\tan\frac{\pi}{V}]\), \(\theta_{3}=2\arctan[(1+\frac{2(X_{1}+\cdots+X_{l})}{X_{l+1}})\tan\frac{\pi}{V}]\), \(\psi_{0}=\arctan[(1+\frac{2(X_{1}+\cdots+X_{l})}{X_{l+1}})\tan\frac{\pi}{V}]\), \(\psi_{1}=\frac{\pi}{2}-\arctan[(1+\frac{2(X_{1}+\cdots+X_{l-1})}{X_{l}})\tan\frac{\pi}{V}]\), \(\psi_{2}=\arctan[(1+\frac{2(X_{1}+\cdots+X_{l-1})}{X_{l+1}})\tan\frac{\pi}{V}]\) and \(\psi_{3}=\frac{\pi}{V}\). Furthermore, \(P_{e,\frac{M}{V}}^{(\mathcal{A})}\) is given by \[P_{e,\frac{M}{V}}^{(\mathcal{A})}=\frac{1}{2\pi}\int_{0}^{\theta_{0}}\exp\Big{[}-\frac{\rho^{\prime}b_{0}^{2}\sin^{2}\psi_{0}}{\sin^{2}(\theta+\psi_{0})}\Big{]}d\theta+\frac{1}{\pi}\sum_{k=0}^{1}\int_{0}^{\pi-\psi_{1}}\exp\Big{[}-\frac{\rho^{\prime}b_{0}^{2}\sin^{2}\psi_{1}}{\sin^{2}\theta}\Big{]}d\theta, \tag{23}\] where \(b_{0}=\sqrt{(X_{1}+\cdots+X_{\frac{M}{V}-1}+\frac{1}{2}X_{\frac{M}{V}})^{2}\tan^{2}\frac{\pi}{V}+(\frac{1}{2}X_{\frac{M}{V}})^{2}}\), \(\theta_{0}=2\arctan[(1+\frac{2(X_{1}+\cdots+X_{\frac{M}{V}-1})}{X_{\frac{M}{V}}})\tan\frac{\pi}{V}]\), \(\psi_{0}=\frac{\pi}{2}-\arctan[(1+\frac{2(X_{1}+\cdots+X_{\frac{M}{V}-1})}{X_{\frac{M}{V}}})\tan\frac{\pi}{V}]\) and \(\psi_{1}=\arctan[(1+\frac{2(X_{1}+\cdots+X_{\frac{M}{V}-1})}{X_{\frac{M}{V}}})\tan\frac{\pi}{V}]+\frac{\pi}{V}\). Then, upon substituting (21), (22) and (23) into (20), we arrive at the theoretical SEP of \(\mathcal{A}_{\frac{M}{V}}^{V}\) A-PSK. Since the final result includes the values of \(X_{1},X_{2},\cdots,X_{\frac{M}{V}}\), it can be evaluated numerically. 
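For instance, each term of (21)-(23) is a Craig-form single integral and can be evaluated with standard quadrature. The sketch below, which is ours, computes the first-layer probability \(P_{e,1}^{(\mathcal{A})}\) of (21) with \(V=8\) and stand-in gains \(X_{1}=X_{2}=1\); in the analysis the \(X_{l}\) follow the Gamma law (14), so one would average the result over draws of them.

```python
import numpy as np
from scipy.integrate import quad

# One Craig-form term: (1/2pi) * int_0^{theta_max}
#   exp(-rho' * b^2 * sin^2(psi) / sin^2(t + psi)) dt
def craig_term(rho_p, b, theta_max, psi):
    f = lambda t: np.exp(-rho_p * b**2 * np.sin(psi) ** 2 / np.sin(t + psi) ** 2)
    val, _ = quad(f, 0.0, theta_max)
    return val / (2.0 * np.pi)

V, X1, X2, rho_p = 8, 1.0, 1.0, 10.0
phi = np.arctan((1.0 + 2.0 * X1 / X2) * np.tan(np.pi / V))
b0 = np.sqrt((X1 + 0.5 * X2) ** 2 * np.tan(np.pi / V) ** 2 + (0.5 * X2) ** 2)
terms = [
    (b0, np.pi - phi, phi - np.pi / V),     # k = 0
    (X1, 2.0 * phi, np.pi / V),             # k = 1
    (b0, np.pi - phi, np.pi / 2.0 - phi),   # k = 2
]
P_e1 = sum(craig_term(rho_p, b, th, ps) for b, th, ps in terms)
print(P_e1)   # first-layer SEP of (21) for these stand-in gains
```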
#### III-C2 Symbol Error Probability of QA-PSK In the ML detection, the SEP is determined by the Euclidean distances of the received signal points from the respective decision boundaries, in which the lowest distances play a dominant role. Thus, the SEP of QA-PSK can be calculated by neglecting the effect of the high-distance decision boundaries. Therefore, observe from Fig. 2 (c) that the SEP of \(\mathcal{Q}_{\frac{M}{V}}^{V}\) QA-PSK, denoted as \(P_{e}^{(\mathcal{Q})}\), is given by \[P_{e}^{(\mathcal{Q})}=\frac{2}{\sqrt{\frac{M}{V}}}\sum_{l=2}^{\sqrt{\frac{M}{V}}}\Big{[}Q\Big{(}\frac{\sqrt{\rho^{\prime}}X_{l}^{(\mathrm{I})}}{2}\Big{)}+Q\Big{(}\frac{\sqrt{\rho^{\prime}}X_{l}^{(\mathrm{Q})}}{2}\Big{)}\Big{]}+\frac{q}{\frac{M}{V}}\sum_{l=2}^{\sqrt{\frac{M}{V}}}\Bigg{[}Q\Big{(}\sqrt{\frac{\rho^{\prime}\big{(}(X_{l}^{(\mathrm{I})})^{2}+(X_{l}^{(\mathrm{Q})})^{2}-2\cos\frac{2\pi}{V}X_{l}^{(\mathrm{I})}X_{l}^{(\mathrm{Q})}\big{)}}{2}}\Big{)}\Bigg{]}, \tag{24}\] where \(Q(\cdot)\) represents the Gaussian Q-function [14], and the constant \(q=4\) when \(\log_{2}V=2\) and \(q=2\) when \(\log_{2}V\geq 3\). Since the final result includes the values of \(X_{2}^{(\mathrm{I})},X_{3}^{(\mathrm{I})},\cdots,X_{\sqrt{\frac{M}{V}}}^{(\mathrm{I})}\) and \(X_{2}^{(\mathrm{Q})},X_{3}^{(\mathrm{Q})},\cdots,X_{\sqrt{\frac{M}{V}}}^{(\mathrm{Q})}\), it can be evaluated numerically. Fig. 4: Comparison of DCMC capacity \(R\) versus receive SNR \(\rho\) for the SoA RIS-based 128-PSK, the proposed \(\mathcal{A}_{16}^{8}\) A-PSK and \(\mathcal{Q}_{16}^{8}\) QA-PSK with different numbers of RIS elements \(N\), where the RIS phase shift resolution is \(B=3\) bits. Fig. 5: Comparison of symbol error probability \(P_{e}\) versus receive SNR \(\rho\) for various modulation schemes: (a) with different numbers of RIS elements \(N\); (b) with different transmission rates \(R\); (c) with different RIS phase shift resolutions \(B\). ## IV Simulation Results In this section, we analyze the performance of the proposed schemes in terms of their DCMC capacity and SEP, against the SoA RIS-based PSK modulation, where the distance between adjacent RIS elements is \(\frac{\lambda}{2}\), and the Rician factor is \(\kappa=0\,\mathrm{dB}\). For fairness of comparison with the SoA RIS-based PSK scheme, we assume that the number of RAs at the user is \(K=1\). Fig. 4 compares the DCMC capacity \(R\) versus the received SNR \(\rho\) for the SoA RIS-based 128-PSK, as well as the proposed \(\mathcal{A}_{16}^{8}\) A-PSK and the \(\mathcal{Q}_{16}^{8}\) QA-PSK for different numbers of RIS elements \(N\), where the RIS phase shift resolution is \(B=3\) bits. The theoretical (theo.) UB is very tight compared to the simulation results (simu.) for our proposed schemes. It is shown in Fig. 4 that the DCMC capacity reaches a maximum of 7 bit/s/Hz, since the modulation order is \(M=128\). In the low-SNR region, the DCMC capacity of the SoA RIS-based PSK modulation is higher than that of our proposed A-PSK and QA-PSK schemes. However, the DCMC capacity of our proposed A-PSK and QA-PSK is better than that of the SoA RIS-based PSK scheme when the receive SNR is higher than \(-35\,\mathrm{dB}\), \(-40\,\mathrm{dB}\) and \(-45\,\mathrm{dB}\) for \(N=512\), \(N=1024\) and \(N=2048\), respectively. Fig. 5 (a) compares the SEP \(P_{e}\) versus receive SNR \(\rho\) for the SoA RIS-based 128-PSK modulation, the proposed \(\mathcal{A}_{16}^{8}\) A-PSK and \(\mathcal{Q}_{16}^{8}\) QA-PSK, with the parameters being the same as in Fig. 4. 
This shows that doubling the number of RIS elements yields approximately a 6 dB gain, since the received SNR is proportional to the square of the number of RIS elements \(N\) (indeed, \(10\log_{10}(2^{2})\approx 6\,\mathrm{dB}\)). Furthermore, it shows that under the same transmission rate of 7 bit/s/Hz, the \(\mathcal{A}_{16}^{8}\) A-PSK and \(\mathcal{Q}_{16}^{8}\) QA-PSK have a better SEP than the RIS-based 128-PSK. Furthermore, QA-PSK outperforms A-PSK, since the transmit signals of QA-PSK are distributed more uniformly than those of A-PSK, which results in a higher minimum Euclidean distance in the received signal constellation. It also shows that the theoretical analysis and the simulation results of the QA-PSK scheme match tightly in the high-SNR region. This is due to the fact that in our theoretical analysis, the SEP is derived based on the lowest distances from the received signal points to the respective decision boundaries, which results in a tight approximation in the high-SNR region. Fig. 5 (b) compares the SEP \(P_{e}\) versus the receive SNR \(\rho\) of the SoA RIS-based 128-PSK modulation and of the proposed A-PSK and QA-PSK at different transmission rates \(R\), for \(N=1024\) RIS elements and \(B=3\) bits. In the SoA RIS-based scheme, 32-PSK, 128-PSK and 512-PSK are employed at the transmission rates of \(R=5,7,9\) bit/s/Hz, respectively. By contrast, in our proposed methods, the \(\mathcal{A}_{4}^{8}\), \(\mathcal{A}_{16}^{8}\), \(\mathcal{A}_{64}^{8}\) A-PSK schemes and the \(\mathcal{Q}_{4}^{8}\), \(\mathcal{Q}_{16}^{8}\), \(\mathcal{Q}_{64}^{8}\) QA-PSK schemes are employed correspondingly. Observe that at low rates of say \(R=5\) bit/s/Hz the advantage of QA-PSK is not obvious, but at high rates of say \(R=9\) bit/s/Hz QA-PSK considerably outperforms both the SoA RIS-based PSK and A-PSK. Explicitly, our proposed QA-PSK scheme improves the SEP especially at high transmission rates. Fig. 5 (c) compares the SEP \(P_{e}\) versus received SNR \(\rho\) of the SoA RIS-based 128-PSK and of the proposed \(\mathcal{A}_{16}^{8}\) A-PSK and \(\mathcal{Q}_{16}^{8}\) QA-PSK at different values of \(B\), where the number of RIS elements is \(N=64\). As expected, the finite phase shift resolution degrades the SEP of the SoA RIS-based PSK, but it has little effect on our proposed schemes. ## V Conclusions Novel A-PSK and QA-PSK schemes were proposed for RIS-based transmitters, where the signals at the multi-RA receiver are coherently combined based on the statistical CSI and the information is recovered using the ML detection method. Both our theoretical analysis and simulation results show that the proposed schemes outperform the SoA RIS-based PSK modulation in terms of both the DCMC capacity and the SEP, especially for high-rate transmission and finite RIS phase shift resolution.
2302.08159
Parabolic opers and differential operators
Parabolic SL(r,C)-opers were defined and investigated in [BDP] in the set-up of vector bundles on curves with a parabolic structure over a divisor. Here we introduce and study holomorphic differential operators between parabolic vector bundles over curves. We consider the parabolic SL(r,C)-opers on a Riemann surface X with given singular divisor S and with fixed parabolic weights satisfying the condition that all parabolic weights at any point $x_i$ in S are integral multiples of $\frac{1}{2N_i+1}$, where $N_i > 1$ are fixed integers. We prove that this space of opers is canonically identified with the affine space of holomorphic differential operators of order r between two natural parabolic line bundles on X (depending only on the divisor S and the weights $N_i$) satisfying the conditions that the principal symbol of the differential operators is the constant function 1 and the sub-principal symbol vanishes identically. The vanishing of the sub-principal symbol ensures that the logarithmic connection on the rank r bundle is actually a logarithmic SL(r, C)-connection.
Indranil Biswas, Niels Borne, Sorin Dumitrescu, Sebastian Heller, Christian Pauly
2023-02-16T09:15:05Z
http://arxiv.org/abs/2302.08159v1
# Parabolic opers and differential operators ###### Abstract. Parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers were defined and investigated in [BDP] in the set-up of vector bundles on curves with a parabolic structure over a divisor. Here we introduce and study holomorphic differential operators between parabolic vector bundles over curves. We consider the parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers on a Riemann surface \(X\) with given singular divisor \(S\,\subset\,X\) and with fixed parabolic weights satisfying the condition that all parabolic weights at any \(x_{i}\,\in\,S\) are integral multiples of \(\frac{1}{2N_{i}+1}\), where \(N_{i}\,>\,1\) are fixed integers. We prove that this space of opers is canonically identified with the affine space of holomorphic differential operators of order \(r\) between two natural parabolic line bundles on \(X\) (depending only on the divisor \(S\) and the weights \(N_{i}\)) satisfying the conditions that the principal symbol of the differential operators is the constant function \(1\) and the sub-principal symbol vanishes identically. The vanishing of the sub-principal symbol ensures that the logarithmic connection on the rank \(r\) bundle is actually a logarithmic \(\operatorname{SL}(r,\mathbb{C})\)-connection. Key words and phrases: Oper, parabolic bundle, differential operator, logarithmic connection 2010 Mathematics Subject Classification: 14H60, 33C80, 53A55 ###### Contents * 1 Introduction * 2 A rank two parabolic bundle * 2.1 Parabolic bundles and parabolic connections * 2.2 The parabolic Gunning bundle * 2.3 Orbifold structure * 3 Symmetric powers of parabolic bundle * 3.1 Explicit description of some symmetric powers * 3.2 Higher rank parabolic opers * 4 Some properties of parabolic opers * 5 Differential operators on parabolic bundles * 5.1 Another description of differential operators on parabolic bundles * 5.2 The symbol map * 6 Parabolic opers and differential operators ## 1. Introduction After the seminal work of Drinfeld and Sokolov [19], [20], the notion of opers was introduced by Beilinson and Drinfeld [21, 22] as geometric structures on Riemann surfaces that formalize the notion of ordinary differential equations in a coordinate-free way. This broad formalism encapsulates the classical notion of a Riccati equation, or equivalently that of a complex projective structure on a Riemann surface, as being an \(\operatorname{SL}(2,\mathbb{C})\)-oper. Since then the notion of oper has turned out to be very important, not only in the study of differential equations, but also in very diverse topics, such as, for example, the geometric Langlands correspondence, nonabelian Hodge theory and also some branches of mathematical physics; see, for example, [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23] and references therein. In contemporary research in mathematics and mathematical physics, the study of opers and their applications has been firmly established as an important topic, as testified by the works of many authors. In particular, important progress in the understanding of opers was carried out in [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. In [BDP], three of the authors introduced and studied parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers on curves in the set-up of parabolic vector bundles as defined by Mehta and Seshadri [MS], and also by Maruyama and Yokogawa [MY]. 
Later on, being inspired by the works [2, 26], the infinitesimal deformations of parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers and also the monodromy map for parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers were studied in [BDHP]. It may be mentioned that the appendix of [BDHP] provides an alternative definition of a parabolic \(\operatorname{SL}(r,\mathbb{C})\)-oper in terms of \(\mathbb{R}\)-filtered sheaves as introduced and studied by Maruyama and Yokogawa in [MY]. This definition is conceptually closer to the definition of an ordinary \(\operatorname{SL}(r,\mathbb{C})\)-oper and clarifies the one given in [BDP]. The objective of this article is to further investigate parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers and to characterize them as a special class of holomorphic differential operators on parabolic bundles. It should be recalled that the relation between opers and differential operators is established and well-known in the context of ordinary opers [21]. Here we introduce and study holomorphic differential operators on parabolic vector bundles over Riemann surfaces under the condition that at each point \(x_{i}\) on the singular divisor \(S\) all the parabolic weights are integral multiples of \(\frac{1}{2N_{i}+1}\), with \(N_{i}\,>\,1\) being an integer. Under this assumption, the main result of the article, Theorem 6.2, proves that the space of all parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers on \(X\) with given singular set \(S\,:=\,\{x_{1},\,\cdots,\,x_{n}\}\,\subset\,X\) and fixed parabolic weights that are integral multiples of \(\frac{1}{2N_{i}+1}\) at each \(x_{i}\,\in\,S\), is canonically identified with the affine space of \(r\)-order holomorphic differential operators between two natural parabolic line bundles on \(X\) (depending only on \(S\) and the weights \(N_{i}\)) having as principal symbol the constant function \(1\) and with vanishing sub-principal symbol. The vanishing of the sub-principal symbol ensures that the logarithmic connection on the rank \(r\) bundle is indeed a logarithmic \(\operatorname{SL}(r,\mathbb{C})\)-connection. The article is organized in the following way. Section 2 deals with parabolic \(\operatorname{SL}(2,\mathbb{C})\)-opers. In particular we introduce a rank two parabolic bundle which is a parabolic version of the indigenous bundle (also called Gunning bundle or uniformization bundle) introduced in [28] (see also [11]); recall that this indigenous bundle introduced by Gunning is the rank two holomorphic vector bundle associated to any ordinary \(\operatorname{SL}(2,\mathbb{C})\)-oper (e.g. a complex projective structure) on a given Riemann surface. It should be clarified that this parabolic analog of the Gunning bundle depends only on the divisor \(S\) and the integers \(N_{i}\). All parabolic \(\operatorname{SL}(2,\mathbb{C})\)-opers with given singular set \(S\) and fixed weights are parabolic connections on the same parabolic Gunning bundle. Section 3 starts with an explicit description of several (parabolic) symmetric powers of the rank two parabolic Gunning bundle constructed in Section 2; then parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers on a Riemann surface \(X\), singular over \(S\,\subset\,X\), are defined (see Definition 3.3). 
In this context Proposition 3.6 proves that parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers on \(X\) with weights equal to integral multiples of \(\frac{1}{2N_{i}+1}\) at each \(x_{i}\,\in\,S\) are in natural bijection with invariant \(\operatorname{SL}(r,\mathbb{C})\)-opers on a ramified Galois covering \(Y\) over \(X\) equipped with an action of the Galois group. This Proposition 3.6 is a generalization of Theorem 6.3 in [BDP] where a similar result was proved under the extra assumption that \(r\) is odd. The proof of Proposition 3.6 uses in an essential way the correspondence studied in [Bi1], [Bo1], [Bo2], and also a result (Corollary 2.6(3)) of Section 2 proving that, at each point of \(S\), the monodromy of any parabolic connection on the parabolic Gunning bundle is semisimple. Section 4 constructs the canonical parabolic filtration associated to any parabolic \(\operatorname{SL}(r,\mathbb{C})\)-oper. This parabolic filtration depends only on \(S\) and the integers \(N_{i}\). It is then proved that any parabolic connection on the associated parabolic bundle satisfies the Griffiths transversality condition with respect to the above filtration (all corresponding second fundamental forms are actually isomorphisms). Section 5 defines and studies several equivalent definitions of holomorphic differential operators between parabolic vector bundles. Under the above rationality assumption on the parabolic weights, Proposition 5.2 proves that holomorphic differential operators between parabolic vector bundles are canonically identified with the invariant holomorphic differential operators between corresponding orbifold vector bundles on a ramified Galois covering \(Y\) over \(X\) equipped with an action of the Galois group. We deduce the construction of the principal symbol map defined on the space of differential operators in the parabolic set-up (see Lemma 5.3). The last section focuses on the class of holomorphic differential operators associated to \(\operatorname{SL}(r,\mathbb{C})\)-opers. These are holomorphic differential operators between two parabolic line bundles over \(X\) naturally associated to the Gunning parabolic bundle (those line bundles only depend on the divisor \(S\) and the integers \(N_{i}\)). In this case the principal symbol is the constant function \(1\), and the sub-principal symbol map (constructed in Lemma 6.1), defined on the space of parabolic differential operators between the appropriate parabolic line bundles, vanishes. Then the main Theorem 6.2 stated above is proved. ## 2. A rank two parabolic bundle Let \(X\) be a compact connected Riemann surface. Its canonical line bundle will be denoted by \(K_{X}\). Fix a finite subset of \(n\) distinct points \[S\,:=\,\{x_{1},\,\cdots,\,x_{n}\}\,\subset\,X. \tag{2.1}\] The reduced effective divisor \(x_{1}+\ldots+x_{n}\) on \(X\) will also be denoted by \(S\). If \({\rm genus}(X)\,=\,0\), we assume that \(n\,\geq\,3\). For any holomorphic vector bundle \(E\) on \(X\), and any \(k\,\in\,\mathbb{Z}\), the holomorphic vector bundle \(E\otimes{\mathcal{O}}_{X}(kS)\) on \(X\) will be denoted by \(E(kS)\). Let us first start with the definition of a parabolic structure on a holomorphic vector bundle over \(X\) having \(S\) as the parabolic divisor. 
### Parabolic bundles and parabolic connections A quasiparabolic structure on a holomorphic vector bundle \(E\) on \(X\), associated to the divisor \(S\), is a filtration of subspaces of the fiber \(E_{x_{i}}\) of \(E\) over \(x_{i}\) \[E_{x_{i}}\,=\,E_{i,1}\,\supset\,E_{i,2}\,\supset\,\cdots\,\supset\,E_{i,l_{i}}\,\supset\,E_{i,l_{i}+1}\,=\,0 \tag{2.2}\] for every \(1\,\leq\,i\,\leq\,n\). A parabolic structure on \(E\) is a quasiparabolic structure as above together with a finite sequence of real numbers \[0\,\leq\,\alpha_{i,1}\,<\,\alpha_{i,2}\,<\,\cdots\,<\,\alpha_{i,l_{i}}\,<\,1 \tag{2.3}\] for every \(1\,\leq\,i\,\leq\,n\). The number \(\alpha_{i,j}\) is called the parabolic weight of the corresponding subspace \(E_{i,j}\) in (2.2) (see [MS], [MY]). A parabolic vector bundle is a holomorphic vector bundle \(E\) with a parabolic structure (\(\{E_{i,j}\}\), \(\{\alpha_{i,j}\}\)). It will be denoted by \(E_{*}\) for convenience. A _logarithmic connection_ on the holomorphic vector bundle \(E\), singular over \(S\), is a holomorphic differential operator of order one \[D\,:\,E\,\longrightarrow\,E\otimes K_{X}\otimes{\mathcal{O}}_{X}(S)\] satisfying the Leibniz rule, meaning \[D(fs)\,=\,fD(s)+s\otimes df \tag{2.4}\] for any locally defined holomorphic function \(f\) on \(X\) and any locally defined holomorphic section \(s\) of \(E\). Recall that any logarithmic connection on \(E\) over the Riemann surface is necessarily flat; indeed, the curvature (a \(2\)-form) vanishes identically because \(\Omega^{2,0}_{X}\,=\,0\). Take a point \(x_{i}\,\in\,S\). The fiber of \(K_{X}\otimes{\mathcal{O}}_{X}(S)\) over \(x_{i}\) is identified with \(\mathbb{C}\) by the Poincaré adjunction formula [GH, p. 146], which gives an isomorphism \[{\mathcal{O}}_{X}(-x_{i})_{x_{i}}\,\stackrel{{\sim}}{{\longrightarrow}}\,(K_{X})_{x_{i}}. \tag{2.5}\] To describe this isomorphism, let \(z\) be a holomorphic coordinate function on \(X\) defined on an analytic open neighborhood of \(x_{i}\) such that \(z(x_{i})\,=\,0\). We have an isomorphism \({\mathcal{O}}_{X}(-x_{i})_{x_{i}}\,\longrightarrow\,(K_{X})_{x_{i}}\) that sends \(z\) to \(dz(x_{i})\). It is straightforward to check that this map is actually independent of the choice of the holomorphic local coordinate \(z\) at \(x_{i}\). Let \(D\,:\,E\,\longrightarrow\,E\otimes K_{X}\otimes{\mathcal{O}}_{X}(S)\) be a logarithmic connection on \(E\). From (2.4) it follows that the composition of homomorphisms \[E\,\stackrel{{ D}}{{\longrightarrow}}\,E\otimes K_{X}\otimes{\mathcal{O}}_{X}(S)\,\longrightarrow\,(E\otimes K_{X}\otimes{\mathcal{O}}_{X}(S))_{x_{i}}\,\stackrel{{\sim}}{{\longrightarrow}}\,E_{x_{i}} \tag{2.6}\] is \(\mathcal{O}_{X}\)-linear; the above isomorphism \((E\otimes K_{X}\otimes\mathcal{O}_{X}(S))_{x_{i}}\,\stackrel{{\sim}}{{\longrightarrow}}\,E_{x_{i}}\) is given by the isomorphism in (2.5). Therefore, the composition of homomorphisms in (2.6) produces a \(\mathbb{C}\)-linear homomorphism \[\operatorname{Res}(D\,,x_{i})\,:\,E_{x_{i}}\,\longrightarrow\,E_{x_{i}}\,, \tag{2.7}\] which is called the _residue_ of the logarithmic connection \(D\) at \(x_{i}\) (see [De] for more details). **Remark 2.1**.: The local monodromy of \(D\) around \(x_{i}\) is conjugated to \[\exp\left(-2\pi\sqrt{-1}\cdot\operatorname{Res}(D,\,x_{i})\right)\,\in\,\operatorname{GL}(E_{x_{i}})\] [De]. Consider now \(E\) with its parabolic structure \(E_{*}\,=\,(E,\,(\{E_{i,j}\},\,\{\alpha_{i,j}\}))\); see (2.2), (2.3). 
A _parabolic connection_ on \(E_{*}\) is a logarithmic connection \(D\) on \(E\), singular over \(S\), such that 1. \(\operatorname{Res}(D,x_{i})(E_{i,j})\,\subset\,E_{i,j}\) for all \(1\,\leq\,j\,\leq\,l_{i},\,1\,\leq\,i\,\leq\,n\) (see (2.2)), and 2. the endomorphism of \(E_{i,j}/E_{i,j+1}\) induced by \(\operatorname{Res}(D,x_{i})\) coincides with multiplication by the parabolic weight \(\alpha_{i,j}\) for all \(1\,\leq\,j\,\leq\,l_{i},\,1\,\leq\,i\,\leq\,n\) (see (2.3)). **Remark 2.2**.: The following necessary and sufficient condition for \(E_{*}\) to admit a parabolic connection was given in [BL]: A parabolic vector bundle \(E_{*}\) admits a parabolic connection if and only if the parabolic degree of every direct summand of \(E_{*}\) is zero [BL, p. 594, Theorem 1.1]. ### The parabolic Gunning bundle Choose a holomorphic line bundle \(\mathcal{L}\) on \(X\) such that \(\mathcal{L}^{\otimes 2}\) is holomorphically isomorphic to \(K_{X}\); also fix a holomorphic isomorphism between \(\mathcal{L}^{\otimes 2}\) and \(K_{X}\). We have \(H^{1}(X,\,\operatorname{Hom}(\mathcal{L}^{*},\,\mathcal{L}))\,=\,H^{1}(X,\,K_{X})\,=\,H^{0}(X,\,\mathcal{O}_{X})^{*}\,=\,\mathbb{C}\) (Serre duality); note that here the chosen isomorphism between \(\mathcal{L}^{\otimes 2}\) and \(K_{X}\) is being used. Consequently, there is a natural nontrivial extension \(\widetilde{E}\) of \(\mathcal{L}^{*}\) by \(\mathcal{L}\) that corresponds to \[1\,\in\,H^{1}(X,\,\operatorname{Hom}(\mathcal{L}^{*},\,\mathcal{L})).\] So \(\widetilde{E}\) fits in a short exact sequence of holomorphic vector bundles \[0\,\longrightarrow\,\mathcal{L}\,\longrightarrow\,\widetilde{E}\,\stackrel{{ p_{0}}}{{\longrightarrow}}\,\mathcal{L}^{*}\,\longrightarrow\,0\,; \tag{2.8}\] this short exact sequence does not split holomorphically. Consider the subsheaf \(\mathcal{L}^{*}(-S)\,\subset\,\mathcal{L}^{*}\). Define \[E\,:=\,p_{0}^{-1}(\mathcal{L}^{*}(-S))\,\subset\,\widetilde{E}\,,\] where \(p_{0}\) is the projection in (2.8). From (2.8) we know that this \(E\) fits in a short exact sequence of holomorphic vector bundles \[0\,\longrightarrow\,\mathcal{L}\,\stackrel{{\iota}}{{\longrightarrow}}\,E\,\stackrel{{ p}}{{\longrightarrow}}\,\mathcal{L}^{*}(-S)\,\longrightarrow\,0\,; \tag{2.9}\] the projection \(p\) in (2.9) is the restriction, to the subsheaf \(E\), of \(p_{0}\) in (2.8). **Lemma 2.3**.: _Take any point \(x\,\in\,S\). The fiber \(E_{x}\) of \(E\) (see (2.9)) over \(x\) canonically decomposes as_ \[E_{x}\,=\,\mathcal{L}_{x}\oplus\mathcal{L}^{*}(-S)_{x}\,=\,\mathcal{L}_{x}\oplus\mathcal{L}_{x}\,.\] Proof.: Take \(x\,\in\,S\). First we have the homomorphism \[\iota(x)\,:\,{\mathcal{L}}_{x}\,\longrightarrow\,E_{x}\,, \tag{2.10}\] where \(\iota\) is the homomorphism in (2.9), which is evidently injective. On the other hand, tensoring (2.8) with \({\mathcal{O}}_{X}(-S)\) and using the natural map of it to (2.9) we have the commutative diagram \[\begin{array}{ccccccccc}0&\longrightarrow&{\mathcal{L}}(-S)&\stackrel{{\iota^{\prime}}}{{\longrightarrow}}&\widetilde{E}(-S)&\stackrel{{ p^{\prime}}}{{\longrightarrow}}&{\mathcal{L}}^{*}(-S)&\longrightarrow&0\\ &&\Big{\downarrow}\psi^{\prime}&&\Big{\downarrow}\psi&&\Big{\downarrow}\text{Id}&&\\ 0&\longrightarrow&{\mathcal{L}}&\stackrel{{\iota}}{{\longrightarrow}}&E&\stackrel{{ p}}{{\longrightarrow}}&{\mathcal{L}}^{*}(-S)&\longrightarrow&0,\end{array} \tag{2.11}\] where \(\iota^{\prime}\) and \(p^{\prime}\) are the restrictions of \(\iota\) and \(p\) respectively. 
Note that the composition of maps \[\psi(x)\circ\iota^{\prime}(x)\,:\,{\mathcal{L}}(-S)_{x}\,\longrightarrow\,E_ {x}\] in (2.11) is the zero homomorphism, because \(\psi^{\prime}(x)\,:\,{\mathcal{L}}(-S)_{x}\,\longrightarrow\,{\mathcal{L}}_ {x}\) is the zero homomorphism and \(\psi\circ\iota^{\prime}\,=\,\iota\circ\psi^{\prime}\) by the commutativity of (2.11). Since \(\psi(x)\circ\iota^{\prime}(x)\,=\,0\), the homomorphism \(\psi(x)\) is given by a homomorphism \[q_{x}\,:\,\widetilde{E}(-S)_{x}/(\iota^{\prime}(x)({\mathcal{L}}(-S)_{x}))\,= \,{\mathcal{L}}^{*}(-S)_{x}\,\longrightarrow\,E_{x}\,. \tag{2.12}\] The homomorphism \(q_{x}\) in (2.12) is injective, because \(\psi(x)\,\neq\,0\). From (2.10) and (2.12) we have \[\iota(x)\oplus q_{x}\,:\,{\mathcal{L}}_{x}\oplus{\mathcal{L}}^{*}(-S)_{x}\, \longrightarrow\,E_{x} \tag{2.13}\] which is clearly an isomorphism. Using (2.5) and the given isomorphism between \({\mathcal{L}}^{\otimes 2}\) and \(K_{X}\) we have \[{\mathcal{L}}^{*}(-S)_{x}\,=\,((K_{X})_{x}\otimes{\mathcal{L}}_{x}^{*})^{*} \otimes{\mathcal{O}}_{X}(-S)_{x}\,=\,({\mathcal{L}}_{x}^{*})^{*}\,=\,{\mathcal{ L}}_{x}\,.\] Hence the isomorphism in (2.13) gives that \(E_{x}\,=\,{\mathcal{L}}_{x}\oplus{\mathcal{L}}^{*}(-S)_{x}\,=\,{\mathcal{L}}_{x}\oplus{\mathcal{L}}_{x}\). For each \(x_{i}\,\in\,S\) (see (2.1)), fix \[c_{i}\,\in\,\mathbb{R} \tag{2.14}\] such that \(c_{i}\,>\,1\). Using \(\{c_{i}\}_{i=1}^{n}\) we will construct a parabolic structure on the holomorphic vector bundle \(E\) in (2.9). For any \(x_{i}\,\in\,S\), the quasiparabolic filtration of \(E_{x_{i}}\) is the following: \[0\,\subset\,{\mathcal{L}}^{*}(-S)_{x_{i}}\,\subset\,E_{x_{i}} \tag{2.15}\] (see Lemma 2.3). The parabolic weight of \({\mathcal{L}}^{*}(-S)_{x_{i}}\) is \(\frac{c_{i}+1}{2c_{i}+1}\); the parabolic weight of \(E_{x_{i}}\) is \(\frac{c_{i}}{2c_{i}+1}\). The parabolic vector bundle defined by this parabolic structure on \(E\) will be denoted by \(E_{*}\). Note that \[\text{par-deg}(E_{*})\,=\,\text{degree}(E)+\sum_{i=1}^{n}\left(\frac{c_{i}+1} {2c_{i}+1}+\frac{c_{i}}{2c_{i}+1}\right)\,=\,-n+n\,=\,0\,; \tag{2.16}\] in fact the parabolic second exterior product is \[\det E_{*}\,=\,\bigwedge^{2}E_{*}\,=\,(\bigwedge^{2}E)\otimes{\mathcal{O}}_{X }(S)\,=\,{\mathcal{O}}_{X}\,, \tag{2.17}\] where \({\mathcal{O}}_{X}\) is equipped with the trivial parabolic structure (no nonzero parabolic weights). **Proposition 2.4**.: 1. _The holomorphic vector bundle_ \(E\) _in (_2.9_) is isomorphic to a direct sum of holomorphic line bundles_ \(\mathcal{L}\oplus\mathcal{L}^{*}(-S)\)_._ 2. _The parabolic vector bundle_ \(E_{*}\) _in (_2.15_) is not isomorphic to a direct sum of parabolic line bundles._ Proof.: Consider the short exact sequence in (2.9). Note that \[H^{1}(X,\,\mathrm{Hom}(\mathcal{L}^{*}(-S),\,\mathcal{L}))\,=\,H^{1}(X,\,K_{X} (S))\,=\,H^{0}(X,\,\mathcal{O}_{X}(-S))^{*}\,=\,0\,.\] Hence the short exact sequence in (2.9) splits holomorphically, and \(E\,=\,\mathcal{L}\oplus\mathcal{L}^{*}(-S)\). This proves the first statement. To prove the second statement by contradiction, assume that \[E_{*}\,=\,A_{*}\oplus B_{*}\,, \tag{2.18}\] where \(A_{*}\) and \(B_{*}\) are parabolic line bundles on \(X\). Since \[\mathrm{par-deg}(A_{*})+\mathrm{par-deg}(B_{*})\,=\,\mathrm{par-deg}(E_{*})\,= \,0\] (see (2.16)), at least one of \(A_{*}\) and \(B_{*}\) has nonnegative parabolic degree. Assume that \(\mathrm{par-deg}(A_{*})\,\geq\,0\). 
Since the parabolic degree of the quotient \(\mathcal{L}^{*}(-S)\) in (2.9), equipped with the parabolic structure induced by \(E_{*}\), is negative (recall that \(n\,\geq\,3\) if \(\mathrm{genus}(X)\,=\,0\)), there is no nonzero homomorphism from \(A_{*}\) to it (recall that \(\mathrm{par-deg}(A_{*})\,\geq\,0\)). Consequently, the parabolic subbundle \(A_{*}\,\subset\,E_{*}\) in (2.18) coincides with the subbundle \(\mathcal{L}\) in (2.9) equipped with the parabolic structure induced by \(E_{*}\). This implies that the following composition of homomorphisms \[B\,\hookrightarrow\,E\,\longrightarrow\,E/\mathcal{L}\,=\,\mathcal{L}^{*}(-S)\] is an isomorphism, where \(B\) denotes the holomorphic line bundle underlying \(B_{*}\) in (2.18). Therefore, the inclusion map \(B\,\hookrightarrow\,E\) in (2.18) produces a holomorphic splitting \[\rho\,:\,\mathcal{L}^{*}(-S)\,\longrightarrow\,E \tag{2.19}\] of (2.9). Since \(\rho\) in (2.19) is given by (2.18), and the parabolic subbundle \(A_{*}\,\subset\,E_{*}\) in (2.18) coincides with the subbundle \(\mathcal{L}\) in (2.9) equipped with the parabolic structure induced by \(E_{*}\), it follows that for all \(x\,\in\,S\), \[\rho(\mathcal{L}^{*}(-S)_{x})\,=\,\mathcal{L}^{*}(-S)_{x}\,\subset\,E_{x}\,. \tag{2.20}\] Recall that the quasiparabolic structure of \(E_{*}\) at \(x\) is given by the subspace \(\mathcal{L}^{*}(-S)_{x}\,\subset\,E_{x}\) in Lemma 2.3, and therefore \(\mathcal{L}^{*}(-S)_{x}\) must lie in the image, in \(E_{*}\), of either \(A_{*}\) or \(B_{*}\). From (2.20) it follows that \(\rho\) in (2.19) satisfies the condition \[\rho(\mathcal{L}^{*}(-S))\,\subset\,\psi(\widetilde{E}(-S))\,\subset\,E\,,\] where \(\psi\) is the homomorphism in (2.11). Consequently, \(\rho\) produces a unique holomorphic homomorphism \[\alpha\,:\,\mathcal{L}^{*}(-S)\,\longrightarrow\,\widetilde{E}(-S)\] such that \(\rho\,=\,\psi\circ\alpha\) on \(\mathcal{L}^{*}(-S)\). This homomorphism \(\alpha\) evidently gives a holomorphic splitting of the top exact sequence in (2.11), meaning \(p^{\prime}\circ\alpha\,=\,\mathrm{Id}_{\mathcal{L}^{*}(-S)}\), where \(p^{\prime}\) is the projection in (2.11). After tensoring the above homomorphism \(\alpha\) with \(\operatorname{Id}_{\mathcal{O}_{X}(S)}\) we get a homomorphism \[\mathcal{L}^{*}\,=\,\mathcal{L}^{*}(-S)\otimes\mathcal{O}_{X}(S)\,\xrightarrow{\alpha\otimes\operatorname{Id}_{\mathcal{O}_{X}(S)}}\,\widetilde{E}(-S)\otimes\mathcal{O}_{X}(S)\,=\,\widetilde{E}\] that splits holomorphically the short exact sequence in (2.8). But, as noted earlier, the short exact sequence in (2.8) does not split holomorphically. In view of this contradiction we conclude that there is no decomposition as in (2.18). **Remark 2.5**.: Regarding Proposition 2.4(1) it should be clarified that although \(E\) in (2.9) is isomorphic to \(\mathcal{L}\oplus\mathcal{L}^{*}(-S)\), there is no natural isomorphism between them. Indeed, any two holomorphic splittings of the short exact sequence (2.9) differ by an element of \[H^{0}(X,\,\operatorname{Hom}(\mathcal{L}^{*}(-S),\,\mathcal{L}))\,=\,H^{0}(X,\,K_{X}(S)).\] A holomorphic splitting of the short exact sequence (2.9) produces an isomorphism of \(E_{x}\) with \(\mathcal{L}_{x}\oplus\mathcal{L}^{*}(-S)_{x}\) for any \(x\,\in\,X\), but this isomorphism depends on the choice of the splitting. This shows that Proposition 2.4(1) does not imply Lemma 2.3. 
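Note in passing that the degree count in (2.16) is elementary to verify: since \(\mathcal{L}^{\otimes 2}\) is isomorphic to \(K_{X}\), we have \(\deg\mathcal{L}\,=\,g-1\), where \(g\,=\,\mathrm{genus}(X)\), and therefore
\[\deg E\,=\,\deg\mathcal{L}+\deg\mathcal{L}^{*}(-S)\,=\,(g-1)+(1-g-n)\,=\,-n\,,\qquad \sum_{i=1}^{n}\Big{(}\frac{c_{i}+1}{2c_{i}+1}+\frac{c_{i}}{2c_{i}+1}\Big{)}\,=\,n\,,\]
so that \(\text{par-deg}(E_{*})\,=\,-n+n\,=\,0\), as asserted there.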
We recall that a parabolic connection on the parabolic vector bundle \(E_{*}\) in (2.15) is a logarithmic connection \(D_{0}\,:\,E\,\longrightarrow\,E\otimes K_{X}(S)\) on \(E\), singular over \(S\), such that the following conditions hold: 1. for any \(x_{i}\,\in\,S\) the eigenvalues of the residue \(\operatorname{Res}(D_{0},\,x_{i})\) of \(D_{0}\) at \(x_{i}\) are \(\frac{c_{i}+1}{2c_{i}+1}\) and \(\frac{c_{i}}{2c_{i}+1}\) (see (2.14)). 2. The eigenspace in \(E_{x_{i}}\) for the eigenvalue \(\frac{c_{i}+1}{2c_{i}+1}\) of \(\operatorname{Res}(D_{0},\,x_{i})\) is the line \[\mathcal{L}^{*}(-S)_{x}\,\subset\,E_{x_{i}}\] in Lemma 2.3. Let \(D_{0}\,:\,E\,\longrightarrow\,E\otimes K_{X}(S)\) be a logarithmic connection on \(E\). Take the holomorphic line subbundle \(\mathcal{L}\,\subset\,E\) in (2.9), and consider the composition of homomorphisms \[\mathcal{L}\,\hookrightarrow\,E\,\xrightarrow{D_{0}}\,E\otimes K_{X}(S)\, \xrightarrow{p\otimes\operatorname{Id}_{K_{X}(S)}}\,\mathcal{L}^{*}(-S) \otimes K_{X}(S)\,=\,\mathcal{L}\,,\] where \(p\) is the projection in (2.9); this composition of homomorphisms will be denoted by \(\mathcal{S}(D_{0},\,\mathcal{L})\). This homomorphism \[\mathcal{S}(D_{0},\,\mathcal{L})\,:\,\mathcal{L}\,\longrightarrow\,\mathcal{ L} \tag{2.21}\] is called the second fundamental form of the subbundle \(\mathcal{L}\,\subset\,E\) for the logarithmic connection \(D_{0}\). We note that \(\mathcal{S}(D_{0},\,\mathcal{L})\) is a constant scalar multiplication. A parabolic connection on \(E_{*}\) induces a holomorphic connection on \(\det E_{*}\,=\,\mathcal{O}_{X}\) (see (2.17)). Note that any holomorphic connection on \(\mathcal{O}_{X}\) is of the form \(d+\omega\), where \(d\) denotes the de Rham differential and \(\omega\,\in\,H^{0}(X,\,K_{X})\). A parabolic connection \(D_{0}\) on \(E_{*}\) is called a parabolic \(\operatorname{SL}(2,\mathbb{C})\)-connection if the connection on \(\det E_{*}\,=\,\mathcal{O}_{X}\) induced by \(D_{0}\) coincides with the trivial connection \(d\). **Corollary 2.6**.: 1. _The parabolic vector bundle_ \(E_{*}\) _in (_2.15_) admits a parabolic_ \(\operatorname{SL}(2,\mathbb{C})\)_-connection._ 2. _For any parabolic connection_ \(D_{0}\) _on_ \(E_{*}\)_, the second fundamental form_ \(\mathcal{S}(D_{0},\,\mathcal{L})\) _in_ (2.21) _is an isomorphism of_ \(\mathcal{L}\)_._ 3. _For any parabolic connection_ \(D_{0}\) _on_ \(E_{*}\) _the local monodromy of_ \(D_{0}\) _around any point of_ \(S\) _is semisimple._ Proof.: In view of Remark 2.2, from (2.16) and the second statement in Proposition 2.4 it follows immediately that \(E_{*}\) admits a parabolic connection. Take a parabolic connection \(D_{0}\) on \(E_{*}\). Let \(d+\omega\) be the connection on \(\det E_{*}\,=\,\mathcal{O}_{X}\) induced by \(D_{0}\), where \(\omega\,\in\,H^{0}(X,\,K_{X})\) and \(d\) is the de Rham differential. Then \(D_{0}-\frac{1}{2}\omega\otimes\mathrm{Id}_{E}\) is a parabolic \(\mathrm{SL}(2,\mathbb{C})\)-connection on \(E_{*}\). For any parabolic connection \(D_{0}\) on \(E_{*}\), consider the second fundamental form \(\mathcal{S}(D_{0},\,\mathcal{L})\) in the second statement. If \(\mathcal{S}(D_{0},\,\mathcal{L})\,=\,0\), then \(D_{0}\) produces a parabolic connection on the line subbundle \(\mathcal{L}\,\subset\,E\) in (2.9) equipped with the parabolic structure induced by \(E_{*}\). 
But the parabolic degree of this parabolic line bundle is \[g-1+\sum_{i=1}^{n}\frac{c_{i}}{2c_{i}+1}\,>\,0.\] This implies that this parabolic line bundle does not admit any parabolic connection. Hence we conclude that \(\mathcal{S}(D_{0},\,\mathcal{L})\,\neq\,0\). This implies that \(\mathcal{S}(D_{0},\,\mathcal{L})\) is an isomorphism of \(\mathcal{L}\). The local monodromy of \(D_{0}\) around any \(x\,\in\,S\) is conjugate to \(\exp\left(-2\pi\sqrt{-1}\cdot\mathrm{Res}(D_{0},\,x)\right)\) (see Remark 2.1). Hence the eigenvalues of the local monodromy for \(D_{0}\) around each \(x_{i}\,\in\,S\) are \(\exp\left(-2\pi\sqrt{-1}\frac{c_{i}+1}{2c_{i}+1}\right)\) and \(\exp\left(-2\pi\sqrt{-1}\frac{c_{i}}{2c_{i}+1}\right)\). This proves the third statement. We will see in Corollary 4.2 that the endomorphism \(\mathcal{S}(D_{0},\,\mathcal{L})\) in Corollary 2.6(2) is actually independent of the parabolic connection \(D_{0}\) on \(E_{*}\). **Corollary 2.7**.: _Take any parabolic connection \(D_{0}\) on \(E_{*}\). There is no holomorphic line subbundle of \(E\) preserved by \(D_{0}\)._ Proof.: Let \(L\,\subset\,E\) be a holomorphic line subbundle preserved by \(D_{0}\). Denote by \(L_{*}\) the parabolic line bundle defined by the parabolic structure on \(L\) induced by \(E_{*}\). Since \(D_{0}\) is a parabolic connection on \(E_{*}\), its restriction to \(L\) is a parabolic connection on \(L_{*}\). Therefore, we have \[\mathrm{par-deg}(L_{*})\,=\,0. \tag{2.22}\] Consider the parabolic structure on the quotient \(\mathcal{L}^{*}(-S)\) in (2.9) induced by \(E_{*}\). Its parabolic degree is negative, and hence from (2.22) we conclude that there is no nonzero parabolic homomorphism from \(L_{*}\) to it. Consequently, the subbundle \(L\,\subset\,E\) coincides with the subbundle \(\mathcal{L}\) in (2.9). Since \(L\,=\,\mathcal{L}\) is preserved by \(D_{0}\), the second fundamental form \(\mathcal{S}(D_{0},\,\mathcal{L})\) in (2.21) vanishes identically. But this contradicts Corollary 2.6(2). Hence \(D_{0}\) does not preserve any holomorphic line subbundle of \(E\). Given a parabolic connection \(D\) on \(E_{*}\), consider its monodromy representation \[\mathrm{Mon}_{D}\,:\,\pi_{1}(X\setminus S,\,y)\,\longrightarrow\,\mathrm{GL}(2,\mathbb{C})\,,\] where \(y\,\in\,X\setminus S\) is a base point. Corollary 2.7 implies that \({\rm Mon}_{D}\) is irreducible, meaning the action of \({\rm Mon}_{D}(\pi_{1}(X\setminus S,\,y))\,\subset\,{\rm GL}(2,{\mathbb{C}})\) on \({\mathbb{C}}^{2}\) does not preserve any line. ### Orbifold structure In this subsection we assume that \(\{c_{i}\}_{i=1}^{n}\) in (2.14) are all integers; recall that \(c_{i}\,>\,1\) for all \(1\,\leq\,i\,\leq\,n\). There is a ramified Galois covering \[\varphi\,:\,Y\,\longrightarrow\,X \tag{2.23}\] satisfying the following two conditions: * \(\varphi\) is unramified over the complement \(X\setminus S\), and * for every \(x_{i}\,\in\,S\) and one (hence every) point \(y\,\in\,\varphi^{-1}(x_{i})\), the order of the ramification of \(\varphi\) at \(y\) is \(2c_{i}+1\). Such a ramified Galois covering \(\varphi\) exists; see [Na, p. 26, Proposition 1.2.12]. Let \[\Gamma\,:=\,{\rm Gal}(\varphi)\,=\,{\rm Aut}(Y/X)\,\subset\,{\rm Aut}(Y) \tag{2.24}\] be the Galois group for the Galois covering \(\varphi\). A holomorphic vector bundle \(V\,\stackrel{{ q_{0}}}{{\longrightarrow}}\,Y\) is called an _orbifold bundle_ if \(\Gamma\) acts on the total space of \(V\) such that the following three conditions hold: 1. 
The map \(V\,\longrightarrow\,V\) given by the action of any element of \(\Gamma\) on \(V\) is holomorphic, 2. the projection \(q_{0}\) is \(\Gamma\)-equivariant, and 3. the action of any \(\gamma\,\in\,\Gamma\) on \(V\) is a holomorphic automorphism of the vector bundle \(V\) over the automorphism \(\gamma\) of \(Y\). Recall that the parabolic weights of \(E_{*}\) at any \(x_{i}\,\in\,S\) are integral multiples of \(\frac{1}{2c_{i}+1}\). Therefore, there is a unique, up to an isomorphism, orbifold vector bundle \({\mathcal{V}}\) of rank two on \(Y\) which corresponds to the parabolic vector bundle \(E_{*}\) [Bi], [Bo1], [Bo2]. The action of \(\Gamma\) on this \({\mathcal{V}}\) produces an action of \(\Gamma\) on the direct image \(\varphi_{*}{\mathcal{V}}\). We have \[(\varphi_{*}{\mathcal{V}})^{\Gamma}\,=\,E\,. \tag{2.25}\] From (2.17) it follows that \[\det{\mathcal{V}}\,=\,\bigwedge^{2}{\mathcal{V}}\,=\,{\mathcal{O}}_{Y}\,, \tag{2.26}\] and the action of \(\Gamma\) on the orbifold bundle \(\det{\mathcal{V}}\) coincides with the action of \(\Gamma\) on \({\mathcal{O}}_{Y}\) given by the action of \(\Gamma\) on \(Y\). Consider the subbundle \({\mathcal{L}}\,\subset\,E\) in (2.9). Let \[{\bf L}\,\subset\,{\mathcal{V}} \tag{2.27}\] be the orbifold line subbundle corresponding to it. So the action of \(\Gamma\) on \({\mathcal{V}}\) preserves the subbundle \({\bf L}\), and the subbundle \[(\varphi_{*}{\bf L})^{\Gamma}\,\subset\,(\varphi_{*}{\mathcal{V}})^{\Gamma}\,=\,E\] coincides with \({\mathcal{L}}\). The action of \(\Gamma\) on \(Y\) produces an action of \(\Gamma\) on the canonical bundle \(K_{Y}\). For any automorphism \(\gamma\,\in\,\Gamma\) consider its differential \(d\gamma\,:\,TY\,\longrightarrow\,\gamma^{*}TY\). The action of \(\gamma\) on \(K_{Y}\) is given by \(((d\gamma)^{*})^{-1}\,=\,(d\gamma^{-1})^{*}\). Therefore, \(K_{Y}\) is an orbifold line bundle. **Lemma 2.8**.: _The orbifold line bundle \({\bf L}^{\otimes 2}\) (see (2.27)) is isomorphic to the orbifold line bundle \(K_{Y}\)._ Proof.: Let \({\mathcal{L}}_{*}\) denote the holomorphic line subbundle \({\mathcal{L}}\) in (2.9) equipped with the parabolic structure on it induced by \(E_{*}\). So the underlying holomorphic line bundle for the parabolic bundle \({\mathcal{L}}_{*}\otimes{\mathcal{L}}_{*}\) is \(K_{X}\), and the parabolic weight at any \(x_{i}\,\in\,S\) is \(\frac{2c_{i}}{2c_{i}+1}\). Hence the orbifold line bundle on \(Y\) corresponding to \({\mathcal{L}}_{*}\otimes{\mathcal{L}}_{*}\) is \[(\varphi^{*}K_{X})\otimes{\mathcal{O}}_{Y}\left(\sum_{i=1}^{n}2c_{i}\varphi^{-1}(x_{i})_{\rm red}\right)\,=\,K_{Y}\] equipped with the action of \(\Gamma\) given by the action of \(\Gamma\) on \(Y\), where \(\varphi^{-1}(x_{i})_{\rm red}\) is the reduced inverse image of \(x_{i}\). Since the orbifold line bundle \({\bf L}^{\otimes 2}\) corresponds to the parabolic line bundle \({\mathcal{L}}_{*}\otimes{\mathcal{L}}_{*}\), the lemma follows. From Lemma 2.8 it follows that \({\bf L}\) is an orbifold theta characteristic on \(Y\), and from (2.26) we have a short exact sequence of orbifold bundles \[0\,\longrightarrow\,{\bf L}\,\longrightarrow\,{\mathcal{V}}\,\longrightarrow\,{\bf L}^{*}\,\longrightarrow\,0\,. \tag{2.28}\] **Corollary 2.9**.: _The short exact sequence in (2.28) does not admit any \(\Gamma\)-equivariant holomorphic splitting._ Proof.: If (2.28) has a \(\Gamma\)-equivariant holomorphic splitting, then \({\mathcal{V}}\) is a direct sum of orbifold line bundles. 
This would imply that the parabolic vector bundle \(E_{*}\) -- that corresponds to \({\mathcal{V}}\) -- is a direct sum of parabolic line bundles. Therefore, from Proposition 2.4(2) it follows that (2.28) does not admit any \(\Gamma\)-equivariant holomorphic splitting. Actually a stronger form of Corollary 2.9 can be proved using it. **Proposition 2.10**.: _The short exact sequence of holomorphic vector bundles in (2.28) does not admit any holomorphic splitting._ Proof.: Assume that there is a holomorphic splitting \[\rho\;:\;{\bf L}^{*}\,\longrightarrow\,{\mathcal{V}}\] of the short exact sequence of holomorphic vector bundles in (2.28). Although \(\rho\) itself may not be \(\Gamma\)-equivariant, using it we will construct a \(\Gamma\)-equivariant splitting. For any \(\gamma\,\in\,\Gamma\), the composition of homomorphisms \[{\bf L}^{*}\;\stackrel{{\gamma}}{{\longrightarrow}}\;{\bf L}^{* }\;\stackrel{{\rho}}{{\longrightarrow}}\;{\mathcal{V}}\; \stackrel{{\gamma^{-1}}}{{\longrightarrow}}\;{\mathcal{V}}\,,\] which will be denoted by \(\rho[\gamma]\), is also a holomorphic splitting of the short exact sequence of holomorphic vector bundles in (2.28). Now the average \[\widetilde{\rho}\,:=\,\frac{1}{\#\Gamma}\sum_{\gamma\in\Gamma}\rho[\gamma]\;: \;{\bf L}^{*}\,\longrightarrow\,{\mathcal{V}},\] where \(\#\Gamma\) is the order of \(\Gamma\), is a \(\Gamma\)-equivariant holomorphic splitting of the short exact sequence of holomorphic vector bundles in (2.28). But this contradicts Corollary 2.9. Therefore, the short exact sequence of holomorphic vector bundles in (2.28) does not admit any holomorphic splitting. The \(\Gamma\)-invariant holomorphic connections on \(\mathcal{V}\) correspond to the parabolic connections on \(E_{*}\). Moreover, the parabolic \(\operatorname{SL}(2,\mathbb{C})\)-connections on \(E_{*}\) correspond to the \(\Gamma\)-invariant holomorphic connections \(D_{V}\) on \(\mathcal{V}\) that satisfy the condition that the holomorphic connection on \(\det\mathcal{V}\,=\,\mathcal{O}_{Y}\) (see (2.26)) induced by \(D_{V}\) is the trivial connection on \(\mathcal{O}_{Y}\) given by the de Rham differential. **Lemma 2.11**.: _The orbifold vector bundle \(\mathcal{V}\) admits \(\operatorname{SL}(2,\mathbb{C})\)-oper connections. The parabolic \(\operatorname{SL}(2,\mathbb{C})\)-connections on the parabolic bundle \(E_{*}\) are precisely the \(\Gamma\)-invariant \(\operatorname{SL}(2,\mathbb{C})\)-oper structures on the orbifold bundle \(\mathcal{V}\)._ Proof.: From Proposition 2.10 it follows immediately that \(\mathcal{V}\) admits \(\operatorname{SL}(2,\mathbb{C})\)-oper connections. Now the second statement of the lemma is deduced from the above observation that the parabolic \(\operatorname{SL}(2,\mathbb{C})\)-connections on \(E_{*}\) correspond to the \(\Gamma\)-invariant holomorphic connections \(D_{V}\) on \(\mathcal{V}\) that satisfy the condition that the holomorphic connection on \(\det\mathcal{V}\,=\,\mathcal{O}_{Y}\) induced by \(D_{V}\) is the trivial connection on \(\mathcal{O}_{Y}\). ## 3. Symmetric powers of parabolic bundle ### Explicit description of some symmetric powers In Section 3.2 we will define parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers for all \(r\,\geq\,2\). The definition involves symmetric powers of the parabolic vector bundle \(E_{*}\) in (2.16). Keeping this in mind, we will explicitly describe a few low degree symmetric powers of the parabolic vector bundle \(E_{*}\). 
This will be done using the alternative description of parabolic bundles -- given by Maruyama and Yokogawa in [MY] (see also [Yo] and [BDHP, Appendix A3]) -- as filtered sheaves. This approach of [MY] is better suited for handling the tensor product, symmetric product and exterior product of parabolic vector bundles. First we will describe the second symmetric power \(\operatorname{Sym}^{2}(E_{*})\) of the parabolic vector bundle \(E_{*}\). Consider the rank three holomorphic vector bundle \(\operatorname{Sym}^{2}(E)\), where \(E\) is the vector bundle in (2.9). Since \(\operatorname{Sym}^{2}(E)\) is a quotient of \(E^{\otimes 2}\), any subspace of \(E^{\otimes 2}_{x}\) produces a subspace of \(\operatorname{Sym}^{2}(E)_{x}\). For each \(x_{i}\,\in\,S\), let \[B_{i}\,\subset\,\operatorname{Sym}^{2}(E)_{x_{i}}\,=\,\operatorname{Sym}^{2}(E_{x_{i}})\] be the subspace given by the image of \[E_{x_{i}}\otimes\mathcal{L}^{*}(-S)_{x_{i}}\,\subset\,E^{\otimes 2}_{x_{i}}\] in \(\operatorname{Sym}^{2}(E_{x_{i}})\), where \(\mathcal{L}^{*}(-S)_{x_{i}}\,\subset\,E_{x_{i}}\) is the subspace in Lemma 2.3. Consider the unique holomorphic vector bundle \(E^{2}\) of rank three on \(X\) that fits in the following short exact sequence of sheaves \[0\,\longrightarrow\,E^{2}\,\longrightarrow\,\operatorname{Sym}^{2}(E)(S)\,:=\,\operatorname{Sym}^{2}(E)\otimes\mathcal{O}_{X}(S) \tag{3.1}\] \[\longrightarrow\,\bigoplus_{i=1}^{n}\big{(}{\rm Sym}^{2}(E)_{x_{i}}/B_{i}\big{)}\otimes{\mathcal{O}}_{X}(S)_{x_{i}}\,\longrightarrow\,0\,.\] The holomorphic vector bundle underlying the parabolic vector bundle \({\rm Sym}^{2}(E_{*})\) is \(E^{2}\). **Lemma 3.1**.: _For every \(x_{i}\,\in\,S\), the fiber \(E_{x_{i}}^{2}\) fits in a natural exact sequence_ \[\begin{array}{ccccc}0\,\longrightarrow\,{\mathcal{L}}_{x_{i}}^{\otimes 2}\,\longrightarrow\,E_{x_{i}}^{2}\,\longrightarrow\,B_{i}\otimes{\mathcal{O}}_{X}(S)_{x_{i}}\\ \\ =\,(E_{x_{i}}\otimes{\mathcal{L}}^{*}(-S)_{x_{i}})\otimes{\mathcal{O}}_{X}(S)_{x_{i}}\,=\,(E\otimes{\mathcal{L}}^{*})_{x_{i}}\,\longrightarrow\,0\,.\end{array}\] Proof.: Consider the commutative diagram \[\begin{array}{ccccccccc}0&\longrightarrow&{\rm Sym}^{2}(E)&\longrightarrow&{\rm Sym}^{2}(E)(S)&\longrightarrow&\bigoplus_{i=1}^{n}{\rm Sym}^{2}(E)(S)_{x_{i}}&\longrightarrow&0\\ &&\Big{\downarrow}{\bf f}&&\Big{\downarrow}{\rm Id}&&\Big{\downarrow}\\ 0&\longrightarrow&E^{2}&\longrightarrow&{\rm Sym}^{2}(E)(S)&\longrightarrow&\bigoplus_{i=1}^{n}\frac{{\rm Sym}^{2}(E)(S)_{x_{i}}}{B_{i}\otimes{\mathcal{O}}_{X}(S)_{x_{i}}}&\longrightarrow&0.\end{array}\] For any \(x_{i}\,\in\,S\), the map \({\bf f}(x_{i})\,:\,{\rm Sym}^{2}(E)_{x_{i}}\,\longrightarrow\,E_{x_{i}}^{2}\) is injective on the subspace \({\mathcal{L}}_{x_{i}}^{\otimes 2}\hookrightarrow{\rm Sym}^{2}(E)_{x_{i}}\), and moreover \({\bf f}(x_{i})({\mathcal{L}}_{x_{i}}^{\otimes 2})\,\subset\,E_{x_{i}}^{2}\) coincides with \({\bf f}(x_{i})({\rm Sym}^{2}(E)_{x_{i}})\). Therefore, the subspace \({\mathcal{L}}_{x_{i}}^{\otimes 2}\,\hookrightarrow\,E_{x_{i}}^{2}\) in the lemma is the image of the homomorphism \({\bf f}(x_{i})\). For the map \(E^{2}\,\longrightarrow\,{\rm Sym}^{2}(E)(S)\,:=\,{\rm Sym}^{2}(E)\otimes{\mathcal{O}}_{X}(S)\) in (3.1), the image of \(E_{x_{i}}^{2}\) is \[B_{i}\otimes{\mathcal{O}}_{X}(S)_{x_{i}}\,=\,(E_{x_{i}}\otimes{\mathcal{L}}^{*}(-S)_{x_{i}})\otimes{\mathcal{O}}_{X}(S)_{x_{i}}\,=\,(E\otimes{\mathcal{L}}^{*})_{x_{i}}\,\subset\,{\rm Sym}^{2}(E)(S)_{x_{i}}\,.\] This proves the lemma. 
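As a guide to the weight bookkeeping carried out below, note that in the filtered-sheaf formalism of [MY] the parabolic weights of \(\operatorname{Sym}^{2}(E_{*})\) at \(x_{i}\) arise from the pairwise sums of the weights \(\frac{c_{i}}{2c_{i}+1}\) and \(\frac{c_{i}+1}{2c_{i}+1}\) of \(E_{*}\):
\[\frac{2c_{i}}{2c_{i}+1}\,,\qquad \frac{c_{i}}{2c_{i}+1}+\frac{c_{i}+1}{2c_{i}+1}\,=\,1\,,\qquad \frac{2(c_{i}+1)}{2c_{i}+1}\,=\,1+\frac{1}{2c_{i}+1}\,;\]
after the twist by \(\mathcal{O}_{X}(S)\) in (3.1) absorbs the integer parts, the resulting weights \(\frac{2c_{i}}{2c_{i}+1}\), \(\frac{1}{2c_{i}+1}\) and \(0\) are exactly the ones assigned in the next paragraph.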
For any \(x_{i}\,\in\,S\), consider the subspace \[{\mathcal{L}}^{*}(-S)_{x_{i}}^{\otimes 2}\,\subset\,B_{i}\,=\,({\mathcal{L}}_{x_{i}}\otimes{\mathcal{L}}^{*}(-S)_{x_{i}})\oplus{\mathcal{L}}^{*}(-S)_{x_{i}}^{\otimes 2}\,.\] Let \[{\mathcal{F}}_{i}\,\subset\,E_{x_{i}}^{2} \tag{3.2}\] be the inverse image of \({\mathcal{L}}^{*}(-S)_{x_{i}}^{\otimes 2}\otimes{\mathcal{O}}_{X}(S)_{x_{i}}\,\subset\,B_{i}\otimes{\mathcal{O}}_{X}(S)_{x_{i}}\) for the quotient map \(E_{x_{i}}^{2}\,\longrightarrow\,B_{i}\otimes{\mathcal{O}}_{X}(S)_{x_{i}}\) in Lemma 3.1. As mentioned before, the holomorphic vector bundle underlying the parabolic vector bundle \({\rm Sym}^{2}(E_{*})\) is \(E^{2}\). The quasiparabolic filtration of \(E_{x_{i}}^{2}\), where \(x_{i}\,\in\,S\), is the following: \[{\mathcal{L}}_{x_{i}}^{\otimes 2}\,\subset\,\,{\mathcal{F}}_{i}\,\,\subset\,\,E_{x_{i}}^{2}\,, \tag{3.3}\] where \({\mathcal{L}}_{x_{i}}^{\otimes 2}\) and \({\mathcal{F}}_{i}\) are the subspaces in Lemma 3.1 and (3.2) respectively. The parabolic weight of \({\mathcal{L}}_{x_{i}}^{\otimes 2}\) is \(\frac{2c_{i}}{2c_{i}+1}\) and the parabolic weight of \({\mathcal{F}}_{i}\) is \(\frac{1}{2c_{i}+1}\); the parabolic weight of \(E_{x_{i}}^{2}\) is \(0\). The parabolic symmetric product \({\rm Sym}^{3}(E_{*})\) is actually a little easier to describe. The holomorphic vector bundle underlying the parabolic vector bundle \({\rm Sym}^{3}(E_{*})\) is the rank four vector bundle \[E^{3}\,:=\,({\rm Sym}^{3}(E))\otimes{\mathcal{O}}_{X}(S). \tag{3.4}\] For each \(x_{i}\,\in\,S\), the decomposition of \(E_{x_{i}}\) in Lemma 2.3 gives the following decomposition of the fiber \(E_{x_{i}}^{3}\): \[\big{(}({\mathcal{L}}^{*}(-S)_{x_{i}}^{\otimes 3})\oplus({\mathcal{L}}^{*}(-S)_{x_{i}}^{\otimes 2}\otimes{\mathcal{L}}_{x_{i}})\oplus({\mathcal{L}}^{*}(-S)_{x_{i}}\otimes{\mathcal{L}}_{x_{i}}^{\otimes 2})\oplus({\mathcal{L}}_{x_{i}}^{\otimes 3})\big{)}\otimes{\mathcal{O}}_{X}(S)_{x_{i}}\,=\,E_{x_{i}}^{3}\,. \tag{3.5}\] The quasiparabolic filtration of \(E^{3}_{x_{i}}\) is \[(\mathcal{L}^{*}(-S)^{\otimes 3}_{x_{i}})\otimes\mathcal{O}_{X}(S)_{x_{i}}\,\subset\,\big{(}(\mathcal{L}^{*}(-S)^{\otimes 3}_{x_{i}})\oplus(\mathcal{L}^{*}(-S)^{\otimes 2}_{x_{i}}\otimes\mathcal{L}_{x_{i}})\big{)}\otimes\mathcal{O}_{X}(S)_{x_{i}} \tag{3.6}\] \[\subset\,\big{(}(\mathcal{L}^{*}(-S)^{\otimes 3}_{x_{i}})\oplus(\mathcal{L}^{*}(-S)^{\otimes 2}_{x_{i}}\otimes\mathcal{L}_{x_{i}})\oplus(\mathcal{L}^{*}(-S)_{x_{i}}\otimes\mathcal{L}^{\otimes 2}_{x_{i}})\big{)}\otimes\mathcal{O}_{X}(S)_{x_{i}}\,\subset\,E^{3}_{x_{i}}.\] The parabolic weight of \(\mathcal{L}^{*}(-S)^{\otimes 3}_{x_{i}}\otimes\mathcal{O}_{X}(S)_{x_{i}}\) is \(\frac{c_{i}+2}{2c_{i}+1}\). The parabolic weight of \[\big{(}(\mathcal{L}^{*}(-S)^{\otimes 3}_{x_{i}})\oplus(\mathcal{L}^{*}(-S)^{\otimes 2}_{x_{i}}\otimes\mathcal{L}_{x_{i}})\big{)}\otimes\mathcal{O}_{X}(S)_{x_{i}}\] is \(\frac{c_{i}+1}{2c_{i}+1}\), the parabolic weight of \(\big{(}(\mathcal{L}^{*}(-S)^{\otimes 3}_{x_{i}})\oplus(\mathcal{L}^{*}(-S)^{\otimes 2}_{x_{i}}\otimes\mathcal{L}_{x_{i}})\oplus(\mathcal{L}^{*}(-S)_{x_{i}}\otimes\mathcal{L}^{\otimes 2}_{x_{i}})\big{)}\otimes\mathcal{O}_{X}(S)_{x_{i}}\) is \(\frac{c_{i}}{2c_{i}+1}\), and the parabolic weight of \(E^{3}_{x_{i}}\) is \(\frac{c_{i}-1}{2c_{i}+1}\). Finally, we will describe the parabolic symmetric product \(\operatorname{Sym}^{4}(E_{*})\). 
Consider the rank five vector bundle \[\operatorname{Sym}^{4}(E)(2S)\,=\,(\operatorname{Sym}^{4}(E))\otimes \mathcal{O}_{X}(2S)\,.\] Using Lemma 2.3, the fiber \(\operatorname{Sym}^{4}(E)(2S)_{x_{i}}\), where \(x_{i}\,\in\,S\), decomposes into a direct sum of lines. More precisely, as in (3.5), \[\operatorname{Sym}^{4}(E)(2S)_{x_{i}}\,=\,((\mathcal{L}^{*})^{\otimes 4}(-2S))_{x _{i}}\oplus((\mathcal{L}^{*})^{\otimes 3}\otimes\mathcal{L}(-S))_{x_{i}} \tag{3.7}\] \[\oplus((\mathcal{L}^{*})^{\otimes 2}\otimes\mathcal{L}^{\otimes 2})_{x_{i}} \oplus(\mathcal{L}^{*}\otimes\mathcal{L}^{\otimes 3}(S))_{x_{i}}\oplus( \mathcal{L}^{\otimes 4}(2S))_{x_{i}}.\] Let \(E^{4}\) denote the vector bundle of rank five defined by the following short exact sequence of sheaves: \[0\,\longrightarrow\,E^{4}\,\stackrel{{\mathbf{h}}}{{ \longrightarrow}}\,\operatorname{Sym}^{4}(E)(2S)\,\longrightarrow \tag{3.8}\] \[\bigoplus_{i=1}^{n}\mathcal{Q}_{i}\,=\,\bigoplus_{i=1}^{n}\frac{ \operatorname{Sym}^{4}(E)(2S)_{x_{i}}}{((\mathcal{L}^{*})^{\otimes 4}(-2S))_{x_{i}} \oplus((\mathcal{L}^{*})^{\otimes 3}\otimes\mathcal{L}(-S))_{x_{i}}\oplus(( \mathcal{L}^{*})^{\otimes 2}\otimes\mathcal{L}^{\otimes 2})_{x_{i}}}\, \longrightarrow\,0,\] where \[\mathcal{Q}_{i}\,:=\,\frac{\operatorname{Sym}^{4}(E)(2S)_{x_{i}}}{((\mathcal{ L}^{*})^{\otimes 4}(-2S))_{x_{i}}\oplus((\mathcal{L}^{*})^{\otimes 3}\otimes \mathcal{L}(-S))_{x_{i}}\oplus((\mathcal{L}^{*})^{\otimes 2}\otimes\mathcal{L}^{ \otimes 2})_{x_{i}}}. \tag{3.9}\] The holomorphic vector bundle underlying the parabolic vector bundle \(\operatorname{Sym}^{4}(E_{*})\) is \(E^{4}\) defined in (3.8).

**Lemma 3.2**.: _For every \(x_{i}\,\in\,S\), the fiber \(E^{4}_{x_{i}}\) fits in the following short exact sequence of vector spaces:_ \[0\,\longrightarrow\,(\mathcal{L}^{*}\otimes\mathcal{L}^{\otimes 3})_{x_{i}} \oplus(\mathcal{L}^{\otimes 4}(S))_{x_{i}}\,\longrightarrow\,E^{4}_{x_{i}}\] \[\stackrel{{\rho_{i}}}{{\longrightarrow}}\,((\mathcal{L}^{*})^{ \otimes 4}(-2S))_{x_{i}}\oplus((\mathcal{L}^{*})^{\otimes 3}\otimes\mathcal{L}(-S))_{x_{i}} \oplus((\mathcal{L}^{*})^{\otimes 2}\otimes\mathcal{L}^{\otimes 2})_{x_{i}}\, \longrightarrow\,0\,.\]

Proof.: The projection \[\rho_{i}\,:\,E^{4}_{x_{i}}\,\longrightarrow\,((\mathcal{L}^{*})^{\otimes 4}(-2S))_{x_{i} }\oplus((\mathcal{L}^{*})^{\otimes 3}\otimes\mathcal{L}(-S))_{x_{i}}\oplus(( \mathcal{L}^{*})^{\otimes 2}\otimes\mathcal{L}^{\otimes 2})_{x_{i}}\] in the lemma is given by the homomorphism \(\mathbf{h}(x_{i})\) in (3.8). To describe the homomorphism \[(\mathcal{L}^{*}\otimes\mathcal{L}^{\otimes 3})_{x_{i}}\oplus(\mathcal{L}^{ \otimes 4}(S))_{x_{i}}\,\longrightarrow\,E^{4}_{x_{i}}\] in the lemma, we consider the commutative diagram of homomorphisms \[\begin{array}{ccccccccc}0&\longrightarrow&\operatorname{Sym}^{4}(E)(S)& \longrightarrow&\operatorname{Sym}^{4}(E)(2S)&\longrightarrow&\bigoplus_{i=1}^{n} \operatorname{Sym}^{4}(E)(2S)_{x_{i}}&\longrightarrow&0\\ &&\Big{\downarrow}{\bf f}&&\Big{\downarrow}{\rm Id}&&\Big{\downarrow}\\ 0&\longrightarrow&E^{4}&\longrightarrow&\operatorname{Sym}^{4}(E)(2S)& \longrightarrow&\bigoplus_{i=1}^{n}\mathcal{Q}_{i}&\longrightarrow&0\end{array}\] where \({\mathcal{Q}}_{i}\) is defined in (3.9). Let \[{\bf f}(x_{i})\,:\,{\rm Sym}^{4}(E)(S)_{x_{i}}\,\longrightarrow\,E^{4}_{x_{i}} \tag{3.10}\] be its restriction to \(x_{i}\,\in\,S\).
As in (3.7), we have the decomposition \[\begin{array}{c}{\rm Sym}^{4}(E)(S)_{x_{i}}\,=\,(({\mathcal{L}}^{*})^{\otimes 4 }(-3S))_{x_{i}}\oplus(({\mathcal{L}}^{*})^{\otimes 3}\otimes{\mathcal{L}}(-2S))_ {x_{i}}\\ \oplus(({\mathcal{L}}^{*})^{\otimes 2}\otimes{\mathcal{L}}^{\otimes 2}(-S))_{x_{i} }\oplus({\mathcal{L}}^{*}\otimes{\mathcal{L}}^{\otimes 3})_{x_{i}}\oplus({ \mathcal{L}}^{\otimes 4}(S))_{x_{i}}.\end{array}\] The subspace \[(({\mathcal{L}}^{*})^{\otimes 4}(-3S))_{x_{i}}\oplus(({\mathcal{L}}^{*})^{ \otimes 3}\otimes{\mathcal{L}}(-2S))_{x_{i}}\oplus(({\mathcal{L}}^{*})^{ \otimes 2}\otimes{\mathcal{L}}^{\otimes 2}(-S))_{x_{i}}\,\subset\,{\rm Sym}^{4}(E)(S)_ {x_{i}}\] is the kernel of the homomorphism \({\bf f}(x_{i})\) in (3.10). The restriction of \({\bf f}(x_{i})\) to the subspace \[({\mathcal{L}}^{*}\otimes{\mathcal{L}}^{\otimes 3})_{x_{i}}\oplus({\mathcal{L}}^{ \otimes 4}(S))_{x_{i}}\,\subset\,{\rm Sym}^{4}(E)(S)_{x_{i}}\] is injective. Therefore, \({\bf f}(x_{i})\) gives the homomorphism \[({\mathcal{L}}^{*}\otimes{\mathcal{L}}^{\otimes 3})_{x_{i}}\oplus({\mathcal{L}}^{ \otimes 4}(S))_{x_{i}}\,\longrightarrow\,E^{4}_{x_{i}}\] in the lemma. It is evident that the quotient map \(E^{4}_{x_{i}}\,\longrightarrow\,E^{4}_{x_{i}}/(({\mathcal{L}}^{*}\otimes{ \mathcal{L}}^{\otimes 3})_{x_{i}}\oplus({\mathcal{L}}^{\otimes 4}(S))_{x_{i}})\) coincides with \(\rho_{i}\). Define the subspaces \[{\mathcal{F}}^{i}_{3}\,:=\,\rho_{i}^{-1}((({\mathcal{L}}^{*})^{\otimes 4}(-2S))_ {x_{i}})\,\subset\,{\mathcal{F}}^{i}_{4}\,:=\,\rho_{i}^{-1}((({\mathcal{L}}^{* })^{\otimes 4}(-2S))_{x_{i}}\oplus(({\mathcal{L}}^{*})^{\otimes 3} \otimes{\mathcal{L}}(-S))_{x_{i}})\,\subset\,E^{4}_{x_{i}} \tag{3.11}\] where \(\rho_{i}\) is the homomorphism in Lemma 3.2. As mentioned before, the holomorphic vector bundle underlying the parabolic vector bundle \({\rm Sym}^{4}(E_{*})\) is \(E^{4}\). The quasiparabolic filtration of \(E^{4}_{x_{i}}\) is \[({\mathcal{L}}^{*}\otimes{\mathcal{L}}^{\otimes 3})_{x_{i}}\,\subset\,({\mathcal{ L}}^{*}\otimes{\mathcal{L}}^{\otimes 3})_{x_{i}}\oplus({\mathcal{L}}^{ \otimes 4}(S))_{x_{i}}\,\subset\,{\mathcal{F}}^{i}_{3}\,\subset\,{\mathcal{F}} ^{i}_{4}\,\subset\,E^{4}_{x_{i}}\] (see Lemma 3.2 and (3.11)). The parabolic weight of \(({\mathcal{L}}^{*}\otimes{\mathcal{L}}^{\otimes 3})_{x_{i}}\) is \(\frac{2c_{i}}{2c_{i}+1}\), the parabolic weight of \(({\mathcal{L}}^{*}\otimes{\mathcal{L}}^{\otimes 3})_{x_{i}}\oplus({ \mathcal{L}}^{\otimes 4}(S))_{x_{i}}\) is \(\frac{2c_{i}-1}{2c_{i}+1}\), the parabolic weight of \({\mathcal{F}}^{i}_{3}\) is \(\frac{2}{2c_{i}+1}\), the parabolic weight of \({\mathcal{F}}^{i}_{4}\) is \(\frac{1}{2c_{i}+1}\) and the parabolic weight of \(E^{4}_{x_{i}}\) is \(0\). ### Higher rank parabolic opers For any \(r\,\geq\,2\), consider the parabolic vector bundle of rank \(r\) defined by the symmetric product \({\rm Sym}^{r-1}(E_{*})\) of the parabolic vector bundle \(E_{*}\) in (2.15). Since \(\det E_{*}\,=\,{\mathcal{O}}_{X}\) (see (2.17)), it follows that \[\det{\rm Sym}^{r-1}(E_{*})\,=\,\bigwedge^{r}{\rm Sym}^{r-1}(E_{*})\,=\,{ \mathcal{O}}_{X}, \tag{3.12}\] where \({\mathcal{O}}_{X}\) is equipped with the trivial parabolic structure (no nonzero parabolic weights). A parabolic \({\rm SL}(r,{\mathbb{C}})\)-connection on \({\rm Sym}^{r-1}(E_{*})\) is a parabolic connection on \({\rm Sym}^{r-1}(E_{*})\) satisfying the condition that the induced parabolic connection on \(\det{\rm Sym}^{r-1}(E_{*})\,=\,{\mathcal{O}}_{X}\) is the trivial connection. 
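To illustrate (3.12) concretely in the case \(r\,=\,5\) (again a verification of ours, using only the data above): by (3.8) the bundle \(E^{4}\) is obtained from \(\operatorname{Sym}^{4}(E)(2S)\) by an elementary transformation along the length two torsion sheaves \(\mathcal{Q}_{i}\) in (3.9), so
\[\operatorname{degree}(E^{4})\,=\,\big(10\cdot\operatorname{degree}(E)+10n\big)-2n\,=\,-2n\,,\]
since \(\det\operatorname{Sym}^{4}(E)\,=\,(\det E)^{\otimes 10}\) and \(\operatorname{degree}(E)\,=\,-n\) by (2.9). On the other hand, the five parabolic weights at each \(x_{i}\) listed after (3.11) add up to
\[\frac{2c_{i}+(2c_{i}-1)+2+1+0}{2c_{i}+1}\,=\,2\,,\]
so \(\operatorname{par-deg}(\operatorname{Sym}^{4}(E_{*}))\,=\,-2n+2n\,=\,0\), in accordance with (3.12).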
Two parabolic \({\rm SL}(r,{\mathbb{C}})\)-connections on \({\rm Sym}^{r-1}(E_{*})\) are called equivalent if they differ by a holomorphic automorphism of the parabolic bundle \({\rm Sym}^{r-1}(E_{*})\). If \(D_{1}\) is a parabolic \({\rm SL}(r,{\mathbb{C}})\)-connection on \({\rm Sym}^{r-1}(E_{*})\), and \(D_{2}\) is another parabolic connection on \({\rm Sym}^{r-1}(E_{*})\) equivalent to \(D_{1}\), then \(D_{2}\) is clearly a parabolic \({\rm SL}(r,{\mathbb{C}})\)-connection. Indeed, this follows immediately from the fact that the holomorphic automorphisms of a holomorphic line bundle \(\mathbb{L}\) on \(X\) act trivially on the space of all logarithmic connections on \(\mathbb{L}\).

**Definition 3.3**.: A parabolic \(\operatorname{SL}(r,\mathbb{C})\)-_oper_ on \(X\) is an equivalence class of parabolic \(\operatorname{SL}(r,\mathbb{C})\)-connections on \(\operatorname{Sym}^{r-1}(E_{*})\).

**Remark 3.4**.: It should be clarified that the class of parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers in Definition 3.3 is different from the class in [BDP] (see [BDP, p. 504, Definition 4.1] and [BDP, p. 511, Definition 5.2]). Indeed, the parabolic vector bundle \(E_{*}\) in (2.16) is different from the one in [BDP] (see [BDP, p. 497, (3.4)], [BDP, p. 497, (3.5)]). In fact the underlying rank two bundles are different and the parabolic weights are also different. In the nonparabolic case there is only one class of \(\operatorname{SL}(r,\mathbb{C})\)-opers. Roughly speaking, parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers can be considered as equivariant opers, and the two classes of parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers arise because of two different types of equivariant structures.

**Proposition 3.5**.:

1. _The parabolic vector bundle_ \(\operatorname{Sym}^{r-1}(E_{*})\) _admits a parabolic_ \(\operatorname{SL}(r,\mathbb{C})\)_-connection._
2. _For any parabolic connection_ \(D_{r}\) _on_ \(\operatorname{Sym}^{r-1}(E_{*})\)_, the local monodromy of_ \(D_{r}\) _around any_ \(x_{i}\,\in\,S\) _is semisimple._

Proof.: Any parabolic connection on \(E_{*}\) induces a parabolic connection on \(\operatorname{Sym}^{r-1}(E_{*})\). Moreover, a parabolic \(\operatorname{SL}(2,\mathbb{C})\)-connection on \(E_{*}\) induces a parabolic \(\operatorname{SL}(r,\mathbb{C})\)-connection on \(\operatorname{Sym}^{r-1}(E_{*})\). Therefore, from Corollary 2.6(1) it follows that \(\operatorname{Sym}^{r-1}(E_{*})\) admits a parabolic \(\operatorname{SL}(r,\mathbb{C})\)-connection. Let \(D_{2}\) be a parabolic \(\operatorname{SL}(2,\mathbb{C})\)-connection on \(E_{*}\). Denote by \(D_{r}\) the parabolic connection on \(\operatorname{Sym}^{r-1}(E_{*})\) induced by \(D_{2}\). From Corollary 2.6(3) we know that the local monodromy of \(D_{2}\) around any \(x_{i}\,\in\,S\) is semisimple. Since the local monodromy of \(D_{r}\) around any \(x_{i}\,\in\,S\) is simply the \((r-1)\)-th symmetric product of the local monodromy of \(D_{2}\) around \(x_{i}\,\in\,S\), and the local monodromy of \(D_{2}\) around \(x_{i}\,\in\,S\) is semisimple, it follows that the local monodromy of \(D_{r}\) around \(x_{i}\,\in\,S\) is semisimple. We have shown that \(\operatorname{Sym}^{r-1}(E_{*})\) admits a parabolic connection for which the local monodromy around any \(x_{i}\,\in\,S\) is semisimple.
On the other hand, the space of parabolic connections on \(\operatorname{Sym}^{r-1}(E_{*})\) is an affine space for the vector space \[H^{0}(X,\,\operatorname{End}^{n}(\operatorname{Sym}^{r-1}(E_{*}))\otimes K_ {X}(S)),\] where \[\operatorname{End}^{n}(\operatorname{Sym}^{r-1}(E_{*}))\,\subset\, \operatorname{End}(\operatorname{Sym}^{r-1}(E_{*})) \tag{3.13}\] is the subsheaf defined by the sheaf of endomorphisms nilpotent with respect to the quasi-parabolic filtrations of \(\operatorname{Sym}^{r-1}(E_{*})\) over \(S\). Consequently, using Remark 2.1 it follows that for every parabolic connection \(D_{r}^{\prime}\) on \(\operatorname{Sym}^{r-1}(E_{*})\) the local monodromy of \(D_{r}^{\prime}\) around any \(x_{i}\,\in\,S\) is semisimple. In the rest of this section we assume that \(c_{i}\), \(1\,\leq\,i\,\leq\,n\), in (2.14) are integers. Take a ramified Galois covering \(\varphi\,:\,Y\,\longrightarrow\,X\) as in (2.23). As in Section 2, let \(\mathcal{V}\) denote the orbifold bundle on \(Y\) corresponding to the parabolic bundle \(E_{*}\) on \(X\). The action of the Galois group \(\Gamma\,=\,\operatorname{Gal}(\varphi)\) on \(\mathcal{V}\) produces an action of \(\Gamma\) on \(\operatorname{Sym}^{r-1}(\mathcal{V})\). A holomorphic connection on \(\operatorname{Sym}^{r-1}(\mathcal{V})\) is called _equivariant_ if it is preserved by the action of \(\Gamma\) on \(\operatorname{Sym}^{r-1}(\mathcal{V})\). From (3.12) it follows immediately that \[\det\operatorname{Sym}^{r-1}(\mathcal{V})\,=\,\bigwedge^{r}\operatorname{Sym }^{r-1}(\mathcal{V})\,=\,\mathcal{O}_{Y}.\] An \(\operatorname{SL}(r,\mathbb{C})\)-connection on \(\operatorname{Sym}^{r-1}(\mathcal{V})\) is a holomorphic connection \(D^{\prime}_{r}\) on \(\operatorname{Sym}^{r-1}(\mathcal{V})\) such that the connection on \(\det\operatorname{Sym}^{r-1}(\mathcal{V})\,=\,\mathcal{O}_{Y}\) induced by \(D^{\prime}_{r}\) coincides with the trivial connection on \(\mathcal{O}_{Y}\). Two equivariant \(\operatorname{SL}(r,\mathbb{C})\)-connections on \(\operatorname{Sym}^{r-1}(\mathcal{V})\) are called equivalent if they differ by a holomorphic \(\Gamma\)-equivariant automorphism of \(\operatorname{Sym}^{r-1}(\mathcal{V})\). **Proposition 3.6**.: _There is a natural bijection between the parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers on \(X\) and the equivalence classes of equivariant \(\operatorname{SL}(r,\mathbb{C})\)-connections on \(\operatorname{Sym}^{r-1}(\mathcal{V})\)._ Proof.: Let \(D_{2}\) be a parabolic connection on \(E_{*}\). Since the local monodromy of \(D_{2}\) around any \(x_{i}\,\in\,S\) is semisimple, it corresponds to an equivariant holomorphic connection \(\widehat{D}_{2}\) on \(\mathcal{V}\). Let \(\widehat{D}_{r}\) be the equivariant connection on \(\operatorname{Sym}^{r-1}(\mathcal{V})\) induced by \(\widehat{D}_{2}\). As before, \(D_{r}\) denotes the parabolic connection on \(\operatorname{Sym}^{r-1}(E_{*})\) induced by \(D_{2}\). Therefore, \(\widehat{D}_{r}\) corresponds to \(D_{r}\). The holomorphic vector bundle underlying the parabolic bundle \(\operatorname{Sym}^{r-1}(E_{*})\) is denoted by \(\operatorname{Sym}^{r-1}(E_{*})_{0}\) [MY]. 
As in (3.13), let \[\operatorname{End}^{n}(\operatorname{Sym}^{r-1}(E_{*}))\,\subset\, \operatorname{End}(\operatorname{Sym}^{r-1}(E_{*})_{0})\] be the coherent analytic subsheaf consisting of all locally defined sections \(s\) of the endomorphism bundle \(\operatorname{End}(\operatorname{Sym}^{r-1}(E_{*})_{0})\) satisfying the condition that \(s(x)\) is nilpotent with respect to the quasi-parabolic filtration of \(\operatorname{Sym}^{r-1}(E_{*})_{x}\), for all \(x\,\in\,S\) lying in the domain of \(s\). Recall that any parabolic connection on \(\operatorname{Sym}^{r-1}(E_{*})\) is of the form \(D_{r}+\theta\) for some \[\theta\,\in\,H^{0}(X,\,\operatorname{End}^{n}(\operatorname{Sym}^{r-1}(E_{*}) )\otimes K_{X}(S)).\] We have \[H^{0}(X,\,\operatorname{End}^{n}(\operatorname{Sym}^{r-1}(E_{*}))\otimes K_{X }(S))\,=\,H^{0}(Y,\,\operatorname{End}(\operatorname{Sym}^{r-1}(\mathcal{V}) ))^{\Gamma}. \tag{3.14}\] Also the space of all equivariant holomorphic connections on \(\operatorname{Sym}^{r-1}(\mathcal{V})\) is an affine space for \(H^{0}(Y,\,\operatorname{End}(\operatorname{Sym}^{r-1}(\mathcal{V})))^{\Gamma}\). The parabolic connection \(D_{r}+\theta\), where \(\theta\,\in\,H^{0}(X,\,\operatorname{End}^{n}(\operatorname{Sym}^{r-1}(E_{*} ))\otimes K_{X}(S))\), corresponds to the equivariant connection \(\widehat{D}_{r}+\widehat{\theta}\) on \(\operatorname{Sym}^{r-1}(\mathcal{V})\), where \(\widehat{\theta}\,\in\,H^{0}(Y,\,\operatorname{End}(\operatorname{Sym}^{r-1}( \mathcal{V})))^{\Gamma}\) corresponds to \(\theta\) by the isomorphism in (3.14). Also, parabolic automorphisms of \(\operatorname{Sym}^{r-1}(E_{*})\) are identified with the \(\Gamma\)-equivariant automorphisms of \(\operatorname{Sym}^{r-1}(\mathcal{V})\). Now the proposition follows from (3.14), Proposition 3.5 and Definition 3.3.

The above Proposition 3.6 is a generalization of Theorem 6.3 in [BDP], where a similar statement was proved under the extra assumption that \(r\) is odd.

## 4. Some properties of parabolic opers

Consider the vector bundle \(E\) in (2.9). Let \[\operatorname{End}^{n}(E_{*})\,\subset\,\operatorname{End}(E) \tag{4.1}\] be the coherent analytic subsheaf defined by the conditions that \(s(E_{x})\,\subset\,\mathcal{L}^{*}(-S)_{x}\) and \(s(\mathcal{L}^{*}(-S)_{x})\,=\,0\) for all \(x\,\in\,S\) lying in the domain of the local section \(s\) of \(\operatorname{End}(E)\) (see Lemma 2.3). Take any \[\phi\ \in\ H^{0}(X,\,\operatorname{End}^{n}(E_{*})\otimes K_{X}(S))\,.\] Let \[\widehat{\phi}\ :\ \mathcal{L}\ \longrightarrow\,\mathcal{L}^{*}(-S)\otimes K _{X}(S)\,=\,\mathcal{L} \tag{4.2}\] be the homomorphism given by the following composition of homomorphisms: \[\mathcal{L}\,\stackrel{{\iota}}{{\longrightarrow}}\,E\, \stackrel{{\phi}}{{\longrightarrow}}\,E\otimes K_{X}(S)\, \xrightarrow{p\otimes\operatorname{Id}_{K_{X}(S)}}\,\mathcal{L}^{*}(-S) \otimes K_{X}(S)\,=\,\mathcal{L}\,,\] where \(\iota\) and \(p\) are the homomorphisms in (2.9); recall that \(\mathcal{L}^{\otimes 2}\,=\,K_{X}\).
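The homomorphism \(\widehat{\phi}\) has a simple heuristic description, which we record before stating the vanishing result (the block-matrix picture below is an illustration of ours; it is valid with respect to any local holomorphic splitting \(E\,\cong\,\mathcal{L}\oplus\mathcal{L}^{*}(-S)\) of (2.9) chosen so that at each \(x_{i}\,\in\,S\) it induces the decomposition of \(E_{x_{i}}\) in Lemma 2.3; such splittings exist locally, though not globally). In block form,
\[\phi\,=\,\begin{pmatrix}a&b\\ c&d\end{pmatrix},\qquad a\,:\,\mathcal{L}\,\longrightarrow\,\mathcal{L}\otimes K_{X}(S)\,,\quad c\,:\,\mathcal{L}\,\longrightarrow\,\mathcal{L}^{*}(-S)\otimes K_{X}(S)\,,\]
and the conditions in (4.1) say precisely that \(a\), \(b\) and \(d\) vanish at every point of \(S\), while the entry \(c\,=\,(p\otimes\operatorname{Id}_{K_{X}(S)})\circ\phi\circ\iota\,=\,\widehat{\phi}\) does not depend on the chosen splitting. Proposition 4.1 below says that this remaining entry vanishes identically.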
**Proposition 4.1**.: _For every \(\phi\,\in\,H^{0}(X,\,\operatorname{End}^{n}(E_{*})\otimes K_{X}(S))\) the homomorphism \(\widehat{\phi}\) constructed from it in (4.2) vanishes identically._ Proof.: Tensoring the diagram in (2.11) with \(K_{X}(S)\) we have the following commutative diagram \[\begin{array}{ccccccccc}0&\longrightarrow&\mathcal{L}\otimes K_{X}& \longrightarrow&\widetilde{E}\otimes K_{X}&\longrightarrow&\mathcal{L}& \longrightarrow&0\\ &&\Big{\downarrow}&&\Big{\downarrow}q&&\Big{\downarrow}&&\\ 0&\longrightarrow&\mathcal{L}\otimes K_{X}(S)&\longrightarrow&E\otimes K_{ X}(S)&\longrightarrow&\widetilde{\mathcal{L}}&\longrightarrow&0.\end{array} \tag{4.3}\] Take any \(\phi\,\in\,H^{0}(X,\,\operatorname{End}^{n}(E_{*})\otimes K_{X}(S))\). Consider the composition of homomorphisms \[\widetilde{E}(-S)\,\stackrel{{\psi}}{{\longrightarrow}}\,E\, \stackrel{{\phi}}{{\longrightarrow}}\,E\otimes K_{X}(S)\,,\] where \(\psi\) is the homomorphism in (2.11), and denote this composition by \(\widetilde{\phi}\). From (4.3), (4.1) and the construction of the decomposition in Lemma 2.3 it follows that the image of this homomorphism \(\widetilde{\phi}\,:\,\widetilde{E}(-S)\,\longrightarrow\,E\otimes K_{X}(S)\) is contained in the image of the homomorphism \(q\) in (4.3); in other words, the subsheaf \(\phi\circ\psi(\widetilde{E}(-S))\,\subset\,E\otimes K_{X}(S)\) lies in the image of the homomorphism \[\psi\otimes\operatorname{Id}_{K_{X}(S)}\ :\ \widetilde{E}(-S)\otimes K_{X}(S)\,=\, \widetilde{E}\otimes K_{X}\,\longrightarrow\,E\otimes K_{X}(S)\,.\] Consequently, \(\phi\) produces a homomorphism \[\phi^{\prime}\ :\ \widetilde{E}(-S)\ \longrightarrow\ \widetilde{E}\otimes K_{X}\,. \tag{4.4}\] More precisely, \(\phi^{\prime}\) is determined uniquely by the condition \[\widetilde{\phi}\,=\,(\psi\otimes\operatorname{Id}_{K_{X}(S)})\circ\phi^{ \prime}.\] Let \[\phi^{\prime\prime}\ :\ \mathcal{L}(-S)\ \longrightarrow\ \mathcal{L} \tag{4.5}\] denote the following composition of homomorphisms \[\mathcal{L}(-S)\,\stackrel{{\iota^{\prime}}}{{\longrightarrow}} \,\widetilde{E}(-S)\,\stackrel{{\phi^{\prime}}}{{\longrightarrow}} \,\widetilde{E}\otimes K_{X}\,\xrightarrow{p_{0}\otimes\operatorname{Id}_{K_{X }}}\,\mathcal{L}^{*}\otimes K_{X}\,=\,\mathcal{L}\,,\] where \(\iota^{\prime}\) and \(p_{0}\) are the homomorphisms in (2.11) and (2.8) respectively. To prove the proposition it suffices to show that \(\phi^{\prime\prime}\) in (4.5) vanishes identically. Take any \(x_{i}\,\in\,S\). Since \[q(\phi^{\prime}(x_{i})(\widetilde{E}(-S)_{x_{i}}))\,=\,\phi(\psi(x_{i})( \widetilde{E}(-S)_{x_{i}}))\,=\,\phi(\mathcal{L}^{*}(-S)_{x_{i}})\,=\,0\,,\] where \(\psi\), \(\phi^{\prime}\) and \(q\) are the homomorphisms in (2.11), (4.4) and (4.3) respectively, we conclude that \[\phi^{\prime}(x_{i})(\widetilde{E}(-S)_{x_{i}})\,\subset\,(\mathcal{L}\otimes K _{X})_{x_{i}}\,\subset\,(\widetilde{E}\otimes K_{X})_{x_{i}}\,, \tag{4.6}\] where \(\phi^{\prime}\) is the homomorphism in (4.4) and \(\mathcal{L}\,\subset\,\widetilde{E}\) is the subbundle in (2.8). Furthermore, it can be shown that \[\phi^{\prime}(x_{i})(\mathcal{L}(-S)_{x_{i}})\,=\,0\,; \tag{4.7}\] see (2.11) for the subspace \(\mathcal{L}(-S)_{x_{i}}\,\subset\,\widetilde{E}(-S)_{x_{i}}\). Indeed, this again follows from (2.11), (4.3), (4.1) and the construction of the decomposition in Lemma 2.3. In view of (4.6) and (4.7), the homomorphism \(\phi^{\prime\prime}\) in (4.5) vanishes at each \(x_{i}\). 
Therefore, \(\phi^{\prime\prime}\) produces a homomorphism \[\phi^{\prime\prime\prime}\ :\ \mathcal{L}(-S)\ \longrightarrow\ \mathcal{L}(-S)\,. \tag{4.8}\] Consider the image \(\phi^{\prime}(\mathcal{L}(-S))\,\subset\,\widetilde{E}\otimes K_{X}\), where \(\phi^{\prime}\) is the homomorphism in (4.4). If the homomorphism \(\phi^{\prime\prime\prime}\) in (4.8) is nonzero, then this subsheaf \(\phi^{\prime}(\mathcal{L}(-S))\) produces a holomorphic splitting of the top short exact sequence in (2.11) tensored with \(K_{X}\). Indeed, in that case the homomorphism \(p^{\prime}\otimes\mathrm{Id}_{K_{X}}\) (see (2.11) for \(p^{\prime}\)) maps \(\phi^{\prime}(\mathcal{L}(-S))\) surjectively to \(\mathcal{L}^{*}(-S)\otimes K_{X}\,=\,\mathcal{L}(-S)\) and hence \(\phi^{\prime}(\mathcal{L}(-S))\) gives a holomorphic splitting of the short exact sequence \[0\,\longrightarrow\,\mathcal{L}(-S)\otimes K_{X}\,\longrightarrow\, \widetilde{E}(-S)\otimes K_{X}\,\longrightarrow\,\mathcal{L}^{*}(-S)\otimes K _{X}\,\longrightarrow\,0\] obtained from the top exact sequence in (2.11) by tensoring it with \(K_{X}\). A holomorphic splitting of the above exact sequence produces a holomorphic splitting of the top short exact sequence in (2.11). But the exact sequence in (2.8) does not split holomorphically, which implies that the top short exact sequence in (2.11) does not split holomorphically. This implies that \(\phi^{\prime\prime\prime}\,=\,0\) (see (4.8)), and hence \(\phi^{\prime\prime}\,=\,0\) (see (4.5)). As noted before, to prove the proposition it is enough to show that \(\phi^{\prime\prime}\) vanishes identically. This completes the proof.

**Corollary 4.2**.: _The endomorphism \(\mathcal{S}(D_{0},\,\mathcal{L})\,:\,\mathcal{L}\,\longrightarrow\,\mathcal{ L}\) in Corollary 2.6(2) does not depend on the parabolic connection \(D_{0}\)._

Proof.: The space of parabolic connections on \(E_{*}\) is an affine space for the vector space \(H^{0}(X,\,\mathrm{End}^{n}(E_{*})\otimes K_{X}(S))\). Note that for any parabolic connection \(D\) on \(E_{*}\) and any \(\phi\,\in\,H^{0}(X,\,\mathrm{End}^{n}(E_{*})\otimes K_{X}(S))\), we have \[\mathcal{S}(D+\phi,\,\mathcal{L})\,=\,\mathcal{S}(D,\,\mathcal{L})+\widehat{ \phi},\] where \(\widehat{\phi}\) is constructed in (4.2) from \(\phi\). Therefore, from Proposition 4.1 it follows immediately that \(\mathcal{S}(D+\phi,\,\mathcal{L})\,=\,\mathcal{S}(D,\,\mathcal{L})\).

As before, let \({\mathcal{L}}_{*}\) denote the holomorphic line bundle \({\mathcal{L}}\) in (2.9) equipped with the parabolic structure on it induced by \(E_{*}\) for the inclusion map \(\iota\) in (2.9). We denote by \(E_{*}/{\mathcal{L}}_{*}\) the quotient line bundle \(E/{\mathcal{L}}\) in (2.9) equipped with the parabolic structure on it induced by \(E_{*}\). So from (2.9) we have a short exact sequence of parabolic bundles \[0\,\longrightarrow\,{\mathcal{L}}_{*}\,\longrightarrow\,E_{*}\,\longrightarrow \,E_{*}/{\mathcal{L}}_{*}\,\longrightarrow\,0\,. \tag{4.9}\] For notational convenience, both \(\operatorname{Sym}^{0}(E_{*})\) and \(({\mathcal{L}}_{*})^{0}\) will denote the trivial holomorphic line bundle \({\mathcal{O}}_{X}\) equipped with the trivial parabolic structure (no nonzero parabolic weights).
Since \(\operatorname{Sym}^{r-1}(E_{*})\) is a quotient of \((E_{*})^{\otimes(r-1)}\), we have a natural homomorphism of parabolic bundles \[\tau_{j}\;:\;\operatorname{Sym}^{j-1}(E_{*})\otimes({\mathcal{L}}_{*})^{r-j} \,\longrightarrow\,\operatorname{Sym}^{r-1}(E_{*})\] for every \(1\,\leq\,j\,\leq\,r\) (see (4.9)). This \(\tau_{j}\) is an injective homomorphism, and its image is a parabolic subbundle of \(\operatorname{Sym}^{r-1}(E_{*})\). Let \[{\mathcal{F}}_{*}^{j}\;:=\,\operatorname{Image}(\tau_{j})\,\subset\, \operatorname{Sym}^{r-1}(E_{*})\] be the parabolic subbundle; its rank is \(j\). So we have a filtration of parabolic subbundles \[0\,=\,{\mathcal{F}}_{*}^{0}\,\subset\,{\mathcal{F}}_{*}^{1}\,\subset\,{ \mathcal{F}}_{*}^{2}\,\subset\,\cdots\,\subset\,{\mathcal{F}}_{*}^{r-1}\, \subset\,{\mathcal{F}}_{*}^{r}\,=\,\operatorname{Sym}^{r-1}(E_{*}). \tag{4.10}\] The holomorphic vector bundle underlying any \({\mathcal{F}}_{*}^{i}\) will be denoted by \({\mathcal{F}}_{0}^{i}\). For any \(1\,\leq\,j\,\leq\,r\), the quotient parabolic line bundle \({\mathcal{F}}_{*}^{j}/{\mathcal{F}}_{*}^{j-1}\) in (4.10) actually has the following description: \[{\mathcal{F}}_{*}^{j}/{\mathcal{F}}_{*}^{j-1}\,=\,({\mathcal{L}}_{*})^{r-j} \otimes(E_{*}/{\mathcal{L}}_{*})^{j-1}\,. \tag{4.11}\] Indeed, this follows immediately from (4.9); by convention, \((E_{*}/{\mathcal{L}}_{*})^{0}\) is the trivial line bundle \({\mathcal{O}}_{X}\) with the trivial parabolic structure. It can be shown that \[({\mathcal{L}}_{*})^{*}\,=\,E_{*}/{\mathcal{L}}_{*}. \tag{4.12}\] Indeed, from (2.17) it follows that \({\mathcal{L}}_{*}\otimes(E_{*}/{\mathcal{L}}_{*})\,=\,\det E_{*}\) is the trivial line bundle \({\mathcal{O}}_{X}\) with the trivial parabolic structure, and hence (4.12) holds. Therefore, from (4.11) it follows that \[\operatorname{par-deg}({\mathcal{F}}_{*}^{j}/{\mathcal{F}}_{*}^{j-1})\,=\,(2j -r-1)\cdot\operatorname{par-deg}(E_{*}/{\mathcal{L}}_{*})\,=\,(2j-r-1)\cdot \left(1-g-n+\sum_{i=1}^{n}\frac{c_{i}+1}{2c_{i}+1}\right)\,, \tag{4.13}\] where \(g\,=\,\operatorname{genus}(X)\). Now from (4.10) and (4.13), using \(\sum_{i=1}^{j}(2i-r-1)\,=\,j(j-r)\) and \(\sum_{i=1}^{n}\left(1-\frac{c_{i}+1}{2c_{i}+1}\right)\,=\,\sum_{i=1}^{n}\frac{c_{i}}{2c_{i}+1}\), it is deduced that \[\operatorname{par-deg}({\mathcal{F}}_{*}^{j})\,=\,\sum_{i=1}^{j}\operatorname {par-deg}({\mathcal{F}}_{*}^{i}/{\mathcal{F}}_{*}^{i-1})\,=\,j(r-j)\cdot\left(g -1+\sum_{i=1}^{n}\frac{c_{i}}{2c_{i}+1}\right)\,. \tag{4.14}\]

**Lemma 4.3**.: _Let \(D\) be any parabolic connection on the parabolic bundle \(\operatorname{Sym}^{r-1}(E_{*})\). Then the following two hold:_

1. _For any_ \(1\,\leq\,j\,\leq\,r-1\)_, the parabolic subbundle_ \({\mathcal{F}}_{*}^{j}\) _in (_4.10_) is not preserved by_ \(D\)_._
2. \(D({\mathcal{F}}_{0}^{j})\,\subset\,{\mathcal{F}}_{0}^{j+1}\otimes K_{X}(S)\)_, where_ \({\mathcal{F}}_{0}^{i}\) _is the holomorphic vector bundle underlying_ \({\mathcal{F}}_{*}^{i}\)_, for all_ \(1\,\leq\,j\,\leq\,r-1\)_._

Proof.: From (4.14) it follows that \(\mbox{par-deg}({\mathcal{F}}^{j}_{*})\,\neq\,0\) (in fact, \(\mbox{par-deg}({\mathcal{F}}^{j}_{*})\,>\,0\)) for all \(1\,\leq\,j\,\leq\,r-1\). Consequently, \(D\) does not preserve \({\mathcal{F}}^{j}_{*}\).
For any \(1\,\leq\,j\,\leq\,r-2\), and any \(2\,\leq\,k\,\leq\,r-j\), consider the parabolic line bundle \[({\mathcal{F}}^{j}_{*}/{\mathcal{F}}^{j-1}_{*})^{*}\otimes({\mathcal{F}}^{j+k }_{*}/{\mathcal{F}}^{j+k-1}_{*})\,=\,(({\mathcal{L}}_{*})^{r-j}\otimes(E_{*}/{ \mathcal{L}}_{*})^{j-1})^{*}\otimes(({\mathcal{L}}_{*})^{r-j-k}\otimes(E_{*}/{ \mathcal{L}}_{*})^{j+k-1})\] \[\,=\,({\mathcal{L}}_{*})^{r-j-k-(r-j)}\otimes(E_{*}/{\mathcal{L}}_{*})^{j+k-1 -(j-1)}\,=\,({\mathcal{L}}_{*})^{-k}\otimes(E_{*}/{\mathcal{L}}_{*})^{k}\,=\, (E_{*}/{\mathcal{L}}_{*})^{2k};\] see (4.11) and (4.12) for the above isomorphisms. The holomorphic line bundle underlying the parabolic line bundle \(({\mathcal{F}}^{j}_{*}/{\mathcal{F}}^{j-1}_{*})^{*}\otimes({\mathcal{F}}^{j+k }_{*}/{\mathcal{F}}^{j+k-1}_{*})\,=\,(E_{*}/{\mathcal{L}}_{*})^{2k}\) will be denoted by \(\xi_{r,k}\). We have \[\mbox{degree}(\xi_{r,k})\,=\,2k\cdot\mbox{degree}(E/{\mathcal{L}})+\sum_{i=1} ^{n}\left[\frac{2k(c_{i}+1)}{2c_{i}+1}\right]\] \[\,=\,2k(1-g-n)+kn+\sum_{i=1}^{n}\left[\frac{k}{2c_{i}+1}\right]\,=\,k(2-2g-n) +\sum_{i=1}^{n}\left[\frac{k}{2c_{i}+1}\right]\,,\] where \([t]\,\in\,{\mathbb{Z}}\) denotes the integral part of \(t\), meaning \(0\,\leq\,t-[t]\,<\,1\); the second equality uses \(\left[\frac{2k(c_{i}+1)}{2c_{i}+1}\right]\,=\,\left[k+\frac{k}{2c_{i}+1}\right]\,=\,k+\left[\frac{k}{2c_{i}+1}\right]\). This implies that \[\mbox{degree}(\xi_{r,k})\,<\,2-2g-n\,=\,-\mbox{degree}(K_{X}(S))\] (recall that \(n\,\geq\,3\) if \(g\,=\,0\)), and hence \(\mbox{degree}(\xi_{r,k}\otimes K_{X}(S))\,<\,0\). Consequently, we have \[H^{0}(X,\,\xi_{r,k}\otimes K_{X}(S))\,=\,0\,.\] This implies that \[H^{0}(X,\,({\mathcal{F}}^{j}_{*}/{\mathcal{F}}^{j-1}_{*})^{*}\otimes({ \mathcal{F}}^{j+k}_{*}/{\mathcal{F}}^{j+k-1}_{*})\otimes K_{X}(S))\,=\,0\,. \tag{4.15}\] From (4.15) it is deduced that the following composition of homomorphisms \[{\mathcal{F}}^{j}_{0}\,\stackrel{{ D}}{{\longrightarrow}}\,{ \mathcal{F}}^{r}_{0}\otimes K_{X}(S)\,\longrightarrow\,({\mathcal{F}}^{r}_{0 }/{\mathcal{F}}^{j+1}_{0})\otimes K_{X}(S) \tag{4.16}\] vanishes identically, where \({\mathcal{F}}^{\ell}_{0}\) is the holomorphic vector bundle underlying the parabolic bundle \({\mathcal{F}}^{\ell}_{*}\). To see this, observe that the parabolic vector bundle \[\mbox{Hom}({\mathcal{F}}^{j}_{*},\,({\mathcal{F}}^{r}_{*}/{\mathcal{F}}^{j+1}_ {*})\otimes K_{X}(S))\,=\,({\mathcal{F}}^{r}_{*}/{\mathcal{F}}^{j+1}_{*}) \otimes K_{X}(S)\otimes({\mathcal{F}}^{j}_{*})^{*}\,=\,({\mathcal{F}}^{r}_{*}/ {\mathcal{F}}^{j+1}_{*})\otimes({\mathcal{F}}^{j}_{*})^{*}\otimes K_{X}(S)\] has a filtration of parabolic subbundles such that the successive quotients are \[({\mathcal{F}}^{j}_{*}/{\mathcal{F}}^{j-1}_{*})^{*}\otimes({\mathcal{F}}^{j+k }_{*}/{\mathcal{F}}^{j+k-1}_{*})\otimes K_{X}(S)\,,\ \ \ 2\,\leq\,k\,\leq\,r-j.\] So (4.15) implies that the composition of homomorphisms in (4.16) vanishes identically. Since the composition of homomorphisms in (4.16) vanishes identically, we have \[D({\mathcal{F}}^{j}_{0})\,\subset\,{\mathcal{F}}^{j+1}_{0}\otimes K_{X}(S)\] for all \(1\,\leq\,j\,\leq\,r-1\). From (4.11) it follows that for any \(1\,\leq\,j\,\leq\,r-1\), the parabolic line bundle \[({\mathcal{F}}^{j}_{*}/{\mathcal{F}}^{j-1}_{*})^{*}\otimes({\mathcal{F}}^{j+1}_ {*}/{\mathcal{F}}^{j}_{*})\,=\,(E_{*}/{\mathcal{L}}_{*})\otimes{\mathcal{L}}^{* }_{*}\,=\,(E_{*}/{\mathcal{L}}_{*})^{\otimes 2}\] is \(TX(-S)\,=\,K_{X}(S)^{*}\) equipped with the parabolic weight \(\frac{1}{2c_{i}+1}\) at each \(x_{i}\,\in\,S\) (see (4.12) for the above isomorphism).
Therefore, from Lemma 4.3(2) we conclude that for any parabolic connection \(D\) on the parabolic bundle \(\operatorname{Sym}^{r-1}(E_{*})\), the second fundamental forms for the parabolic subbundles in (4.10) are given by a collection of holomorphic homomorphisms \[\psi(D,j)\ \in\ H^{0}(X,\,\operatorname{Hom}(\mathcal{F}_{*}^{j}/\mathcal{F}_{*}^{ j-1},\,\mathcal{F}_{*}^{j+1}/\mathcal{F}_{*}^{j})\otimes K_{X}(S))\,=\,H^{0}(X,\, \mathcal{O}_{X}) \tag{4.17}\] with \(1\,\leq\,j\,\leq\,r-1\).

**Corollary 4.4**.: _For each \(1\,\leq\,j\,\leq\,r-1\), the section \(\psi(D,j)\) in (4.17) is a nonzero constant._

Proof.: From Lemma 4.3(1) it follows immediately that \(\psi(D,j)\,\neq\,0\).

## 5. Differential operators on parabolic bundles

In this section we will describe differential operators between parabolic vector bundles. As before, fix a compact Riemann surface \(X\) and a reduced effective divisor \(S\,=\,\sum_{i=1}^{n}x_{i}\) on it; if \(\operatorname{genus}(X)\,=\,0\), then assume that \(n\,\geq\,3\). For each point \(x_{i}\,\in\,S\) fix an integer \(N_{i}\,\geq\,2\). We will consider parabolic bundles on \(X\) with parabolic structure on \(S\) such that all the parabolic weights at each \(x_{i}\,\in\,S\) are integral multiples of \(1/N_{i}\). There is a ramified Galois covering \[\varphi\,:\,Y\,\longrightarrow\,X \tag{5.1}\] satisfying the following two conditions:

* \(\varphi\) is unramified over the complement \(X\setminus S\), and
* for every \(x_{i}\,\in\,S\) and one (hence every) point \(y\,\in\,\varphi^{-1}(x_{i})\), the order of the ramification of \(\varphi\) at \(y\) is \(N_{i}\).

Such a ramified Galois covering \(\varphi\) exists; see [Na, p. 26, Proposition 1.2.12]. Let \[\Gamma\,:=\,\operatorname{Gal}(\varphi)\,:=\,\operatorname{Aut}(Y/X)\,\subset \,\operatorname{Aut}(Y) \tag{5.2}\] be the Galois group for \(\varphi\). So the restriction \[\varphi^{\prime}\,:=\,\varphi\big{|}_{Y^{\prime}}\,:\,Y^{\prime}\,:=\,Y \setminus\varphi^{-1}(S)\ \longrightarrow\ X^{\prime}\,:=\,X\setminus S \tag{5.3}\] is an etale Galois covering with Galois group \(\Gamma\). As before, a holomorphic vector bundle \(V\) on \(Y\) is called an _orbifold bundle_ if \(\Gamma\) acts on \(V\) as holomorphic bundle automorphisms over the action of \(\Gamma\) on \(Y\). Consider the trivial vector bundle \[\mathbb{C}[\Gamma]_{Y}\,:=\,Y\times\mathbb{C}[\Gamma]\,\longrightarrow\,Y\,, \tag{5.4}\] where \(\mathbb{C}[\Gamma]\) is the group algebra for \(\Gamma\) with coefficients in \(\mathbb{C}\). The usual action of \(\Gamma\) on \(\mathbb{C}[\Gamma]\) and the Galois action of \(\Gamma\) on \(Y\) together produce an action of \(\Gamma\) on \(Y\times\mathbb{C}[\Gamma]\). This action makes \(Y\times\mathbb{C}[\Gamma]\,=\,\mathbb{C}[\Gamma]_{Y}\) an orbifold bundle on \(Y\). Let \[\mathcal{E}_{*}\ \longrightarrow\ X \tag{5.5}\] be the corresponding parabolic vector bundle on \(X\) with parabolic structure on \(S\) [Bi], [Bo1], [Bo2]. The action of \(\Gamma\) on the vector bundle \(\mathbb{C}[\Gamma]_{Y}\) in (5.4) produces an action of \(\Gamma\) on its direct image \(\varphi_{*}\mathbb{C}[\Gamma]_{Y}\) over the trivial action of \(\Gamma\) on \(X\). We have \[\mathcal{E}_{0}\,=\,(\varphi_{*}\mathbb{C}[\Gamma]_{Y})^{\Gamma}\,\subset\, \varphi_{*}\mathbb{C}[\Gamma]_{Y}\,, \tag{5.6}\] where \((\varphi_{*}\mathbb{C}[\Gamma]_{Y})^{\Gamma}\) is the \(\Gamma\)-invariant part, and \(\mathcal{E}_{0}\) is the holomorphic vector bundle underlying the parabolic bundle \(\mathcal{E}_{*}\) in (5.5).
It can be shown that the holomorphic vector bundle \(\mathcal{E}_{0}\,=\,(\varphi_{*}\mathbb{C}[\Gamma]_{Y})^{\Gamma}\) is identified with \(\varphi_{*}\mathcal{O}_{Y}\). Indeed, there is a natural \(\Gamma\)-equivariant isomorphism \[\varphi_{*}\mathbb{C}[\Gamma]_{Y}\ \stackrel{{\sim}}{{\longrightarrow}}\ ( \varphi_{*}\mathcal{O}_{Y})\otimes_{\mathbb{C}}\mathbb{C}[\Gamma]\,;\] it is in fact given by the projection formula. Therefore, the natural isomorphism \[\varphi_{*}\mathcal{O}_{Y}\ \stackrel{{\sim}}{{\longrightarrow}}\ ((\varphi_{*}\mathcal{O}_{Y})\otimes_{\mathbb{C}}\mathbb{C}[\Gamma])^{\Gamma}\] (any complex \(\Gamma\)-module \(M\) is naturally identified with \((M\otimes_{\mathbb{C}}\mathbb{C}[\Gamma])^{\Gamma}\)) produces an isomorphism \[\varphi_{*}\mathcal{O}_{Y}\ \stackrel{{\sim}}{{\longrightarrow}}\ ( \varphi_{*}\mathbb{C}[\Gamma]_{Y})^{\Gamma}. \tag{5.7}\] The direct image \(\varphi_{*}\mathcal{O}_{Y}\) has a natural parabolic structure which we will now describe. Take any \(x_{i}\,\in\,S\). Fix an analytic open neighborhood \(U\,\subset\,X\) of \(x_{i}\) such that \(U\bigcap S\,=\,x_{i}\). Let \(\mathcal{U}\,:=\,\varphi^{-1}(U)\,\subset\,Y\) be the inverse image. The restriction of \(\varphi\) to \(\mathcal{U}\) will be denoted by \(\widetilde{\varphi}\). Let \(\widetilde{D}_{i}\,:=\,\varphi^{-1}(x_{i})_{\text{red}}\,\subset\,Y\) be the reduced inverse image. For all \(k\,\in\,[1,\,N_{i}]\), define the vector bundle \[V_{k}\,:=\,\widetilde{\varphi}_{*}\mathcal{O}_{\mathcal{U}}(-(N_{i}-k) \widetilde{D}_{i})\,\longrightarrow\,U\,.\] So we have a filtration of subsheaves of \(V_{N_{i}}\,=\,(\varphi_{*}\mathcal{O}_{Y})\big{|}_{U}\): \[0\,\subset\,V_{1}\,\subset\,V_{2}\,\subset\,\cdots\,\subset\,V_{N_{i}-1}\, \subset\,V_{N_{i}}\,=\,(\varphi_{*}\mathcal{O}_{Y})\big{|}_{U}\,.\] The restriction of this filtration of subsheaves to \(x_{i}\) gives a filtration of subspaces \[0\,\subset\,(V_{1})^{\prime}_{x_{i}}\,\subset\,(V_{2})^{\prime}_{x_{i}}\, \subset\,\cdots\,\subset\,(V_{N_{i}-1})^{\prime}_{x_{i}}\,\subset\,(V_{N_{i}}) _{x_{i}}\,=\,(\varphi_{*}\mathcal{O}_{Y})_{x_{i}} \tag{5.8}\] of the fiber \((\varphi_{*}\mathcal{O}_{Y})_{x_{i}}\). We note that \((V_{k})^{\prime}_{x_{i}}\) in (5.8) is the image, in the fiber \((\varphi_{*}\mathcal{O}_{Y})_{x_{i}}\), of the fiber \((V_{k})_{x_{i}}\) over \(x_{i}\) of the vector bundle \(V_{k}\). The parabolic structure on \(\varphi_{*}\mathcal{O}_{Y}\) is defined as follows. The parabolic divisor is \(S\). The quasiparabolic filtration over any \(x_{i}\,\in\,S\) is the filtration of \((\varphi_{*}\mathcal{O}_{Y})_{x_{i}}\) constructed in (5.8). The parabolic weight of the subspace \((V_{k})^{\prime}_{x_{i}}\) in (5.8) is \(\frac{N_{i}-k}{N_{i}}\). The resulting parabolic vector bundle is identified with \(\mathcal{E}_{*}\) in (5.5); recall from (5.6) and (5.7) that \(\mathcal{E}_{0}\) is identified with \(\varphi_{*}\mathcal{O}_{Y}\). The trivial connection on the trivial vector bundle \(\mathbb{C}[\Gamma]_{Y}\,:=\,Y\times\mathbb{C}[\Gamma]\) in (5.4) is preserved by the action of the Galois group \(\Gamma\) on \(\mathbb{C}[\Gamma]_{Y}\). Therefore, this trivial connection produces a parabolic connection on the corresponding parabolic vector bundle \(\mathcal{E}_{*}\) in (5.5). This parabolic connection on \(\mathcal{E}_{*}\) will be denoted by \(\nabla^{\mathcal{E}}\).
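A local model makes the filtration (5.8) explicit; the following computation is an illustration of ours and is not used later. On a connected component of \(\mathcal{U}\) the covering \(\widetilde{\varphi}\) is given in suitable local coordinates by \(z\,=\,w^{N_{i}}\), and the corresponding summand of \(\widetilde{\varphi}_{*}\mathcal{O}_{\mathcal{U}}\) is \(\bigoplus_{j=0}^{N_{i}-1}\mathcal{O}_{U}\cdot w^{j}\). A local section of \(\mathcal{O}_{\mathcal{U}}(-(N_{i}-k)\widetilde{D}_{i})\) vanishes to order at least \(N_{i}-k\) in \(w\), so the corresponding summand of \(V_{k}\) is
\[z\cdot\mathcal{O}_{U}\oplus\cdots\oplus z\cdot\mathcal{O}_{U}\cdot w^{N_{i}-k-1}\oplus\mathcal{O}_{U}\cdot w^{N_{i}-k}\oplus\cdots\oplus\mathcal{O}_{U}\cdot w^{N_{i}-1}\,.\]
Hence, on this summand, the image \((V_{k})^{\prime}_{x_{i}}\,\subset\,(\varphi_{*}\mathcal{O}_{Y})_{x_{i}}\) is spanned by the classes of \(w^{N_{i}-k},\,\cdots,\,w^{N_{i}-1}\) (each summand contributes dimension \(k\)), and the smallest exponent occurring in it is \(N_{i}-k\), matching the parabolic weight \(\frac{N_{i}-k}{N_{i}}\) attached to \((V_{k})^{\prime}_{x_{i}}\) above.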
Using the isomorphism between \(\mathcal{E}_{0}\) and \(\varphi_{*}\mathcal{O}_{Y}\) (see (5.6) and (5.7)), the logarithmic connection on \(\mathcal{E}_{0}\) defining the above parabolic connection \(\nabla^{\mathcal{E}}\) on \(\mathcal{E}_{*}\) produces a logarithmic connection on \(\varphi_{*}{\mathcal{O}}_{Y}\). This logarithmic connection on \(\varphi_{*}{\mathcal{O}}_{Y}\) given by \(\nabla^{\mathcal{E}}\) is easy to describe. To describe it, take the de Rham differential \(d\,:\,{\mathcal{O}}_{Y}\,\longrightarrow\,K_{Y}\) on \(Y\). Let \[\varphi_{*}d\,:\,\varphi_{*}{\mathcal{O}}_{Y}\,\longrightarrow\,\varphi_{*}K_{Y} \tag{5.9}\] be its direct image. On the other hand, using the projection formula, the natural homomorphism \[K_{Y}\,\hookrightarrow\,K_{Y}\otimes{\mathcal{O}}_{Y}(\varphi^{-1}(S)_{\rm red })\,=\,\varphi^{*}(K_{X}\otimes{\mathcal{O}}_{X}(S))\] produces a homomorphism \[\varphi_{*}K_{Y}\,\,\longrightarrow\,\,\varphi_{*}(\varphi^{*}(K_{X}\otimes{ \mathcal{O}}_{X}(S)))\,=\,(\varphi_{*}{\mathcal{O}}_{Y})\otimes K_{X}\otimes{ \mathcal{O}}_{X}(S)\,.\] Combining this with \(\varphi_{*}d\) in (5.9) we obtain homomorphisms \[\varphi_{*}{\mathcal{O}}_{Y}\,\longrightarrow\,\varphi_{*}K_{Y}\,\, \longrightarrow\,\,(\varphi_{*}{\mathcal{O}}_{Y})\otimes K_{X}\otimes{ \mathcal{O}}_{X}(S)\,.\] This composition of homomorphisms \(\varphi_{*}{\mathcal{O}}_{Y}\,\longrightarrow\,(\varphi_{*}{\mathcal{O}}_{Y} )\otimes K_{X}\otimes{\mathcal{O}}_{X}(S)\) defines a logarithmic connection on \(\varphi_{*}{\mathcal{O}}_{Y}\). This logarithmic connection coincides with the one that defines the above constructed parabolic connection \(\nabla^{\mathcal{E}}\) on \({\mathcal{E}}_{*}\). The parabolic connection \(\nabla^{\mathcal{E}}\) on \({\mathcal{E}}_{*}\) defines a nonsingular holomorphic connection \(\nabla^{\prime}\) on \[{\mathcal{E}}^{\prime}_{0}\,:=\,{\mathcal{E}}_{0}\big{|}_{X^{\prime}}\,=\, \varphi^{\prime}_{*}{\mathcal{O}}_{Y^{\prime}}\] over \(X^{\prime}\) (see (5.3)). For any holomorphic vector bundle \(V^{\prime}\) on \(X^{\prime}\), note that \[J^{k}(V^{\prime}\otimes{\mathcal{E}}^{\prime}_{0})\,=\,J^{k}(V^{\prime}) \otimes{\mathcal{E}}^{\prime}_{0} \tag{5.10}\] for all \(k\,\geq\,0\). To see this isomorphism, for any \(x\,\in\,X^{\prime}\) and \(u\,\in\,({\mathcal{E}}^{\prime}_{0})_{x}\), let \(\widetilde{u}\) denote the unique flat section of \({\mathcal{E}}^{\prime}_{0}\) for the connection \(\nabla^{\prime}\), defined on any simply connected open neighborhood of \(x\), such that \(\widetilde{u}(x)\,=\,u\). Now the homomorphism \[J^{k}(V^{\prime})\otimes{\mathcal{E}}^{\prime}_{0}\,\longrightarrow\,J^{k}(V ^{\prime}\otimes{\mathcal{E}}^{\prime}_{0})\] that sends any \(v\otimes u\) to the image of \(v\otimes\widetilde{u}\), where \(v\,\in\,J^{k}(V^{\prime})_{x}\) and \(u\,\in\,({\mathcal{E}}^{\prime}_{0})_{x}\) with \(x\,\in\,X^{\prime}\), is evidently an isomorphism. Take holomorphic vector bundles \(V^{\prime}\) and \(W^{\prime}\) on a nonempty Zariski open subset \(U\,\subset\,X^{\prime}\). Recall that a holomorphic differential operator of order \(k\) from \(V^{\prime}\) to \(W^{\prime}\) is a holomorphic homomorphism \(J^{k}(V^{\prime})\,\longrightarrow\,W^{\prime}\). Let \[D^{\prime}\,:\,J^{k}(V^{\prime})\,\longrightarrow\,W^{\prime}\] be a holomorphic differential operator of order \(k\) from \(V^{\prime}\) to \(W^{\prime}\) on \(U\).
We will show that \(D^{\prime}\) extends to a holomorphic differential operator \[\widetilde{D^{\prime}}\,:\,J^{k}(V^{\prime}\otimes{\mathcal{E}}^{\prime}_{0}) \,\longrightarrow\,W^{\prime}\otimes{\mathcal{E}}^{\prime}_{0} \tag{5.11}\] from \(V^{\prime}\otimes{\mathcal{E}}^{\prime}_{0}\) to \(W^{\prime}\otimes{\mathcal{E}}^{\prime}_{0}\) over \(U\). To construct \(\widetilde{D^{\prime}}\), using the isomorphism in (5.10) we have \[J^{k}(V^{\prime}\otimes{\mathcal{E}}^{\prime}_{0})\,=\,J^{k}(V^{\prime}) \otimes{\mathcal{E}}^{\prime}_{0}\,\xrightarrow{\,D^{\prime}\otimes\mathrm{Id }_{{\mathcal{E}}^{\prime}_{0}}\,}\,W^{\prime}\otimes{\mathcal{E}}^{\prime}_{0 }\,;\] this composition defines the differential operator \(\widetilde{D^{\prime}}\) in (5.11). The holomorphic vector bundle underlying the parabolic tensor product \(V_{*}\otimes\mathcal{E}_{*}\) (respectively, \(W_{*}\otimes\mathcal{E}_{*}\)) will be denoted by \((V_{*}\otimes\mathcal{E}_{*})_{0}\) (respectively, \((W_{*}\otimes\mathcal{E}_{*})_{0}\)), where \(\mathcal{E}_{*}\) is the parabolic bundle in (5.5).

**Definition 5.1**.: A _holomorphic differential operator_ of order \(k\) from \(V_{*}\) to \(W_{*}\) over an open subset \(\widetilde{U}\,\subset\,X\) is a holomorphic homomorphism \[D^{\prime}\,:\,J^{k}(V^{\prime})\,\longrightarrow\,W^{\prime}\] over \(U\,:=\,\widetilde{U}\bigcap X^{\prime}\) such that the homomorphism \[\widetilde{D^{\prime}}\,:\,J^{k}(V^{\prime}\otimes\mathcal{E}_{0}^{\prime}) \,\longrightarrow\,W^{\prime}\otimes\mathcal{E}_{0}^{\prime}\] in (5.11) extends to a holomorphic homomorphism \(J^{k}((V_{*}\otimes\mathcal{E}_{*})_{0})\,\longrightarrow\,(W_{*}\otimes \mathcal{E}_{*})_{0}\) over the entire open subset \(\widetilde{U}\).

It is straightforward to check that the above definition does not depend on the choice of the map \(\varphi\). We denote by \(\operatorname{Diff}_{X}^{k}(V_{*},\,W_{*})\) the sheaf of holomorphic differential operators of order \(k\) from \(V_{*}\) to \(W_{*}\). Define \[\operatorname{DO}_{P}^{k}(V_{*},\,W_{*})\ :=\ H^{0}(X,\,\operatorname{Diff}_{X}^{k} (V_{*},\,W_{*}))\] to be the space of all holomorphic differential operators of order \(k\) from \(V_{*}\) to \(W_{*}\) over \(X\). Let \(\mathbb{V}\) and \(\mathbb{W}\) denote the orbifold vector bundles on \(Y\) corresponding to the parabolic vector bundles \(V_{*}\) and \(W_{*}\) respectively. Consider the space \[\operatorname{DO}^{k}(\mathbb{V},\,\mathbb{W})\,:=\,H^{0}(Y,\,\operatorname{ Hom}(J^{k}(\mathbb{V}),\,\mathbb{W}))\] of holomorphic differential operators of order \(k\) from \(\mathbb{V}\) to \(\mathbb{W}\) over \(Y\). Then the actions of \(\Gamma\) on \(\mathbb{V}\) and \(\mathbb{W}\) together produce an action of \(\Gamma\) on \(\operatorname{DO}^{k}(\mathbb{V},\,\mathbb{W})\). Let \[H^{0}(Y,\,\operatorname{Hom}(J^{k}(\mathbb{V}),\,\mathbb{W}))^{\Gamma}\,=\, \operatorname{DO}^{k}(\mathbb{V},\,\mathbb{W})^{\Gamma}\,\subset\, \operatorname{DO}^{k}(\mathbb{V},\,\mathbb{W})\] be the space of all \(\Gamma\)-invariant differential operators of order \(k\) from \(\mathbb{V}\) to \(\mathbb{W}\).
**Proposition 5.2**.: _There is a natural isomorphism_ \[\operatorname{DO}^{k}(\mathbb{V},\,\mathbb{W})^{\Gamma}\ \stackrel{{ \sim}}{{\longrightarrow}}\ \operatorname{DO}_{P}^{k}(V_{*},\,W_{*})\,.\]

Proof.: We will first prove that \[\varphi_{*}\mathbb{V}\,=\,(V_{*}\otimes\mathcal{E}_{*})_{0}\,, \tag{5.12}\] where \(\mathcal{E}_{*}\) is the parabolic bundle in (5.5) and \((V_{*}\otimes\mathcal{E}_{*})_{0}\) is the vector bundle underlying the parabolic vector bundle \(V_{*}\otimes\mathcal{E}_{*}\). To prove (5.12), first note that \[\varphi_{*}\mathbb{V}\,=\,(\varphi_{*}(\mathbb{V}\otimes\mathbb{C}[\Gamma]_{Y }))^{\Gamma}\, \tag{5.13}\] where \(\mathbb{C}[\Gamma]_{Y}\) is the orbifold bundle in (5.4). Since \(\mathcal{E}_{*}\) and \(V_{*}\) correspond to the orbifold bundles \(\mathbb{C}[\Gamma]_{Y}\) and \(\mathbb{V}\) respectively, the parabolic bundle corresponding to the orbifold bundle \(\mathbb{V}\otimes\mathbb{C}[\Gamma]_{Y}\) is \(V_{*}\otimes\mathcal{E}_{*}\). In particular, we have \[(\varphi_{*}(\mathbb{V}\otimes\mathbb{C}[\Gamma]_{Y}))^{\Gamma}\,=\,(V_{*} \otimes\mathcal{E}_{*})_{0}\,.\] This and (5.13) together give the isomorphism in (5.12). Let \(D\,:\,\mathbb{V}\,\longrightarrow\,\mathbb{W}\) be a holomorphic differential operator of order \(k\) on \(Y\). Taking its direct image for the map \(\varphi\), we have \[\varphi_{*}D\;:\;\varphi_{*}\mathbb{V}\;\longrightarrow\;\varphi_{*}\mathbb{W}\,.\] Now if \(D\,\in\,\operatorname{DO}^{k}(\mathbb{V},\,\mathbb{W})^{\Gamma}\), then clearly \[\varphi_{*}D((\varphi_{*}\mathbb{V})^{\Gamma})\;\subset\;(\varphi_{*}\mathbb{W })^{\Gamma}\,.\] Let \[D_{\varphi}\;:=\;(\varphi_{*}D)\big{|}_{(\varphi_{*}\mathbb{V})^{\Gamma}}\;:\;( \varphi_{*}\mathbb{V})^{\Gamma}\;\longrightarrow\;(\varphi_{*}\mathbb{W})^{\Gamma}\] be the restriction of \(\varphi_{*}D\) to \((\varphi_{*}\mathbb{V})^{\Gamma}\,\subset\,\varphi_{*}\mathbb{V}\). Using (5.12) it is now straightforward to check that \(D_{\varphi}\) defines a holomorphic differential operator of order \(k\) from the parabolic bundle \(V_{*}\) to \(W_{*}\). The corresponding homomorphism \(J^{k}((V_{*}\otimes\mathcal{E}_{*})_{0})\,\longrightarrow\,(W_{*}\otimes \mathcal{E}_{*})_{0}\) in Definition 5.1 is given by \(\varphi_{*}D\) using the isomorphism in (5.12). The isomorphism in the proposition sends any \(D\,\in\,\operatorname{DO}^{k}(\mathbb{V},\,\mathbb{W})^{\Gamma}\) to \(D_{\varphi}\,\in\,\operatorname{DO}^{k}_{P}(V_{*},\,W_{*})\) constructed above from \(D\). For the inverse map, given any \(\mathbf{D}\,\in\,\operatorname{DO}^{k}_{P}(V_{*},\,W_{*})\), consider the homomorphism \[J^{k}((V_{*}\otimes\mathcal{E}_{*})_{0})\,\longrightarrow\,(W_{*}\otimes \mathcal{E}_{*})_{0}\] in Definition 5.1 given by the differential operator \(\mathbf{D}\). Using the isomorphism in (5.12) it produces a holomorphic differential operator from \(\mathbb{V}\) to \(\mathbb{W}\). This differential operator is evidently fixed by the action of \(\Gamma\) on \(\operatorname{DO}^{k}(\mathbb{V},\,\mathbb{W})\).

### Another description of differential operators on parabolic bundles

We will give an alternative description of the holomorphic differential operators between two parabolic vector bundles. Let \(\operatorname{Diff}^{k}_{Z}(A,\,B)\) denote the sheaf of holomorphic differential operators of order \(k\) from a holomorphic vector bundle \(A\) on a complex manifold \(Z\) to another holomorphic vector bundle \(B\) on \(Z\).
The sheaf \(\operatorname{Diff}^{k}_{Z}(\mathcal{O}_{Z},\,\mathcal{O}_{Z})\,=\,J^{k}( \mathcal{O}_{Z})^{*}\) has both left and right \(\mathcal{O}_{Z}\)-module structures, and \[\operatorname{Diff}^{k}_{Z}(A,\,B)\;=\;B\otimes_{\mathcal{O}_{Z}}\operatorname {Diff}^{k}_{Z}(\mathcal{O}_{Z},\,\mathcal{O}_{Z})\otimes_{\mathcal{O}_{Z}}A^{ *}\,. \tag{5.14}\] We have a short exact sequence of holomorphic vector bundles \[0\,\longrightarrow\,\operatorname{Diff}^{k}_{Z}(\mathcal{O}_{Z},\,\mathcal{O }_{Z})\,\stackrel{{\alpha}}{{\longrightarrow}}\,\operatorname{ Diff}^{k+1}_{Z}(\mathcal{O}_{Z},\,\mathcal{O}_{Z})\,\stackrel{{\eta}}{{ \longrightarrow}}\,\operatorname{Sym}^{k+1}(TZ)\,\longrightarrow\,0\,, \tag{5.15}\] where \(\eta\) is the symbol map. The homomorphism \[\operatorname{Id}_{B}\otimes\alpha\otimes\operatorname{Id}_{A^{*}}\,:\,B \otimes_{\mathcal{O}_{Z}}\operatorname{Diff}^{k}_{Z}(\mathcal{O}_{Z},\, \mathcal{O}_{Z})\otimes_{\mathcal{O}_{Z}}A^{*}\,\longrightarrow\,B\otimes_{ \mathcal{O}_{Z}}\operatorname{Diff}^{k+1}_{Z}(\mathcal{O}_{Z},\,\mathcal{O}_{ Z})\otimes_{\mathcal{O}_{Z}}A^{*}\,,\] where \(\alpha\) is the homomorphism in (5.15), coincides with the natural inclusion map \[\operatorname{Diff}^{k}_{Z}(A,\,B)\,\hookrightarrow\,\operatorname{Diff}^{k+1 }_{Z}(A,\,B).\] The holomorphic differential operators between two parabolic vector bundles will be described along the above line. Consider the pair \((Y,\,\varphi)\) in (5.1). The action of \(\Gamma\,=\,\operatorname{Gal}(\varphi)\) on \(Y\) produces an action of \(\Gamma\) on \(\mathcal{O}_{Y}\). This action of \(\Gamma\) on \(\mathcal{O}_{Y}\) induces an action of \(\Gamma\) on \(J^{k}(\mathcal{O}_{Y})\), which in turn induces an action of \(\Gamma\) on the dual vector bundle \(J^{k}({\mathcal{O}}_{Y})^{*}\,=\,\mathrm{Diff}^{k}_{Y}({\mathcal{O}}_{Y},\,{ \mathcal{O}}_{Y})\). As mentioned before, \(\mathrm{Diff}^{k}_{Y}({\mathcal{O}}_{Y},\,{\mathcal{O}}_{Y})\) is equipped with left and right \({\mathcal{O}}_{Y}\)-module structures. These module structures are \(\Gamma\)-equivariant. Let \({\mathcal{J}}^{k}_{*}\) denote the parabolic vector bundle on \(X\) associated to the orbifold vector bundle \(J^{k}({\mathcal{O}}_{Y})^{*}\,=\,\mathrm{Diff}^{k}_{Y}({\mathcal{O}}_{Y},\,{ \mathcal{O}}_{Y})\) on \(Y\). Note that the rank of \({\mathcal{J}}^{k}_{*}\) is \(k+1\). The parabolic line bundle \({\mathcal{J}}^{0}_{*}\) is the trivial line bundle \({\mathcal{O}}_{X}\) equipped with the trivial parabolic structure. The underlying holomorphic vector bundle for the parabolic bundle \({\mathcal{J}}^{1}_{*}\) is \({\mathcal{O}}_{X}\oplus TX(-S)\). The quasiparabolic filtration of \({\mathcal{J}}^{1}_{*}\) over any point \(x_{i}\,\in\,S\) is \[TX(-S)_{x_{i}}\,\subset\,({\mathcal{O}}_{X})_{x_{i}}\oplus TX(-S)_{x_{i}}\,=\, ({\mathcal{J}}^{1}_{0})_{x_{i}}\,.\] The parabolic weight of \(TX(-S)_{x_{i}}\) is \(\frac{1}{N_{i}}\) and the parabolic weight of \(({\mathcal{J}}^{1}_{0})_{x_{i}}\) is \(0\). Let \[TX(-S)_{*}\,\longrightarrow\,X \tag{5.16}\] denote the parabolic line bundle defined by \(TX(-S)\) equipped with the parabolic weight \(\frac{1}{N_{i}}\) at each \(x_{i}\,\in\,S\). So \[{\mathcal{J}}^{1}_{*}\,=\,TX(-S)_{*}\oplus{\mathcal{O}}_{X},\] where \({\mathcal{O}}_{X}\) has the trivial parabolic structure. Applying the homomorphism \(\alpha\) in (5.15) to \(Z\,=\,Y\), with \(k\) there replaced by \(j\), we see that \({\mathcal{J}}^{j}_{*}\) is a parabolic subbundle of \({\mathcal{J}}^{j+1}_{*}\) for all \(j\,\geq\,0\).
Consequently, we have a filtration of parabolic subbundles \[{\mathcal{J}}^{0}_{*}\,\subset\,{\mathcal{J}}^{1}_{*}\,\subset\,\cdots\, \subset\,{\mathcal{J}}^{k-1}_{*}\,\subset\,{\mathcal{J}}^{k}_{*} \tag{5.17}\] for all \(k\,\geq\,0\) such that each successive quotient is a parabolic line bundle. We will describe the quotient parabolic line bundle \({\mathcal{J}}^{j}_{*}/{\mathcal{J}}^{j-1}_{*}\) in (5.17) for all \(1\,\leq\,j\,\leq\,k\). The holomorphic line bundle underlying the parabolic bundle \({\mathcal{J}}^{j}_{*}/{\mathcal{J}}^{j-1}_{*}\) is \[(TX)^{\otimes j}(-jS)\otimes{\mathcal{O}}_{X}\left(\sum_{i=1}^{n}\left[\frac{j }{N_{i}}\right]x_{i}\right)\,,\] where \(\left[\frac{j}{N_{i}}\right]\,\in\,\mathbb{Z}\) is the integral part of \(\frac{j}{N_{i}}\), and its parabolic weight at any \(x_{i}\,\in\,S\) is \(\frac{j}{N_{i}}-\left[\frac{j}{N_{i}}\right]\). Indeed, from (5.15) we know that the parabolic line bundle \({\mathcal{J}}^{j}_{*}/{\mathcal{J}}^{j-1}_{*}\) corresponds to the orbifold line bundle \((TY)^{\otimes j}\) on \(Y\). On the other hand, the parabolic line bundle \(TX(-S)_{*}\) defined in (5.16) corresponds to the orbifold line bundle \(TY\). Therefore, we have \[{\mathcal{J}}^{j}_{*}/{\mathcal{J}}^{j-1}_{*}\,=\,TX(-S)^{\otimes j}_{*}\,. \tag{5.18}\] The above description of \({\mathcal{J}}^{j}_{*}/{\mathcal{J}}^{j-1}_{*}\) follows immediately from (5.18). The \(\Gamma\)-equivariant left and right \({\mathcal{O}}_{Y}\)-module structures on \(\mathrm{Diff}^{k}_{Y}({\mathcal{O}}_{Y},\,{\mathcal{O}}_{Y})\) produce left and right \({\mathcal{O}}_{X}\)-module structures on \({\mathcal{J}}^{k}_{*}\). Then, for any two parabolic bundles \(V_{*}\) and \(W_{*}\) over \(X\), it follows from Proposition 5.2 and (5.14) that \(\mathrm{Diff}^{k}_{X}(V_{*},\,W_{*})\) coincides with the holomorphic vector bundle underlying the parabolic tensor product \[W_{*}\otimes_{{\mathcal{O}}_{X}}{\mathcal{J}}^{k}_{*}\otimes_{{\mathcal{O}}_{ X}}V_{*}^{*}\,;\] in other words, we have \[\mathrm{Diff}^{k}_{X}(V_{*},\,W_{*})\,=\,(W_{*}\otimes_{{\mathcal{O}}_{X}}{ \mathcal{J}}^{k}_{*}\otimes_{{\mathcal{O}}_{X}}V_{*}^{*})_{0}\,.\]

### The symbol map

Consider the quotient map \[\gamma\ :\ {\mathcal{J}}_{*}^{k}\,\longrightarrow\,{\mathcal{J}}_{*}^{k}/{ \mathcal{J}}_{*}^{k-1}\,=\,TX(-S)_{*}^{\otimes k}\] (see (5.17), (5.18)). It produces a map \[\sigma\,:=\,(\operatorname{Id}_{W_{*}}\otimes\gamma\otimes \operatorname{Id}_{V_{*}^{*}})_{0}\,:\,\operatorname{Diff}_{X}^{k}(V_{*},\,W_{ *})\,=\,(W_{*}\otimes_{{\mathcal{O}}_{X}}{\mathcal{J}}_{*}^{k}\otimes_{{ \mathcal{O}}_{X}}V_{*}^{*})_{0} \tag{5.19}\] \[\qquad\longrightarrow\,(W_{*}\otimes TX(-S)_{*}^{\otimes k}\otimes V _{*}^{*})_{0}\,=\,(TX(-S)_{*}^{\otimes k}\otimes\operatorname{Hom}(V_{*},\,W_ {*})_{*})_{0}\,.\] The above homomorphism \(\sigma\) is the _symbol_ map of differential operators between parabolic bundles. Take any \(\widehat{D}\,\in\,\operatorname{DO}_{P}^{k}(V_{*},\,W_{*})\). Denote by \(\mathbb{V}\) (respectively, \(\mathbb{W}\)) the orbifold bundle on \(Y\) corresponding to \(V_{*}\) (respectively, \(W_{*}\)), and let \[D\,\in\,\operatorname{DO}^{k}(\mathbb{V},\,\mathbb{W})^{\Gamma}\] be the invariant differential operator given by \(\widehat{D}\) using Proposition 5.2. Let \[\sigma(\widehat{D})\,\in\,H^{0}(X,\,(TX(-S)_{*}^{\otimes k}\otimes \operatorname{Hom}(V_{*},\,W_{*})_{*})_{0})\] be the symbol of \(\widehat{D}\) (see (5.19)).
Let \[\sigma(D)\,\in\,H^{0}(Y,\,\operatorname{Hom}(\mathbb{V},\,\mathbb{W})\otimes (TY)^{\otimes k})\] be the symbol of \(D\). We have \[\sigma(D)\,\in\,H^{0}(Y,\,\operatorname{Hom}(\mathbb{V},\,\mathbb{W})\otimes (TY)^{\otimes k})^{\Gamma}\] because \(D\) is fixed by the action of \(\Gamma\) on \(\operatorname{DO}^{k}(\mathbb{V},\,\mathbb{W})\). The proof of the following lemma is straightforward.

**Lemma 5.3**.: _The parabolic vector bundle \(TX(-S)_{*}^{\otimes k}\otimes\operatorname{Hom}(V_{*},\,W_{*})_{*}\) on \(X\) corresponds to the orbifold vector bundle \(\operatorname{Hom}(\mathbb{V},\,\mathbb{W})\otimes(TY)^{\otimes k}\) on \(Y\). The natural isomorphism_ \[H^{0}(X,\,(TX(-S)_{*}^{\otimes k}\otimes\operatorname{Hom}(V_{*},\,W_{*})_{*} )_{0})\,\stackrel{{\sim}}{{\longrightarrow}}\,H^{0}(Y,\, \operatorname{Hom}(\mathbb{V},\,\mathbb{W})\otimes(TY)^{\otimes k})^{\Gamma}\] _takes the symbol \(\sigma(\widehat{D})\) to the symbol \(\sigma(D)\)._

## 6. Parabolic opers and differential operators

Recall the short exact sequence in (4.9) and the isomorphism in (4.12). For notational convenience, \(({\mathcal{L}}_{*})^{*}\,=\,E_{*}/{\mathcal{L}}_{*}\) will be denoted by \({\mathcal{L}}_{*}^{-1}\). For any \(j\,\geq\,1\), the parabolic line bundle \(({\mathcal{L}}_{*})^{\otimes j}\) (respectively, \(({\mathcal{L}}_{*}^{*})^{\otimes j}\)) will be denoted by \({\mathcal{L}}_{*}^{j}\) (respectively, \({\mathcal{L}}_{*}^{-j}\)). Also, \({\mathcal{L}}_{*}^{0}\) will denote the trivial line bundle \({\mathcal{O}}_{X}\) with the trivial parabolic structure. We note that \[{\mathcal{L}}_{*}^{-2}\,=\,TX(-S)_{*}\,, \tag{6.1}\] where \(TX(-S)_{*}\) is the parabolic line bundle in (5.16). From (5.18) and (6.1) it follows that \[{\mathcal{J}}_{*}^{j}/{\mathcal{J}}_{*}^{j-1}\,=\,{\mathcal{L}}_{*}^{-2j} \tag{6.2}\] for all \(j\,\geq\,1\). For any integer \(r\,\geq\,2\), consider the space of parabolic differential operators of order \(r\) \[\operatorname{DO}_{P}^{r}({\mathcal{L}}_{*}^{1-r},\,{\mathcal{L}}_{*}^{r+1})\, :=\,H^{0}(X,\,\operatorname{Diff}_{X}^{r}({\mathcal{L}}_{*}^{1-r},\,{ \mathcal{L}}_{*}^{r+1}))\] from \(\mathcal{L}_{*}^{1-r}\) to \(\mathcal{L}_{*}^{r+1}\). Let \[\sigma\,:\,\mathrm{DO}^{r}_{P}(\mathcal{L}_{*}^{1-r},\,\mathcal{L}_{*}^{r+1})\, \longrightarrow\,(\mathcal{L}_{*}^{r+1}\otimes(TX(-S)_{*})^{\otimes r}\otimes \mathcal{L}_{*}^{r-1})_{0} \tag{6.3}\] \[=\,(\mathcal{L}_{*}^{r+1}\otimes\mathcal{L}_{*}^{-2r}\otimes\mathcal{L}_{*}^{ r-1})_{0}\,=\,(\mathcal{L}_{*}^{0})_{0}\,=\,\mathcal{O}_{X}\] be the symbol map constructed in (5.19) (see (6.2) for the isomorphism used in (6.3)). Let \[\widetilde{\mathrm{DO}}^{r}_{P}(\mathcal{L}_{*}^{1-r},\,\mathcal{L}_{*}^{r+1} )\,\subset\,\mathrm{DO}^{r}_{P}(\mathcal{L}_{*}^{1-r},\,\mathcal{L}_{*}^{r+1}) \tag{6.4}\] be the affine subspace consisting of parabolic differential operators whose symbol is the constant function \(1\). The following lemma constructs the sub-principal symbol of such an operator:

**Lemma 6.1**.: _There is a natural map_ \[\Psi\,:\,\widetilde{\mathrm{DO}}^{r}_{P}(\mathcal{L}_{*}^{1-r},\,\mathcal{L}_ {*}^{r+1})\,\longrightarrow\,H^{0}(X,\,K_{X})\,.\]

Proof.: As in (2.27), let \(\mathbf{L}\) denote the orbifold line bundle on \(Y\) corresponding to \(\mathcal{L}\). So the parabolic bundle \(\mathcal{L}_{*}^{1-r}\) (respectively, \(\mathcal{L}_{*}^{r+1}\)) corresponds to the orbifold line bundle \(\mathbf{L}^{1-r}\) (respectively, \(\mathbf{L}^{r+1}\)).
Take any \[D\,\in\,\widetilde{\mathrm{DO}}^{r}_{P}(\mathcal{L}_{*}^{1-r},\,\mathcal{L}_ {*}^{r+1}).\] Now Proposition 5.2 says that \(D\) corresponds to a \(\Gamma\)-invariant holomorphic differential operator of order \(r\) from \(\mathbf{L}^{1-r}\) to \(\mathbf{L}^{r+1}\). Let \[\mathcal{D}\,\in\,\mathrm{DO}^{r}(\mathbf{L}^{1-r},\,\mathbf{L}^{r+1})^{\Gamma} \tag{6.5}\] be the \(\Gamma\)-invariant differential operator corresponding to \(D\). As the orbifold bundle \(\mathbf{L}^{-2}\) is isomorphic to \(TY\) (see Lemma 2.8), the symbol of \(\mathcal{D}\) is a section of \(\mathcal{O}_{Y}\). Since the symbol of \(D\) is the constant function \(1\), from Lemma 5.3 it follows that the symbol of \(\mathcal{D}\) is the constant function \(1\) on \(Y\). We will now show that a differential operator \(\mathbf{D}\,\in\,\mathrm{DO}^{r}(\mathbf{L}^{1-r},\,\mathbf{L}^{r+1})\) of symbol \(1\) produces a section \[\theta_{\mathbf{D}}\,\in\,H^{0}(Y,\,K_{Y})\,. \tag{6.6}\] Consider the short exact sequence of jet bundles \[0\,\longrightarrow\,\mathbf{L}^{1-r}\otimes K_{Y}^{\otimes r}\,=\,\mathbf{L}^ {r+1}\,\stackrel{{\mu}}{{\longrightarrow}}\,J^{r}(\mathbf{L}^{1- r})\,\stackrel{{\nu}}{{\longrightarrow}}\,J^{r-1}(\mathbf{L}^{1-r})\, \longrightarrow\,0 \tag{6.7}\] (see Lemma 2.8 for the above isomorphism) together with the homomorphism \[\mathbf{D}^{\prime}\,:\,J^{r}(\mathbf{L}^{1-r})\,\longrightarrow\,\mathbf{L}^ {r+1}\] defining the given differential operator \(\mathbf{D}\). Since the symbol of \(\mathbf{D}\) is \(1\), we have \[\mathbf{D}^{\prime}\circ\mu\,=\,\mathrm{Id}_{\mathbf{L}^{r+1}}\,,\] where \(\mu\) is the homomorphism in (6.7). Therefore, \(\mathbf{D}^{\prime}\) produces a holomorphic splitting of the short exact sequence in (6.7). Let \[\tau\,:\,J^{r-1}(\mathbf{L}^{1-r})\,\longrightarrow\,J^{r}(\mathbf{L}^{1-r}) \tag{6.8}\] be the holomorphic homomorphism given by this splitting of the short exact sequence in (6.7), so \(\tau\) is uniquely determined by the following two conditions:
The homomorphism \(\zeta\) in (6.9) is constructed as follows: We have the natural homomorphism \[h_{1}\,:\,J^{1}(J^{r-1}({\bf L}^{1-r}))\,\longrightarrow\,J^{r-1}({\bf L}^{1 -r}).\] On the other hand, we have the composition of homomorphisms \[J^{1}(J^{r-1}({\bf L}^{1-r}))\,\longrightarrow\,J^{1}(J^{r-2}({\bf L}^{1-r})) \,\longrightarrow\,J^{r-1}({\bf L}^{1-r}),\] which will be denoted by \(h_{2}\). Now, we have \(\zeta\,=\,h_{1}-h_{2}\); note that \(J^{r-2}({\bf L}^{1-r})\otimes K_{Y}\) is a subbundle of \(J^{r-1}({\bf L}^{1-r})\). Next consider the homomorphism \[\varpi\circ\tau\,:\,J^{r-1}({\bf L}^{1-r})\,\longrightarrow\,J^{1}(J^{r-1}({ \bf L}^{1-r}))\,,\] where \(\tau\) and \(\varpi\) are the homomorphisms in (6.8) and (6.9) respectively. We have \[\alpha\circ(\varpi\circ\tau)\,=\,\operatorname{Id}_{J^{r-1}({\bf L}^{1-r})}\,, \tag{6.10}\] where \(\alpha\) is the projection in (6.9), because (6.9) is a commutative diagram. From (6.10) it follows immediately that \(\varpi\circ\tau\) gives a holomorphic splitting of the bottom exact sequence in (6.9). But a holomorphic splitting of the bottom exact sequence in (6.9) is a holomorphic connection on \(J^{r-1}({\bf L}^{1-r})\). Let \(\nabla\) denote the holomorphic connection on \(J^{r-1}({\bf L}^{1-r})\) given by \(\varpi\circ\tau\). The holomorphic connection on \(\bigwedge^{r}J^{r-1}({\bf L}^{1-r})\,=\,{\mathcal{O}}_{Y}\) (see Lemma 2.8) induced by \(\nabla\) will be denoted by \(\nabla^{0}\). So the connection \(\nabla^{0}\) is of the form \[\nabla^{0}\,=\,d+\theta_{\bf D}\,,\] where \(\theta_{\bf D}\,\in\,H^{0}(Y,\,K_{Y})\) and \(d\) is the de Rham differential on \({\mathcal{O}}_{Y}\). This \(\theta_{\bf D}\) is the holomorphic 1-form in (6.6). By the construction of it, the form \(\theta_{\mathbf{D}}\) vanishes identically if and only if the above connection \(\nabla\) on \(J^{r-1}(\mathbf{L}^{1-r})\) induces the trivial connection on \(\bigwedge^{r}J^{r-1}(\mathbf{L}^{1-r})\,=\,\mathcal{O}_{Y}\). Therefor \(\theta_{\mathbf{D}}\) should be seen as a sub-principal symbol. Consider \(\theta_{\mathcal{D}}\,\in\,H^{0}(Y,\,K_{Y})\) (as in (6.6)) for the differential operator \(\mathcal{D}\) in (6.5). Since \(\mathcal{D}\) is \(\Gamma\)-invariant, we know that \(\theta_{\mathcal{D}}\) is also \(\Gamma\)-invariant. On the other hand, \[H^{0}(Y,\,K_{Y})^{\Gamma}\,=\,H^{0}(X,\,K_{X})\,.\] The element of \(H^{0}(X,\,K_{X})\) corresponding to \(\theta_{\mathcal{D}}\) will be denoted by \(\theta^{\prime}_{\mathcal{D}}\). Now we have a map \[\Psi\,:\,\widetilde{\operatorname{DO}}^{r}_{P}(\mathcal{L}^{1-r}_{*},\, \mathcal{L}^{r+1}_{*})\,\longrightarrow\,H^{0}(X,\,K_{X})\] that sends any \(D\) to \(\theta^{\prime}_{\mathcal{D}}\) constructed above from \(D\). The following main Theorem deals with the space of all parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers on \(X\) (see Definition 3.3) with given singular set \(S\,:=\,\{x_{1},\,\cdots,\,x_{n}\}\,\subset\,X\) and fixed integers \(c_{i}\,=\,N_{i}\) (see (2.14)). **Theorem 6.2**.: _The space of all parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers on \(X\) is identified with the inverse image_ \[\Psi^{-1}(0)\,\subset\,\widetilde{\operatorname{DO}}^{r}_{P}(\mathcal{L}^{1- r}_{*},\,\mathcal{L}^{r+1}_{*}),\] _where \(\Psi\) is the map in Lemma 6.1._ Proof.: This theorem will be proved using Proposition 3.6, Proposition 5.2, Lemma 5.3 and Lemma 6.1. 
As before, fix a ramified Galois covering \[\varphi\,:\,Y\,\longrightarrow\,X\] satisfying the following two conditions: * \(\varphi\) is unramified over the complement \(X\setminus S\), and * for every \(x_{i}\,\in\,S\) and one (hence every) point \(y\,\in\,\varphi^{-1}(x_{i})\), the order of ramification of \(\varphi\) at \(y\) is \(2N_{i}+1\). As before, \(\Gamma\) denotes \(\operatorname{Aut}(Y/X)\). Parabolic \(\operatorname{SL}(r,\mathbb{C})\)-opers on \(X\) are in a natural bijective correspondence with the equivariant \(\operatorname{SL}(r,\mathbb{C})\)-opers on \(Y\) (see Proposition 3.6). Equivariant \(\operatorname{SL}(r,\mathbb{C})\)-opers on \(Y\) are in a natural bijective correspondence with the subspace of \(\operatorname{DO}^{r}(\mathbf{L}^{1-r},\,\mathbf{L}^{r+1})^{\Gamma}\) (see (6.5)) consisting of all invariant differential operators \(\mathcal{D}\) satisfying the following two conditions: * the symbol of \(\mathcal{D}\) is the constant function \(1\), and * the element in \(H^{0}(Y,\,K_{Y})\) corresponding to \(\mathcal{D}\) (see (6.6)) vanishes (this is equivalent to the vanishing of the sub-principal symbol of \(\mathcal{D}\); see [1, p. 13]). (See Proposition 5.2 and Lemma 5.3.) This subspace of \(\operatorname{DO}^{r}(\mathbf{L}^{1-r},\,\mathbf{L}^{r+1})^{\Gamma}\) is in a natural bijective correspondence with \[\Psi^{-1}(0)\,\subset\,\widetilde{\operatorname{DO}}^{r}_{P}(\mathcal{L}^{1-r }_{*},\,\mathcal{L}^{r+1}_{*}),\] where \(\Psi\) is the map in Lemma 6.1. ## Acknowledgements We are very grateful to the referee for helpful comments. This work has been supported by the French government through the UCAJEDI Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR2152IDEX201. The first author is partially supported by a J. C. Bose Fellowship, and the School of Mathematics, TIFR, is supported by 12-R&D-TFR-5.01-0500.
2304.13191
Towards Explainable and Safe Conversational Agents for Mental Health: A Survey
Virtual Mental Health Assistants (VMHAs) are seeing continual advancements to support the overburdened global healthcare system that gets 60 million primary care visits, and 6 million Emergency Room (ER) visits annually. These systems are built by clinical psychologists, psychiatrists, and Artificial Intelligence (AI) researchers for Cognitive Behavioral Therapy (CBT). At present, the role of VMHAs is to provide emotional support through information, focusing less on developing a reflective conversation with the patient. A more comprehensive, safe and explainable approach is required to build responsible VMHAs to ask follow-up questions or provide a well-informed response. This survey offers a systematic critical review of the existing conversational agents in mental health, followed by new insights into the improvements of VMHAs with contextual knowledge, datasets, and their emerging role in clinical decision support. We also provide new directions toward enriching the user experience of VMHAs with explainability, safety, and wholesome trustworthiness. Finally, we provide evaluation metrics and practical considerations for VMHAs beyond the current literature to build trust between VMHAs and patients in active communications.
Surjodeep Sarkar, Manas Gaur, L. Chen, Muskan Garg, Biplav Srivastava, Bhaktee Dongaonkar
2023-04-25T23:12:13Z
http://arxiv.org/abs/2304.13191v1
# Towards Explainable and Safe Conversational Agents for Mental Health: A Survey

###### Abstract

Virtual Mental Health Assistants (VMHAs) are seeing continual advancements to support the overburdened global healthcare system that gets 60 million primary care visits, and 6 million Emergency Room (ER) visits annually. These systems are built by clinical psychologists, psychiatrists, and Artificial Intelligence (AI) researchers for Cognitive Behavioral Therapy (CBT). At present, the role of VMHAs is to provide emotional support through information, focusing less on developing a reflective conversation with the patient. A more _comprehensive, safe_ and _explainable_ approach is required to build _responsible_ VMHAs to ask follow-up questions or _provide a well-informed response_. This survey offers a systematic critical review of the existing conversational agents in mental health, followed by new insights into the improvements of VMHAs with contextual knowledge, datasets, and their emerging role in clinical decision support. We also provide new directions toward enriching the user experience of VMHAs with explainability, safety, and wholesome trustworthiness. Finally, we provide evaluation metrics and practical considerations for VMHAs beyond the current literature to build trust between VMHAs and patients in active communications.

## 1 Introduction

Mental illness is highly prevalent nowadays, constituting a major cause of distress in people's lives with an impact on society's health and well-being, thereby projecting serious challenges for mental health professionals (MHPs) [16]. According to the National Survey on Drug Use and Health, nearly one in five U.S. adults live with a mental illness (52.9 million in 2020) [1]. Reports released in August 20211 indicate that _1.6 million people_ in England were on waiting lists to seek professional help with mental health care. Such an overwhelming rise in the number of patients as compared to MHPs necessitated the use of (i) public health forums (e.g., dialogue4health), (ii) online communities (e.g., the r/depression subreddit on Reddit), (iii) Talklife, and (iv) Virtual Mental Health Assistants (VMHAs), for informative healthcare. The anonymous functioning of (i), (ii), and (iii) removed the psychological stigma in patients, a stigma which had even kept them from seeing an MHP [11].

Footnote 1: [https://www.theguardian.com/society/2021/aug/29/strain-on-mental-health-care-leaves-8m-people-without-help-say-nhs-leaders](https://www.theguardian.com/society/2021/aug/29/strain-on-mental-health-care-leaves-8m-people-without-help-say-nhs-leaders)

In addition, the unavailability of interpersonal interactions from other pure information agents resulted in the need to develop Virtual Mental Health Assistants (VMHAs).

**VMHAs**: VMHAs are artificial intelligence (AI)-based agents designed to provide emotional support through structured conversational sequences targeted to screen patients for mental health conditions and alert mental health professionals (MHPs) through _informed triaging_2. Despite the proliferation of research at the intersection of clinical psychology, artificial intelligence (AI), and Natural Language Understanding (NLU), VMHAs missed an opportunity to serve as life-saving contextualized, personalized, and reliable decision support during COVID-19 under the _Apollo_ moment [14, 15]. VMHAs' ability to function as simple information agents (e.g., suggest meditation, relaxation exercises, or give positive affirmations) _did not_ bridge the gap between _monitoring the health condition_ and _necessitating an MHP visit_ for the patient. To the best of our knowledge, this is the first critical evaluation that examines contextualization and question/response generation for VMHAs from the viewpoint of user-level explainability and trust (see Figure 1). _This survey facilitates the clinical psychologists, psychiatrists, and AI practitioners of VMHAs to support people at risk of chronic mental disease._

Figure 1: Taxonomy of Mental Health Conversations: While connecting the dots in our investigation from NLP-centered low-level analysis (lexical, morphological, syntactic, semantic) over mental health conversations to the higher-level analysis (discourses, pragmatics), we determine the evaluation metrics to support VMHAs for a better user-level experience in terms of safety and explainability. We further support the emerging areas with AI model development and evaluation in passive conversations. The categories in black color define the scope of our survey from the viewpoint of user-level explainability and safety; the dotted red colour highlights the emerging scope of question/response generation in mental health conversations between VMHAs and patients.

**User-level Explainability**: The sensitive nature of VMHAs raises _safety_ as a major concern of conversational systems, as lapses result in negative outcomes. For instance, Figure 2 presents a real-world query from a user, which was common during the times of the COVID-19 recession. In response to the query, Woebot, Wysa, and ChatGPT initiated a responsive conversation without focusing on the context (e.g., connecting mental health with its symptoms). We found assumptive questions (e.g., anxiety) and responses from Wysa, Woebot, and ChatGPT with no association to clinical references or clinical support. On the other hand, the desired VMHA (a) should capture the relationship between the user query and expert questionnaires and (b) tailor the response to reflect on the user's concerns (e.g., _frustrating_ and _disheartening_) about the _long-term unemployment_, which is linked to _mental health_ and the _user's immediate help_.

Figure 2: (Left) The outcome from existing VMHAs (e.g., WoeBot, Wysa) and ChatGPT (general-purpose chatbot). (Right) Illustration of a knowledge-driven conversational agent in mental health (desired VMHA). The use of questions in PHQ-9 induces conceptual flow in mental health conversational agents. With clinical knowledge, the agent can detect the user's mental disturbance and alert MHPs accordingly.

**Resources to support VMHAs**: Prior research demonstrates an extensive body of efforts in developing _mental health datasets_ using social media to identify mental health conditions [12]. These datasets represent real-world conversations and are annotated by experts leveraging clinically-grounded knowledge (e.g., MedChatbot [1]) or guidelines (e.g., PHQ-9). Augmenting such datasets with VMHAs can improve the quality of conversations with the user. Semantic enhancements with clinical knowledge and associated guidelines, if left under-explored, may miss the hidden mental states in a given narrative, an essential component of question generation.

**Trustworthiness**: By definition, _Trust_ is a multi-faceted quality that is studied in the context of humans in the humanities and is now increasingly gaining importance in AI as systems and humans collaborate closely. Growing concern about (misplaced) _trust_ in VMHAs for social media (tackling mental health) hampers the adoption of AI techniques during emergency situations like COVID-19 [14]. A recent surge in the use of ChatGPT, in particular for mental health, has emerged, providing crucial personalized advice without clinical explanation, which might hurt the user's _safety_, and thus, _trust_3. In [22], the author identifies the support for human interaction and explainable alignment with human values as important for trust in AI systems.

Footnote 3: [https://tinyurl.com/4sr2hw9b](https://tinyurl.com/4sr2hw9b)

To holistically contribute towards _trustworthy_ behavior in a conversational system in mental health, there is a need to critically examine _user-level explainability_, _safety_, and the use of clinical knowledge for contextualization, along with testing.

**Our Contributions**: This survey spans 5 major research dimensions: (i) What are explainability and safety in VMHAs? (ii) What are the current capabilities and limitations of VMHAs? (iii) What is the current state of AI and the hurdles in supporting VMHAs? (iv) What functionalities can be imagined in VMHAs for which patients seek alternative solutions? and (v) What changes in evaluation are required with respect to explainability, safety, and trust? Figure 1 illustrates the survey coverage, exemplified in Figure 2.

## 2 Scope of Survey

In this section, we explore the state of research in explainability and safety in conversational systems to ensure trust [16].

### Explanation

Conversations in AI happen through large and complex language models (e.g., GPT-3, ChatGPT), which are established as state-of-the-art models for developing intelligent agents to chat with users by generating human-like questions or responses. The reasons behind the output generated by Large Language Models (LLMs) are unclear and hard to interpret, a problem also known as the "_black box_" effect. The consequences of the black box effect are more concerning than their utility, particularly in mental health. Figure 3 presents a scenario where ChatGPT advises the user about _toxicity in drugs_, which may have a negative consequence. To this end, [1] reports hallucination and harmful question generation as unexpected behaviors shown by such black-box models. The study characterizes _hallucination_ as generated content that _deviates_ significantly from the subject matter or is unreasonable. Recently, Replika, a VMHA augmented with GPT-3, provided meditative suggestions to a user expressing self-harm tendencies4. The analysis above supports the critical need for a comprehensive and explainable approach toward the decision-making of VMHAs. According to [17], explanations are human-centered sentences that signify the reason or justification behind an action and are comprehensible to a human expert. There are many types of explanations [14], and surveys of deployed systems [1] have revealed that most are targeted towards model developers and not the end-users. The users interacting with VMHAs may need more systematic information than just the decision-making. Thus, this survey is more focused towards "_User-level Explainability_". 
Footnote 4: [https://ineqc.com/2022/01/20/replika-ai-friend/](https://ineqc.com/2022/01/20/replika-ai-friend/)

**User-level Explainability (UsEx)** _is defined as the capability of an AI methodology to provide a post-hoc explanation upon the need of a user and in the form of traceable links to real-world entities and definitions [1]._

Figure 3 illustrates UsEx, wherein the generated follow-up questions from a safe and user-level explainable agent establish semantic connections with clinical guidelines (e.g., PHQ-9). Though UsEx sees promise over foundational general-purpose NLP tasks, its applicability in the mental health context is yet to be examined [11].

Figure 3: A conversational scenario in which a user asks a query with multiple symptoms. Left is a set of generated questions obtained by repetitively prompting ChatGPT. Right is a generation from ALLEVIATE, a knowledge-infused (KI) conversational agent with access to PHQ-9 and clinical knowledge from Mayo Clinic.

### Safety

VMHAs are required to be predominantly safe while at the same time being explainable to prevent undesirable behaviors. One such method is aligning the functioning of a VMHA to MHP-defined specifications [15]. Such specifications keep a VMHA from generating fabricated content that would render it unsafe. [13] identifies three major effects on safety in general-purpose conversational systems: (a) Generating Offensive Content, also known as the _Instigator (Tay) Effect_. It describes the tendencies of a conversational agent to display behaviors like the Microsoft Tay chatbot, which turned racist after learning from the internet. (b) The _YEA-SAYER (ELIZA)_ effect is defined as the response from a conversational agent to an offensive input from the user. People have been proven to be particularly forthcoming about their mental health problems in interactions with conversational agents, which may increase the danger of "_agreeing with user utterances implying self-harm_". (c) The _Imposter_ effect applies to VMHAs that tend to respond _inappropriately_ in sensitive scenarios. To overcome the imposter effect, DeepMind designed _Sparrow_, a conversational agent. It responsibly leverages live Google Search to talk with users [11]. The agent generates answers by following the _23 rules_ determined by researchers, such as _not offering financial advice_, _making threatening statements_, or _claiming to be a person_ [1]. In the context of mental health, such rules can be replaced by clinical specifications to validate the functioning of the AI model within _safe limits_. Sources for such specifications are: Systematized Nomenclature of Medicine-Clinical Terms (SNOMED-CT) [10], International Classification of Diseases (ICD-10) [32], Diagnostic Statistical Manual for Mental Health Disorder (DSM-5) [17], Structured Clinical Interviews for DSM-5 (SCID) [18], and clinical questionnaire-guided lexicons. [15] performs a comparative study on the psychotherapy of outpatients in mental health where the AI model within a VMHA aligns to clinical guidelines for easy understanding by domain experts through UsEx.
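To make specification-guided validation concrete, here is a minimal sketch of a lexicon-based safety gate of the kind discussed above; it is our illustration rather than the implementation of Sparrow or of any VMHA cited here, and the lexicon entries and the SNOMED-CT code in it are illustrative placeholders.

```python
# A minimal sketch (ours, not any cited system's code) of a specification-guided
# safety gate: a generated response is only released if it avoids terms from an
# unsafe-topic lexicon and backs clinical terms with a traceable source link.
# The lexicon entries and the SNOMED-CT code below are illustrative placeholders.
UNSAFE_LEXICON = {"dosage", "overdose", "toxicity"}          # illustrative
CLINICAL_SOURCES = {"anxiety": "SNOMED-CT:197480006"}        # illustrative code

def safety_gate(response: str) -> tuple[bool, list[str]]:
    """Return (is_safe, traceable_links) for a candidate response."""
    tokens = {t.strip(".,!?").lower() for t in response.split()}
    if tokens & UNSAFE_LEXICON:
        return False, []                 # defer to an MHP instead of answering
    links = [CLINICAL_SOURCES[t] for t in tokens if t in CLINICAL_SOURCES]
    return True, links

print(safety_gate("These symptoms can relate to anxiety; consider PHQ-9 screening."))
print(safety_gate("You could increase the dosage."))         # blocked -> (False, [])
```

A production system would of course replace the toy lexicon with a clinically curated one (e.g., questionnaire-guided lexicons mentioned above), but the gating logic stays the same.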
## 3 Knowledge Infused (KI) Learning for Mental Health Conversations

Machine-readable knowledge can be categorized into five forms: (a) lexical and linguistic, (b) general-purpose (e.g., Wikipedia, Wikidata), (c) commonsense (e.g., ConceptNet), (d) domain-specific (Unified Medical Language System), and (e) procedural or process-oriented (**PK**) [20]. Knowledge-infused Learning (KIL), a paradigm within AI, defines a set of methodologies incorporating these broader forms of knowledge to address the limitations of current black-box AI. In addition, KIL benefits from data and knowledge to enable safe and explainable operations in mental health [3]. We categorize the KIL-driven efforts at the intersection of conversational AI and mental health into two categories:

* **Knowledge Graph-guided Conversations:** Question answering using knowledge graphs (KGs) is seeing tremendous interest from the AI and NLU communities through various technological improvements in query understanding, query rewriting, knowledge retrieval, question generation, response shaping, and others. The methods proposed can improve the high-level functionalities of VMHAs. For instance, [21]'s HEAL KG can generate a better empathetic response by capturing empathy, expectations, affect, stressors, and feedback types from distress conversations. With HEAL, the model picks an appropriate phrase in the user's query to tailor its response. EmoKG is another KG that connects BioPortal, SNOMED-CT, RxNORM, MedDRA, and emotion ontologies to converse with a user to boost their mental health with food recommendations [11]. Likewise, [16] created a suicide KG to train conversational agents that can sense whether the interacting user shows suicidal indications (e.g., relationship issues, family issues) or suicide risk tendencies before yielding a response or asking follow-up questions. [20] explained the evolution of a KG in a VMHA during a conversation for adaptive communications. Augmentation of KGs demands improvements in metrics to examine safety and user-level explainability through proxy measures such as logical coherence, semantic relations, and others (covered in Section 6.1 and [3]).
* **Lexicon or Process-guided Conversations:** Lexicons in mental health were created to resolve ambiguities in human language. For instance, the following two sentences: "I am feeling on the edge." and "I am feeling anxious," are similar, provided there is a lexicon with "Anxiety" as a category and "feeling on the edge" as its concept. [22] created a PHQ-9 lexicon to clinically study realistic mental health conversations on social media. [19] leveraged PHQ-9 and SNOMED-CT lexicons to train a question-generating agent for paraphrasing questions in PHQ-9 to introduce _Diversity in Generation (DiG)_ [12]. With DiG, a VMHA can paraphrase its question to acquire a meaningful response from a user while still keeping engagement. _Clinical specifications5_ (PK) include questionnaires such as PHQ-9 (depression), the Columbia Suicide Severity Rating Scale (C-SSRS; suicide), and Generalized Anxiety Disorder (GAD-7) [3]. These provide sequences of questions that clinicians follow to interview patients. Such questions are safe and medically validated. [14] developed MIRA, a VMHA with knowledge of clinical specifications to meaningfully respond to queries on mental health issues and interpersonal needs during COVID-19. [13] leverage Relational Frame Theory (RFT), procedural knowledge in clinical psychology, to capture events between conversations and label them as positive and negative. [15] develops KakaoTalk, a chatbot with a prenatal and postnatal care knowledge database of Korean clinical assessment questionnaires and responses that enables the VMHA to carry out thoughtful and contextual conversations with users (a minimal sketch of such questionnaire-guided question selection follows this list).

Footnote 5: also called clinical guidelines and clinical process knowledge
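As flagged in the second bullet above, the following is a minimal sketch of process-knowledge-guided follow-up selection: the agent walks a clinical question sequence instead of generating questions freely. The item wordings and topic keys are simplified placeholders written for this example, not the validated PHQ-9 instrument.

```python
from typing import Optional

# Questionnaire-driven follow-up selection: the agent walks a clinical
# question sequence instead of relying on free-form generation. The item
# wordings and topic keys below are simplified placeholders, not the
# validated PHQ-9.
PHQ9_FLOW = [
    ("interest", "Over the last two weeks, how often have you had little "
                 "interest or pleasure in doing things?"),
    ("mood", "How often have you been feeling down, depressed, or hopeless?"),
    ("sleep", "Have you had trouble falling or staying asleep, or slept too much?"),
]

def next_question(covered: set) -> Optional[str]:
    """Return the first questionnaire item whose topic is not yet covered."""
    for topic, question in PHQ9_FLOW:
        if topic not in covered:
            return question
    return None   # questionnaire exhausted -> summarize and triage to an MHP

print(next_question({"interest"}))   # -> the mood item
```

A DiG-style system would additionally paraphrase the selected item before asking it, which keeps engagement while staying anchored to the medically validated sequence.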
Using KGs through the mechanisms of KIL can propel context understanding in VMHAs for safe and explainable conversations. Datasets or VMHAs which use mental health-related knowledge (**MK**) as either a KG or a lexicon are marked as ✓ in Tables 1 and 2.

## 4 Safe and Explainable Language Models in Mental Health

Language models (e.g., Blenderbot, DialoGPT) and in-use conversational agents (e.g., Xiaoice, Tay, Siri) were questioned in the context of safety during the _first workshop on safety in conversational AI_. 70% of the participants in the workshop were unsure of whether present-day conversational systems or the language models within them are capable of safe generation. Following it, [23] introduced _Bot-Adversarial Dialogue_ and _Bot Baked In_ methods to introduce _safety_ in conversational systems. The study was performed on _Blenderbot_, which had mixed opinions on safety, and _DialoGPT_, to enable AI models to detect unsafe/safe utterances, avoid sensitive topics, and provide responses that are gender-neutral. The study utilizes knowledge from Wikipedia (for offensive words) and knowledge-powered methods to train conversational agents [13]. Alternatively, safety in conversational systems can be introduced through clinical guidelines. [14] develop safety lexicons from PHQ-9 and GAD-7 for the safe and explainable functioning of language models. The study showed an 85% improvement in safety across Sequence-to-Sequence and Attention-based language models. In addition, explainability saw an uptake of 23% across the same language models. Similar results were observed when PHQ-9 was used in the explainable training of language models [11]. VMHAs can align with clinical guidelines through reinforcement learning. For example, _policy gradient-based learning_ can assist conversational systems in accounting for safe generation, either through specialized datasets on response rewriting [12] or through tree-based rewards guided by process knowledge in mental health [14]. Though there is an initiative to attain safety in conversations from AI-powered agents, an effort is needed to achieve UsEx. In mental health, the indicators of signs and symptoms, causes, disorders, medications, and other comorbid conditions possess probabilistic relationships with one another. Hence, the augmentation of the knowledge base or the infusion of knowledge to improve AI's decision-making is crucial to human understandability [15].

## 5 Virtual Mental Health Assistants

Despite the positive potential of the language models, our observations indicate the incapability of VMHAs to comprehend behavioral and emotional instability, self-harm tendencies, and the user's latent psychological mindset. VMHAs (e.g., as exemplified in Figures 2 and 3) generate incoherent and unsafe responses when a user tries to seek a response to clinically relevant questions. In this section, we outline the capabilities of well-established VMHAs and inspect their limitations in the context of UsEx and safety, following the taxonomy in Figure 1.

* WoeBot is introduced as a part of the growing industry of the digital mental health space as an "_Automated Coach_" that can deliver a coach-like or sponsor-like experience without human intervention to facilitate "_good thinking hygiene_"6. WoeBot deploys lessons (via texts and "stories"), interactive exercises, and videos that were tuned around Cognitive Behavioral Therapy (CBT) [13]. 
Footnote 6: [https://woebothealth.com/why-we-need-mental-health-chatbots/](https://woebothealth.com/why-we-need-mental-health-chatbots/)

* Wysa, a mental health application, uses a CBT conversational agent to have empathetic/therapeutic conversations and activities, thereby helping its users with several mental health problems [10]. Based on a series of question-answering mechanisms, Wysa suggests a set of relaxing activities for elevating mental well-being.

With the historical evolution of VMHAs (see Table 2) from behavioral health coaching [12] to KG-based intellectual VMHAs such as ALLEVIATE [14], we examine the possibilities of new research directions to facilitate the expression of empathy in passive communications [12]. The existing studies suggest the risk of oversimplification of mental conditions and therapeutic approaches without considering latent or external contextual knowledge [11]. Thinking beyond the low-level analysis of classification and prediction, a high-level analysis of VMHAs would enrich the User-Level (UL) experience and the informedness of MHPs [14]. Limiting our discovery to context-based high-level analysis, the System-Level (SL) observations for WoeBot and Wysa suggest the UL tracking of human behavior, such as gratitude/mindfulness and frequent mood changes (an emotional spectrum) during the day. Contributions toward this endeavor have emerged through exclusive studies on the _trustworthiness_ of WoeBot and Wysa through ethical research protocols, as it is mandatory to address ethical dimensions due to the sensitive nature of VMHAs. The lack of _ethical dimensions_ in WoeBot and Wysa is exemplified through non-clinical grounding and a lack of contextual awareness in responses to emergencies such as the disclosure of immediate harm or suicidal ideation [16]. To this end, the development of _safe and explainable_ VMHAs shall enhance their capabilities of reading between the lines, resulting in accountable and fair conversational agents. For a well-aware (about the user's depression) dialogue agent, it is perhaps _safer_ to avoid mentioning or inquiring about topics that can worsen the user's mental health condition [1].

\begin{table} \begin{tabular}{c c c c|c c c|c c c c} \hline \hline \multicolumn{2}{c}{Datasets} & Safety & UsEx & \multicolumn{2}{c|}{KI} & DiG & \multicolumn{4}{c}{FAIR Principle} \\ & & & & PK & MK & & F & A & I & R \\ \hline [13] & CounselChat & ✓ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & ✗ & \(\dagger\) \\ [15] & CC & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✗ & \(\dagger\) \\ [1] & SNAP Counseling & ✓ & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ & ✗ & ✗ \\ [10] & Empathetic Dialogues & ✓ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ \\ [16] & Roleplay & ✓ & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✗ & ✓ \\ [17] & CC-44 & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & \(\dagger\) & ✗ & ✗ \\ [14] & PRIMATE & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ \\ [14] & ProKnow-data & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ [15] & MITI & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Lists of conversational datasets created with support from MHPs, crisis counselors, nurse practitioners, or trained annotators. We have not included datasets created using crowdsource workers without proper annotation guidelines. KI: Knowledge infusion; PK: Process Knowledge; MK: Medical Knowledge; DiG: Diversity in Generation; UsEx: User-level Explainability. Here, the _FAIR principles_ stand for F: Findability, A: Accessibility, I: Interoperability, and R: Reusability. \(\dagger\): partial fulfillment of the corresponding principle.
Although WoeBot employs medical and process knowledge, to **explain** the decision-making, we investigate the relevant datasets for the FAIR principles7 (see Table 1) and evaluation metrics for the quantitative and qualitative performance analysis of VMHAs' question-response generation module in active communication [1]. We further investigate existing evaluation metrics from _passive communication_ to support the VMHAs for _active communication_.

Footnote 7: [https://www.go-fair.org/fair-principles/](https://www.go-fair.org/fair-principles/)

## 6 Discussion

The field of AI-powered automated VMHAs is still in its nascent phase and continuously evolving to provide accessible health care to an increasing number of patients with mental illnesses. However, repetitive question/answer functionality within the models fails to sustain the user's engagement. Irrespective of deploying state-of-the-art VMHAs to mitigate the problems of the overburdened healthcare systems, the gap between users' clinical needs and VMHAs still remains to be bridged. Despite the significant amount of studies realizing the requirement of _safety, harmlessness, explainability, curation of process and medical knowledge-based datasets, and knowledge-infused learning methods_, these have never been incorporated or evaluated to enhance the contextualized conversations within a VMHA and their role in the emerging areas of mental healthcare. Hence, there is an urgent need to incorporate high-level contextual analysis and infuse new technical abilities of AI into VMHAs. We outline two sub-sections to discuss: (i) the need to revamp the _evaluation metrics_, and (ii) _emerging_ areas for developing safe and explainable VMHAs.

### Evaluation Method

All the notable earlier work [23] included subjective measures involving a human-in-the-loop to evaluate a conversational system for its utility in the general-purpose domain. Due to the expensive nature of human-based evaluation procedures, researchers have started using machine learning-based automatic quantitative metrics (e.g., BLEURT, BERTScore [10], BLEU [11], ROUGE [12]) to evaluate the semantic similarity of machine-generated text. [13] highlights the disagreement of users with existing metrics, thereby lowering their expectations. Also, most of these traditional quantitative metrics are reference-based; references are limited in availability, and it is very difficult to ensure the quality of the human-written references [1]. To address these issues and holistically evaluate a desired VMHA with respect to _explainability_, _safety_, and _knowledge process inclusion_, we need to revamp the metrics to bring VMHA systems closer to real-time applications.

**Qualitative Metrics**: We define a multimetric evaluation strategy by instilling metrics that correlate well with human judgement and can provide a more granular analysis towards more realistic VMHAs.

* **Adherence:** Adherence, a long-standing discussion in the healthcare sector [1], is defined as a commitment towards a goal (e.g., long-term therapy, physical activity, or medicine). Despite the AI community showing a significant interest in evaluating the adherence of users [15] towards health assistants, the lack of _safe_ responses, in terms of _DiG_ and _UsEx_ in VMHAs, adds to the criticism with a loss of adherence. This situation necessitates adherence as a qualitative metric towards realizing more _realistic_ and _contextual_ VMHAs while treating patients with serious mental illness. 
* **Harmlessness:** Conversational agents tend to generate harmful, unsafe, and sometimes incoherent information [22]. Although researchers have made many efforts to curb toxicity in the proliferation of hateful speech and biases in social media, much needs to be realized when VMHAs are trained using the same datasets.
* **Transparency:** The transparency and interpretability for understandable models (TIFU) framework emphasizes the "explainability" of VMHAs by focusing on _UsEx_ and _DiG_, thereby processing the knowledge to obtain clinically-verified responses [11].

**KI Metrics**: In this section, we provide metrics that describe _DiG_, _safety_, _MK_ and _PK_ in Table 2. ✓ and ✗ tell whether a VMHA has been tested for these KI metrics.

* **Safety:** Even though the datasets have been verified to be safe [10], it is quite difficult to evaluate the models based on acceptable standards of safety because of their black-box nature. To include safety as a metric for evaluating conversational models, [15] introduces a safety lexicon as a glossary of clinical terms that the MHP would understand in their dataset. [1] emphasizes the underlying bias of a data-driven model and the need for contextual safety in dialogue systems.
* **Logical Coherence (LC):** LC is a qualitative check of the logical relationship between a user's input and the follow-up questions, measuring _PK_ and _MK_. [13] used LC to ensure reliable output from the RoBERTa model trained on the MNLI challenge and the natural language inference GLUE benchmark, hence opening new research directions towards safer models for the MedNLI dataset [12].
* **Semantic Relations (SR):** SR measures the extent of similarity between the generated response and the user's query [13]. [14] highlights the use of SR for the logical ordering of question generations, hence preventing language models from hallucinations. It further enhances the use of VMHAs for profiling the user's illness and generating appropriate responses through _DiG_ (a minimal scoring sketch for the reference-based metrics named above follows this list).
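To ground the reference-based side of this multimetric strategy, below is a minimal sketch of two of the metrics named above (BLEU and ROUGE-L); it assumes the `nltk` and `rouge-score` packages are installed, and the reference/candidate pair is an illustrative one written for this example, not drawn from any dataset.

```python
# A minimal sketch of reference-based evaluation with BLEU and ROUGE-L,
# assuming the nltk and rouge-score packages are installed. The sentence
# pair below is illustrative only.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = ("I hear that long-term unemployment feels disheartening. "
             "Would you like to talk about it?")
candidate = ("Long-term unemployment can feel disheartening. "
             "Do you want to talk about it?")

bleu = sentence_bleu([reference.split()], candidate.split(),
                     smoothing_function=SmoothingFunction().method1)
rougeL = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True).score(
    reference, candidate)["rougeL"].fmeasure
print(f"BLEU = {bleu:.3f}, ROUGE-L F1 = {rougeL:.3f}")
```

As the discussion above notes, such scores depend on scarce human-written references, which is exactly why the qualitative and KI metrics are needed alongside them.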
### Emerging Areas of VMHAs

**Mental Health Triage**8: Mental Health Triage is a risk assessment that categorizes the severity of the mental disturbance before suggesting psychiatric help to the users and categorizes them on the basis of urgency. The screening and triage system could fulfill more complex requirements to achieve automated triage empowered by AI. A recent surge in the use of screening mechanisms by Babylon9 and Limbic10 has given new research directions towards _trustworthy_ and _safe_ models in the near future.

Footnote 8: [https://en.wikipedia.org/wiki/Mental_health_triage](https://en.wikipedia.org/wiki/Mental_health_triage)

Footnote 9: [https://tinyurl.com/2p8be744](https://tinyurl.com/2p8be744)

**Motivational Interviewing**: Motivational Interviewing (MI) is a directive, user-centered counseling style for eliciting behavior change by helping clients to explore and resolve ambivalence. In contrast to the assessment of severity in mental health triaging, MI enables more interpersonal relationships for the cure, with a possible extension of MI to the mental illness domain [23]. [24] suggest human-like empathetic response generation in MI with support for _UsEx_ and _contextualization_ with clinical knowledge. Recent works on identifying interpersonal risk factors [15] from offline text documents further support MI for active communications.

**Clinical Diagnostic Interviewing (CDI)**: CDI is a direct client-centered interview between a clinician and a patient without any intervention. With multiple modalities of the CDI data (e.g., video, text, audio), applications are developed in accordance with the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) to facilitate a quick gathering of detailed information about the patient. In contrast to in-person sessions (which leverage both verbal and non-verbal communication), conversational agents miss the _personalized_ and _contextual_ information from non-verbal communication, hindering the efficacy of VMHAs.

### Practical Considerations

We now consider two practical considerations with VMHAs.

**Difference in human v/s machine assistance**: For the VMHAs to be accepted by people in need, it is important that they feel the output of the system is valuable and useful. If the user had sought the help of a human mental health professional in the past, she would expect a similarly realistic conversational experience from the VMHA. However, getting training data from real conversations is expensive and fraught with data privacy and annotation challenges. To maintain the confidentiality of user data, approaches akin to popular methods used in the recommendation literature for creating training data from user data could be used: (a) anonymize real data, (b) abstract from real data to create representative (but inaccurate) samples, and (c) generate synthetic conversations based on characteristics of real data. In recommendations, user data is used to create personas, while in the case of VMHAs, real conversations can be used to create conversation templates and assign user profiles [15]. But high-quality annotations on (conversation) data remain a more significant problem, widespread across learning-based AI tasks.

**Perception of quality with assistance offered**: A well-understood result in marketing is that people perceive the quality of a service based on the price paid for it as well as the word-of-mouth buzz around it [10]. In the case of VMHAs, it is an open question whether help offered by VMHAs will be considered inferior to that offered by professionals. More crucially, if a user perceives it negatively, will this further aggravate the user's mental condition?

## 7 Conclusion

From 297 studies on mental health (active and passive communications), we present a systematic survey of \(\sim\) 80 intelligent technologies for improving the user experience through VMHAs or potential VMHAs. We first propose a taxonomy of the mental healthcare domain for social NLP research, thrusting on benchmarking evaluation metrics for active communications. We then discussed the efforts in knowledge-driven AI for mental health, its connection with _UsEx_ and _safety_, and provided methods for improving and enhancing VMHAs to support triaging, motivational interviewing, and diagnostic interviews. Finally, the survey sees its extension to "personalization" in VMHAs, which is needed to perform tasks like screening, triaging, and MI. Recently, Anthropic's Claude (a competitor of ChatGPT) is another effort to induce better safety with UsEx in conversational systems [1].

## Ethical Statement

We adhere to anonymity, data privacy, intended use, and the practical implications of VMHAs. The questionnaires and rating scales described as clinical process knowledge do not contain personally identifiable information. The datasets covered in the survey are publicly available and can be obtained via user-author agreement forms. The text conversations in the figures are abstract and have no relevance to any real-time data source or person.
2303.17350
Partial condensation of mobile excitons in graphene multilayers
At a large displacement field, in rhomboedral and Bernal-stacked graphene a normal paramagnetic state transitions to a correlated state. Recent experiments showed that such systems have several phase transitions as a function of the carrier density. The phase adjacent to a paramagnetic state has anomalously high resistance and reduced degeneracy of the Fermi sea. We show that both phenomena can be explained through a concept of partial intervalley exciton condensation: a fraction of particles condenses into excitons, and another forms an intervalley coherent Fermi liquid. The exciton part of the system do not contribute to the electrical current thus increasing the resistance. Within this paradigm, the increase in the resistance has entirely geometrical origin. We check validity of the phenomenological theory through numerical calculations. We also show that the quantum oscillation data should not be very different between the partial excitonic state and the intervalley coherent states suggested by other authors. Further, we suggest STM/AFM or Raman spectroscopy to have a conclusive evidence for the occurrence of the partial exciton condensation that we suggest in this paper.
Igor V. Blinov, Chunli Huang, Nemin Wei, Qin Wei, Tobias Wolf, Allan H. MacDonald
2023-03-30T13:10:04Z
http://arxiv.org/abs/2303.17350v1
# Partial condensation of mobile excitons in graphene multilayers ###### Abstract At a large displacement field, in rhombohedral and Bernal-stacked graphene a normal paramagnetic state transitions to a correlated state. Recent experiments showed that such systems have several phase transitions as a function of the carrier density. The phase adjacent to the paramagnetic state has an anomalously high resistance and a reduced degeneracy of the Fermi sea. We show that both phenomena can be explained through the concept of partial intervalley exciton condensation: a fraction of particles condenses into excitons, and another forms an intervalley coherent Fermi liquid. The exciton part of the system does not contribute to the electrical current, thus increasing the resistance. Within this paradigm, the increase in the resistance has an entirely geometrical origin. We check the validity of the phenomenological theory through numerical calculations. We also show that the quantum oscillation data should not be very different between the partial excitonic state and the intervalley coherent states suggested by other authors. Further, we suggest STM/AFM or Raman spectroscopy to obtain conclusive evidence for the occurrence of the partial exciton condensation that we suggest in this paper. _Introduction_ A long-held belief was that two-dimensional systems are intrinsically unstable [1; 2] because thermal fluctuations will unavoidably destroy any kind of long-range order. However, further research on two-dimensional systems beyond the harmonic approximation showed that, in principle, slight bending into the third dimension can stabilize such structures. Non-negligible anharmonicity, present in graphene because of the strong carbon bonds, couples deformations in the third dimension to long-range in-plane phonons [3; 4]. As a result, a quasi-two-dimensional sheet can, at least theoretically, become stable [5]. Later, it was demonstrated experimentally [6; 7; 8] that stable (or quasi-stable) atomically thin structures of graphene can be obtained from graphite through exfoliation [9]. While full of defects, both encapsulated and free-standing graphene showed sufficient stability under a wide range of external conditions. Electron quasiparticles in graphene, while having an exotic quasi-relativistic dispersion relation [10; 11], do not show interaction-induced collective phases [12] at low densities in the absence of a magnetic field. When several layers of graphene are put in contact, hybridization between them flattens the band structure, effectively generating a mass for the low-energy quasiparticles. As a result of the increased density of states, correlated phases do occur in graphene multilayers [13; 12]. Further, the low-energy bands can be flattened even more if two graphene sheets are rotated by a small angle \(\theta\) with respect to each other [14]. So-called moiré materials have drawn a lot of attention because of the rich variety of symmetry-broken phases [15; 16; 17] present in these systems and even superconductivity of yet unknown origin [18][2; 3; 4; 5; 6; 7]. Such materials are hard to make and suffer from large disorder inevitably present in the system because of the preparation process. Another problem lies in the theoretical realm: the large number of bands \(\propto 1/\theta\approx 10^{2}\) present in the system makes the mathematical description complicated and, as a result, there are still gaps in understanding. 
A new hope was brought by recent experiments in ABC-stacked graphene trilayers [19; 20] and AB-bilayers [21]. The canonical description of the non-interacting model at low doping consists of 6 and 4 bands, respectively, for each spin and valley, making the theoretical description much easier than in the moiré systems. Both systems, trilayer and bilayer, at a non-zero perpendicular electrical field show a cascade of phase transitions as a function of hole/electron density and displacement field. Far away from charge neutrality the system is fully symmetric in \(SU(2)\times SU(2)\) isospin space (paramagnetic phase) and, as a result, the Fermi sea is 4-fold degenerate. The first transition on the hole-doped side happens at a large (\(>0.2\)V/nm) non-zero displacement field and a hole density of order \(0.3\times 10^{12}\) cm\({}^{-2}\approx 2\times 10^{-4}\) per unit cell per flavor. The transition was first seen in the quantum oscillation data [19]. As the system crosses from the paramagnetic phase to the first correlated phase, the Fermi sea degeneracy is reduced by a factor of 2. This phase was dubbed the partially isospin-polarized (PIP) phase. At low temperatures, at the interface between the PIP phase and the paramagnetic one, a superconducting phase arises [20]. While several mechanisms for the superconductivity have been proposed [22; 23; 24], its exact origin remains murky. One of the possible scenarios for the emergence of the superconducting phase is the appearance of attractive interactions through the fluctuations [23; 24] of the order parameter of the first coherent phase. The coherent phase appears in a regime of hole doping with two Fermi surfaces (an annular Fermi surface) present within each flavor. Our belief is that this is the most important feature of the system, and we will exploit it to build a simple model and explain the observable features of the system. The symmetry-broken phase does not show signatures of spin or valley polarization. As a candidate, an intervalley coherent phase was suggested [22; 23; 25]. However, such a phase does not explain all observed facts. Namely, while coherence effectively reduces the number of Fermi surfaces, thus explaining the change in the quantum oscillations, such a phase does not seem to explain [26] the increase of the resistance [20]. We aim to build a theory that explains both the change in the quantum oscillations and, most importantly, the enhancement of the system's resistance. _Simple model_ Even though the bilayer and the trilayer have different band structures, we focus on their similarities rather than their differences to put forward the idea that the origin of the first correlated phase present in both could be the same. The first transition reduces the degeneracy of the Fermi surface by a factor of 2, hence the minimal model should have two flavors. Second, the two features we think are of the utmost importance are the annular Fermi surface and the trigonal warping. The simplest dispersion relation for two flavors (two valleys) that has both properties is \[\hat{\epsilon}(p)=\left(mp^{2}/2+\lambda p^{4}/4\right)\hat{\tau}_{0}+\Delta \hat{\tau}_{3}p^{3}\cos\left(3\alpha_{p}\right)/2, \tag{1}\] where \(m>0\) and \(\lambda<0\), and the \(\hat{\tau}_{i}\) are the Pauli matrices in valley space. For negligibly small trigonal warping, the Fermi momenta are given by \(p_{F\pm}^{2}\,=\,\left(m\pm 2\sqrt{(m/2)^{2}+\lambda\mu}\right)/|\lambda|\). The condition for the annular Fermi surface to exist then is \(0<\mu<m^{2}/|4\lambda|\) and \(\lambda<0\) (Fig. 4). In what follows, we use \(m\) as an interaction scale, while \(p\) is taken dimensionless, \(p=a_{0}k\). Fitting to the 6-band model shows that \(m\) is approximately \(5\times 10^{3}\) K. We will use parameters more relevant for the trilayer than for the bilayer. 
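As a quick numerical sanity check of the annulus window stated above (in the dimensionless units of the text, with the illustrative parameter values quoted later in the caption of Fig. 2), one can solve the radial equation \(\epsilon(p)=\mu\) directly; the script below is a minimal sketch.

```python
import numpy as np

# Quick numerical check of the annular-Fermi-surface window for the radial part
# of Eq. (1) (trigonal warping dropped): eps(p) = m p^2/2 + lam p^4/4 = mu.
m, lam = 1.0, -240.0          # illustrative values quoted in the Fig. 2 caption
mu_max = m**2 / (4*abs(lam))  # annulus exists for 0 < mu < m^2/|4 lam|
print("annulus window: 0 < mu <", mu_max)

def fermi_momenta(mu):
    # roots of (lam/4) x^2 + (m/2) x - mu = 0 with x = p^2
    roots = np.roots([lam/4.0, m/2.0, -mu])
    roots = roots[np.isreal(roots) & (roots.real > 0)].real
    return np.sqrt(np.sort(roots))    # (p_F-, p_F+) when two positive roots exist

for mu in (0.25*mu_max, 0.5*mu_max, 0.9*mu_max):
    pF = fermi_momenta(mu)
    print(f"mu = {mu:.2e}: p_F = {pF}, difference = {pF[1]-pF[0]:.4f}")
```

The two positive roots exist exactly inside the stated window and merge as \(\mu\) approaches \(m^{2}/|4\lambda|\), where the annulus closes.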
In what follows, we use \(m\) as an interaction scale, while \(p\) is taken dimensionless \(p=a_{0}k\). However, fitting to the 6-band model shows that \(m\) is approximately \(5\times 10^{3}\) K. In what follows, we will use parameters more relevant for the trilayer than the bilayer. Interactions, present in the system, can drive it to a symmetry broken phase. We choose the momentum-independent \(SU(2)\)-symmetric interaction \[H=\sum_{p}\hat{\epsilon}_{k}\psi_{k}^{\dagger}\psi_{k}+\frac{\lambda}{S}\sum_{ kk^{\prime}}(\psi_{k}^{\dagger}\hat{\tau}\psi_{k})\cdot(\psi_{k^{\prime}}^{ \dagger}\hat{\tau}\psi_{k^{\prime}}), \tag{2}\] where \(\psi_{k}=(c_{k+},c_{k-})\) are the two-dimensional spinors in the valley space, \(c_{kv}\) are the annihilation operators at momentum \(k\) and valley \(v\), \(\hat{\tau}=(\tau_{x},\tau_{y},\tau_{z})\) is a vector of Pauli matrices. The opening of the second Fermi surface at \(\mu=0\) gives a substantial increase in the density of states, and consequently, can help to drive the system across the transition by a Stoner-like scenario. When \(\Delta=0\) the model is \(SU(2)\) symmetric in valley space and, as a result, valley-polarized (\((\psi_{k}^{\dagger}\tau_{z}\psi_{k})\neq 0\)) and valley-coherent (\((\psi_{k}^{\dagger}\tau_{x,y}\psi_{k})\neq 0\)) phases have the same energy. The non-zero trigonal warping, in turn, makes the valley coherence more preferable with energy difference scaling as \(\propto\Delta^{2}\). _Intervalley response in the electron-hole channel_ Usually, the homogenous solution of the mean-field equation corresponds to a more energetically favorable phase. However, in the small number of cases, usually when a discrepancy between the two Fermi surfaces is present, a phase with spontaneously broken spatial symmetry can arise. The likelihood of such phase is indicated by the peak in the undressed response in the channel of interest at a non-zero transferred momentum. For a system with dispersion (1), whenever the two Fermi surfaces are present, the response in the intervalley electron-hole channel \(\tau(\bar{q})\) has a minimum (Fig. 2) at a momentum approximately equal to the difference between the Fermi momentums averaged over the angle. In our case, the presence of the annulus means that the inner Fermi surface has an electron-like dispersion, and the outer Fermi surface has a hole-like dispersion. Then for an intervalley excitations in the electron-hole channel we could think of two different types of processes: excitations between the same Fermi surfaces, and excitations between two different Fermi surfaces. Response for excitations between the same Fermi surfaces is a monotonically increasing function of the transferred momentum. The smallest momentum at which the excitation between different Fermi surfaces at a vanishingly small transferred frequency become allowed is \(\min(p_{F+}(\theta_{p})-p_{F-}(\theta_{p}))\). As the response between the Fermi surfaces is a monotonic function too, we should expect the enhancement of the response at the transferred momentum equal to the difference between two Fermi surfaces. The minima in the undressed response means that the divergence in the RPA-response \((1+\lambda\tau^{-1}(q))^{-1}\) should arise earlier for non-zero transferred momentum, meaning that the symmetry broken state with particles in different valleys coupled through a non-zero momentum will potentially more stable than the quasi-homogenous state. 
The symmetry broken state will be characterized by the order parameter \[x_{vv^{\prime}}(q)=\frac{1}{S}\sum_{p}\langle c_{v^{\prime}}(p)c_{v}^{\dagger} (p+q)\rangle, \tag{3}\] where \(v\neq v^{\prime}\) are the valley indices \(+/-\). If trigonal warping is applied to the system, the energy of the system is \(E_{v}=E_{v}\), and the energy of the system is \(E_{v}=E_{v}\). The energy of the system is \(E_{v}=E_{v}\), and the energy of the system is \(E_{v}=E_{v}\). The energy of the system is \(E_{v}=E_{v}\). ing is present, the full rotational symmetry reduces to the \(C_{3}\) symmetry: as a result, there is a set vectors \(g_{1},g_{2},g_{3}\) with absolute value \(q\) related by \(C_{3}\)-rotation where the instability is the strongest. We expect than the resulting mean-field state will be periodic with reciprocal lattice vectors equal to \(g_{1},g_{2}\) and \(g_{3}\). _Mean-field state_ We choose the reciprocal vectors to be \(a_{1}=(q_{min},0)\), \(C_{3}a_{1}\), \(C_{3}^{2}a_{1}\), where \(C_{3}\) is a rotation by \(2\pi/3\). The resulting mean-field Hamiltonian has a form: \[\hat{H}=\sum_{k}\psi_{k}^{\dagger}\hat{\epsilon}_{k}\psi_{k}+\sum_{k,g_{i}} \psi_{k+g_{i}}^{\dagger}\hat{x}_{g_{i}}\psi_{k}, \tag{4}\] where \(\psi_{k}\) are the two-dimensional spinors, the coupling between two different valleys is \(\hat{x}_{q}=\hat{\tau}_{x}(x_{+-}(q)+x_{-+}(q))/2+\hat{\tau}_{y}(x_{+-}(q)-x_{ -+}(q))/2\). For a Hamiltonian (4) a standard Bloch theorem is applicable. The initial Brillouin zone is then reduced to a significantly smaller (by a factor of \(10^{-2}\)) Brillouin zone. The corresponding period of the mean-field state solution \(\propto 1/q\) is of order \(10^{2}\) of the original lattice constants and depends on doping. The coupling at a finite momentum can give rise to oscillations in density that we discuss later in the paper. To find the value of the order parameter \(x_{q}\) we use the weak-coupling approach expanding a mean-field equation up to the third order. Then \[x_{+-}(g_{i})=-\lambda x_{+-}(g_{i})\Pi_{+-}(g_{i})\] \[-\lambda\sum_{g_{j},g_{k}}U_{+-}(g_{i},g_{j},-g_{k})x_{+-}(g_{j} )x_{-+}(-g_{k})x_{+-}(g_{i}-g_{j}+g_{k}),\] where \(\Pi_{+-}(g_{i})\) is a simple polarization operator between '+' and '-' valleys at a momentum \(g_{i}\), \(\lambda\) is the interaction constant, \(U_{+-}(g_{i},g_{j},-g_{k})\) is a generalization of the polarization operator to the case of the four incoming/outgoing electron-hole excitations. The polarization operator \(\Pi_{+-}(g_{i})\) is invariant under \(C_{3}\) rotation (but not under \(C_{6}\) or the mirror reflection). Similarly, \(U_{+-}(g_{i},g_{j},-g_{k})\) is invariant under simultaneous rotation of all vector over \(C_{3}\) (but not under pairs or single vectors). As a result, order parameter should be invariant under \(C_{3}\) (or, alternatively, some of the components vanish). Under mirror reflection \(x_{+-}(g_{i})^{*}=x_{-+}(-g_{i})\). Another restriction comes from the fact that the vector \(g_{i}-g_{j}+g_{k}\) should have a length equal to \(q\). Then at least two out of the three of the arguments in \(U_{+-}\) should be the same. Because of the symmetry restrictions, three types of solutions are possible: 1) stripe solution with only a single component being non-zero, 2) symmetry broken with one of the three components vanishing, 3) \(C_{3}\)-symmetric solution. 
The mean-field equation reduces to \[|x_{+-}(g_{i})|^{2}=-\frac{\lambda\Pi_{+-}(g_{i})+1}{\lambda U_{n}(g_{i})}, \tag{5}\] where \(\sum_{j,k}U_{+-}(g_{i},g_{j},g_{k})\equiv U_{n}(g_{i})\) is the sum over all terms that correspond to non-zero components of \(x_{q}\). Then the ratio of the energies for each solution with respect to the energy of the Fermi sea depends only on the coefficients \(U_{n}\) and not on the response: \[\frac{\Delta E_{n}}{\Delta E_{k}}=\frac{n|U_{k}|}{k|U_{n}|}, \tag{6}\] where \(n\) and \(k\) are the numbers of non-zero components. The direct processes with \(g_{i}=g_{j}=g_{k}\) have a much higher amplitude and, as a result, \(\Delta E_{n}/\Delta E_{k}\approx n/k\). The symmetric solution is therefore preferred over either 1) or 2). To distinguish the intervalley coherence at a finite momentum (\(IVC-Q\)) from other candidate phases, and to show that this is the phase that explains the increase in the resistance and the change in the quantum oscillation data, we now discuss the experimental features of the mean-field state. _Experimental features_ The first feature of the symmetry broken state to explain is the increase in the resistance. The classical formula for the current to leading order in the vector potential is \(j_{i}(\Omega)=-\sum_{\sigma}\langle\partial H/\partial A_{i}\rangle\), where \(\sigma=0,1\) is a spinor index and \(i=x,y\) labels the direction. As a result, \(j_{i}(\Omega)=p_{ij}(\Omega)A_{j}(\Omega)=-ip_{ij}(\Omega)E_{j}(\Omega)/\Omega= \sigma_{ij}(\Omega)E_{j}\), with \[p_{ij}(\Omega)=\sum_{pnab}v_{ab}^{i}(p)v_{ba}^{j}(p)G_{a}(\omega_{n}+\Omega,p) G_{b}(\omega_{n},p), \tag{7}\] where \(a,b\) stand for bands, \(i,j\) are the directions of the velocity matrix elements \(v_{ab}^{i}\equiv\sum_{l\alpha}u_{a\alpha}(p+g_{l})u_{b\alpha}^{*}(p+g_{l})\partial_{i}\epsilon(p+g_{l})\), and \(G_{a/b}\) stand for the Green's functions in the band basis averaged over the disorder. Within the framework of (7), it is the off-diagonal elements of the velocity that are responsible for the increase (Fig. 4, inset). Numerically, we find an increase of the resistance by \(10\%-30\%\) (Fig. 4) for \(\beta=5\times 10^{4}m^{-1}\), and higher for lower temperatures. We explain the rise in the resistance in the following manner: the pairing momentum \(q_{min}\) connects points within the same Fermi surface as well as points on different surfaces. The outer Fermi surface has a negative Fermi velocity and is thus hole-like, while the inner one is electron-like. Figure 2: A symmetrized response \(\Pi(q)=\Pi_{+-}(q)+\Pi_{-+}(q)\) in the electron-hole channel between the two valleys. A) Two-dimensional color plot of the response at the hole density \(n_{h}=0.21\times 10^{12}cm^{-2}\). There is a noticeable minimum around \(q\approx p_{F+}-p_{F-}\). The real part of the response has a 6-fold symmetry. Unlike the response of the free electron gas, it has a strong variation for transferred momenta below \(2p_{F}\). B) Response \(\tau(q)\) between the valleys for different values of the hole density. The ratio \(\Pi(q)/\Pi(0)\) reaches its maximum at around \(n_{h}=0.3\times 10^{12}\)\(cm^{-2}\). On the inset: dependence of the position of the minimum in the response on the density (blue dots), the fit \(p_{F+}-p_{F-}\) (blue dashed) and \((p_{F+}/v_{+}+p_{F-}/v_{-})/(v_{+}^{-1}-v_{-}^{-1})\). In this calculation, the parameters used are \(m=1\), \(\lambda=-240\), \(\Delta=-0.8\). Fitting to the 6-band structure shows that \(m\) is of order \(5\times 10^{3}\) K.
When electrons and holes are paired, an exciton condensate can be formed. When electrons are paired in different valleys, a pseudo-magnetism (coherence) arises. As a result, we can think of the state as a mixture between a condensate of mobile excitons and a coherence of electrons in different valleys. The condensed part should, in principle, have zero conductivity; for a pure exciton condensate the conductivity vanishes. Neglecting the change in the drift velocity, we expect the relative change in the total conductivity to be proportional to \(-n_{ex}/n_{tot}\), where \(n_{ex}\) is the density of the particles in the condensate and \(n_{tot}\) is the total electron density. The ratio \(n_{ex}/n_{tot}\) should be entirely geometrical. To estimate it, we do the following: the number of exciton pairs should be proportional to the integral \[n_{x}\propto\sum_{i}\int_{p_{1}}d^{2}p_{1}\int_{p_{2}}d^{2}p_{2}\delta(|\bar{p }_{2}-\bar{p}_{1}-\bar{q}_{i}|)n_{i}(p_{2})n_{o}(p_{1})/3, \tag{8}\] where the sum is performed over the reciprocal lattice vectors, \(n_{i}(p)=\theta(p_{i}-p)\), \(n_{o}(p)=\theta(p_{o}-p)\theta(p-p_{i})\), \(\theta(x)\) is the Heaviside function, \(p_{i/o}\) are the absolute values of the Fermi momenta of the inner and outer Fermi surfaces, and \(\delta(x)\) is a delta function. The integral over the crescent-like area gives \(n_{x}\propto 4\pi p_{i}^{3/2}q^{1/2}\) for small \(q/p_{i}\). Similarly, the number of electrons in the system for a single spin flavor can be estimated as \(n\propto 2\pi(p_{o}^{2}-p_{i}^{2})\), with \(i/o\) corresponding to the inner and outer Fermi seas. As a result, the conductance in the correlated regime divided by the conductance of the normal phase gives \[\sigma_{x}/\sigma_{m}\propto 1-\frac{2p_{i}^{3/2}q^{1/2}}{p_{o}^{2}-p_{i}^{2}}. \tag{9}\] The latter formula in the range of interest takes values from 0.2 to 0.4. To push this interpretation further, we study the temperature dependence of the response (Fig. 3). Naively, the exciton part should scale as \(\log(\beta)\), while the magnetism should saturate to a constant value at temperatures below \(\epsilon_{F}\). We see logarithmic behavior of the response for \(q=q_{min}\) at temperatures down to two orders of magnitude below the Fermi energy. At temperatures lower than that, the response is constant. We conclude that at temperatures below the transition, but not very low, the response of the system effectively resembles that of an exciton condensate. However, as the temperature goes down, the fact that there is only a finite number of points with perfect nesting becomes noticeable, and the response of the system acquires the temperature behavior of a Stoner magnet. The reduction of the degeneracy of the Fermi surface seen in the quantum oscillations can be explained through the intervalley coherence. In our case, the order parameter is inhomogeneous but has a periodicity of around \(2\times 10^{2}a_{0}\approx 60\ \mathrm{nm}\). Given that the period is of the same order of magnitude as the magnetic length or larger, the corresponding matrix elements in the Landau-level basis will only be non-zero between the same Landau levels. Thus, the IVC-Q will be approximately indistinguishable from the intervalley coherence at zero momentum for sufficiently large magnetic fields in the quantum oscillation data.
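For orientation, the geometric estimate 9 is easy to evaluate; the short sketch below plugs in illustrative Fermi momenta (in arbitrary units) with the pairing momentum taken as \(q=p_{o}-p_{i}\), as suggested by the position of the response minimum. The numbers are examples chosen to land in the quoted range, not values extracted from the band structure.

```python
# Eq. (9): fraction of carriers left conducting after partial condensation,
# sigma_x/sigma_m ~ 1 - 2 p_i^{3/2} q^{1/2} / (p_o^2 - p_i^2).
for p_i, p_o in [(0.45, 1.0), (0.5, 1.0), (0.55, 1.0)]:
    q = p_o - p_i                      # pairing momentum ~ Fermi-momentum mismatch
    ratio = 1 - 2 * p_i**1.5 * q**0.5 / (p_o**2 - p_i**2)
    print(f"p_i={p_i}, p_o={p_o}: sigma_x/sigma_m ~ {ratio:.2f}")
# Prints ratios of roughly 0.44, 0.33, 0.22 -- spanning the quoted 0.2-0.4 window.
```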
To conclusively distinguish the phase with partial condensation of mobile excitons from a simple coherence between the two valleys, we suggest looking at the density variations through STM, AFM, or Raman spectroscopy. Unlike in the Larkin-Ovchinnikov state, the variations in the order parameter do show up in the density. To leading order in \(x\), the amplitude of the variations can be estimated as \(\delta n\approx 12x^{2}\nu/\epsilon_{q}\). The latter amounts to approximately \(1-10\%\) of the homogeneous component. Figure 3: Temperature dependence of the intervalley electron-hole response. Main plot: \(\tau\) for several temperatures ranging from \(\lg(\beta)=3.6\) to \(\lg(\beta)=4\). One can see that at the minimum (\(q=q_{min}\)) the value of the response is larger in absolute value than at \(q=0\) by a factor of 1.5. Inset: temperature dependence of the minimum in the response as a function of \(\lg(\beta)\). At intermediate temperatures the response grows nearly logarithmically; the response at \(q=0\) saturates at low (\(T<10^{-4}\epsilon_{F}\)) temperatures. In this calculation, the parameters used are \(m=1\), \(\lambda=-240\), \(\Delta=-0.8\), \(n_{h}=0.21\times 10^{12}cm^{-2}\). _Summary_ We showed that a new state with partial condensation of mobile excitons can explain the observed increase in the resistance and the quantum oscillation data. The theoretical possibility of such a state is hinted at by the peak in the electron-hole response between the two valleys when the annular Fermi surfaces are present. Because of the extremum in the response at non-zero transferred momentum, the Fermi sea can become unstable towards a transition to a state with intervalley coherence at finite momentum. We interpret the resulting phase of matter as a partial condensation of excitons distributed between two different Fermi surfaces (the inner electron-like and the outer hole-like) and a coherence established between quasiparticles on the same Fermi surface. An additional argument for this interpretation is obtained by studying the temperature dependence of the response. We demonstrate that the bare response does not saturate even at temperatures significantly lower than the Fermi energy, with the divergence being slower than \(\log(T)\). Partial condensation also explains the significant increase in the resistance: the part of the Fermi liquid that condenses into excitons does not contribute to the electrical current. The fraction of the electrons that condenses is of order \(2p_{i}^{3/2}q^{1/2}/(p_{o}^{2}-p_{i}^{2})\) for \(q/p_{i}\ll 1\) and \(T\to 0\). Further, we look at the long-range (\(\approx 30\) nm) density variations. We estimate the amplitude of the oscillations to be of order \(1-10\%\) of the homogeneous component and potentially observable through STM, AFM, or Raman spectroscopy. _Acknowledgement_ I.V.B. is grateful for helpful conversations with Anna M. Seiler and Noelia Fernandez, Haoxin Zhou as well as to a "lovely guy in a cowboy hat" who accepted this acknowledgement as an entrance fee to a swimming pool.
2302.00505
On Pisot Units and the Fundamental Domain of Galois Extensions of $\mathbb{Q}$
In this paper, we present two main results. Let $K$ be a number field that is Galois over $\mathbb{Q}$ with degree $r+2s$, where $r$ is the number of real embeddings and $s$ is the number of pairs of complex embeddings. The first result states that the number of facets of the reduction domain (and therefore the fundamental domain) of $K$ is no greater than $O\left(\left(\frac{1}{2}(r+s-1)^\delta(r+s)^{1+\frac{1}{2(r+s-1)}}\right)^{r+s-1}\right) \cdot\left(e^{1+\frac1{2e}}\right)^{r+s}(r+s)!$, where $\delta=1/2$ if $r+s \leq 11$ or $\delta=1$ otherwise. The second result states that there exists a linear time algorithm to reduce a totally positive unary form $axx^*$, such that the new totally positive element $a^\prime$ that is equivalent to $a$ has trace no greater than a constant multiplied by the integer minimum of the trace-form $\operatorname{Tr}(axx^*)$, where the constant is determined by the shortest Pisot unit in the number field. This may have applications in ring-based cryptography. Finally, we show that the Weil height of the shortest Pisot unit in the number field can be no greater than $\frac{1}{[K:\mathbb{Q}]}\left(\frac{\gamma}{2}(r+s-1)^{\delta-\frac{1}{2(r+s-1)}}R_K^{\frac{1}{r+s-1}}+(r+s-1)\epsilon\right)$, where $R_K$ denotes the regulator of $K$, $\gamma=1$ if $K$ is totally real or $2$ otherwise, and $\epsilon>0$ is some arbitrarily small constant.
Christian Porter, Alexandre Bali, Alar Leibak
2023-02-01T15:26:06Z
http://arxiv.org/abs/2302.00505v1
# On Pisot Units and the Fundamental Domain of Galois Extensions of \(\mathbb{Q}\) ###### Abstract In this paper, we present two main results. Let \(K\) be a number field that is Galois over \(\mathbb{Q}\) with degree \(r+2s\), where \(r\) is the number of real embeddings and \(s\) is the number of pairs of complex embeddings. The first result states that the number of facets of the reduction domain (and therefore the fundamental domain) of \(K\) is no greater than \(O\left(\left(\frac{1}{2}(r+s-1)^{\delta}(r+s)^{1+\frac{1}{2(r+s-1)}}\right)^{r+ s-1}\right)\cdot\left(e^{1+\frac{1}{2e}}\right)^{r+s}(r+s)!\), where \(\delta=1/2\) if \(r+s\leq 11\) or \(\delta=1\) otherwise. The second result states that there exists a linear time algorithm to reduce a totally positive unary form \(axx^{*}\), such that the new totally positive element \(a^{\prime}\) that is equivalent to \(a\) has trace no greater than a constant multiplied by the integer minimum of the trace-form \(\operatorname{Tr}(axx^{*})\), where the constant is determined by the shortest Pisot unit in the number field. This may have applications in ring-based cryptography. Finally, we show that the Weil height of the shortest Pisot unit in the number field can be no greater than \(\frac{1}{[K:\mathbb{Q}]}\left(\frac{\gamma}{2}(r+s-1)^{\delta-\frac{1}{2(r+s-1 )}}R_{K}^{\frac{1}{r+s-1}}+(r+s-1)\epsilon\right)\), where \(R_{K}\) denotes the regulator of \(K\), \(\gamma=1\) if \(K\) is totally real or \(2\) otherwise, and \(\epsilon>0\) is some arbitrarily small constant. ## 1 Introduction Let \(K\) be an algebraic number field of degree \(n=r+2s\) (where \(r\) is the number of real embeddings and \(s\) is the number of pairs of complex embeddings) over \(\mathbb{Q}\) with ring of integers \(\mathcal{O}_{K}\) and unit group \(\mathcal{O}_{K}^{*}\). We associate to \(K\) the canonical embeddings \(\sigma_{1},\ldots,\sigma_{r},\sigma_{r+1},\ldots,\sigma_{r+s},\ldots,\sigma_{ r+2s}\) into \(\mathbb{C}\), where \(\sigma_{r+s+k}(x)=\sigma_{r+k}(x)^{*}\) for all \(1\leq k\leq s\) and \(*\) denotes the complex conjugate. Define \(K_{\mathbb{R}}=K\otimes_{\mathbb{Q}}\mathbb{R}\). Note that \(K_{\mathbb{R}}=\mathbb{R}^{r}\times\mathbb{C}^{s}\). We define the canonical involution \(*:K_{\mathbb{R}}\to K_{\mathbb{R}}\) that acts as the identity on \(\mathbb{R}^{r}\) and acts as complex conjugation on \(\mathbb{C}^{s}\). For any \(\alpha=(\alpha_{1},\ldots,\alpha_{r+s}),\beta=(\beta_{1},\ldots,\beta_{r+s}) \in K_{\mathbb{R}}\), define \(\alpha\beta=(\alpha_{1}\beta_{1},\alpha_{2}\beta_{2},\ldots,\alpha_{r+s}\beta _{r+s})\). We say an element \(\alpha=(\alpha_{1},\ldots,\alpha_{r+s})\in K_{\mathbb{R}}\) is totally positive if every \(\alpha_{i}\in\mathbb{R}^{+}\). Throughout the paper, we will assume that any number field \(K\) that we consider is Galois over \(\mathbb{Q}\). In fact, the only lemma that requires this property is Lemma 3, but unfortunately this lemma is crucial to the proof of the main result of the paper, so we must restrict ourselves to such fields. Consider \[\operatorname{Tr}(axx^{*}),\,\,\,x,a\in K_{\mathbb{R}},\] where \(a\) is a totally positive element. This generates a real positive-definite quadratic form of dimension \(r+s\). We call \(axx^{*}\) a unary form. We will set \(\mathcal{O}_{K_{\mathbb{R}}}\) to be the set of elements \((\sigma_{1}(x),\ldots,\sigma_{r+s}(x))\), where \(x\in\mathcal{O}_{K}\), and \(\mathcal{O}_{K_{\mathbb{R}}}^{*}\) the set of elements \((\sigma_{1}(u),\ldots,\sigma_{r+s}(u))\) such that \(u\in\mathcal{O}_{K}^{*}\).
Then a totally positive element \(a\) is said to be reduced if it satisfies \[\mathrm{Tr}(a)\leq\mathrm{Tr}(avv^{*}), \tag{1}\] for all \(v\in\mathcal{O}_{K_{\mathbb{R}}}^{*}\). If \(v=(\sigma_{1}(u),\ldots,\sigma_{r+s}(u))\), we use the notation \(v^{-1}=(\sigma_{1}(u^{-1}),\ldots,\sigma_{r+s}(u^{-1}))\in\mathcal{O}_{K_{ \mathbb{R}}}^{*}\). We say that two totally positive elements \(a,a^{\prime}\) are equivalent if \(a^{\prime}=avv^{*}\) for some \(v\in\mathcal{O}_{K_{\mathbb{R}}}^{*}\). Note then that since \(a=a^{\prime}v^{-1}v^{-1}{}^{*}\), \(a\) can be identified with its equivalence class, where equivalence is determined by multiplying \(a\) by \(vv^{*}\) for \(v\in\mathcal{O}_{K_{\mathbb{R}}}^{*}\), and so the real quadratic forms \(\mathrm{Tr}(axx^{*}),\mathrm{Tr}(a^{\prime}xx^{*})\) are equivalent. The reduction domain of \(K_{\mathbb{R}}\), denoted \(\mathcal{F}_{K_{\mathbb{R}}}\), is the set of all reduced totally positive elements of \(K_{\mathbb{R}}\), and so clearly every totally positive element is equivalent to an element in \(\mathcal{F}_{K_{\mathbb{R}}}\). Note that the reduction domain is a fundamental domain for the set of totally positive elements of \(K\). The reduction domain is known to be the union of finitely many perfect cones ([2, Satz 4]). In [3], an upper bound on the number of perfect unary forms in any given totally real number field was determined. The facets of the cone \(\mathcal{F}_{K}\) are defined by the inequalities 1, and it is known that the number of facets of the reduction domain is finite, meaning that only finitely many inequalities need to be satisfied in order to determine whether or not a unary form is reduced. Let \(x\in\mathcal{O}_{K}\) be an algebraic integer of \(K\). We say that \(x\) is a Pisot-Vijayaraghavan number (shortened to a Pisot number) if \(x\) has absolute value greater than 1, while all of its Galois conjugates, except those corresponding to complex conjugation, have absolute value less than 1. We say that \(x\) is a Pisot unit if \(x\in\mathcal{O}_{K}^{*}\). By Dirichlet's unit theorem, we know that the rank of \(\mathcal{O}_{K}^{*}\) is \(r+s-1\). Suppose then that \(\mathcal{O}_{K}^{*}\) is multiplicatively generated by the elements \(u_{1},u_{2},\ldots,u_{r+s-1}\) and \(\zeta\) where \(\zeta\) is some root of unity in \(\mathcal{O}_{K}\). Consider the logarithmic embedding: \[\mathrm{Log}:K\rightarrow\mathbb{R}^{r+s}:\mathrm{Log}(x)= (\log(|\sigma_{1}(x)|),\log(|\sigma_{2}(x)|),\ldots,\log(|\sigma_{ r}(x)|),\] \[2\log(|\sigma_{r+1}(x)|),2\log(|\sigma_{r+2}(x)|),\ldots,2\log( |\sigma_{r+s}(x)|)).\] Then note that under the logarithmic embedding, \(\Lambda_{K}:=\mathrm{Log}(\mathcal{O}_{K}^{*})\) generates a lattice in the space \[V=\left\{(x_{1},\ldots,x_{r+s})\in\mathbb{R}^{r+s}:\sum_{i=1}^{r+s}x_{i}=0 \right\}. \tag{2}\] Let \(\|\mathbf{v}\|_{p}\) be the \(l_{p}\) norm of an element \(\mathbf{v}\in\mathbb{R}^{r+s}\). Throughout the paper, we will use the notation \[\rho^{p}(\Lambda)=\max_{x\in W}\min_{v\in\Lambda}\|x-v\|_{p}\] for any lattice \(\Lambda\), where \(W\) is the space in which \(\Lambda\) is full-rank. We will also make use of the notation \[\lambda_{i,K}(\Lambda)=\min\{c\in\mathbb{R}_{\geq 0}:\dim(\Lambda\cap cK)=i\},\] which is called the \(i\)th successive minimum of \(\Lambda\), where \(K\) is some \(0\)-symmetric convex body.
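A concrete toy instance may help fix the notation before the general results: for \(K=\mathbb{Q}(\sqrt{2})\) (\(r=2\), \(s=0\)) the embeddings send \(x+y\sqrt{2}\) to \((x+y\sqrt{2},\,x-y\sqrt{2})\), a totally positive \(a=(a_{1},a_{2})\) gives the form \(\operatorname{Tr}(axx^{*})=a_{1}(x+y\sqrt{2})^{2}+a_{2}(x-y\sqrt{2})^{2}\), and \(u=1+\sqrt{2}\) is a Pisot unit. The sketch below brute-forces the integer minimum of the form over a small box (the search radius is a heuristic, not a proven bound) and checks that \(\operatorname{Log}(u)\) lies in \(V\).

```python
import math

SQRT2 = math.sqrt(2.0)

def embed(x, y):
    """The two real embeddings of z = x + y*sqrt(2) in K = Q(sqrt(2))."""
    return (x + y * SQRT2, x - y * SQRT2)

def trace_form(a, x, y):
    """Tr(a z z*) for integral z and totally positive a = (a1, a2)."""
    s1, s2 = embed(x, y)
    return a[0] * s1 * s1 + a[1] * s2 * s2

def min_trace(a, radius=30):
    """Brute-force integer minimum of the trace form; heuristic box radius."""
    return min(trace_form(a, x, y)
               for x in range(-radius, radius + 1)
               for y in range(-radius, radius + 1)
               if (x, y) != (0, 0))

u = embed(1, 1)                                    # Pisot unit 1 + sqrt(2)
log_u = [math.log(abs(t)) for t in u]
print("Log(u) =", log_u, " sum =", sum(log_u))     # sum ~ 0, i.e. Log(u) in V

a = (5.0, 0.01)                                    # skewed totally positive element
v = (u[1], u[0])                                   # embedding of the conjugate unit
a_eq = (a[0] * v[0]**2, a[1] * v[1]**2)            # equivalent element a v v*
print("Tr(a)  =", a[0] + a[1], "  min of form =", min_trace(a))
print("Tr(a') =", a_eq[0] + a_eq[1], " (reduced representative of the class)")
```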
We will use the notation \(\lambda_{i,p}(\Lambda)\) to denote the \(i\)th successive minimum of \(\Lambda\) with respect to the convex body drawn out by the \(l_{p}\)-norm. The aim of this paper is to prove the following results. **Theorem 1**.: _Let \(N_{K}\) denote the number of facets of the reduction domain of \(K_{\mathbb{R}}\). Then_ \[N_{K}< 2(r+s-1)\] \[+2\left(\frac{1}{2}(r+s-1)^{\delta}(r+s)^{1-\frac{1}{2(r+s-1)}}+ \frac{r+s-1}{2}\frac{\log\left(\frac{r+s+1}{r+s-1}\right)}{R_{K}^{\frac{1}{r+s- 1}}}+\frac{\log\left(\frac{r+s-1}{2}\right)}{R_{K}^{\frac{1}{r+s-1}}}\right)^{r +s-1}\] \[\cdot(r+s)\sum_{k=0}^{\lfloor\frac{r+s}{2}\rfloor}(-1)^{k}\binom {r+s}{k}\left(\frac{r+s}{2}-k\right)^{r+s-1},\] _where_ \[\delta=\begin{cases}1/2,&1\leq r+s-1\leq 10,\\ 1,&10<r+s-1,\end{cases}\] _and \(R_{K}\) is the regulator of \(K\)._ Using Lemma 9, and the fact that \(R_{K}>0.2052\dots\) for all number fields \(K\) [11], this gives us \[N_{K}<O\left(\left(\frac{1}{2}(r+s-1)^{\delta}(r+s)^{1-\frac{1}{2(r+s-1)}} \right)^{r+s-1}\right)\cdot\left(e^{1+\frac{1}{2e}}\right)^{r+s}(r+s)!.\] Numerical evidence seems to suggest that the term \(\left(e^{1+\frac{1}{2e}}\right)^{r+s}\) can also be dropped in the expression above. The logarithmic Weil height of an algebraic number \(x\) in \(K\) is defined by \[h(x)=\frac{1}{[K:\mathbb{Q}]}\left(\sum_{i=1}^{r}\log^{+}|\sigma_{i}(x)|+2 \sum_{i=1}^{s}\log^{+}|\sigma_{r+i}(x)|\right),\] where \(\log^{+}(\alpha)=\max\{\log(\alpha),0\}\). We also prove the following interesting proposition, bounding the height of the Pisot unit with the smallest Weil height. **Proposition 1**.: _For all \(\epsilon>0\), there exists a Pisot unit with Weil height \(h(u)\) satisfying_ \[h(u)\leq\frac{1}{[K:\mathbb{Q}]}\left(\frac{\gamma}{2}(r+s-1)^{\delta-\frac{1 }{2(r+s-1)}}R_{K}^{\frac{1}{r+s-1}}+(r+s-1)\epsilon\right),\] _where \(\delta\) is defined as before, and \(\gamma=1\) if \(K\) is totally real, or \(\gamma=2\) if \(K\) is totally complex._ Using ideas outlined in this work, we also provide an algorithm to reduce unary forms, which could have applications in ring-based cryptography (see e.g. [4], [5], [6]). **Theorem 2**.: _For any totally positive element \(a\in K_{\mathbb{R}}\), define_ \[\mu(a)\triangleq\min_{x\in\mathcal{O}_{K_{\mathbb{R}}}\setminus\{0\}}\text{Tr}( axx^{*}).\] _Then given a totally positive element \(a\), a Pisot unit \(u\) and some parameter \(\min_{j\neq 1}|\sigma_{j}(u)|^{2}<\delta<1\), there exists an algorithm that computes an equivalent element \(a^{\prime}\) such that_ \[\text{Tr}(a^{\prime})\leq\max\left\{\frac{t_{K}(u,\delta)^{2}}{ \min_{x\in\mathcal{S}}\text{Tr}(xx^{*})},1\right\}\mu(a), \tag{3}\] _where_ \[t_{K}(u,\delta)=\sqrt{1+\frac{|u|^{2}-\delta}{\delta-\max_{j\neq 1}| \sigma_{j}(u)|^{2}}},\] _and \(\mathcal{S}\) denotes the elements of \(\mathcal{O}_{K_{\mathbb{R}}}\) that do not correspond to roots of unity or zero in \(\mathcal{O}_{K}\). Also, if_ \[\mu(a)=\text{Tr}(axx^{*})\] _for some \(x\in\mathcal{O}_{K_{\mathbb{R}}}\), then_ \[\text{Tr}(xx^{*})\leq t_{K}(u,\delta)^{2}.\] _Moreover, the algorithm takes at most \(\mathcal{O}(\log(X)((r+s+1)\log(X)+(r+s)\log(\max_{i}|\sigma_{i}(u)|)))\) bit operations, where \(X=\max_{i}a_{i}\) for \(a=(a_{1},\ldots,a_{r+s})\)._ ## 2 Reduction of Unary Forms Via Pisot Units, and Some Useful Lemmas **Proposition 2**.: _If \(K\) is not either \(\mathbb{Q}\) or imaginary quadratic (i.e. 
has a nontrivial unit group), then for any \(\varepsilon>0\), there exists a Pisot unit \(u\) such that_ \[e^{(r+s-2)\rho^{\infty}(\Lambda_{K})+(r+s-1)\varepsilon}\leq |u|\leq e^{(r+s)\rho^{\infty}(\Lambda_{K})+(r+s-1)\varepsilon},\] \[e^{-(2\rho^{\infty}(\Lambda_{K})+\varepsilon)}\leq |\sigma_{i}(u)|\leq e^{-\varepsilon},\] _for all \(i\neq 1\)._ Proof.: If \(u\) is a Pisot unit, then clearly \(\log(|u|)>0\) and \(\log(|\sigma(u)|)<0\) for any Galois conjugate \(\sigma\) that does not correspond to complex conjugation. Let \(x=(x_{1},\ldots,x_{r+s})\) be an element of \(V\). By definition, \(\sum_{i=1}^{r+s}x_{i}=0\), and \[\min_{w\in\Lambda_{K}}\|x-w\|_{\infty}=\|x-v\|_{\infty}=\max_{1 \leq i\leq r+s}|x_{i}-v_{i}|\leq\rho^{\infty}(\Lambda_{K}),\] for some appropriate \(v=(v_{1},\ldots,v_{r+s})\in\Lambda_{K}\). Set \(x_{1}=(r+s-1)\rho^{\infty}(\Lambda_{K})+(r+s-1)\varepsilon,x_{2}=x_{3}=\cdots=x_{r +s}=-\rho^{\infty}(\Lambda_{K})-\varepsilon\) for some \(\varepsilon>0\). Then we must have \[|(r+s-1)\rho^{\infty}(\Lambda_{K})-v_{1}+(r+s-1)\varepsilon|\leq \rho^{\infty}(\Lambda_{K}),\] \[|v_{2}+\rho^{\infty}(\Lambda_{K})+\varepsilon|\leq\rho^{\infty}( \Lambda_{K}),\] \[|v_{3}+\rho^{\infty}(\Lambda_{K})+\varepsilon|\leq\rho^{\infty}( \Lambda_{K}),\] \[\vdots\] \[|v_{r+s}+\rho^{\infty}(\Lambda_{K})+\varepsilon|\leq\rho^{\infty} (\Lambda_{K}).\] Clearly \(\rho^{\infty}(\Lambda_{K})\) is nonzero if \(K\) is not either \(\mathbb{Q}\) or imaginary quadratic, so these inequalities yield the following inequalities: \[0<(r+s-2)\rho^{\infty}(\Lambda_{K})+(r+s-1)\varepsilon\leq v_{1}\leq(r+s)\rho^{\infty}(\Lambda_{K})+(r+s-1)\varepsilon,\] \[-2\rho^{\infty}(\Lambda_{K})-\varepsilon\leq v_{2}\leq-\varepsilon<0,\] \[-2\rho^{\infty}(\Lambda_{K})-\varepsilon\leq v_{3}\leq-\varepsilon<0,\] \[\vdots\] \[-2\rho^{\infty}(\Lambda_{K})-\varepsilon\leq v_{r+s}\leq-\varepsilon<0,\] which proves the proposition. **Lemma 3**.: _Suppose that \(a=(a_{1},a_{2},\ldots,a_{r+s})\in K_{\mathbb{R}}\) is a totally positive element. Let \(u\in\mathcal{O}_{K}^{*}\) be a Pisot unit and suppose that_ \[\text{Tr}(av_{i}v_{i}^{*})\geq\text{Tr}(a),\] _for all \(1\leq i\leq r+2s\) where \(v_{i}\) is the element of \(K_{\mathbb{R}}\) obtained by embedding \(\sigma_{i}(u)\) into \(K_{\mathbb{R}}\). Let_ \[t_{K}(u)=\sqrt{1+\frac{|u|^{2}-1}{1-\max_{2\leq j\leq r+s}|\sigma_{j}(u)|^{2}}}.\] _Then for any \(x\in K_{\mathbb{R}}\) satisfying \(\text{Tr}(xx^{*})\geq t_{K}(u)^{2}\), \(\text{Tr}(axx^{*})\geq\text{Tr}(a)\)._ Proof.: By assumption, for each \(1\leq i\leq r+2s\), \[\text{Tr}(av_{i}v_{i}^{*})=\left(\sum_{j=1}^{r}+2\sum_{j=r+1}^{r+s }\right)a_{j}|\sigma_{j}(\sigma_{i}(u))|^{2}\geq\text{Tr}(a)=\left(\sum_{j=1}^ {r}+2\sum_{j=r+1}^{r+s}\right)a_{j}\] \[\iff\left(\sum_{j=1}^{r}+2\sum_{j=r+1}^{r+s}\right)a_{j}(|\sigma_ {j}(\sigma_{i}(u))|^{2}-1)\geq 0.\] Suppose that \(|\sigma_{k}(\sigma_{i}(u))|=|u|\) for some value of \(k\). 
Then \[(|u|^{2}-1)a_{k}\geq\left(\sum_{j\neq k=1}^{r}+2\sum_{j\neq k=r+1 }^{r+s}\right)a_{j}(1-|\sigma_{j}(\sigma_{i}(u))|^{2})\geq(1-\max_{j\neq 1}|\sigma_{j}(u)|^{2})\left(\sum_{j\neq k =1}^{r}+2\sum_{j\neq k=r+1}^{r+s}\right)a_{j}\] \[\iff\text{Tr}(a)\leq t_{K}(u)^{2}a_{k}.\] By cycling through all values \(1\leq i\leq r+s\), we attain the above inequality for all \(1\leq k\leq r+s\), and so if \(x=(x_{1},\ldots,x_{r+s})\in K_{\mathbb{R}}\) and \(\mathrm{Tr}(xx^{*})\geq t_{K}(u)^{2}\), \[\mathrm{Tr}(axx^{*})=\sum_{j=1}^{r+s}a_{j}|x_{j}|^{2}\geq t_{K}(u)^{-2} \mathrm{Tr}(a)\sum_{j=1}^{r+s}|x_{j}|^{2}=t_{K}(u)^{-2}\mathrm{Tr}(a)\mathrm{ Tr}(xx^{*})\geq\mathrm{Tr}(a),\] as required. **Lemma 4**.: _Let \(\rho^{\infty}(\Lambda)\) denote the covering radius in the \(l_{\infty}\) norm of a lattice \(\Lambda\) of rank \(n\) with volume \(\text{Vol}(\Lambda)\), and also assume that \(\Lambda\) is well-rounded (that is, the successive minima are all equal in value). Then_ \[\rho^{\infty}(\Lambda)\leq\begin{cases}\frac{\sqrt{n}}{2}\text{Vol}(\Lambda )^{\frac{1}{n}},&1\leq n\leq 10,\\ \frac{n}{2}\text{Vol}(\Lambda)^{\frac{1}{n}},&10<n.\end{cases} \tag{4}\] Proof.: To prove the first inequality, note first that \(\rho^{\infty}(\Lambda)\leq\rho^{2}(\Lambda)\). It was shown in [7] that \(\rho^{2}(\Lambda)\leq\frac{\sqrt{n}}{2}\text{Vol}(\Lambda)^{\frac{1}{n}}\) for all well-rounded rank \(n\) lattices with \(n\leq 10\). For the second case, note first that \(\lambda_{i,\infty}(\Lambda)\leq\lambda_{i,2}(\Lambda)\leq\sqrt{n}\lambda_{i, \infty}(\Lambda)\). It is well-known that \[\rho^{2}(\Lambda)\leq\frac{\sqrt{n}}{2}\lambda_{n,2}(\Lambda),\] and since \(\Lambda\) is assumed to be well-rounded, we get \[\rho^{\infty}(\Lambda)\leq\rho^{2}(\Lambda)\leq\frac{\sqrt{n}}{2}\lambda_{n, 2}(\Lambda)\leq\frac{n}{2}\lambda_{n,\infty}(\Lambda)=\frac{n}{2}\lambda_{1, \infty}.\] It is also well-known that \(\lambda_{1,\infty}\leq\text{Vol}(\Lambda)^{\frac{1}{n}}\) for any lattice \(\Lambda\), and so the second inequality holds. **Lemma 5** ([8]).: _Let \(\mathcal{C}(R)\) denote the \(n\)-dimensional hypercube of side-length \(R\), and let \(V\) denote the \((n-1)\)-dimensional hyperplane as in 2. Then_ \[\text{Vol}(\mathcal{C}(R)\cap V)=\frac{R^{n-1}\sqrt{n}}{(n-1)!}\sum_{k=0}^{ \lfloor\frac{n}{2}\rfloor}(-1)^{k}\binom{n}{k}\left(\frac{n}{2}-k\right)^{n-1}.\] **Lemma 6** ([9]).: _Let \(K\) be a convex \(0\)-symmetric body of rank \(n\). Then_ \[|K\cap\mathbb{Z}^{n}|\leq n!\text{Vol}(K)+n.\] ## 3 Determining an Upper Bound on the Number of Facets of \(\mathcal{F}_{K}\) Proof of Theorem 1.: By Lemma 3, any totally positive element \(a=(a_{1},\ldots,a_{r+s})\in\mathcal{F}_{K}\) has \[\mathrm{Tr}(axx^{*})\geq\mathrm{Tr}(a),\] for all \(x\in K_{\mathbb{R}}\) with \(\operatorname{Tr}(xx^{*})\geq t_{K}^{2}\), where \(t_{K}=\min_{u\in\mathcal{P}}t_{K}(u)\) and \(\mathcal{P}\) is the set of all Pisot units of \(K\). Therefore, if \(u\in\mathcal{O}_{K_{\mathbb{R}}}^{\star}\) determines a facet of the reduction domain it must satisfy \[\operatorname{Tr}(uu^{*})\leq t_{K}^{2}\iff\log(\operatorname{Tr}(uu^{*})) \leq 2\log(t_{K}).\] Let \(\epsilon(u)=\max_{i}\{|\sigma_{i}(u)|,1/|\sigma_{i}(u)|:1\leq i\leq r+s\}\). Then exactly half of the (nontrivial) unit group satisfy \(\epsilon(u)=\max_{i}|\sigma_{i}(u)|\) and exactly half of them satisfy \(\epsilon(u)=\max_{i}|\sigma_{i}(u)|^{-1}\), since if \(\epsilon(u)=\max_{i}|\sigma_{i}(u)|\) then \(\epsilon(1/u)=\max_{i}|\sigma_{i}(u)|^{-1}\). 
We consider only units that satisfy \(\epsilon(u)=\max_{i}|\sigma_{i}(u)|\), so \[2\log(t_{K})\geq\log(\operatorname{Tr}(uu^{*}))\geq 2\|\operatorname{Log}(u) \|_{\infty},\] and hence the only units satisfying \(\epsilon(u)=\max_{i}|\sigma_{i}(u)|\) that can possibly constitute facets of the reduction domain must satisfy \(\|\operatorname{Log}(u)\|_{\infty}\leq\log(t_{K})\). Hence, we want to determine the number of lattice points of \(\Lambda_{K}\) contained within the convex body \[C=\left\{x\in\mathbb{R}^{r+s}:\|x\|_{\infty}\leq\log(t_{K})\right\}. \tag{5}\] Now, since \(\Lambda_{K}\) is of full-rank in the hyperplane \(V\) as described in 2, we need to consider the 0-symmetric convex body \(C\cap V\). By Lemma 5, this shape has volume equal to \[\operatorname{Vol}(C\cap V)=\frac{\log(t_{K})^{r+s-1}\sqrt{r+s}}{(r+s-1)!} \sum_{k=0}^{\lfloor\frac{r+s}{2}\rfloor}(-1)^{k}\binom{r+s}{k}\left(\frac{r+s} {2}-k\right)^{r+s-1}.\] We apply the transform that takes \(\Lambda_{K}\) to the set \(\mathbb{Z}^{r+s-1}\), rotated in \(\mathbb{R}^{r+s}\) so that it sits within the hyperplane \(V\). Then applying a similar transform to the convex body \(C\cap V\) gives us the new convex body \(K\) which has volume \[\operatorname{Vol}(K)=\frac{\log(t_{K})^{r+s-1}(r+s)}{R_{K}(r+s-1)!}\sum_{k=0 }^{\lfloor\frac{r+s}{2}\rfloor}(-1)^{k}\binom{r+s}{k}\left(\frac{r+s}{2}-k \right)^{r+s-1},\] where \(R_{K}\) is the regulator of the field \(K\), using the fact that \(\operatorname{Vol}(\Lambda_{K})=R_{K}/\sqrt{r+s}\) (see [10]). Then by Lemma 6, the number of integer lattice points inside \(K\) is upper bounded by \[r+s-1+\frac{\log(t_{K})^{r+s-1}(r+s)}{R_{K}}\sum_{k=0}^{\lfloor\frac{r+s}{2} \rfloor}(-1)^{k}\binom{r+s}{k}\left(\frac{r+s}{2}-k\right)^{r+s-1}.\] It remains to prove a bound on \(\log(t_{K})\). Clearly, since \(\max_{j\neq 1}|\sigma_{j}(u)|^{2}<1\) for any Pisot unit \(u\), \[1<\frac{1}{1-\max_{j\neq 1}|\sigma_{j}(u)|^{2}},\] so \[t_{K}=\min_{u\in\mathcal{P}}\sqrt{1+\frac{|u|^{2}-1}{1-\max_{j\neq 1}|\sigma_ {j}(u)|^{2}}}<\min_{u\in\mathcal{P}}\sqrt{\frac{|u|^{2}}{1-\max_{j\neq 1}| \sigma_{j}(u)|^{2}}},\] where \(\mathcal{P}\) denotes the set of Pisot units of \(K\). Hence by Proposition 2, we must have \[\log(t_{K})<(r+s)\rho^{\infty}(\Lambda_{K})+(r+s-1)\epsilon-\log(1-e^{-2\epsilon}),\] for any \(\epsilon>0\). 
The right-hand side of the above inequality attains its minimum at \(\epsilon=\frac{1}{2}(\log(r+s+1)-\log(r+s-1))\), for which we get \[\log(t_{K})<(r+s)\rho^{\infty}(\Lambda_{K})+\frac{r+s-1}{2}\log\left(\frac{r+s+1} {r+s-1}\right)+\log\left(\frac{r+s-1}{2}\right).\] Finally, by Lemma 4 and the fact that the log-unit lattice is well-rounded with respect to the infinity norm (since \(K\) is Galois, if say \(\operatorname{Log}(u)\) is the minimum vector, then \(\operatorname{Log}(\sigma_{i}(u))\) has identical length with respect to the infinity norm, and there are \(r+s-1\) linearly independent vectors of this form), \[\rho^{\infty}(\Lambda_{K})\leq\begin{cases}\frac{\sqrt{r+s-1}}{2(r+s)^{\frac{1}{2(r+s-1)}}}R_{K}^{\frac{1}{r+s-1}},&1\leq r+s-1\leq 10,\\ \frac{r+s-1}{2(r+s)^{\frac{1}{2(r+s-1)}}}R_{K}^{\frac{1}{r+s-1}},&10<r+s-1.\end{cases}\] Then since we have counted exactly half of the required integer lattice points, we get \[N_{K}< 2(r+s-1)\] \[+2\left(\frac{1}{2}(r+s-1)^{\delta}(r+s)^{1-\frac{1}{2(r+s-1)}}+ \frac{r+s-1}{2}\frac{\log\left(\frac{r+s+1}{r+s-1}\right)}{R_{K}^{\frac{1}{r+ s-1}}}+\frac{\log\left(\frac{r+s-1}{2}\right)}{R_{K}^{\frac{1}{r+s-1}}}\right)^{r+s-1}\] \[\cdot(r+s)\sum_{k=0}^{\lfloor\frac{r+s}{2}\rfloor}(-1)^{k}\binom {r+s}{k}\left(\frac{r+s}{2}-k\right)^{r+s-1},\] where \[\delta=\begin{cases}1/2,&1\leq r+s-1\leq 10,\\ 1,&10<r+s-1.\end{cases}\] ### The Special Case of \([K:\mathbb{Q}]=3\) The case where \([K:\mathbb{Q}]=3\) can be treated separately, as in this case, every element of the unit group is either \(\pm 1\), a Pisot unit, the inverse of a Pisot unit, the conjugate of a Pisot unit or the inverse conjugate of a Pisot unit. We begin by proving the following useful lemma. **Lemma 7**.: _Suppose that \(\mathbf{b}_{1}=(\alpha,\beta,-\alpha-\beta)\) and \(\mathbf{b}_{2}=(\gamma,\delta,-\gamma-\delta)\) for some \(\alpha,\beta,\gamma,\delta\in\mathbb{R}\) satisfying \(|\alpha|,|\beta|,|\gamma|,|\delta|,|\alpha+\beta|,|\gamma+\delta|>0\), and assume that \(\mathbf{b}_{1},\mathbf{b}_{2}\) are linearly independent over \(\mathbb{R}\). Suppose that \(\mathbf{b}_{1},\mathbf{b}_{2}\) satisfy_ \[\|\mathbf{b}_{1}\|_{\infty}\leq\|\mathbf{b}_{2}\|_{\infty}\leq\|\mathbf{b}_{1 }\pm\mathbf{b}_{2}\|_{\infty}. \tag{6}\] _Let \(\mathbf{v}=x\mathbf{b}_{1}+y\mathbf{b}_{2}\), for \(x,y\in\mathbb{Z}\). Then unless \((|x|,|y|)\) is in the following set:_ \[S=\{(0,0),(1,0),(0,1),(1,1),(2,1),(1,2)\}, \tag{7}\] \(\|\mathbf{v}\|_{\infty}\geq 2\lambda^{\infty}\)_._ Proof.: See appendix. Now, let \(K\) be a number field of degree \(3\) over \(\mathbb{Q}\). Suppose that the vectors \(\mathbf{b}_{1},\mathbf{b}_{2}\) generate \(\operatorname{Log}(\mathcal{O}_{K}^{*})\), and suppose that \(\mathbf{b}_{1},\mathbf{b}_{2}\) satisfy \(6\) without loss of generality. Let \(u\) denote the unit that corresponds to the shortest non-zero element of \(\operatorname{Log}(\mathcal{O}_{K}^{*})\), under the logarithmic embedding. We may assume that \(u\) is a Pisot unit, as otherwise \(u\) is either the inverse or a conjugate (or both) of a Pisot unit, which does not affect the length of the element under the logarithmic embedding. Note then that \[t_{K}(u)=\sqrt{1+\frac{|u|^{2}-1}{1-\max_{j\neq 1}|\sigma_{j}(u)|^{2}}}<\sqrt{1+ \frac{|u|^{2}-1}{\prod_{j\neq 1}(1-|\sigma_{j}(u)|^{2})}}.\] Since \(K\) is Galois, \(K\) must be totally real. 
Clearly \(1-u^{2}\) is an algebraic integer, so \(|\mathrm{Nm}_{K/\mathbb{Q}}(1-u^{2})|\geq 1\), which gives \[|\mathrm{Nm}_{K/\mathbb{Q}}(1-u^{2})|=(u^{2}-1)\prod_{j\neq 1}(1-\sigma_{j}(u )^{2})\geq 1\iff\prod_{j\neq 1}(1-\sigma_{j}(u)^{2})\geq(u^{2}-1)^{-1},\] and so \[t_{K}(u)<\sqrt{1+(u^{2}-1)^{2}}.\] Assume without loss of generality that \[\mathrm{Tr}(au^{2})\geq\mathrm{Tr}(a)\] (this may be done without loss of generality, as otherwise we may find an equivalent totally positive element \(a^{\prime}\) such that this holds). Given that \(\|\operatorname{Log}(u)\|_{\infty}=\lambda_{1}^{\infty}\) by construction, by an argument similar to the one in the previous section that led us to construct the convex body in \(5\), we are looking for integer solutions to the inequality \[\|x\mathbf{b}_{1}+y\mathbf{b}_{2}\|_{\infty}\leq\frac{1}{2}\log(1+(u^{2}-1)^{2})=\frac{1}{2}\log(1+(\exp(2\lambda^{\infty})-1)^{2}),\] which gives \[\frac{\|x\mathbf{b}_{1}+y\mathbf{b}_{2}\|_{\infty}}{\lambda^{\infty}}\leq \frac{1}{2}\log((1+(\exp(2\lambda^{\infty})-1)^{2})^{1/\lambda^{\infty}}).\] The right hand side of the inequality tends to \(2\) as \(\lambda^{\infty}\to\infty\), and so the solutions \((x,y)\) to the above equation have absolute values that are limited to those in the set \(S\) in \(7\). ## 4 A Linear Complexity Reduction Algorithm Whilst usually we say that a form \(axx^{*}\) is reduced if \(a\in\mathcal{F}_{K_{\mathbb{R}}}\), the notion of reduction can also be defined more broadly. For example, types of reduction of (rational) quadratic forms include Minkowski [12], Korkin-Zolotarev [13] and Lenstra–Lenstra–Lovász (LLL) [14]. We present a very simple algorithm that, given some totally positive element \(a\in K_{\mathbb{R}}\), finds an equivalent totally positive element \(a^{\prime}\) with "desirable" properties. **Proposition 3**.: _With inputs \(\delta,a\) and \(u\), assuming that \(\delta\) is strictly less than 1, Algorithm 1 performs at most \(\mathcal{O}(\log(X)((r+s+1)\log(X)+(r+s)\log(\max_{i}|\sigma_{i}(u)|)))\) bit operations, where \(X=\max_{i}a_{i}\) for \(a=(a_{1},\ldots,a_{r+s})\)._ Proof.: Clearly each full round of the algorithm either results in the termination of the algorithm, or we find some equivalent \(a^{\prime}\) such that \(\text{Tr}(a^{\prime})<\delta\text{Tr}(a)\), and so since \(\delta<1\) the algorithm can only perform \(\mathcal{O}(\log(X))\) rounds. In the worst case, a full round would require us to compute the value of \(\text{Tr}(av_{j}v_{j}^{*})\)\(r+s\) times. The values of \(a_{i}\) are bounded above by \(X\) and the values of \(|\sigma_{j}(u)|\) are bounded above by \(\max_{j}|\sigma_{j}(u)|\), so a round can have a maximum of \(\mathcal{O}((r+s)\log(\max_{i}|\sigma_{i}(u)|)+(r+s+1)\log(X))\) bit computations. **Theorem 8**.: _Suppose that Algorithm 1 takes as input a totally positive element \(a=(a_{1},a_{2},\ldots,a_{r+s})\in K_{\mathbb{R}}\), a Pisot unit \(u\), and some parameter \(0<\delta\leq 1\), and outputs some \(a^{\prime}=(a^{\prime}_{1},a^{\prime}_{2},\ldots,a^{\prime}_{r+s})\) equivalent to \(a\). 
Denote_ \[\mu(a)\triangleq\min_{x\in\mathcal{O}_{K_{\mathbb{R}}}\setminus\{0\}}\text{ Tr}(axx^{*}).\] _Then_ \[\text{Tr}(a^{\prime})\leq\max\left\{\frac{t_{K}(u,\delta)^{2}}{ \min_{x\in\mathcal{S}}\text{Tr}(xx^{*})},1\right\}\mu(a), \tag{8}\] _where_ \[t_{K}(u,\delta)=\sqrt{1+\frac{|u|^{2}-\delta}{\delta-\max_{j\neq 1}| \sigma_{j}(u)|^{2}}},\] _and \(\mathcal{S}\) denotes the elements of \(\mathcal{O}_{K_{\mathbb{R}}}\) that do not correspond to roots of unity or zero in \(\mathcal{O}_{K}\). Moreover, if_ \[\mu(a)=\text{Tr}(axx^{*})\] _for some \(x\in\mathcal{O}_{K_{\mathbb{R}}}\), then_ \[\text{Tr}(xx^{*})\leq t_{K}(u,\delta)^{2}. \tag{9}\] Proof.: First, by an argument similar to that in Lemma 3, each \(a^{\prime}_{i}\) for \(1\leq i\leq r+s\) must satisfy \[a^{\prime}_{i}\geq\mathrm{Tr}(a^{\prime})t_{K}(u,\delta)^{-2}.\] Suppose that \(x=(x_{1},\ldots,x_{r+s})\in\mathcal{O}_{K_{\mathbb{R}}}\) is the element that satisfies \(\mathrm{Tr}(a^{\prime}xx^{*})=\mu(a)\). Clearly, inequality 8 holds if \(x\) corresponds to a root of unity in \(\mathcal{O}_{K}\), so we assume that \(x\in\mathcal{S}\). Then \[\mu(a)=\mathrm{Tr}(a^{\prime}xx^{*})=\left(\sum_{j=1}^{r}+2\sum_{j=r+1}^{r+s} \right)a^{\prime}_{j}x_{j}x^{*}_{j}\geq t_{K}(u,\delta)^{-2}\mathrm{Tr}(a^{ \prime})\left(\sum_{j=1}^{r}+2\sum_{j=r+1}^{r+s}\right)x_{j}x^{*}_{j}.\] Inequalities 8 and 9 follow.
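Algorithm 1 itself is not reproduced in this excerpt; the sketch below is a hypothetical reconstruction of the loop that Proposition 3 and Theorem 8 describe — repeatedly test the candidate moves \(a\mapsto av_{i}v_{i}^{*}\) against the threshold \(\delta\,\mathrm{Tr}(a)\) and apply the best one — worked over \(K=\mathbb{Q}(\sqrt{2})\) for concreteness. The stopping rule, the default \(\delta\), and the choice of unit are assumptions consistent with the statements, not the authors' code.

```python
import math

SQRT2 = math.sqrt(2.0)
# Squared embeddings |sigma_j(sigma_i(u))|^2 of the Pisot unit u = 1 + sqrt(2)
# and of its conjugate; multiplying a componentwise by a row realizes a -> a v v*.
MOVES = [
    ((1 + SQRT2)**2, (1 - SQRT2)**2),   # v = embedding of u
    ((1 - SQRT2)**2, (1 + SQRT2)**2),   # v = embedding of the conjugate sigma_2(u)
]

def reduce_unary(a, delta=0.9):
    """Greedy reduction of a totally positive a = (a_1, ..., a_{r+s}):
    while some move shrinks the trace below delta * Tr(a), take the best one."""
    a = list(a)
    while True:
        tr = sum(a)
        candidates = [[ai * vi for ai, vi in zip(a, mv)] for mv in MOVES]
        best = min(candidates, key=sum)
        if sum(best) < delta * tr:
            a = best
        else:
            return a   # no move improves enough; a is our reduced representative

a = (1000.0, 0.001)            # badly skewed totally positive element
a_red = reduce_unary(a)
print("Tr(a)  =", sum(a))
print("Tr(a') =", sum(a_red))  # balanced representative of the same class
```

Termination here is geometric: each accepted move shrinks the trace by a factor below `delta`, while the product \(a_{1}a_{2}\) is invariant (the unit has norm \(\pm 1\)), so \(\mathrm{Tr}(a)\geq 2\sqrt{a_{1}a_{2}}\) bounds the loop from below — mirroring the \(\mathcal{O}(\log X)\) round count of Proposition 3.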
2308.15635
Parameterized and Approximation Algorithms for the Maximum Bimodal Subgraph Problem
A vertex of a plane digraph is bimodal if all its incoming edges (and hence all its outgoing edges) are consecutive in the cyclic order around it. A plane digraph is bimodal if all its vertices are bimodal. Bimodality is at the heart of many types of graph layouts, such as upward drawings, level-planar drawings, and L-drawings. If the graph is not bimodal, the Maximum Bimodal Subgraph (MBS) problem asks for an embedding-preserving bimodal subgraph with the maximum number of edges. We initiate the study of the MBS problem from the parameterized complexity perspective with two main results: (i) we describe an FPT algorithm parameterized by the branchwidth (and hence by the treewidth) of the graph; (ii) we establish that MBS parameterized by the number of non-bimodal vertices admits a polynomial kernel. As the byproduct of these results, we obtain a subexponential FPT algorithm and an efficient polynomial-time approximation scheme for MBS.
Walter Didimo, Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Stephen Kobourov, Marie Diana Sieper
2023-08-29T21:01:31Z
http://arxiv.org/abs/2308.15635v1
# Parameterized and Approximation Algorithms for the Maximum Bimodal Subgraph Problem+ ###### Abstract A vertex of a plane digraph is _bimodal_ if all its incoming edges (and hence all its outgoing edges) are consecutive in the cyclic order around it. A plane digraph is bimodal if all its vertices are bimodal. Bimodality is at the heart of many types of graph layouts, such as upward drawings, level-planar drawings, and L-drawings. If the graph is not bimodal, the _Maximum Bimodal Subgraph (MBS)_ problem asks for an embedding-preserving bimodal subgraph with the maximum number of edges. We initiate the study of the MBS problem from the parameterized complexity perspective with two main results: (i) we describe an FPT algorithm parameterized by the branchwidth (and hence by the treewidth) of the graph; (ii) we establish that MBS parameterized by the number of non-bimodal vertices admits a polynomial kernel. As the byproduct of these results, we obtain a subexponential FPT algorithm and an efficient polynomial-time approximation scheme for MBS. Keywords: bimodal graphs, maximum bimodal subgraph, parameterized complexity, FPT algorithms, polynomial kernel, approximation scheme ## 1 Introduction Let \(G\) be a plane digraph, that is, a planar directed graph with a given planar embedding. A vertex \(v\) of \(G\) is _bimodal_ if all its incoming edges (and hence all its outgoing edges) are consecutive in the cyclic order around \(v\). In other words, \(v\) is bimodal if the circular list of edges incident at \(v\) can be split into at most two linear lists, where all edges in the same list are either all incoming to or all outgoing from \(v\). Graph \(G\) is _bimodal_ if all its vertices are bimodal. Bimodality is a key property at the heart of many graph drawing styles. In particular, it is a necessary condition for the existence of _level-planar_ and, more generally, _upward planar_ drawings, where the edges are represented as curves monotonically increasing in the upward direction according to their orientations [12, 13, 14, 23]; see Fig. 1(a). Bimodality is also a sufficient condition for _quasi-upward planar_ drawings, in which edges are allowed to violate the upward monotonicity a finite number of times at points called _bends_ [5, 6, 7]; see Fig. 1(b). It has been shown that bimodality is also a sufficient condition for the existence of _planar L-drawings_ of digraphs, in which distinct L-shaped edges may overlap but not cross [1, 2, 3]; see Fig. 1(c). A generalization of bimodality is \(k\)-modality. Given a positive even integer \(k\), a plane digraph is _\(k\)-modal_ if the edges at each vertex can be grouped into at most \(k\) sets of consecutive edges with the same orientation [26]. In particular, it is known that \(4\)-modality is necessary for planar L-drawings [10]. While testing if a digraph \(G\) admits a bimodal planar embedding can be done in linear time [5], a natural problem that arises when \(G\) does not have such an embedding is to extract from \(G\) a subgraph of maximum size (i.e., with the maximum number of edges) that fulfills this property. This problem is NP-hard, even if \(G\) has a given planar embedding and we look for an embedding-preserving maximum bimodal subgraph [8]. We address exactly this fixed-embedding version of the problem, and call it the _Maximum Bimodal Subgraph_ (MBS) problem. 
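Bimodality of a single vertex is a purely local condition on the cyclic sequence of edge directions around it, so it can be tested in time linear in the degree; the following is a small self-contained sketch (the encoding of directions as 'i'/'o' characters is ours, not the paper's).

```python
def is_bimodal(cyclic_dirs):
    """cyclic_dirs: directions of the edges around a vertex in rotation order,
    each 'i' (incoming) or 'o' (outgoing).  The vertex is bimodal iff the
    cyclic sequence splits into at most 2 blocks, i.e. has at most 2 switches."""
    n = len(cyclic_dirs)
    switches = sum(cyclic_dirs[k] != cyclic_dirs[(k + 1) % n] for k in range(n))
    return switches <= 2

print(is_bimodal("iiooo"))   # True:  one in-wedge followed by one out-wedge
print(is_bimodal("ioio"))    # False: four switches around the vertex
print(is_bimodal("iiii"))    # True:  a sink is trivially bimodal
```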
**Contribution.** While a heuristic and a branch-and-bound algorithm are given in [8] to solve MBS (and also to find a maximum upward-planar digraph), here we study this problem from the parameterized complexity and approximability perspectives (refer to [11, 17] for an introduction to parameterized complexity). Figure 1: (a) An upward planar drawing. (b) A quasi-upward planar drawing, where edge \(e\) makes two bends (the two horizontal tangent points). (c) A bimodal digraph (above) and a corresponding planar L-drawing (below). More precisely, we consider the following more general version of the problem with weighted edges; it coincides with MBS when we restrict to unit edge weights. \(\mathrm{MWBS}(G,w)\) (_Maximum Weighted Bimodal Subgraph_). _Given a plane digraph \(G\) and an edge-weight function \(w:E(G)\rightarrow\mathbb{Q}^{+}\), compute a bimodal subgraph of \(G\) of maximum weight, i.e., whose sum of the edge weights is maximum over all bimodal subgraphs of \(G\)._ Our contribution can be summarized as follows. \(-\)Structural parameterization. We show that MWBS is FPT when parameterized by the _branchwidth_ of the input digraph \(G\) or, equivalently, by the _treewidth_ of \(G\) (Sect. 3). Our algorithm deviates from a standard dynamic programming approach for graphs of bounded treewidth. The main difficulty here is that we have to incorporate the "topological" information about the given embedding in the dynamic program. We accomplish this via the sphere-cut decomposition of Dorn et al. [15]. \(-\)Kernelization. Let \(b\) be the number of non-bimodal vertices in an input digraph \(G\). We construct a polynomial kernel for the decision version of MWBS parameterized by \(b\) (Sect. 4). Our kernelization algorithm proceeds in several steps. First we show how to reduce the instance to an equivalent instance whose branchwidth is \(\mathcal{O}(\sqrt{b})\). Second, by using specific gadgets, we compress the problem to an instance of another problem whose size is bounded by a polynomial in \(b\). In other words, we provide a polynomial compression for MWBS. Finally, by standard arguments [17, Theorem 1.6], based on a polynomial reduction between NP-complete problems, we obtain a polynomial kernel for MWBS. By pipelining the crucial step of the kernelization algorithm with the branchwidth algorithm, we obtain a parameterized subexponential algorithm for MWBS of running time \(2^{\mathcal{O}(\sqrt{b})}\cdot n^{\mathcal{O}(1)}\). Since \(b\leq n\), this also implies an algorithm of running time \(2^{\mathcal{O}(\sqrt{n})}\). Note that our algorithms are asymptotically optimal up to the _Exponential Time Hypothesis_ (ETH) [20, 21]. The NP-hardness result of MBS (and hence of MWBS) given in [8] exploits a reduction from Planar-3SAT. The number of non-bimodal vertices in the resulting instance of MBS is linear in the size of the Planar-3SAT instance. Using the standard techniques for computational lower bounds for problems on planar graphs [11], we obtain that the existence of a \(2^{o(\sqrt{b})}\cdot n^{\mathcal{O}(1)}\)-time algorithm for MWBS would contradict the ETH. \(-\)Approximability. We provide an Efficient Polynomial-Time Approximation Scheme (EPTAS) for MWBS, based on Baker's (or shifting) technique [4]. Namely, using our algorithm for graphs of bounded branchwidth, we give a \((1+\epsilon)\)-approximation algorithm that runs in \(2^{\mathcal{O}(1/\epsilon)}\cdot n^{\mathcal{O}(1)}\) time. 
Full proofs of the results marked with an asterisk (*), as well as additional definitions and technical details, are given in the appendix. ## 2 Definitions and Terminology Let \(G\) be a digraph. We denote by \(V(G)\) and \(E(G)\) the set of vertices and the set of edges of \(G\). Throughout the paper we assume that \(G\) is planar and that it comes with a planar embedding; such an embedding fixes, for each vertex \(v\in V(G)\), the clockwise order of the edges incident to \(v\). We say that \(G\) is a _planar embedded digraph_ or simply that \(G\) is a _plane digraph_. **Branch decomposition and sphere-cut decomposition.** A _branch decomposition_ of a graph \(G\) defines a hierarchical clustering of the edges of \(G\), represented by an unrooted proper binary tree, that is, a tree whose non-leaf nodes have degree three, whose leaves are in one-to-one correspondence with the edges of \(G\). More precisely, a branch decomposition of \(G\) consists of a pair \(\langle T,\xi\rangle\), where \(T\) is an unrooted proper binary tree and \(\xi:\mathcal{L}(T)\leftrightarrow E(G)\) is a bijection between the set \(\mathcal{L}(T)\) of the leaves of \(T\) and the set \(E(G)\) of the edges of \(G\). For each arc \(a\) of \(T\), denote by \(T_{1}^{a}\) and \(T_{2}^{a}\) the two connected components of \(T\setminus\{a\}\), and, for \(i=1,2\), let \(G_{i}^{a}\) be the subgraph of \(G\) that consists of the edges corresponding to the leaves of \(T_{i}^{a}\). The _middle set_\(\operatorname{mid}(a)\subseteq V(G)\) is the intersection of the vertex sets of \(G_{1}^{a}\) and \(G_{2}^{a}\), i.e., \(\operatorname{mid}(a):=V(G_{1}^{a})\cap V(G_{2}^{a})\). The _width_\(\beta(\langle T,\xi\rangle)\) of \(\langle T,\xi\rangle\) is the maximum size of the middle sets over all arcs of \(T\), i.e., \(\beta(\langle T,\xi\rangle)=\max\{|\operatorname{mid}(a)|:a\in E(T)\}\). An _optimal branch decomposition_ of \(G\) is a branch decomposition with minimum width; this width is called the _branchwidth_ of \(G\) and is denoted by \(\operatorname{bw}(G)\). A sphere-cut decomposition is a special type of branch decomposition (see Fig. 2). Let \(G\) be a connected planar graph, topologically drawn on a sphere \(\Sigma\). A _noose_\(O\) of \(G\) is a closed simple curve on \(\Sigma\) that intersects \(G\) only at vertices and that traverses each face of \(G\) at most once. The _length_ of \(O\) is the number of vertices that \(O\) intersects. Note that \(O\) bounds two closed discs \(\Delta_{O}^{1}\) and \(\Delta_{O}^{2}\) in \(\Sigma\); we have \(\Delta_{O}^{1}\cap\Delta_{O}^{2}=O\) and \(\Delta_{O}^{1}\cup\Delta_{O}^{2}=\Sigma\). Let \(\langle T,\xi\rangle\) be a branch decomposition of \(G\). Suppose that for each arc \(a\) of \(T\) there exists a noose \(O_{a}\) that traverses exactly the vertices of \(\operatorname{mid}(a)\) and whose closed discs \(\Delta_{O_{a}}^{1}\) and \(\Delta_{O_{a}}^{2}\) enclose the drawings of \(G_{1}^{a}\) and of \(G_{2}^{a}\), respectively. Denote by \(\pi_{a}\) the circular clockwise order of the vertices in \(\operatorname{mid}(a)\) along \(O_{a}\), and let \(\Pi=\{\pi_{a}:a\in E(T)\}\) be the set of all circular orders \(\pi_{a}\). The triple \(\langle T,\xi,\Pi\rangle\) is a _sphere-cut decomposition_ of \(G\). Figure 2: A plane graph \(G\) and a sphere-cut decomposition of \(G\); three nooses are highlighted on \(G\) for the arcs \(a\), \(b\), and \(c\) of the decomposition tree. 
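The middle sets are determined by the leaf-to-edge bijection \(\xi\) alone, so they are easy to compute; a minimal sketch is below, using our own data layout (the tree as an adjacency dict, \(\xi\) as a dict from leaves to graph edges). It recomputes vertex sets per arc, which is quadratic but enough to illustrate the definition.

```python
def middle_sets(tree, xi):
    """tree: dict node -> set of neighbours (unrooted proper binary tree);
    xi: dict leaf -> graph edge (u, v).  Returns {arc: mid(arc)} where an
    arc is represented as a frozenset {a, b} of tree nodes."""
    def side_vertices(start, banned):
        # Vertices of G touched by leaves reachable from `start` avoiding `banned`.
        seen, stack, verts = {start, banned}, [start], set()
        while stack:
            x = stack.pop()
            if x in xi:                 # x is a leaf: collect its edge's endpoints
                verts.update(xi[x])
            for y in tree[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return verts

    mids = {}
    for a in tree:
        for b in tree[a]:
            arc = frozenset((a, b))
            if arc not in mids:
                mids[arc] = side_vertices(a, b) & side_vertices(b, a)
    return mids

# Degenerate toy example: path graph 1-2-3 with e1 = (1,2), e2 = (2,3);
# the decomposition tree is a single arc between two leaves.
tree = {"L1": {"L2"}, "L2": {"L1"}}
xi = {"L1": (1, 2), "L2": (2, 3)}
print(middle_sets(tree, xi))   # {frozenset({'L1', 'L2'}): {2}}
```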
We assume that the vertices of \(\operatorname{mid}(a)=V(G_{1}^{a})\cap V(G_{2}^{a})\) are enumerated according to \(\pi_{a}\). Since a noose \(O_{a}\) traverses each face of \(G\) at most once, both graphs \(G_{1}^{a}\) and \(G_{2}^{a}\) are connected. Also, the nooses are pairwise non-crossing, i.e., for any pair of nooses \(O_{a}\) and \(O_{b}\), we have that \(O_{b}\) lies entirely inside \(\Delta^{1}_{O_{a}}\) or entirely inside \(\Delta^{2}_{O_{a}}\). For a noose \(O_{a}\), we define \(\operatorname{mid}(O_{a})=\operatorname{mid}(a)\), or, more generally, we define \(\operatorname{mid}(\phi)\) to be the vertices cut by \(\phi\). We rely on the following result on the existence and computation of a sphere-cut decomposition [22] (see also [15]). Proposition 1 ([22]): _Let \(G\) be a connected graph embedded in the sphere with \(n\) vertices and branchwidth \(\ell\geq 2\). Then there exists a sphere-cut decomposition of \(G\) with width \(\ell\), and it can be computed in \(\mathcal{O}(n^{3})\) time._ We remark that the branchwidth \(\operatorname{bw}(G)\) and the treewidth \(\operatorname{tw}(G)\) of a graph \(G\) are within a constant factor: \(\operatorname{bw}(G)-1\leq\operatorname{tw}(G)\leq\lfloor\frac{3}{2} \operatorname{bw}(G)\rfloor-1\) (see [24]). ## 3 FPT Algorithms for MWBS by Branchwidth In this section we describe an FPT algorithm parameterized by branchwidth. We first introduce configurations, which encode on which side of a closed curve, and in what order, the switches from incoming to outgoing edges happen around a vertex \(v\) in a bimodal subgraph. Definition 1 (Configuration): Let \(C=\{(i),(o),(i,o),(o,i),(o,i,o),(i,o,i)\}\). Let \(G\) be a graph embedded in the sphere \(\Sigma\), \(\phi\) be a noose in \(\Sigma\) with a prescribed inside, \(v\in\operatorname{mid}\left(\phi\right)\), and \(X\in C\). Let \(E^{v,\phi}\) be the set of edges incident to \(v\) in \(\phi\). We say \(v\) has _configuration_\(X\) in \(\phi\), if \(E^{v,\phi}\) can be partitioned into sets such that: 1. For every \(x\in X\), there is a (possibly empty) set \(E_{x}\) associated with it. 2. Every set associated with an \(i\) (resp. an \(o\)) contains only incoming (resp. outgoing) edges of \(v\). 3. For every set, the edges contained in it are successive around \(v\). 4. The sets \(E_{x}\) appear clockwise (seen from \(v\)) in the same order in \(G\) inside \(\phi\) as the \(x\) appear in \(X\). For every \(v\in\operatorname{mid}\left(\phi\right)\), let \(X_{v}\) be a configuration of \(v\) in \(\phi\). We say \(X_{\phi}=\{X_{v}\mid v\in\operatorname{mid}\left(\phi\right)\}\) is a _configuration set_ of \(\phi\). If \(G\) is bimodal, then for every noose \(\phi\) and every vertex \(v\in\operatorname{mid}\left(\phi\right)\), \(v\) must have at least one configuration \(X\in C\) in \(\phi\). Note that configurations and configuration sets are not unique, as seen in Fig. 3(a). A vertex can even have all configurations if it has no incident edges in \(\phi\). The next definition is needed to encode when configurations can be combined in order to obtain bimodal vertices. Definition 2 (Compatible configurations): Let \(X,X^{\prime},X^{*}\in C\) be configurations. We say \(X,X^{\prime}\) are _compatible configurations_ or, for short, _compatible_, if by concatenating \(X,X^{\prime}\) and deleting consecutive equal letters, the result is a substring of \((o,i,o)\) or \((i,o,i)\). Note that it is not important in which order we concatenate \(X,X^{\prime}\). See Figure 3(b). 
We say \(X\) and \(X^{\prime}\) are _compatible with respect to \(X^{*}\)_ if by concatenating \(X,X^{\prime}\) (in this order) and deleting consecutive equal letters, the result is a substring of \(X^{*}\). A configuration \(X\) can have several compatible configurations, for example \((i,o)\in C\) is compatible with \((o),(i)\) and \((o,i)\). From these \((o,i)\) is in some sense maximal, meaning that configurations \((o)\) and \((i)\) are substrings of \((o,i)\). Given a configuration \(X\), a _maximal compatible configuration_\(X^{\prime}\) of \(X\) is a configuration that is compatible with \(X\), and all other compatible configurations of \(X\) are substrings of \(X^{\prime}\). Observe that every configuration has a unique maximal compatible configuration; they are pairwise: \((i)-(i,o,i)\), \((o)-(o,i,o)\) and \((o,i)-(i,o)\). We say a noose \(\phi_{3}\) is _composed_ of the nooses \(\phi_{1}\) and \(\phi_{2}\) if the edges of \(G\) in \(\phi_{3}\) are partitioned by \(\phi_{1}\) and \(\phi_{2}\). Suppose that a noose \(\phi_{3}\) is composed of nooses \(\phi_{1}\) and \(\phi_{2}\), and that there exists a vertex \(v\in\operatorname{mid}(\phi_{1})\cap\operatorname{mid}(\phi_{2})\cap \operatorname{mid}(\phi_{3})\) such that, in \(\phi_{3}\) around \(v\), all adjacent edges of \(v\) in \(\phi_{1}\) appear clockwise before all adjacent edges of \(v\) in \(\phi_{2}\). If \(X,X^{\prime}\) and \(X^{*}\) are configurations, \(X\) and \(X^{\prime}\) are compatible with respect to \(X^{*}\), and \(v\) has configuration \(X\) in \(\phi_{1}\) and configuration \(X^{\prime}\) in \(\phi_{2}\), then \(v\) has configuration \(X^{*}\) in \(\phi_{3}\). See Figure 3(c). If a curve \(\phi\) contains only one edge on its inside, finding maximal subgraphs for a configuration inside \(\phi\) is easy. Lemma 1 (*): _Let \(G\) be a graph embedded in the sphere \(\Sigma\), let \(e=\{u,v\}\) be an edge and let \(\phi\) be a noose that cuts \(G\) only in \(u\) and \(v\), such that \(e\) is in \(\phi\) and all other edges are on the outside of \(\phi\). Let \(X_{u},X_{v}\) be prescribed configurations. Then we can compute in \(\mathcal{O}(1)\) time the maximum subgraph \(G^{\prime}\) of \(G\) such that \(u,v\) have configuration \(X_{u}\) respectively \(X_{v}\) in \(\phi\) in \(G^{\prime}\)._ We will now see how we can compute optimal subgraphs bottom-up. Figure 3: (a) A vertex with configurations \((o,i),(o,i,o)\) and \((i,o,i)\) in \(\phi\). The most restricted and thus minimal configuration is \((o,i)\). (b) A vertex with configuration \((o,i,o)\) in \(\phi\) and \((o)\) outside of \(\phi\). Concatenating \((o,i,o)\) with \((o)\) and deleting consecutive equal letters results in \((o,i,o)\), the result is a substring of \((o,i,o)\), thus \((o,i,o)\) and \((o)\) are compatible. (c) Note that \(\phi_{3}\) is composed of \(\phi_{1}\) and \(\phi_{2}\); the inside of \(\phi_{1}\), the inside of \(\phi_{2}\) and the outside of \(\phi_{3}\) are clockwise in this order around \(v\) with configuration \((i,o)\) in \(\phi_{1}\) and \((o)\) in \(\phi_{2}\). They can be concatenated to configuration \((i,o)\) in \(\phi_{3}\), while \((i,o)\) and \((o)\) are compatible w.r.t. \((i,o)\), but not \((o,i)\). Lemma 2 (*): _Let \(G\) be a graph embedded in the sphere \(\Sigma\), let \(\phi_{1},\phi_{2},\phi_{3}\) be nooses with length at most \(\ell\) each, and let \(E_{\phi_{1}},E_{\phi_{2}},E_{\phi_{3}}\) be the sets of edges contained inside the respective noose with \(E_{\phi_{1}},E_{\phi_{2}}\) being a partition of \(E_{\phi_{3}}\). 
Let \(X_{\phi_{3}}\) be a configuration set for \(\phi_{3}\). Let further, for every configuration set \(X_{\phi_{1}}\) (\(X_{\phi_{2}}\)) of \(\phi_{1}\) (\(\phi_{2}\)), the maximum subgraph that has configuration set \(X_{\phi_{1}}\) (\(X_{\phi_{2}}\)) and is bimodal in \(\phi_{1}\) (\(\phi_{2}\)) be known. Then a maximum subgraph \(G^{\prime}\) of \(G\) that has configuration set \(X_{\phi_{3}}\) and is bimodal in \(\phi_{3}\) can be computed in \(\mathcal{O}(6^{2\ell})\cdot n^{\mathcal{O}(1)}\) time._ If a noose \(\phi\) contains only one edge \(e\in E\), we have only two options in \(\phi\): delete \(e\) or do not. Testing which is optimal can be done in constant time; this leads to Lemma 1. Now let \(\phi_{3}\) be a noose that contains more than one edge, let \(\phi_{1},\phi_{2}\) be two nooses that partition the inside of \(\phi_{3}\), and let \(X_{\phi_{3}}\) be a given configuration set. If we already know optimal solutions for every given configuration set in \(\phi_{1}\) (\(\phi_{2}\)) (which we already computed when traversing the sphere-cut decomposition bottom-up), we can guess, for some optimal solution for \(\phi_{3}\), for every \(v\in\operatorname{mid}(\phi_{1})\cap\operatorname{mid}(\phi_{2})\) the configuration it has in \(\phi_{1}\) and in \(\phi_{2}\). This gives us configuration sets \(X_{\phi_{1}}\) and \(X_{\phi_{2}}\) for \(\phi_{1}\) and \(\phi_{2}\), respectively (for every \(v\in\operatorname{mid}(\phi_{1})\setminus\operatorname{mid}(\phi_{2})\) we take its configuration in \(X_{\phi_{3}}\)). We obtain the corresponding solution \(G^{\prime}\) that coincides with the optimal solution for \(\phi_{1}\) (\(\phi_{2}\)) in \(\phi_{1}\) (\(\phi_{2}\)) respecting \(X_{\phi_{1}}\) (\(X_{\phi_{2}}\)) and that coincides with \(G\) outside of \(\phi_{3}\). Since \(|\operatorname{mid}(\phi_{1})\cap\operatorname{mid}(\phi_{2})|\leq\ell\), we achieve the same by enumerating all possible configurations for \(\operatorname{mid}(\phi_{1})\cap\operatorname{mid}(\phi_{2})\), computing the corresponding solutions and taking the maximum, in \(\mathcal{O}(6^{2\ell})\cdot n^{\mathcal{O}(1)}\) time, leading to Lemma 2. We now obtain the following theorem. Theorem 2.1 (*): _There is an algorithm that solves \(\operatorname{MWBS}(G,w)\) in \(2^{\mathcal{O}(\operatorname{bw}(G))}\cdot n^{\mathcal{O}(1)}\) time. In particular, \(\operatorname{MWBS}\) is FPT when parameterized by branchwidth._ Proof (Sketch): Assume that \(G\) is connected (otherwise process every connected component independently). If \(\operatorname{bw}(G)=1\), \(G\) is a star and we can compute an optimal solution in polynomial time. Otherwise, according to Proposition 1 we can compute a sphere-cut decomposition \(\langle T,\xi,\Pi\rangle\) for \(G\) with optimal width \(\ell\). We pick any leaf of \(T\) to be the root \(r\) of \(T\). For every noose \(O\) corresponding to an arc of \(T\), let \(X_{O}\) be a configuration set for \(O\). Then we define \(E_{(O,X_{O})}\) to be an edge set of minimum weight such that \(G\setminus E_{(O,X_{O})}\) is bimodal inside of \(O\) and has configuration set \(X_{O}\) in \(O\). We now compute the \(E_{(O,X_{O})}\) bottom-up. For a noose \(O\) corresponding to a leaf-arc in \(T\), Lemma 1 shows that we can compute all possible values of \(E_{(O,X_{O})}\) in linear time.
For a noose \(O\) corresponding to a non-leaf arc in \(T\), Lemma 2 shows that we can compute \(E_{(O,X_{O})}\) for a given \(X_{O}\) in \(\mathcal{O}(6^{2\ell})\cdot n^{\mathcal{O}(1)}\) time, and thus all entries for \(O\) in \(\mathcal{O}(6^{3\ell})\cdot n^{\mathcal{O}(1)}\) time. Let \(e\in E\) be the edge associated with \(r\). We have only two options left: delete \(e\) or do not. In both cases we obtain the optimal solution for the rest of \(G\) from the values \(E_{(O,X_{O})}\). The overall running time is \(2^{\mathcal{O}(\ell)}\cdot n^{\mathcal{O}(1)}\). Since our input graphs are planar, we immediately obtain a subexponential algorithm for MWBS because for a planar graph \(G\), \(\operatorname{bw}(G)=\mathcal{O}(\sqrt{n})\) [18]. Theorem 2.2: \(\operatorname{MWBS}(G=(V,E),w)\) _can be solved in \(2^{\mathcal{O}(\sqrt{n})}\) time._

## 4 Compression for MWBS by \(b\)

Throughout this section we assume that (i) the weights are rational, that is, for \((G,w)\), \(w\colon E(G)\to\mathbb{Q}^{+}\), and (ii) we consider the decision version of MWBS, that is, in addition to \((G,w)\), we are given a target value \(W\in\mathbb{Q}^{+}\) and the task is to decide whether \(G\) has a bimodal subgraph \(G^{*}\) with \(w(E(G^{*}))\geq W\). **Further definitions.** For simplicity, we say that a bimodal vertex of \(G\) is a _good_ vertex, and that a non-bimodal vertex is a _bad_ vertex. We denote by \(\mathcal{G}(G)\) and \(\mathcal{B}(G)\) the sets of good and bad vertices of \(G\), respectively. Given a vertex \(v\in V(G)\), an _in-wedge_ (resp. _out-wedge_) of \(v\) is a maximal circular sequence of consecutive incoming (resp. outgoing) edges of \(v\). Clearly, if \(v\) is bimodal it has at most one in-wedge and at most one out-wedge. Given a vertex \(v\in\mathcal{B}(G)\), a _good edge-section_ of \(v\) is a maximal consecutive sequence of in- and out-wedges of \(v\) such that no edge is incident to another bad vertex. Observation 1. Let \((G,w)\) be an instance of MWBS with \(b\) bad vertices, and let \(v\in\mathcal{B}(G)\). Then \(v\) can have at most \(b-1\) good edge-sections. We introduce a generalization of MWBS called Cut-MWBS\((G,w,\mathcal{E})\) (_maximum weighted bimodal subgraph with prescribed cuts_). Given a plane digraph \(G\), an edge-weight function \(w:E(G)\to\mathbb{Q}^{+}\), and a partition \(\mathcal{E}\) of \(E(G)\), compute a bimodal subgraph \(G^{\prime}\) of \(G\) of maximum weight, i.e., whose sum of edge weights is maximum over all bimodal subgraphs of \(G\), under the condition that for every set \(E_{i}\in\mathcal{E}\), either all \(e\in E_{i}\) are still present in \(G^{\prime}\) or none of them are. We can see that every instance \((G,w)\) of MWBS is equivalent to the instance \((G,w,\{\{e\}\mid e\in E(G)\})\) of Cut-MWBS, and thus Cut-MWBS is NP-hard. Also, the decision variant of the problem is NP-complete. We now give reduction rules for the MWBS to Cut-MWBS compression, and prove that each of them is _sound_, i.e., it can be performed in polynomial time and the reduced instance is solvable if and only if the starting instance is solvable. Reduction Rule 1. Let \((G,w)\) be an instance of MWBS, and \(v\in V(G)\) be an isolated vertex. Then, let \((G^{\prime},w)\) be the new instance, where \(V(G^{\prime})=V(G)\setminus\{v\}\). Reduction Rule 2. Let \((G,w)\) be an instance of MWBS with the target value \(W\), and \(u,v\in\mathcal{G}(G)\) be such that \((u,v)\) is an edge.
Then, the resulting instance is \((G^{\prime},w)\), where \(G^{\prime}=G-(u,v)\), and the new target value is \(W^{\prime}=W-w(u,v)\). Reduction Rule 3. Let \((G,w)\) be an instance of MWBS and \(v\in\mathcal{G}(G)\) of degree \(\geq 2\). Let \((G^{\prime},w)\) be the new instance, where in \(G^{\prime}\) we replace each edge \(e=(u,v)\) (resp. \(e=(v,u)\)) where \(u\in\mathcal{G}(G)\) with another edge \(e^{\prime}=(u,x_{uv})\) (resp. \(e^{\prime}=(x_{uv},u)\)), where the \(x_{uv}\) are distinct vertices created for each such edge, and each \(e^{\prime}\) is embedded within the embedding of \(e\), with \(w(e^{\prime})=w(e)\) (see Fig. 4). Claim 1 (*). Reduction Rules 1, 2 and 3 are sound. By applying Reduction Rules 1, 2 and 3 exhaustively, we get Lemma 3, which is already enough to give a subexponential FPT algorithm in \(b\) (Theorem 3.1). Lemma 3 (*): _Given an instance \((G,w)\) of MWBS, there exists a polynomial-time algorithm to obtain an equivalent instance \((G^{\prime},w)\) with \(G^{\prime}\) being a subgraph of \(G\), such that (i) \(|\mathcal{B}(G^{\prime})|\leq|\mathcal{B}(G)|\), (ii) \(\mathcal{G}(G^{\prime})\) is an independent set in \(G^{\prime}\), and (iii) for all \(v\in\mathcal{G}(G^{\prime})\), \(\deg(v)=1\) in the underlying graph of \(G^{\prime}\)._ Theorem 3.1: _There exists an algorithm that solves MWBS\((G,w)\) with \(b\) bad vertices in \(2^{\mathcal{O}(\sqrt{b})}\cdot n^{\mathcal{O}(1)}\) time._ Proof: By Lemma 3, \((G,w)\) is equivalent to \((G^{\prime},w)\) with at most \(b\) vertices of degree \(>1\), which we can compute in polynomial time. This implies \(\mathrm{bw}(G^{\prime})=\mathcal{O}(\mathrm{tw}(G^{\prime}))=\mathcal{O}(\sqrt{b})\), and we can apply Theorem 2.1 to obtain an algorithm that computes a solution for \((G^{\prime},w)\) in \(2^{\mathcal{O}(\mathrm{bw}(G^{\prime}))}|V(G^{\prime})|^{\mathcal{O}(1)}\) time. We now describe how we can partition, for a given input, all good edge-sections into edge sets in such a way that there exists an optimal solution in which every set is either contained or deleted completely, and the total number of sets is bounded by a function of \(b\). We will then show how we can replace the sets with edge sets of size at most two. The main difficulty will be to ensure that sets that exclude each other continue to do so in the reduced instance. Lemma 4 (*): _Let \((G,w)\) be an instance of MWBS with \(n\) vertices and \(b\) bad vertices, such that \(\mathcal{G}(G)\) is an independent set in \(G\) and \(\deg(v)=1\) for all \(v\in\mathcal{G}(G)\). Let further \(v\in\mathcal{B}(G)\), and let \(S\) be a good edge-section of \(v\).
Then \(S\) can be partitioned into at most 26 sets \(S_{1},\ldots,S_{26}\), such that for every optimal solution \(G^{\prime}\subseteq G\) of \(MWBS(G,w)\), there exists an optimal solution \(G^{*}\subseteq G\) of \(MWBS(G,w)\) such that \(G^{\prime}\) and \(G^{*}\) coincide on \(G\setminus S\), and for every \(i\), \(S_{i}\) is either contained or removed completely in \(G^{*}\)._ _Further, there exists a partition \(P_{1},\ldots,P_{j}\) of \(\{S_{1},\ldots,S_{26}\}\), such that for all \(P_{i}\): (1) \(|P_{i}|\leq 2\), (2) the edges in \(P_{i}\) are consecutive in \(S\), and (3) if \(P_{i}=\{\mathcal{S}_{1},\mathcal{S}_{2}\}\), then \(\mathcal{S}_{1}\) consists of outgoing edges of \(v\) iff \(\mathcal{S}_{2}\) consists of incoming edges of \(v\), and at least one of \(\mathcal{S}_{1},\mathcal{S}_{2}\) does not form a set of consecutive edges in \(S\)._

Figure 4: A bimodal vertex (a) before and (b) after Reduction Rule 3 is applied.

To show this, we enclose \(S\) in a curve \(\phi\), and then compute for every given configuration \(X\) the maximal subgraph \(G^{\prime}\) such that \(v\) has configuration \(X\) in \(\phi\). This yields a set of at most 12 possible locations for switches between incoming and outgoing edges in \(S\), which gives a partition of \(S\) into at most 13 sets (corresponding to \(P_{1},\ldots,P_{j}\)) that do not contain a switch, and thus at most 26 sets that will not be separated by an optimal solution, corresponding to \(S_{1},\ldots,S_{26}\). We now describe a parameter-preserving reduction from MWBS to Cut-MWBS. Lemma 5 (*): _Given an instance \((G,w)\) of MWBS with \(b\) bad vertices, we can find in polynomial time an instance \((G^{\prime},w,\mathcal{E})\) of Cut-MWBS, so that: (i) for every \(\mathcal{E}_{i}\subseteq\mathcal{E}\) with \(|\mathcal{E}_{i}|\geq 2\), there exists a bad vertex \(v\in G^{\prime}\) and a good edge-section \(S\) of \(v\), so that \(\mathcal{E}_{i}\) is a subset of \(S\) and \(\mathcal{E}_{i}\) contains only outgoing or only incoming edges of \(v\); (ii) \(|\mathcal{B}(G^{\prime})|\leq b\); (iii) \(|\mathcal{E}|=\mathcal{O}(b^{2})\); (iv) \((G,w)\) and \((G^{\prime},w,\mathcal{E})\) have the same optimal cost; (v) there exists a partition \(P_{1},\ldots,P_{j}\) of \(\mathcal{E}\), such that \(|P_{i}|\leq 2\) for all \(P_{i}\); (vi) if \(|P_{i}|=1\), then the edge-set contained in \(P_{i}\) is either an edge between two bad vertices, or there exists a bad vertex \(v\in G^{\prime}\) and a good edge-section \(S\) of \(v\), such that the edges contained in \(P_{i}\) are all consecutive in \(S\); and (vii) if \(|P_{i}|=2\) with \(P_{i}=\{\mathcal{E}_{1},\mathcal{E}_{2}\}\), there exists some \(v\in\mathcal{B}(G^{\prime})\) and a good edge-section \(S\) of \(v\), such that the edges in \(P_{i}\) are all consecutive in \(S\); \(\mathcal{E}_{1}\) consists of outgoing edges of \(v\) if and only if \(\mathcal{E}_{2}\) consists of incoming edges of \(v\); and at least one of \(\mathcal{E}_{1},\mathcal{E}_{2}\) does not form a set of consecutive edges in \(S\)._ See Fig. 5(a) for a visualization. We obtain this transformation by applying Lemma 3 in order to get a simplified equivalent instance \(G^{\prime}\). Let \(E_{\mathrm{rest}}\) be the set of all edges incident to two bad vertices. For every bad vertex \(v\) and every good edge-section \(S\) of \(v\), let \(\mathcal{S}_{v,S}\) be the partition of \(S\) obtained from Lemma 4. We define \(\mathcal{E}=\{\{e\}\mid e\in E_{\mathrm{rest}}\}\cup\bigcup_{v,S}\mathcal{S}_{v,S}\).
This defines the instance \((G^{\prime},w,\mathcal{E})\) of Cut-MWBS. We will now further reduce the size of \((G^{\prime},w,\mathcal{E})\).

Figure 5: (a) Illustration for Lemmas 4 and 5. The gray dashed lines correspond to a set of switches between the optimal solution we will choose; they impose the partition \(P_{1},\ldots,P_{13}\). \(S_{1}\) (\(S_{2}\)) are the incoming (outgoing) edges of \(P_{1}\), respectively. (b) The same vertex after transition to Cut-MWBS by Lemma 5, and after Reduction Rule 4 (5) got applied to \(P_{2}\) (\(P_{1}\)), respectively.

Reduction Rule 4. Let \((G,w,\mathcal{E})\) be an instance of Cut-MWBS with properties (i) to (vii) of Lemma 5. Let \(v\in\mathcal{B}(G)\), let \(S\) be a good edge-section of \(v\), and let \(\mathcal{E}_{i}\in\mathcal{E}\) be such that \(\mathcal{E}_{i}\subseteq S\) is a _consecutive_ set of edges in \(S\). Then let \((G^{\prime},w^{\prime},\mathcal{E}^{\prime})\) be the new instance that is obtained from \((G,w,\mathcal{E})\) by deleting all edges (and their incident good vertices) but one edge \(e\) out of \(\mathcal{E}_{i}\), and assigning \(w^{\prime}(e)=w(\mathcal{E}_{i})\). Reduction Rule 5. Let \((G,w,\mathcal{E})\) be an instance of Cut-MWBS with properties (i) to (vii) of Lemma 5. Let further \(v\in\mathcal{B}(G)\), let \(S\) be a good edge-section of \(v\), and let \(\mathcal{E}_{\mathrm{in}},\mathcal{E}_{\mathrm{out}}\in\mathcal{E}\) be such that \(\mathcal{E}_{\mathrm{in}},\mathcal{E}_{\mathrm{out}}\subseteq S\), the edges of \(\mathcal{E}_{\mathrm{in}}\) are all incoming to \(v\), the edges of \(\mathcal{E}_{\mathrm{out}}\) are all outgoing of \(v\), \(\mathcal{E}_{\mathrm{in}}\cup\mathcal{E}_{\mathrm{out}}\) is a consecutive set of edges in \(S\), and at least one of \(\mathcal{E}_{\mathrm{in}}\) or \(\mathcal{E}_{\mathrm{out}}\) does not form a consecutive set of edges in \(S\). We construct a new edge set \(e_{1},e_{2},e_{3},e_{4}\) as follows: \(e_{1},e_{3}\) are incoming for \(v\), \(e_{2},e_{4}\) are outgoing of \(v\), and each of \(e_{1},e_{2},e_{3},e_{4}\) is incident to a newly inserted (good) vertex \(v_{e_{k}}\) for \(k\in\{1,\ldots,4\}\). We set \(w^{\prime}(e_{1})=w^{\prime}(e_{4})=0\), \(w^{\prime}(e_{2})=w(\mathcal{E}_{\mathrm{out}})\) and \(w^{\prime}(e_{3})=w(\mathcal{E}_{\mathrm{in}})\). Further, we assign \(e_{1},e_{3}\) to \(\mathcal{E}_{\mathrm{in}}\) and \(e_{2},e_{4}\) to \(\mathcal{E}_{\mathrm{out}}\). Let \((G^{\prime},w^{\prime},\mathcal{E})\) be the new instance that is obtained from \((G,w,\mathcal{E})\) by replacing the edges in \(\mathcal{E}_{\mathrm{in}}\cup\mathcal{E}_{\mathrm{out}}\) with the consecutive sequence \(e_{1},e_{2},e_{3},e_{4}\). Claim 2 (*). Reduction Rules 4 and 5 are sound. Lemma 6 (*): _Let \((G,w,\mathcal{E})\) be an instance of Cut-MWBS with \(b\) bad vertices and properties (i) to (vii) of Lemma 5. Then we can compute in polynomial time an equivalent instance \((G^{\prime},w^{\prime},\mathcal{E}^{\prime})\) such that \(|V(G^{\prime})|=\mathcal{O}(b^{2})\)._ See Fig. 5(b) for an illustration. We compute \((G^{\prime},w^{\prime},\mathcal{E}^{\prime})\) by applying Reduction Rules 4 and 5 exhaustively. To bound the size of the weights \(w\), we use the approach of Etscheid et al. [16] and the well-known Theorem 4. This yields the compression of MWBS (Theorem 5) and a kernel for MWBS (Theorem 6).
Theorem 4 ([19]): _There is an algorithm that, given a vector \(\omega\in\mathbb{Q}^{r}\) and an integer \(N\), in polynomial time finds a vector \(\bar{\omega}\) such that \(||\bar{\omega}||_{\infty}=2^{\mathcal{O}(r^{3})}\) and \(\text{sign}(\omega\cdot b)=\text{sign}(\bar{\omega}\cdot b)\) for all vectors \(b\in\mathbb{Z}^{r}\) with \(||b||_{1}\leq N-1\)._ Theorem 5 (*): _There exists a polynomial-time algorithm that, given an instance \((G,w)\) of MWBS with \(b\) bad vertices and a target value \(W\), computes an instance \((G^{\prime},w^{\prime},\mathcal{E})\) of Cut-MWBS with size \(\mathcal{O}(b^{8})\), and a new target value \(W^{\prime}\) with size \(\mathcal{O}(b^{6})\), such that there exists a solution for \((G,w)\) of cost \(W\) if and only if there exists a solution for \((G^{\prime},w^{\prime},\mathcal{E})\) of cost \(W^{\prime}\)._ Theorem 6 (*): _The decision version of MWBS parameterized by the number of bad vertices \(b\) admits a polynomial kernel._

## 5 Efficient PTAS for MWBS and Final Remarks

We sketch our Efficient Polynomial-Time Approximation Scheme (EPTAS) for MWBS, i.e., a \((1-\epsilon)\)-approximation that runs in \(2^{\mathcal{O}(1/\epsilon)}\cdot n^{\mathcal{O}(1)}\) time. We use Baker's technique [4] to design our EPTAS. Our goal is to reduce the problem to (multiple instances of) the problem where the treewidth (hence, branchwidth) of the graph is bounded by \(\mathcal{O}(1/\epsilon)\), at the expense of an \(\epsilon\)-factor loss in cost. Then, we can use our single-exponential algorithm in the branchwidth to solve each such instance exactly, which implies a \((1-\epsilon)\)-approximation. We sketch the details of this reduction. W.l.o.g. assume that the graph is connected. We perform a breadth-first search starting from an arbitrary vertex \(v\in V(G)\), and partition the vertex set into layers \(L_{0},L_{1},\ldots\), where \(L_{i}\) is the set of vertices at distance _exactly_ \(i\) from \(v\) in the _undirected_ version of \(G\). It is known that the treewidth of the subgraph induced by any \(d\) consecutive layers is upper bounded by \(\mathcal{O}(d)\); this follows from a result of Bodlaender [9], which states that the treewidth of a planar graph with diameter \(D\) is \(\mathcal{O}(D)\). Let \(t=1/\epsilon\), and for each \(0\leq i<t\), let \(E^{(i,i+1)}\) denote the set of edges \((u,v)\) such that \(u\in L_{j}\), \(v\in L_{j+1}\) with \(j\bmod t=i\). By an averaging argument, there exists an index \(0\leq i<t\) such that the total contribution of all the edges from an optimal solution (i.e., the set of edges inducing a maximum-weight bimodal subgraph) that belong to \(E^{(i,i+1)}\) is at most \(1/t=\epsilon\) times the weight of the optimal solution. Since we do not know this index \(i\), we consider all values of \(i\), and consider the subproblems obtained by deleting the edges in \(E^{(i,i+1)}\). Then, the graph breaks down into multiple connected components, and the treewidth of each component is \(\mathcal{O}(1/\epsilon)\). We solve each such subproblem optimally in time \(2^{\mathcal{O}(1/\epsilon)}\cdot n^{\mathcal{O}(1)}\) using Theorem 2.1, and combine the solutions for the subproblems to obtain a solution for the original instance. Note that the graph obtained by combining the optimal solutions for the subproblems is bimodal, and for the correct value of \(i\), the weight of the graph is at least \(1-\epsilon\) times the optimal cost. That is, the combined solution is a \((1-\epsilon)\)-approximation.
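The layering step just described can be sketched compactly. The following is our own illustration (not code from the paper); `solve_bounded_tw` is a hypothetical placeholder standing in for the exact branchwidth-based algorithm of Theorem 2.1:

```python
# A minimal sketch (our own illustration, not code from the paper) of the
# layering step of Baker's technique described above.
from collections import deque

def bfs_layers(n, adj, root=0):
    """Return layer[v] = undirected BFS distance of v from `root`."""
    layer = [-1] * n
    layer[root] = 0
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if layer[v] == -1:
                layer[v] = layer[u] + 1
                queue.append(v)
    return layer

def baker_subproblems(n, edges, adj, t):
    """For each guess i, delete the edge class E^{(i,i+1)} (edges between
    layers j and j+1 with j mod t == i); each residual piece then spans at
    most t consecutive BFS layers, so its treewidth is O(t)."""
    layer = bfs_layers(n, adj)
    for i in range(t):
        kept = [(u, v) for (u, v) in edges
                if not (abs(layer[u] - layer[v]) == 1
                        and min(layer[u], layer[v]) % t == i)]
        yield i, kept

# Usage: take the best of the t guesses, solving each piece exactly, e.g.
# best = max(solve_bounded_tw(kept) for _, kept in
#            baker_subproblems(n, edges, adj, t=round(1 / eps)))
```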
Theorem 5.1 (*): _There exists an algorithm that runs in time \(2^{\mathcal{O}(1/\epsilon)}\cdot n^{\mathcal{O}(1)}\) and returns a \((1-\epsilon)\)-approximate solution for the given instance of MWBS. That is, MWBS admits an EPTAS._ We note that Baker's technique can also be used to obtain an EPTAS with a similar running time for the _minimization_ variant of MWBS. Although the high-level idea is similar, the details are more cumbersome. Final Remarks. We conclude by suggesting some open questions. One natural problem is to ask for a maximum \(k\)-modal subgraph for any given even integer \(k\geq 2\); we believe that our ideas can be extended to this more general setting. Another natural variant of MBS is to limit the number of edges that we can delete to get a bimodal subgraph by an integer \(h\); in this setting, \(h\) becomes another parameter in addition to those we have considered. Finally, studying MBS in the variable embedding setting is an interesting future direction.
2308.03057
Spin Coherence and Spin Relaxation in Hybrid Organic-Inorganic Lead and Mixed Lead-Tin Perovskites
Metal halide perovskites make up a promising class of materials for semiconductor spintronics. Here we report a systematic investigation of coherent spin precession, spin dephasing and spin relaxation of electrons and holes in two hybrid organic-inorganic perovskites MA0.3FA0.7PbI3 and MA0.3FA0.7Pb0.5Sn0.5I3 using time-resolved Faraday rotation spectroscopy. With applied in-plane magnetic fields, we observe robust Larmor spin precession of electrons and holes that persists for hundreds of picoseconds. The spin dephasing and relaxation processes are likely to be sensitive to the defect levels. Temperature-dependent measurements give further insights into the spin relaxation channels. The extracted electron Land\'e g-factors (3.75 and 4.36) are the biggest among the reported values in inorganic or hybrid perovskites. Both the electron and hole g-factors shift dramatically with temperature, which we propose to originate from thermal lattice vibration effects on the band structure. These results lay the foundation for further design and use of lead- and tin-based perovskites for spintronic applications.
Haochen Zhang, Zehua Zhai, Zhixuan Bi, Han Gao, Meng Ye, Yong Xu, Hairen Tan, Luyi Yang
2023-08-06T08:40:10Z
http://arxiv.org/abs/2308.03057v2
# Spin Coherence and Spin Relaxation in Hybrid Organic-Inorganic Lead and Mixed Lead-Tin Perovskites

###### Abstract

Metal halide perovskites make up a promising class of materials for semiconductor spintronics. Here we report a systematic investigation of coherent spin precession, spin dephasing and spin relaxation of electrons and holes in two hybrid organic-inorganic perovskites MA\({}_{0.3}\)FA\({}_{0.7}\)PbI\({}_{3}\) and MA\({}_{0.3}\)FA\({}_{0.7}\)Pb\({}_{0.5}\)Sn\({}_{0.5}\)I\({}_{3}\) using time-resolved Faraday rotation spectroscopy. With applied in-plane magnetic fields, we observe robust Larmor spin precession of electrons and holes that persists for hundreds of picoseconds. The spin dephasing and relaxation processes are likely to be sensitive to the defect levels. Temperature-dependent measurements give further insights into the spin relaxation channels. The extracted electron Lande g-factors (3.75 and 4.36) are the largest among the reported values in inorganic or hybrid perovskites. Both the electron and hole g-factors shift dramatically with temperature, which we propose to originate from thermal lattice vibration effects on the band structure. These results lay the foundation for further design and use of lead- and tin-based perovskites for spintronic applications.

Metal halide perovskites have been intensively studied due to their excellent optoelectronic performance[1], but have only started to gain attention as promising materials for spintronic applications[2]. On top of their unique properties such as great defect tolerance and high absorption coefficient, efficient spin injection has been demonstrated through optical pumping methods[3, 4, 5], and long spin coherence times from hundreds of picoseconds to over one nanosecond have been observed at cryogenic temperatures[4, 5, 6, 7, 8]. Furthermore, the Lande g-factor, an important parameter for spin manipulation via external fields, has been investigated in detail in various inorganic and hybrid perovskite systems through ultrafast optical methods[4, 5, 6, 7, 8, 9], and its values can be accurately modeled based on the perovskite's band structure parameters, especially the bandgap and spin-orbit coupling (SOC) splitting energy[9, 10, 11]. However, most research on spin physics in perovskite semiconductors has focused on conventional lead-based systems[3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14]. Their tin-based counterparts, which were synthesized recently with reasonably high quality[15] and showed top-tier photovoltaic performance in tandem solar cells[1, 15], also provide a fertile playground to explore spin properties. First, the tin perovskites have a smaller bandgap and SOC gap than the lead perovskites[16], and can thus test previous models of the perovskite band structure such as the \(\mathbf{k}\cdot\mathbf{p}\) model of the Lande g-factors[10, 11]. Second, the smaller SOC might reduce the spin relaxation, leading to a longer spin coherence time, which could potentially be exploited for spin transport[2]. Moreover, the tin perovskites generally have more vacancies and higher trap densities than the lead-based ones [15, 16] due to the easy oxidation of tin cations from Sn\({}^{2+}\) to Sn\({}^{4+}\), so defect physics is likely important in evaluating their spin properties.
In addition, thermal effects on the spin properties also need to be investigated to put forward a perovskite-based spintronic device at room temperature, and a complete understanding of the thermal evolution of the spin states and the spin relaxation in perovskites is still lacking [3, 4, 5, 6, 7, 12, 13, 14, 17, 18, 19]. For instance, a strong temperature dependence of the Lande g-factors in a pure-lead perovskite has been observed [4]. Nevertheless, the origin of such an effect has not been analyzed in detail. In this work, we present a comparative study of the spin coherence and spin relaxation in two hybrid organic-inorganic perovskite thin films MA\({}_{0.3}\)FA\({}_{0.7}\)PbI\({}_{3}\) and MA\({}_{0.3}\)FA\({}_{0.7}\)Pb\({}_{0.5}\)Sn\({}_{0.5}\)I\({}_{3}\) (abbreviated to Pb- and PbSn-perovskite, respectively, hereafter), where MA = CH\({}_{3}\)NH\({}_{3}\) and FA = (NH\({}_{2}\))\({}_{2}\)CH. Using time-resolved Faraday rotation spectroscopy, we detect long-lived spin relaxation and spin coherence of electrons and holes, from which we extract spin lifetimes, Lande g-factors and the inhomogeneous broadening of the g-factors. The electron g-factors are 3.75 and 4.36 in the Pb- and PbSn-perovskites, which are the largest among the reported g-factors in inorganic or hybrid perovskites. We observe contrasting spin lifetimes between the two samples, suggesting that the spin relaxation is likely due to scattering with defects via the Elliott-Yafet mechanism at low temperatures and that the spin decoherence suffers from g-factor inhomogeneity due to impurities and vacancies. By measuring carrier spin lifetimes at elevated temperatures, we specify possible roles of defects and phonons in the spin relaxation channels. Temperature-dependent experiments show drastic changes of both electron and hole g-factors. Supported by a model, we propose, for the first time, that this effect is dominated by the enhancement of dynamic lattice distortions (lattice vibrations) with increasing temperature, resulting in strong modifications of not only the bandgap but also the interband transition matrix and the SOC gap. Our results provide insights for the development of future hybrid perovskite spintronic materials.

The Pb- and PbSn-perovskite samples are solution-processed polycrystalline thin films. Both compounds have a cubic structure and show no structural phase transition from 90 K to room temperature from X-ray diffraction measurements[20], as expected from the optimized ratio of the organic cations MA and FA in our samples[21, 22]. The cubic-phase perovskites are direct gap semiconductors where the gap is located at the R point of the Brillouin zone[10, 23]. Figure 1a shows a schematic of the band structure and the spin optical selection rules near the band edge. The conduction band minimum (CBM) is formed by the Pb/Sn \(p\)-orbitals, and the valence band maximum (VBM) is formed by the Pb/Sn \(s\)-orbitals and the I \(p\)-orbitals retaining the \(s\) symmetry[24].

Figure 1: **a** Band structure and spin optical selection rules. The conduction band minimum (CBM) is a spin-orbit coupling split-off band of the Pb/Sn \(p\)-orbitals with total angular momentum \(j=1/2\), and the light and heavy electron states (LE and HE) are higher energy bands with total angular momentum \(j=3/2\). The valence band maximum (VBM) is formed by the Pb/Sn \(s\)-orbitals and the I \(p\)-orbitals retaining \(s\) symmetry (\(j=1/2\)). Therefore, the band edges of the perovskites are formed by effective spin 1/2 states, allowing selective pumping of spin-polarized electrons and holes in the system through the absorption of circularly polarized photons. Blue arrow: right circularly polarized light; red arrow: left circularly polarized light. The spin-orbit splitting energy is denoted by \(\Delta\) in the conduction bands and the bandgap is denoted by \(E_{g}\). **b** Absorbance and photoluminescence (PL) spectra of the two samples at 10 K, where arb. u. stands for arbitrary units.

The SOC causes an energy splitting \(\Delta\) in the conduction band and the CBM is an SOC split-off band from the higher energy heavy electron (HE) and light electron (LE) bands[23]. Figure 1b shows the absorption and photoluminescence (PL) spectra for the Pb- and PbSn-perovskites at 10 K, from which the band gaps are determined to be 1.56 and 1.18 eV, respectively. Note the sharp absorption edge, pronounced exciton absorption peak and narrow PL width for the Pb-perovskite, in stark contrast to the results of the PbSn-perovskite. This is due to the presence of Sn\({}^{4+}\) defects and high trap densities in the PbSn sample[15], which also influence the spin lifetime as discussed later. Efficient spin polarization in the perovskites can be achieved by the optical orientation effect, which is based on optical selection rules subject to the band structure of the perovskites[4, 10, 23, 25]. The CBM and VBM of perovskites consist of effective spin 1/2 states, and therefore circularly polarized light can be used to selectively excite spin-polarized electrons and holes due to the conservation of spin angular momentum, similar to the optical spin selection rules in GaAs[25]. The TOC figure shows a schematic of the time-resolved Faraday rotation experiment. The spin optical selection rules enable the injection of spin-polarized electrons and holes via a circularly polarized pump pulse. The subsequent time evolution is monitored by measuring the Faraday rotation of a time-delayed linearly polarized probe pulse. Both the pump and probe beams are at near normal incidence; therefore, the out-of-plane spin component is initialized and detected. An external magnetic field \(B_{\mathrm{V}}\) is applied in the transverse direction (i.e., perpendicular to the laser beams, the Voigt geometry), inducing coherent spin precession about it. The time-resolved Faraday rotation signals for the Pb- and PbSn-perovskites at 4 K are shown in Figure 2a and b. At zero field, the signal lasts over hundreds of picoseconds for both perovskites. With an applied magnetic field \(B_{\mathrm{V}}\), the signal becomes oscillatory, indicating spin precession about \(B_{\mathrm{V}}\). Both the frequency and the decay rate of the precession signals increase with the field strength. The Fourier transform of the time traces reveals two oscillation frequencies, both of which increase linearly with the magnetic field strength, as shown in Figure 2c and d. The time-domain raw data can be fit well with two exponentially decaying cosines: \(\theta_{\mathrm{F}}(t)=\sum_{i=1}^{2}A_{i}\cos(2\pi f_{i}t+\phi_{i})\exp(-t/T_{2,i}^{*})\). We plot the extracted oscillation frequency \(f_{\mathrm{e,h}}\) and the spin decay rate \(1/T_{2,\mathrm{e,h}}^{*}\) in Figure 2e and f, respectively.
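As an illustration of this two-component fit, the following sketch (our own, with synthetic numbers; not the authors' analysis code) fits \(\theta_{\mathrm{F}}(t)\) with two exponentially decaying cosines using SciPy:

```python
# A minimal sketch (our own illustration with synthetic data, not the
# authors' analysis code) of fitting the Faraday-rotation transient with
# two exponentially decaying cosines, as in the expression above.
import numpy as np
from scipy.optimize import curve_fit

def two_cosines(t, A1, f1, phi1, T1, A2, f2, phi2, T2):
    """theta_F(t) = sum_i A_i cos(2*pi*f_i*t + phi_i) exp(-t / T2*_i)."""
    return (A1 * np.cos(2 * np.pi * f1 * t + phi1) * np.exp(-t / T1)
            + A2 * np.cos(2 * np.pi * f2 * t + phi2) * np.exp(-t / T2))

t = np.linspace(0, 600, 1200)  # pump-probe delay in ps
# Synthetic trace: electron- and hole-like components (frequencies in 1/ps).
truth = (1.0, 0.05, 0.0, 200.0, 0.6, 0.015, 0.0, 300.0)
data = two_cosines(t, *truth) + 0.02 * np.random.default_rng(0).normal(size=t.size)

p0 = (1.0, 0.04, 0.0, 150.0, 0.5, 0.02, 0.0, 250.0)  # rough initial guess
popt, _ = curve_fit(two_cosines, t, data, p0=p0)
print("fitted f_1, f_2 (THz):", popt[1], popt[5])
print("fitted T2*_1, T2*_2 (ps):", popt[3], popt[7])
```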
The two oscillation frequencies represent the Larmor precession frequencies of electrons and holes: \(f_{\mathrm{e,h}}=\left|g_{\mathrm{e,h}}\right|\mu_{\mathrm{B}}B_{\mathrm{V}}/h\), where \(g_{\mathrm{e,h}}\) is the electron (hole) g-factor, \(\mu_{\mathrm{B}}\) is the Bohr magneton, \(B_{\mathrm{V}}\) is the magnetic field strength and \(h\) is the Planck constant.

Figure 2: **a-d** Time-resolved Faraday rotation traces and their fast Fourier transform (FFT) spectra under different in-plane magnetic fields for the Pb- and PbSn-perovskites at 4 K, where arb. u. stands for arbitrary units. Curves are offset for clarity. Larmor precession frequency (**e**) and spin dephasing rate \(1/T_{2}^{*}\) (**f**) versus the magnetic field \(B_{V}\) for electrons and holes in the Pb- and PbSn-perovskites, extracted from the time-resolved Faraday rotation data (**a** and **b**). The black marker in panel **f** is the longer lifetime extracted from the double-exponential fit of the time trace for the Pb-perovskite at zero field. Error bars are smaller than the marker sizes in panels **e** and **f**.

From the data in Figure 2e, we obtain the absolute values of the g-factors. We further assign the g-factors and determine their signs (summarized in Table 1) according to the \(\mathbf{k}\cdot\mathbf{p}\) model[10, 11] (see below). In addition, the signs of the g-factors can be obtained from the exciton Zeeman splitting measurement[4, 5, 7, 19] (see Supplementary Note 4). Since the Pb-perovskite has an exciton g-factor of \(g_{\rm X}=+2.16(0.11)\) (Supplementary Note 4) and \(g_{\rm X}=g_{\rm e}+g_{\rm h}\) (Ref.[5]), we get \(g_{\rm e}>0\) and \(g_{\rm h}<0\). Notably, the linear fits of the oscillation frequencies in Figure 2e have zero intercepts with the frequency axis, indicating a negligible exciton exchange field on electrons and holes[4]. The g-factor describes the Zeeman splitting between different spin states at the band edges and is thus determined by the band structure parameters. For the cubic-phase perovskites, a three-level \(\mathbf{k}\cdot\mathbf{p}\) model[10, 11] was developed, giving the analytical expressions \(g_{\rm e}=-\frac{2}{3}+\frac{4}{3}\frac{p^{2}}{m_{0}E_{\rm g}}+\Delta_{\rm rvb}\) and \(g_{\rm h}=2-\frac{4}{3}\frac{p^{2}}{m_{0}}\frac{\Delta}{E_{\rm g}(E_{\rm g}+\Delta)}\), where \(E_{\rm g}\) is the bandgap, \(\Delta\) is the spin-orbit splitting in the conduction bands, \(p\) is the interband matrix element of the momentum operator, \(m_{0}\) is the free-electron mass and \(\Delta_{\rm rvb}\) is the contribution to the electron g-factor from the remote valence bands.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Material & & \(\mathbf{g}\) & \(\Delta g\) & \(\tau_{\rm S}\) (ps) \\ \hline \multirow{2}{*}{Pb} & \(e\) & 3.75 & 0.08 & 671 (33) \\ \cline{2-5} & \(h\) & -1.24 & 0.11 & 751 (42) \\ \hline \multirow{2}{*}{PbSn} & \(e\) & 4.36 & 0.11 & 25.0 (0.1) \\ \cline{2-5} & \(h\) & -0.57 & 0.30 & 171 (2) \\ \hline \end{tabular} \end{table} Table 1: g-factors \(g\), g-factor spread \(\Delta g\), and zero-field spin lifetime \(\tau_{S}\) for electrons and holes in the Pb- and PbSn-perovskites extracted from time-resolved Faraday rotation measurements at 4 K. The errors from the fits for both \(g\) and \(\Delta g\) are smaller than 0.001 and thus not shown. The error for \(\tau_{S}\) is shown in the parentheses following the data.

For the g-factors in lead-based perovskites, a universal dependence on the bandgap has been revealed based on these expressions [11].
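To make the roles of \(E_{\rm g}\) and \(\Delta\) in these expressions explicit, here is a small numeric sketch of our own; the Kane energy value used below is an illustrative placeholder, not a fitted constant from Ref. [11]:

```python
# A small numeric sketch (our own; the Kane energy Ep is an illustrative
# placeholder, not a fitted constant from Ref. [11]) of the k.p
# expressions for the carrier g-factors quoted above. Writing
# p^2/m0 = Ep/2 keeps everything in energy units (eV).
def g_electron(E_g, Ep, delta_rvb=-1.0):
    """g_e = -2/3 + (4/3) * (Ep/2) / E_g + Delta_rvb."""
    return -2.0 / 3.0 + (4.0 / 3.0) * (Ep / 2.0) / E_g + delta_rvb

def g_hole(E_g, Ep, Delta):
    """g_h = 2 - (4/3) * (Ep/2) * Delta / (E_g * (E_g + Delta))."""
    return 2.0 - (4.0 / 3.0) * (Ep / 2.0) * Delta / (E_g * (E_g + Delta))

# Illustrative evaluation at the measured gaps (1.56 eV Pb, 1.18 eV PbSn)
# with the SOC gaps quoted in the text (1.5 eV and 0.7 eV).
for label, E_g, Delta in [("Pb", 1.56, 1.5), ("PbSn", 1.18, 0.7)]:
    print(label, g_electron(E_g, Ep=12.0), g_hole(E_g, Ep=12.0, Delta=Delta))
```

Note how \(g_{\rm h}\) depends on \(\Delta\) while \(g_{\rm e}\) (to leading order) does not, which is why the hole g-factor is the sensitive probe of the SOC gap discussed next.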
By plotting our Pb-perovskite g-factors together with those from Ref. [11] in Figure 3, we find that they follow the same dependence on the bandgap. The electron g-factor of the PbSn-perovskite also obeys the same trend as the lead perovskites, even with its much smaller bandgap and SOC gap. However, the hole g-factor \(g_{\rm h,PbSn}\) deviates from the \(g_{\rm h}\) fit for the lead perovskites, highlighting that the hole g-factor depends sensitively on the SOC splitting energy \(\Delta\), consistent with the \(\mathbf{k}\cdot\mathbf{p}\) model. We find that a reasonable value [26, 27] of \(\Delta=0.7\) eV fits \(g_{\rm h,PbSn}\), in comparison with \(\Delta=1.5\) eV for the lead perovskites. Note that the g-factors in these perovskites are generally larger than in conventional semiconductors such as GaAs, ZnSe and CdTe with similar bandgaps [28, 29, 30, 31, 32]. The \(g_{\rm e}\) values in our samples are the largest among the reported g-factors in inorganic or hybrid perovskites, demonstrating that electron spins in these perovskites can be manipulated with smaller magnetic fields and are thus promising for spintronic applications.

Figure 3: **Bandgap dependence of the electron and hole g-factors for the perovskites.** The red and blue markers are the g-factor values for our samples (\(g_{\rm e,Pb}\), \(g_{\rm h,Pb}\), \(g_{\rm e,PbSn}\) and \(g_{\rm h,PbSn}\)) and the yellow and green markers are the g-factors for various lead-based perovskites with different compositions measured by Kirstein _et al._ in Ref. [11] (\(g_{\rm e,ref}\) and \(g_{\rm h,ref}\)). The \(g_{\rm zz}\) values in Ref. [11] are shown here and the g-factor anisotropy is omitted. The curves are theoretical predictions from the \(\mathbf{k}\cdot\mathbf{p}\) expressions in the main text. \(\Delta\) denotes the spin-orbit splitting energy for the perovskites in the conduction band, whose value is \(\Delta=1.5\) eV for the lead perovskites and \(\Delta=0.7\) eV for the PbSn-perovskite from the best fit.

Figure 2f shows that the spin decay rate increases linearly with the field for both electrons and holes in both samples, indicating that the decay is dominated by ensemble spin dephasing because of the inhomogeneous broadening of \(g\)-factors. The phenomena can be described by \(1/T_{\text{2,e,h}}^{*}=1/\tau_{\text{S,e,h}}+\Delta g_{\text{e,h}}\mu_{\text{B}}B_{\text{V}}/\hbar\) (Refs. [4, 5, 33]), where \(\hbar\) is the reduced Planck constant, \(\tau_{\text{S,e,h}}\) is the spin lifetime at zero field and \(\Delta g_{\text{e,h}}\) is the spread of g-factors. The fits in Figure 2f using this expression give the g-factor broadening parameter \(\Delta g\) and spin lifetime \(\tau_{\text{S}}\), which are summarized in Table 1. The g-factor spread \(\Delta g\) is likely affected by sample quality factors such as impurity level and spatial inhomogeneity. The \(\Delta g_{\text{e,h}}\) of the Pb-perovskite are similar to those reported in Refs. [4, 5]. However, the PbSn-perovskite has larger \(\Delta g\) values than the Pb-perovskite for both electrons and holes, especially a \(\Delta g_{\text{h}}\) value as large as 0.30, which possibly links with the sample's poorer morphology and more defects due to the easy oxidation of Sn\({}^{2+}\) to Sn\({}^{4+}\) and Sn vacancies [16].
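The extraction of \(\tau_{\mathrm{S}}\) and \(\Delta g\) from such data amounts to a straight-line fit of \(1/T_{2}^{*}\) versus \(B_{\mathrm{V}}\). A minimal sketch with made-up numbers (not the measured data) is:

```python
# A minimal sketch (made-up numbers, not the measured data) of extracting
# the zero-field spin lifetime tau_S and the g-factor spread Delta_g from
# the linear field dependence 1/T2* = 1/tau_S + Delta_g * mu_B * B / hbar.
import numpy as np

MU_B = 9.274e-24   # Bohr magneton, J/T
HBAR = 1.0546e-34  # reduced Planck constant, J*s

B = np.array([0.2, 0.4, 0.6, 0.8, 1.0])               # field in T
T2_star = np.array([180.0, 110.0, 80.0, 62.0, 51.0])  # dephasing time in ps

rate = 1.0 / (T2_star * 1e-12)             # dephasing rate in 1/s
slope, intercept = np.polyfit(B, rate, 1)  # linear fit versus B

tau_S = 1.0 / intercept        # zero-field spin lifetime (s)
delta_g = slope * HBAR / MU_B  # dimensionless g-factor spread
print(f"tau_S = {tau_S * 1e12:.0f} ps, Delta_g = {delta_g:.2f}")
```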
The contrasting spin lifetimes \(\tau_{\text{S}}\) between the two perovskites suggest that crystal defects play an important role in determining spin relaxation. Although the Pb-perovskite has larger SOC and thus more efficient spin relaxation is expected, the spin lifetimes for both electrons and holes in the Pb-perovskite are significantly longer compared to those in the PbSn-perovskite at 4 K. Given that the PbSn-perovskite has a much higher defect level [15], we speculate that this is because at low temperatures, the impurity scattering via the Elliott-Yafet mechanism is dominant and leads to much faster spin relaxation in the PbSn system. Moreover, in the Pb-perovskite, the spin lifetime of electrons is comparable to that of holes, in stark contrast to the PbSn-perovskite, where the spin lifetime of holes is more than six times longer than that of electrons. We suspect that this may originate from the hole doping in the PbSn sample due to Sn vacancies [15, 34], leading to reduced scattering of holes with defects (due to screening) compared to that of electrons. To study the temperature dependence of the spin lifetime, we fit the time-resolved Faraday rotation data with double-exponentials at elevated temperatures at zero magnetic field (see Supplementary Figure 2). The short-lived component for the Pb-perovskite, lasting ~30 ps at low temperatures, has weak temperature dependence (see Supplementary Figure 3) and does not exist in an applied transverse magnetic field, similar to previous studies[4, 5]. The longer lifetime (~800 ps at 4 K) drops significantly with increasing temperature and is related to the spin lifetimes of both electrons and holes because their dynamics are similar (see Table 1) and indistinguishable from the fits at zero field. For the PbSn-perovskite, by contrast, both the short and long lifetimes extrapolate well to the high-field \(T_{2}^{*}\) data (see the zero-field data points in Figure 2f), so we identify them as the electron and hole spin lifetimes. The spin lifetimes show strong temperature dependence and diminish with increasing temperature for both samples, as shown in Figure 4. Similar results have been observed in other perovskite systems[4, 5, 6, 7, 13] and were described by the Arrhenius activation-law expression \(\frac{1}{\tau_{\mathrm{S}}(T)}=\frac{1}{\tau_{\mathrm{S}}(T=0)}+w\,\exp(-\frac{\Delta E}{k_{\mathrm{B}}T})\), where \(\tau_{\mathrm{S}}(T=0)\) is the zero-temperature spin lifetime, \(w\) is a constant and \(\Delta E\) is the activation energy. The expression typically represents that spin scattering in the system has a clearly defined energy barrier that needs to be thermally activated. With the same model, \(\Delta E=2.3\) meV is extracted for the Pb-perovskite and \(\Delta E=12.2\) meV is extracted for both electrons and holes in the PbSn-perovskite.

Figure 4: Temperature dependence of the spin lifetime at zero magnetic field for the Pb- and PbSn-perovskites. The solid lines are Arrhenius activation-law fits to the data. The fits yield the activation energy of 2.3 meV for the Pb-perovskite and 12.2 meV for the PbSn-perovskite. Error bars are smaller than the marker size.

This energy scale may be related to thermal excitation out of trapped states [7, 35] or to phonon-induced spin relaxation mechanisms [4, 5, 6, 8, 13]. On the one hand, trap states are easily formed in these solution-processed perovskite samples, especially the Pb and Sn vacancies [34, 35]. On the other hand, transport measurements have indicated that momentum scattering is dominated by acoustic phonon scattering [36] and the longitudinal optical phonon modes have been measured to be ~10 meV (Refs. [37, 38]) in lead perovskites.
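A fit of this activation law to lifetime-versus-temperature data can be sketched as follows (our own illustration with synthetic numbers, not the measured dataset):

```python
# A minimal sketch (synthetic numbers, not the measured dataset) of fitting
# the Arrhenius activation law quoted above to tau_S(T).
import numpy as np
from scipy.optimize import curve_fit

KB_MEV = 0.08617  # Boltzmann constant in meV/K

def rate(T, tau0_ps, w_per_ps, dE_meV):
    """1/tau_S(T) = 1/tau_S(0) + w * exp(-dE / (kB * T)), rates in 1/ps."""
    return 1.0 / tau0_ps + w_per_ps * np.exp(-dE_meV / (KB_MEV * T))

T = np.array([4.0, 10.0, 20.0, 30.0, 50.0, 70.0])  # K
true = (750.0, 0.2, 10.0)                          # tau_S(0), w, Delta_E
tau = 1.0 / rate(T, *true)                         # synthetic lifetimes (ps)

popt, _ = curve_fit(rate, T, 1.0 / tau, p0=(500.0, 0.1, 5.0))
print("tau_S(0) = %.0f ps, Delta_E = %.1f meV" % (popt[0], popt[2]))
```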
Future experimental and theoretical work considering the perovskite's unique band structures and carrier scattering with defects and phonons is needed to clarify these effects. In Figure 5a and b, we further measure the temperature dependence of the g-factors for both perovskites based on the spin precession frequencies under a 0.8 T transverse magnetic field (time traces in Supplementary Figure 2). The g-factors vary dramatically from 4 to 70 K: the electron g-factors decrease by 0.44 and 0.35, while the hole g-factors increase by 0.51 and 0.13 in the Pb- and PbSn-perovskites, respectively. In stark contrast, the electron g-factor only changes by ~0.04 in GaAs and CdTe [39, 40, 41, 42] and ~0.01 in InP [42] and GaN [43] over the same temperature range. To determine the origin of the large g-factor shifts, we first examine the effect of the bandgap change with temperature. Extracted from absorbance spectra, the bandgap for both perovskites changes approximately linearly with temperature from 10 to 70 K (Ref. [20] and Supplementary Figure 4). If temperature-independent values for the SOC gap and the interband matrix element are assumed, the bandgap change alone is estimated to result in a variation of -0.05 in \(g_{\mathrm{e}}\) and 0.05 in \(g_{\mathrm{h}}\) for the Pb-perovskite and -0.18 in \(g_{\mathrm{e}}\) and 0.12 in \(g_{\mathrm{h}}\) for the PbSn-perovskite, which is insufficient to describe the observed giant g-factor shifts, especially for the Pb-perovskite (see Supplementary Note 6 for details). The strong temperature dependence of the g-factors is likely a result of the combined modification not only of the bandgap, but also of the SOC gap and the interband transition matrix elements. By inducing lattice deformations, thermal effects can cause significant changes in the perovskite's band structure. Well-studied mechanisms include lattice thermal expansion[44], octahedral tilting of the inorganic perovskite framework[45, 46, 20], anharmonic lattice vibrations[47, 48, 49, 20], etc. To elucidate these structural effects on the g-factors, we build an empirical \(sp^{3}\) tight-binding model based on the existing parameters for cubic-phase MAPbI\({}_{3}\) (Ref.[24]), which has very similar bandgap and SOC gap values to those of our Pb-perovskite. Details are included in Supplementary Note 6. We find that the dominant contribution results from the lattice vibration effects on the band structure, whereas the lattice expansion contributes a very small portion of the observed g-factor shift (less than 1/5, see Supplementary Figure 5). We calculate the impact on the band structure and the g-factors from the possible thermal vibration modes represented by the dynamic displacements of the lead and iodine atoms.

Figure 5: **a** and **b** Temperature dependence of the electron and hole g-factors, respectively, extracted from the time-resolved Faraday rotation data at 0.8 T (Voigt geometry). **c** and **d** The electron and hole g-factors as a function of the square of the vibration amplitude \(\delta_{z}^{2}\) for the Pb atoms in cubic MAPbI\({}_{3}\), calculated by an empirical tight-binding model and the g-factor expressions in Ref.[11] (details in Supplementary Note 6), where \(a\) is the lattice constant. The inset in panel **d** shows a schematic of the lattice vibration, with the gray ball being the lead atom and the purple balls being the iodine atoms.
An off-centering shift of the lead atom in the PbI\({}_{6}\)-octahedra is depicted in the inset of Figure 5d, and other modes are shown in Supplementary Figure 6. Characterized by the square of the atomic displacement \(\delta_{\mathrm{z}}^{2}\), the growing thermal vibrations cause a substantial increase in the bandgap while decreasing both the SOC gap and the interband transition matrix elements, as shown in Supplementary Figure 6 (g-i). Overall, in Figure 5c and d, when the vibration amplitude reaches as large as 0.02\(a\) (\(a\) is the lattice constant), \(g_{\mathrm{e}}\) decreases by ~0.48 and \(g_{\mathrm{h}}\) increases by ~0.33, comparable to our experimental results. The displacement range up to 0.02\(a\) is reasonably chosen based on the ab initio molecular dynamics simulation in Ref. [47] of the mean square displacement for the Pb atoms in CsPbBr\({}_{3}\) at 150 K, which is about 0.033 Å\({}^{2}\). The value can be linearly extrapolated to 0.015 Å\({}^{2}\) at 70 K, corresponding to \(\sim\)(0.02\(a\))\({}^{2}\) in our Pb-perovskite. At low temperatures, the thermal energy is proportional to the square of the lattice vibration amplitude [47], which in turn induces almost linear shifts of the g-factors with temperature, in excellent agreement with the experimental results. In summary, we have conducted a comparative study of spin coherence, spin dephasing and spin relaxation of electrons and holes in the Pb- and PbSn-perovskites using time-resolved Faraday rotation spectroscopy. From these measurements we have not only determined the Lande g-factors and spin lifetimes but also shed light on their thermal evolution. While the electron and hole g-factors for the Pb-perovskite and the electron g-factor for the PbSn-perovskite follow the theoretical predictions developed for lead perovskites, a modified SOC energy in the model is required to fit the hole g-factor for the PbSn-perovskite. The relatively small bandgap in our samples makes their electron g-factors the largest among the reported values in inorganic or hybrid perovskites. While long-lived subnanosecond spin relaxation and spin coherence of electrons and holes have been observed in the Pb sample, the spin lifetimes in the PbSn-perovskite suffer significantly from its higher defect level despite its smaller SOC. In addition, we have pointed out that the strong temperature dependence of the g-factors likely originates from the lattice thermal vibrations, which greatly modify the bandgap, the SOC gap and the momentum operator transition matrix elements. Our findings lay the foundation for future spintronic applications based on the tin perovskites and provide insights into the unique thermal effects on the spin coherence and g-factors in the perovskite systems.

## Supporting Information

Supplementary Notes on Sample Preparation, Absorption Measurements, Time-resolved Faraday Rotation Measurements, Exciton Zeeman Splitting, Temperature Dependence of Time-resolved Faraday Rotation Data, Thermal Shifts of g-factors in the Perovskites Induced by Lattice Distortions

## Acknowledgements

Samples were prepared at Nanjing University. All optical measurements and calculations were performed at Tsinghua University. L.Y. acknowledges the support from the National Key R&D Program of China (Grant Nos. 2020YFA0308800 and 2021YFA1400100) and the National Natural Science Foundation of China (Grant No. 12074212). Y.X.
was supported by the National Key R&D Program of China (2018YFA0307100 and 2018YFA0305603) and the National Natural Science Foundation of China (12025405 and 11874035). The work of H.T. was supported by the National Natural Science Foundation of China (Grant Nos. 61974063 and U21A2076). H.Z. was also supported by funds from the University of Toronto. ## Competing interests The authors declare no competing interests. ## References * [1] Lin, R. _et al._ All-perovskite tandem solar cells with improved grain surface passivation. _Nature_**603**, 73-78 (2022). * [2] Privitera, A., Righetto, M., Cacialli, F. & Riede, M. K. Perspectives of Organic and Perovskite-Based Spintronics. _Advanced Optical Materials_**9**, 2100215 (2021). * [3] Giovanni, D. _et al._ Highly Spin-Polarized Carrier Dynamics and Ultralarge Photoinduced Magnetization in CH\({}_{3}\)NH\({}_{3}\)PbI\({}_{3}\) Perovskite Thin Films. _Nano Lett._**15**, 1553-1558 (2015). * [4] Odenthal, P. _et al._ Spin-polarized exciton quantum beating in hybrid organic-inorganic perovskites. _Nature Phys_**13**, 894-899 (2017). * [5] Belykh, V. V. _et al._ Coherent spin dynamics of electrons and holes in CsPbBr\({}_{3}\) perovskite crystals. _Nat Commun_**10**, 673 (2019). * [6] Crane, M. J. _et al._ Coherent Spin Precession and Lifetime-Limited Spin Dephasing in CsPbBr\({}_{3}\) Perovskite Nanocrystals. _Nano Lett._**20**, 8626-8633 (2020). * [7] Kirstein, E. _et al._ Lead-Dominated Hyperfine Interaction Impacting the Carrier Spin Dynamics in Halide Perovskites. _Advanced Materials_**34**, 2105263 (2022). * [8] Kirstein, E. _et al._ Coherent Spin Dynamics of Electrons in Two-Dimensional (PEA)\({}_{2}\)PbI\({}_{4}\) Perovskites. _Nano Lett._**23**, 205-212 (2023). * [9] Huynh, U. N. _et al._ Transient quantum beatings of trions in hybrid organic tri-iodine perovskite single crystal. _Nat Commun_**13**, 1428 (2022). * [10] Yu, Z. G. Effective-mass model and magneto-optical properties in hybrid perovskites. _Sci Rep_**6**, 28576 (2016). * [11] Kirstein, E. _et al._ The Lande factors of electrons and holes in lead halide perovskites: universal dependence on the band gap. _Nat Commun_**13**, 3062 (2022). * [12] Zhao, W. _et al._ Transient circular dichroism and exciton spin dynamics in all-inorganic halide perovskites. _Nat Commun_**11**, 5665 (2020). * [13] Strohmair, S. _et al._ Spin Polarization Dynamics of Free Charge Carriers in CsPbI\({}_{3}\) Nanocrystals. _Nano Lett._**20**, 4724-4730 (2020). * [14] Liang, W. _et al._ Efficient Optical Orientation and Slow Spin Relaxation in Lead-Free CsSnBr\({}_{3}\) Perovskite Nanocrystals. _ACS Energy Lett._**6**, 1670-1676 (2021). * [15] Lin, R. _et al._ Monolithic all-perovskite tandem solar cells with 24.8% efficiency exploiting comproportionation to suppress Sn(ii) oxidation in precursor ink. _Nat Energy_**4**, 864-873 (2019). * [16] Gu, S. _et al._ Tin and Mixed Lead-Tin Halide Perovskite Solar Cells: Progress and their Application in Tandem Solar Cells. _Advanced Materials_**32**, 1907392 (2020). * [17] Yu, Z.-G. & Li, Y. S. Unraveling the Spin Relaxation Mechanism in Hybrid Organic-Inorganic Perovskites. _J. Phys. Chem. C_**123**, 14701-14706 (2019). * [18] Zhou, M., Sarmiento, J. S., Fei, C., Zhang, X. & Wang, H. Effect of Composition on the Spin Relaxation of Lead Halide Perovskites. _J. Phys. Chem. Lett._**11**, 1502-1507 (2020). * [19] Kirstein, E. _et al._ Spin Dynamics of Electrons and Holes Interacting with Nuclei in MAPbI\({}_{3}\) Perovskite Single Crystals. 
_ACS Photonics_**9**, 1375-1384 (2022). * [20] Zhang, H. _et al._ Revealing unusual bandgap shifts with temperature and bandgap renormalization effect in phase-stabilized metal halide perovskites. Preprint at [https://doi.org/10.48550/arXiv.2308.11104](https://doi.org/10.48550/arXiv.2308.11104) (2023). * [21] Rajagopal, A., Stoddard, R. J., Hillhouse, H. W. & Jen, A. K.-Y. On understanding bandgap bowing and optoelectronic quality in Pb-Sn alloy hybrid perovskites. _J. Mater. Chem. A_**7**, 16285-16293 (2019). * [22] Lee, J.-W., Tan, S., Seok, S. I., Yang, Y. & Park, N.-G. Rethinking the A cation in halide perovskites. _Science_**375**, eabj1186 (2022). * [23] Even, J., Pedesseau, L., Jancu, J.-M. & Katan, C. Importance of Spin-Orbit Coupling in Hybrid Organic/Inorganic Perovskites for Photovoltaic Applications. _J. Phys. Chem. Lett._**4**, 2999-3005 (2013). * [24] Boyer-Richard, S. _et al._ Symmetry-Based Tight Binding Modeling of Halide Perovskite Semiconductors. _J. Phys. Chem. Lett._**7**, 3833-3840 (2016). * [25] Zutic, I., Fabian, J. & Das Sarma, S. Spintronics: Fundamentals and applications. _Rev. Mod. Phys._**76**, 323-410 (2004). * [26] Mosconi, E., Umari, P. & Angelis, F. D. Electronic and optical properties of mixed Sn-Pb organohalide perovskites: a first principles investigation. _J. Mater. Chem. A_**3**, 9208-9215 (2015). * [27] Umari, P., Mosconi, E. & De Angelis, F. Relativistic GW calculations on CH\({}_{3}\)NH\({}_{3}\)PbI\({}_{3}\) and CH\({}_{3}\)NH\({}_{3}\)SnI\({}_{3}\) Perovskites for Solar Cell Applications. _Sci Rep_**4**, 4467 (2014). * [28] Yugova, I. A. _et al._ Universal behavior of the electron \(g\) factor in GaAs/Al\({}_{x}\)Ga\({}_{1-x}\)As quantum wells. _Phys. Rev. B_**75**, 245302 (2007). * [29] Toft, I. & Phillips, R. T. Hole \(g\) factors in GaAs quantum dots from the angular dependence of the spin fine structure. _Phys. Rev. B_**76**, 033301 (2007). * [30] Willatzen, M., Cardona, M. & Christensen, N. E. Spin-orbit coupling parameters and electron g factor of II-VI zinc-blende materials. _Phys. Rev. B_**51**, 17992-17994 (1995). * [31] Poltavtsev, S. V. _et al._ In-plane anisotropy of the hole \(g\) factor in CdTe/(Cd,Mg)Te quantum wells studied by spin-dependent photon echoes. _Phys. Rev. Research_**2**, 023160 (2020). * [32] Ji, H. _et al._ Long spin-flip time and large Zeeman splitting of holes in type-II ZnTe/ZnSe submonolayer quantum dots. _Journal of Applied Physics_**124**, 144306 (2018). * [33] Yang, L. _et al._ Spin Coherence and Dephasing of Localized Electrons in Monolayer MoS\({}_{2}\). _Nano Lett._**15**, 8250-8254 (2015). * [34] Chung, I. _et al._ CsSnI\({}_{3}\): Semiconductor or Metal? High Electrical Conductivity and Strong Near-Infrared Photoluminescence from a Single Material. High Hole Mobility and Phase-Transitions. _J. Am. Chem. Soc._**134**, 8579-8587 (2012). * [35] Jin, H. _et al._ It's a trap! On the nature of localised states and charge trapping in lead halide perovskites. _Mater. Horiz._**7**, 397-410 (2020). * [36] Oga, H., Saeki, A., Ogomi, Y., Hayase, S. & Seki, S. Improved Understanding of the Electronic and Energetic Landscapes of Perovskite Solar Cells: High Local Charge Carrier Mobility, Reduced Recombination, and Extremely Shallow Traps. _J. Am. Chem. Soc._**136**, 13818-13825 (2014). * [37] Wright, A. D. _et al._ Electron-phonon coupling in hybrid lead halide perovskites. _Nat Commun_**7**, 11755 (2016). * [38] Sendner, M. _et al._ Optical phonons in methylammonium lead halide perovskites and implications for charge transport. _Mater.
Horiz._**3**, 613-620 (2016). * [39] Oestreich, M. & Ruhle, W. W. Temperature Dependence of the Electron Lande \(g\) Factor in GaAs. _Phys. Rev. Lett._**74**, 2315-2318 (1995). * [40] Hubner, J., Dohrmann, S., Hagele, D. & Oestreich, M. Temperature-dependent electron Lande \(g\) factor and the interband matrix element of GaAs. _Phys. Rev. B_**79**, 193307 (2009). * [41] Hohage, P. E., Bacher, G., Reuter, D. & Wieck, A. D. Coherent spin oscillations in bulk GaAs at room temperature. _Appl. Phys. Lett._**89**, 231101 (2006). * [42] Oestreich, M. _et al._ Temperature and density dependence of the electron Lande g factor in semiconductors. _Phys. Rev. B_**53**, 7911-7916 (1996). * [43] Buß, J. H., Schupp, T., As, D. J., Hagele, D. & Rudolph, J. Temperature dependence of the electron Lande \(g\)-factor in cubic GaN. _Journal of Applied Physics_**118**, 225701 (2015). * [44] Dar, M. I. _et al._ Origin of unusual bandgap shift and dual emission in organic-inorganic lead halide perovskites. _Science Advances_**2**, e1601156 (2016). * [45] Whitfield, P. S. _et al._ Structures, Phase Transitions and Tricritical Behavior of the Hybrid Perovskite Methyl Ammonium Lead Iodide. _Sci Rep_**6**, 35685 (2016). * [46] Prasanna, R. _et al._ Band Gap Tuning via Lattice Contraction and Octahedral Tilting in Perovskite Materials for Photovoltaics. _J. Am. Chem. Soc._**139**, 11117-11124 (2017). * [47] Lanigan-Atkins, T. _et al._ Two-dimensional overdamped fluctuations of the soft perovskite lattice in CsPbBr\({}_{3}\). _Nat. Mater._**20**, 977-983 (2021). * [48] Patrick, C. E., Jacobsen, K. W. & Thygesen, K. S. Anharmonic stabilization and band gap renormalization in the perovskite CsSnI\({}_{3}\). _Phys. Rev. B_**92**, 201205 (2015). * [49] Wiktor, J., Rothlisberger, U. & Pasquarello, A. Predictive Determination of Band Gaps of Inorganic Halide Perovskites. _J. Phys. Chem. Lett._**8**, 5507-5512 (2017).

**Supporting Information**

**Supplementary Note 1: Sample Preparation**

The samples are solution-processed MA\({}_{0.3}\)FA\({}_{0.7}\)PbI\({}_{3}\) and MA\({}_{0.3}\)FA\({}_{0.7}\)Pb\({}_{0.5}\)Sn\({}_{0.5}\)I\({}_{3}\) perovskite thin films. Both samples keep a cubic perovskite structure from 90 to 340 K based on our X-ray diffraction, photoluminescence and absorbance measurements[1]. The thicknesses of the Pb- and PbSn-perovskites are 1.0 and 1.2 \(\upmu\)m, respectively. The typical grain size for both perovskite samples is 1 \(\upmu\)m. The PbSn-perovskite was prepared with a tin-reduced precursor solution strategy to effectively prevent the oxidation of Sn\({}^{2+}\). As characterized in Ref.[2], the synthesis strategy of the PbSn-perovskite significantly reduces both the trap density and the hole density to half of the value of the control sample. The PbSn-perovskite has achieved a record power conversion efficiency when implemented in an all-perovskite tandem solar cell[2].

## Supplementary Note 2: Absorption Measurements

The absorbance spectrum was measured in a closed-cycle cryostat equipped with a superconducting magnet. The white light from a stabilized quartz tungsten-halogen broadband light source was directed to and focused on the sample using free space optics. The circularly polarized white light was achieved by letting the beam pass a broadband polarizer and a quarter-waveplate whose fast axis is 45\({}^{\circ}\) rotated from the polarizer axis.
The light beam was aligned with the magnetic field direction so that the differences between the light absorption with opposite helicities caused by the Zeeman splitting can be measured. The transmitted light was collected with an optical fiber coupled to a spectrometer. We used a 1200 grooves/mm grating blazed at 500 nm to disperse the light and analyzed the spectrum with an EMCCD camera (for the measurements of the Pb-perovskite). The system has a resolution of 0.02 nm. The absorbance spectrum of the PbSn-perovskite sample was measured with the same experimental setup except with a single-point InGaAs detector. The EMCCD camera provides exceptional sensitivity and noise performance (~single electron) compared to the InGaAs detector (~\(10^{3}\) electrons), which makes the Zeeman splitting measurement on the Pb-perovskite possible. We also measured the temperature dependence of the absorbance for both perovskite samples to trace their bandgap shift with temperature.

## Supplementary Note 3: Time-Resolved Faraday Rotation Measurements

The time-resolved Faraday rotation experiment was conducted with a wavelength-tunable 80 MHz femtosecond Ti:Sapphire oscillator. The laser beam was split into the pump and probe beam paths. The pump beam was modulated between right- and left-circularly polarized states by a photo-elastic modulator to facilitate lock-in measurements. The time-resolved Faraday rotation experiment was set up around the same cryostat as in the absorbance measurements but in the Voigt geometry, where the laser beam propagation direction is normal to the sample plane and the magnetic field is in the sample plane. The laser was tuned to the wavelength near the bandgap (~817 nm for the Pb-perovskite, ~1020 nm for the PbSn-perovskite). Because the bandgap of the perovskites is sensitive to temperature, we tuned the laser wavelength accordingly based on our absorbance measurements to trace the bandgap shift. Both the pump and probe beams were focused on the sample using the same lens with a ~80 \(\upmu\)m spot (pump fluence ~0.2 \(\upmu\)J cm\({}^{-2}\)).

## Supplementary Note 4: Exciton Zeeman Splitting

Under a longitudinal magnetic field \(B_{\mathrm{F}}\) (the subscript F stands for the Faraday geometry, i.e., the field is aligned with the light beam propagation direction), the energies of the spin-up and spin-down states move in opposite directions, resulting in a difference in the absorption between left- and right-circularly polarized light near the band edge. This difference is attributed to the exciton Zeeman splitting, which is shown in Supplementary Figure 1 by plotting the absorption onset of light with opposite helicities at \(B_{\mathrm{F}}=6\) T in the Pb-perovskite. The splitting energy \(\Delta E\) changes linearly with the magnetic field \(B_{\mathrm{F}}\) (inset), corresponding to the expression \(\Delta E=g_{\mathrm{X}}\mu_{\mathrm{B}}B_{\mathrm{F}}\), where \(g_{\mathrm{X}}\) is the exciton \(g\)-factor and \(\mu_{\mathrm{B}}\) is the Bohr magneton. We extract the exciton \(g\)-factor \(g_{\mathrm{X}}=2.16(0.11)\) from the slope and confirm the positive sign of \(g_{\mathrm{X}}\) by explicitly comparing the direction of the magnetic field with the helicity of the light. This \(g_{\mathrm{X}}\) value confirms the recent finding of a weak dispersion of the exciton \(g\)-factor with the bandgap energy in lead perovskites[3].
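For scale, inserting the extracted value into this expression (a back-of-the-envelope check, using \(\mu_{\mathrm{B}}\approx 57.88\ \upmu\mathrm{eV\,T^{-1}}\)) gives the splitting at the highest field used:

\[\Delta E=g_{\mathrm{X}}\mu_{\mathrm{B}}B_{\mathrm{F}}\approx 2.16\times 57.88\ \upmu\mathrm{eV\,T^{-1}}\times 6\ \mathrm{T}\approx 0.75\ \mathrm{meV}.\]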
We do not observe an obvious Zeeman splitting in the PbSn-perovskite due to the much broader absorption edge (Figure 1b in the main text) and the much lower signal-to-noise ratio of the InGaAs detector used in the PbSn-perovskite absorption measurements compared with the case of the Pb-perovskite (see Supplementary Note 2).

## Supplementary Note 5: Temperature Dependence of Time-Resolved Faraday Rotation Data

**Supplementary Figure 3** Temperature dependence of the short lifetime \(\tau_{\text{short}}\) for the Pb-perovskite.

## Supplementary Note 6: Thermal Shifts of g-Factors in the Perovskites Induced by Lattice Distortions

The analytical expressions in the main text for the \(g\)-factors show how they are connected to the bandgap, the SOC gap and the interband transition matrix elements. As shown in Supplementary Figure 4, the measured bandgap of the Pb- and the PbSn-perovskites increases with temperature from 10 to 70 K with rates of roughly 0.25 and 0.61 meV K\({}^{-1}\), respectively. However, the bandgap shifts alone are insufficient to account for the huge \(g\)-factor changes in this temperature range. For example, in the Pb-perovskite, the bandgap would need to shift at least eight times faster with temperature to achieve the measured change in the \(g\)-factors while keeping all the other parameters fixed. We propose that not only the bandgap but also the SOC gap and the interband transition matrix elements vary significantly with the temperature, and the thermal lattice distortions are the main source causing the band structure shift. To support our claims, we build an empirical \(sp^{3}\) tight-binding model based on the parameters given in Ref. [4] for cubic MAPbI\({}_{3}\), and the matrix elements of the momentum operator are calculated with a standard approach [5; 6]. We consider the influence of the lattice distortions on the band structure and the interband transition matrix. First, we examine the lattice dilation effects. The thermal lattice expansion coefficient is measured to be 1.40\(\times\)10\({}^{-5}\) and 0.72\(\times\)10\({}^{-5}\) K\({}^{-1}\) for the Pb and PbSn samples, respectively, from X-ray diffraction measurements, resulting in a lattice expansion of 0.08% and 0.04% from 10 to 70 K. In the tight-binding calculations, the lattice expansion reduces the transfer matrix elements between different atoms, which are inversely proportional to the square of the lattice constant [7; 8]. Supplementary Figure 5 demonstrates the calculated results for thermal lattice expansion up to ~0.1%. The bandgap changes by less than half of the experimental value and, especially, the shifts in \(g_{\rm e}\) and \(g_{\rm h}\) are much smaller than the experimental values in the Pb-perovskite (Figure 5 in the main text). Note that the SOC gap (not shown) changes marginally in this lattice expansion range. Therefore, we instead consider the lattice vibration effects while retaining the equilibrium cubic perovskite structure.

**Supplementary Figure 6 Lattice vibration effects on the band structure of MAPbI\({}_{3}\) by the tight-binding model.** (a-c) Schematics of the three vibration modes we consider in the calculation, where \(\delta_{z}\) denotes the displacement of the corresponding atoms along the \(z\) direction. The three modes are the vibration of the lead atom Pb (a), the iodine atoms on the x-axis I\({}_{x}\) (b) and the iodine atoms on the z-axis I\({}_{z}\) (c). (d-f) Band structures along the M\({}_{x}\)-R-M\({}_{y}\) direction under the vibration modes corresponding to (a-c).
(g-i) Vibration-induced shifts in the bandgap \(E_{g}\), the interband transition matrix element \(p\) (the Kane matrix element \(P=\hbar p/m_{0}\)) and the SOC gap \(\Delta\). The direction-averaged Kane parameter (\(P=\hbar/m_{0}\sqrt{(p_{x}^{2}+p_{y}^{2}+p_{z}^{2})/3}\)) is calculated considering that we measure polycrystalline samples.

The perovskites have relatively "soft" lattice structures susceptible to dynamic vibration effects[9, 10], which cause strong modifications of their band structure. In the cubic-phase perovskites, this dynamic effect of the local symmetry breaking has been shown to originate from the anharmonicity in the potential energy surface and is associated with the soft phonon modes at high temperatures[11, 12]. Previous molecular dynamics simulations on inorganic perovskites[10, 11, 12] showed that the thermal vibrations could distort the lattice bond angle by more than 10\({}^{\circ}\) at room temperature, and the bandgap renormalization due to the electron-phonon coupling was substantial enough to cause bandgap shifts of hundreds of meV. Because the state-of-the-art treatments of the anharmonic effects on the band structure require sophisticated first-principles calculations[13], we instead provide a simple picture of the lattice vibration effects on the \(g\)-factors based on the tight-binding model. We examine three representative vibration modes by displacing the lead and two inequivalent iodine atoms (in the x and z directions relative to the Pb atom) in the PbI\({}_{6}\)-octahedra along the z-direction and schematically display them in Supplementary Figure 6(a-c). Supplementary Figure 6(d-f) show their corresponding band structures near the R point, where the bandgap is located. Due to the broken inversion symmetry induced by these distortion modes, the Rashba splitting occurs near the band edges (except along the R-M\({}_{z}\) direction). Note that the four-fold rotational symmetry about the z-axis is retained in the cases shown in (a,d;c,f), but broken in the case shown in (b,e). In all three modes, when the vibration grows, the bandgap increases while the SOC gap and the transition matrix element decrease, as shown in Supplementary Figure 6(g-i), all pointing towards the measured shifting trends of \(g_{\rm e}\) and \(g_{\rm h}\) considering the expressions of the \(g\)-factors in the main text. As a result, all three modes cause a decrease of \(g_{\rm e}\) and an increase of \(g_{\rm h}\) (Supplementary Figure 7). By choosing a range up to 0.02\(a\) for \(\delta_{\rm z}\), which is the estimated vibration amplitude for lead atoms at 70 K (reason stated in the main text), the mode with lead displacements induces shifts of about \(-0.48\) in \(g_{\rm e}\) and \(+0.33\) in \(g_{\rm h}\), in good agreement with the experimental results. Although the vibrations of the iodine atoms induce smaller shifts in the \(g\)-factors, their larger vibration amplitudes due to the smaller atomic mass can compensate for this and cause \(g\)-factor shifts similar to those from the lead-atom vibrations. Note that in GaAs the mechanism of the electron \(g\)-factor shift with temperature is still under debate. The experimental observations of the \(g\)-factors contradicted the initial prediction of the \(\mathbf{k}\cdot\mathbf{p}\) theory when only the bandgap change with temperature was considered[14]. It turned out that the contributions from the temperature dependence of the interband matrix element due to the dynamic lattice, as well as from the remote bands and the non-parabolicity of the conduction band, are also important[15, 16].
A 5.4% decrease of the Kane energy was required to fit the experimental g-factor from 2.6 K to room temperature[15].
2304.14826
We both think you did wrong -- How agreement shapes and is shaped by indirect reciprocity
Humans judge each other's actions, which at least partly functions to detect and deter cheating and to enable helpfulness in an indirect reciprocity fashion. However, most forms of judging do not only concern the action itself, but also the moral status of the receiving individual (to deter cheating it must be morally acceptable to withhold help from cheaters). This is a problem when not everybody agrees who is good and who is bad. Although it has been widely acknowledged that disagreement may exist and that it can be detrimental for indirect reciprocity, the details of this crucial feature of moral judgments have never been studied in depth. We show that even when everybody assesses individually (aka privately), some moral judgment systems (aka norms) can lead to high levels of agreement. We give a detailed account of the mechanisms which cause it, and we show how to predict agreement analytically, without requiring agent-based simulations, and for any observation rate. Finally, we show that agreement may increase or decrease reputations and therefore how much helpfulness (aka cooperation) occurs.
Marcus Krellner, The Anh Han
2023-04-22T10:14:19Z
http://arxiv.org/abs/2304.14826v1
# We both think you did wrong - How agreement shapes and is shaped by indirect reciprocity - DRAFT

## Abstract

Humans judge each other's actions, which at least partly functions to detect and deter cheating and to enable helpfulness in an indirect reciprocity fashion. However, most forms of judging do not only concern the action itself, but also the moral status of the receiving individual (to deter cheating it must be morally acceptable to withhold help from cheaters). This is a problem when not everybody agrees who is good and who is bad. Although it has been widely acknowledged that disagreement may exist and that it can be detrimental for indirect reciprocity, the details of this crucial feature of moral judgments have never been studied in depth. We show that even when everybody assesses individually (aka privately), some moral judgment systems (aka norms) can lead to high levels of agreement. We give a detailed account of the mechanisms which cause it, and we show how to predict agreement analytically, without requiring agent-based simulations, and for any observation rate. Finally, we show that agreement may increase or decrease reputations and therefore how much helpfulness (aka cooperation) occurs.

**Keywords:** cooperation, evolutionary game theory, indirect reciprocity, donation game, private assessments, analytical predictions

## 1 Introduction

From simplest replicators, evolution has worked its wonders, and today we marvel at the success, diversity and complexity of life. Many wonders of life were enabled by cooperation. Cells work together to form multicellular organisms, former bacteria serve as the powerplants of cells in exchange for eternal protection from an outside world, and individual organisms work together in communities of thousands and millions to become the most widespread forms of life there are: ants and wasps and, in a similar but different way, humans. Indirect reciprocity (IR) is one of the few mechanisms to facilitate cooperation among self-interested agents (Nowak and Sigmund, 2005, Rand and Nowak, 2013). You help me, without receiving any direct returns (that is, in that moment you behave altruistically). But you will be rewarded by others in the future, because others have observed your behavior and will like you for it. That is, they judge your action and form an opinion about you, and because they like you, they decide to help you. Indirect reciprocity allows cooperation (aka continued help amongst helpers) in large groups of unrelated individuals, even if they do not interact enough to establish direct reciprocity (Schmid et al., 2022). Hence, indirect reciprocity has been the focus of many influential studies in evolutionary game theory (Nowak and Sigmund (1998a), Ohtsuki and Iwasa (2004), see also Sigmund (2016)) and is even considered an important foundation of human morality (Nowak and Sigmund, 2005). Traditionally, studies simplified the mechanism of IR, assuming that judgments are unanimous (also called public assessments). More realistically, everybody would make their own judgments and therefore have their own opinions (aka private assessments). When the public assessment assumption is removed, the dynamics and outcomes of IR models change (Brandt and Sigmund, 2004, Okada, 2020b), often for the worse for the evolution of cooperation (Hilbe et al., 2018, Uchida and Sasaki, 2013). The main reason is the emergence and spreading of disagreements about someone's reputation, which had not been possible with public assessments.
Disagreements can significantly disturb IR strategies that are stable under the public assessment regime, because they hinder a universal principle: if you withhold help towards somebody bad, this should not be judged as bad\({}^{1}\) (Ohtsuki and Iwasa, 2006, Panchanathan and Boyd, 2003). It is problematic to apply this rule if potential helpers and observers may disagree. You withhold help from me, since you do not like me. But some observers like me, so they will disapprove of your action. So, if there is disagreement about my reputation, your reputation can get tainted. In addition, a disagreement can cause further disagreements. If two observers disagree about me, they may also judge you differently for withholding help. That is, they now disagree about another person's reputation. Even a single disagreement can cascade through the population, even if no further errors occur, so that all opinions become bad or essentially random (Hilbe et al., 2018). Private assessment and disagreements are not only detrimental for IR strategies, but also lead to new challenges for IR research. Whereas for public assessment there had long been exhaustive investigations of hundreds of strategies (Ohtsuki and Iwasa, 2004, Santos et al., 2021) using analytical models, something similar for private assessment was only achieved in recent years (Okada, 2020, Perret et al., 2021). And these models still require restrictive assumptions, namely that, in an infinite population, only a finite number of players observe. That is, the observation probability \(\psi\), and hence also the fraction of opinions changed by a single interaction, is required to be negligible (\(\psi\to 0\)). Analytical models, and most models of IR in general (Okada, 2020), consider opinions as binary. That is, an individual is judged as either good (1) or bad (0). To understand the state of opinions in a population, it is important to know the average reputation \(r\): the probability of a random opinion being good, as well as the average probability of an agreement \(a\): the probability that two randomly selected players have the same opinion about another random player. The models mentioned above only considered \(r\) (hence we shall call them _R-models_ for short). They assume that, when the observation probability is very small (\(\psi\to 0\)), the probability of agreement \(a\) only depends on \(r\) and is given by \(\widehat{a}=r^{2}+(1-r)^{2}\) (i.e. the probability that two random opinions are either both good or both bad) (Okada et al., 2018). In theory, \(a\) can be much higher. The population could, for example, consist of two groups, one with perfect reputation, the other with an abysmal one, see case 3 in Figure 1. The average reputation is still \(r=0.6\). But all opinions about a player of the unlucky group are bad, which means that the opinions of any two players about such a player will always be the same. This is similarly true for the lucky group, about which all opinions are good. In case 3, average agreement is therefore 100%. There are of course also intermediate states, such as case 2. The left side of the figure shows the entire agreement-reputation space and how agreement can range anywhere between \(\widehat{a}\) and 1. Note that \(0.5\leq\widehat{a}\leq 1\) (since \(0\leq r\leq 1\)). There are obvious mechanisms that cause disagreement and push the state of the population towards \(\widehat{a}\), as discussed above.
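For the example of Figure 1, where the average reputation is \(r=0.6\), this baseline evaluates to

\[\widehat{a}=0.6^{2}+0.4^{2}=0.52,\]

while the two-group extreme of case 3 reaches \(a=1\) at the same average reputation.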
**Figure 1. a) Two dimensions of global opinion state.** Graph shows the agreement-reputation space, with the solid lines \(a=1\) and \(a=\widehat{a}\) indicating the outer borders of possible agreement values in grey. (Subsequently, values above \(a=\widehat{a}+0.05\) will be considered as significant additional agreement, since values very close to \(\widehat{a}\) could conceivably be the result of noise or finite properties of the simulations, or be just too insignificant to make a difference). Case 1 has minimal agreement for the specific global average reputation \(r\). Case 3 has maximal agreement for the same \(r\) and case 2 an intermediate one for the same \(r\) still. **b) Model of simplified image matrices.** We simplify the image matrices for the three cases by removing all information about whose opinions they are and which specific player they are about. We only consider whether the opinion is about a player of group 1 or group 2. The result can be imagined as two containers (left and right, separated by the double line) holding good opinions (dark blue areas). We may call the containers or groups "lucky" on the left and "unlucky" on the right. In case 1, both containers are filled to the same height, representing that the reputations of the lucky and unlucky group, \(r_{L}\) and \(r_{U}\), are equal to the global average. In case 2, some good opinions from the unlucky container were 'poured' into the lucky container, keeping \(r\) unchanged since we do not add or remove any good opinions. In case 3, all remaining good opinions were poured as well, filling the lucky container exactly to the brim. It is apparent that in case 3, all players of the lucky group have only good opinions about them, therefore agreement about a player of this group is always \(100\%\). The same applies to the unlucky group, which has only bad opinions about them. **c) Size of the groups and precise state of agreement \(d\).** The complete transition from case 1 to case 3 can only be achieved under a specific condition. The size of the groups (the width of the containers) must be \(r\) and \(1-r\), respectively. With this, the areas \(A_{L}\) and \(A_{U}\) are of equal size. That means, for minimal agreement as in case 1, there are exactly as many good opinions about the unlucky group as there are bad opinions about the lucky one. Therefore, they can be entirely exchanged. The central feature of the analytical model introduced in this paper is the parameter \(d\), which describes the percentage of good opinions from area \(A_{U}\) exchanged with bad opinions from area \(A_{L}\). This parameter is independent of \(r\) and can obtain any value in the interval \([0,1]\) (note that both areas disappear for the extreme cases \(r=0\) and \(r=1\), which is why \(d\) is arbitrary in these cases). For details see equations 2 and 4.

However, there could be other mechanisms that increase agreement. For example, if you donated your help, observers may not care about their current opinion about the recipient and like you either way (which is the case for image scoring (Nowak and Sigmund, 1998a) or standing (Leimar and Hammerstein, 2001, Sugden, 1986)). Hence, aside from a few who misperceive your action due to noise, all observers will have the same opinion about you. The existence of _additional agreement_, i.e.
states where agreement is significantly above \(\widehat{a}\), was already implicitly shown by the work of Fujimoto and Ohtsuki (2022), who investigated the exact distributions of reputations for four strategies. They found that even infinite populations sometimes have two or even up to an infinite number of distinct reputation states that a player could be in. It was not stated by the authors, but any split of players into groups with different reputation states entails that the opinions of this population have additional agreement (see a proof sketch in the Supplementary Information (**SI**)). Nevertheless, their seminal work included an analytical model to predict reputation distributions exactly, which will likely be foundational for future analytical models of IR. Yet, two problems remain. The number of strategies it can be applied to is small, and the model is limited to the condition of full observation (i.e., \(\psi=1\)). In conclusion, in the literature so far, there have been only two types of analytical models of private assessment, and they are all limited to either \(\psi=1\) or \(\psi\to 0\). Both cases are extreme ends of possible natural conditions. Full observation seems impossible due to physical constraints. Even if all interactions between people were public, there are still limitations to the attention observers can pay. Very low observation rates, on the other hand, make IR less effective in the short term, and may cause it to be replaced by direct reciprocity (Schmid et al., 2021). As Fujimoto and Ohtsuki (2022) state themselves, a model for intermediate observation rates is necessary, especially to study IR under private assessment in an exhaustive fashion, like the defining work of Ohtsuki and Iwasa (2004). Overall, our investigation fills two major gaps in the literature. First, we provide the first exhaustive investigation of agreement in reputation dynamics under private assessment. We test whether additional agreement is common for high or intermediate observation rates, but also whether the assumption of the R-models always holds for \(\psi\to 0\). We test this with simulations of a vast number of strategies and conditions. Second, we define the first _A-R-model_, which moves beyond the one-dimensional approach of the R-models to capture the two-dimensional space of agreement and reputation. It is capable of predicting the long-term behavior of said simulations. Both contributions advance the understanding of indirect reciprocity under private assessment and lay important groundwork for exhaustive evolutionary investigations.

## 2 Methods

### Simulation

We examine whether the average agreement \(a\) is greater than assumed in the R-model, \(\widehat{a}=r^{2}+(1-r)^{2}\). For that, we run simulations with different strategies and conditions. To limit complexity, following the example of Fujimoto and Ohtsuki (2022), we consider homogeneous populations, i.e. within a simulation all agents follow the same strategy. We have chosen a framework to model many different strategies. The first characteristic of a strategy is its assessment rules, given by the vector \(\alpha\). It defines how an observer judges the action, i.e. cooperation \(C\) or defection \(D\), depending on its opinion about the recipient, i.e. good \(G\) or bad \(B\). An assessment can take one of three forms: the action is approved of (+1), disapproved of (-1) or ignored (0).
If the player approves of an action, their opinion of the donor will be good afterwards (which is the same as saying that it has increased within the limits of the two opinion values, 0 and 1, aka bad and good). If the action is disapproved of, the opinion will be bad. And if the action is ignored, the opinion does not change. The second characteristic of the strategy is given by its action rules \(\beta\). They define what players will do when they meet a recipient they like or dislike. These actions are cooperate (1) or defect (0). In the following, we refer to \(\alpha\) and \(\beta\) as a whole, in the order given below, but we will sometimes refer to specific values or pairs of values, such as \(\alpha_{CG}\): cooperation towards good. With this framework we can model 324 different strategies. For example, the commonly studied strategy "staying" (Okada et al., 2017, Sasaki et al., 2017) is described as follows:

\[staying:\begin{array}{c}\alpha=\{\alpha_{CG},\alpha_{DG},\alpha_{CB},\alpha_ {DB}\}=(1,-1,0,0)\\ \beta=\{\beta_{G},\beta_{B}\}=(1,0)\end{array} \tag{1}\]

Of all the possible strategies of our framework, we study 171 strategies with unique behavior. Each of the possible strategies has a mirror image, which behaves equivalently with regard to cooperation. If we were to label opinion states as blue and red (instead of good and bad), a mirror image would cooperate in the exact same way with the same partners, but would think of them as blue instead of red or vice versa. We obtain such a mirror image of a strategy by exchanging the symbols +1 and -1 in the assessment rules and flipping them for good and bad recipients (\(\alpha_{m}=(x_{1},x_{2},x_{3},x_{4})\) to \(\alpha=(-x_{3},-x_{4},-x_{1},-x_{2})\)) as well as flipping the values of the action rules (\(\beta=(x,y)\) to \(\beta_{m}=(y,x)\)). Note that some strategies are their own mirror image. This focus on unique strategies conveniently leaves us with only three possible action rules: (1,1): unconditional cooperators or AllC, (1,0): conditional cooperators or Coco\({}^{2}\), and (0,0): unconditional defectors or AllD (see SI for examples of mirror images).

Footnote 2: sometimes, somewhat unfortunately, called discriminators.
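The counts of 324 and 171 are easy to verify with a short enumeration. The following Python sketch (with our own helper names, not the simulation code itself) canonicalizes every strategy by its mirror image:

```python
from itertools import product

# alpha = (a_CG, a_DG, a_CB, a_DB), entries in {+1, 0, -1};
# beta = (b_G, b_B), entries in {1, 0}: 3^4 * 2^2 = 324 strategies.
def mirror(strategy):
    alpha, beta = strategy
    m_alpha = (-alpha[2], -alpha[3], -alpha[0], -alpha[1])  # flip signs, swap G/B
    m_beta = (beta[1], beta[0])                             # swap the action rules
    return (m_alpha, m_beta)

strategies = [(a, b) for a in product((1, 0, -1), repeat=4)
              for b in product((1, 0), repeat=2)]
assert len(strategies) == 324

# keep one canonical representative per mirror pair (self-mirrors count once)
unique = {min(s, mirror(s)) for s in strategies}
print(len(unique))  # 171
```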
For our simulations, we consider a well-mixed population of size \(N=100\). Every agent can have either a good (1) or a bad (0) opinion about each other agent. Hence, the state of the population can be described by the \(N\times N\) image matrix \(M(t)\) of the population at time \(t\) (Uchida, 2010). Initially, all entries are filled by a fair coin toss. Other studies have reported that other initial conditions almost never change the outcomes (Hilbe et al., 2018). Time is discrete. In total, each time step consists of three parts. First, a donor \(do\) and a recipient \(re\) are drawn at random from the population. The donor then decides whether to cooperate. Note that these two steps are usually referred to as the donation game. To apply IR, donors base their decision on their action rule and their current opinion about the recipient. In the third part, opinions about the donor are updated due to observations. These observations and the updating can be broken down further. First, for each player (except the donor and recipient) it is decided whether they will observe the interaction. For each player who observes, it is determined whether they observe accurately or whether the observation is altered by a perception error, i.e. with probability \(\epsilon\) they perceive the opposite of what the donor is actually doing. Next, the individually perceived action and the observer's individual opinion about the recipient are combined for the private assessment of the donor. The opinion is updated if the assessment is +1 or -1, but left as is if the assessment is 0. Finally, each observer (whether they made an assessment or not) may change their opinion to a random value if they commit a cognitive error with probability \(\mu\). We study the behavior of the simulation in the long term and extract precise values for the average reputation \(r\) and the average agreement \(a\). In order to ensure accurate and reliable results, the simulations are run until we reach an objective level of confidence about the measured average values (for details see **SI**). Note that in our version of the donation game, the donor and recipient do not update their opinion, i.e. players ignore the information from interactions they are themselves involved in. This means we study pure indirect reciprocity, without any direct reciprocity (Schmid et al., 2021). Because of that, the diagonal of the image matrix is never updated nor used. We therefore exclude it from all computations of averages. For a detailed motivation of this design see **SI**.
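To make this update loop concrete, here is a minimal Python sketch of a single time step (variable names are our own; the actual simulation additionally implements the convergence criteria described in the **SI**):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def time_step(M, alpha, beta, psi=1.0, eps=0.01, mu=0.01):
    """One time step: donation game plus private assessment by observers.
    M is the N-by-N image matrix; M[i, j] is player i's opinion (1/0) about j.
    alpha = (a_CG, a_DG, a_CB, a_DB), beta = (b_G, b_B), as in equation (1)."""
    N = M.shape[0]
    do, re = rng.choice(N, size=2, replace=False)    # draw donor and recipient
    action = beta[0] if M[do, re] == 1 else beta[1]  # 1 = cooperate, 0 = defect
    for obs in range(N):
        if obs == do or obs == re:                   # donor and recipient do not update
            continue
        if rng.random() < psi:                       # observation takes place
            perceived = action if rng.random() >= eps else 1 - action
            col = 0 if perceived == 1 else 1         # perceived C or D
            row = 0 if M[obs, re] == 1 else 2        # recipient currently seen as G or B
            judgment = alpha[col + row]              # +1 approve, -1 disapprove, 0 ignore
            if judgment != 0:
                M[obs, do] = 1 if judgment == 1 else 0
            if rng.random() < mu:                    # cognitive error: random opinion
                M[obs, do] = rng.integers(2)
    return M

# Example: 100 "staying" players, eq. (1), starting from coin-toss opinions.
N = 100
M = rng.integers(2, size=(N, N))
for _ in range(10_000):
    M = time_step(M, alpha=(1, -1, 0, 0), beta=(1, 0))
```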
### Predictive A-R-Model

How could we define a new model which can represent both \(r\) and \(a\), but is otherwise as simple as possible? It has to be able to represent a continuous transition of \(a\), from the baseline of the former prediction \(\widehat{a}\) to its maximum 1 (see the transition from 1 to 3 in figure 1). The transition must be possible for any \(0\leq r\leq 1\) while keeping \(r\) constant. As mentioned, we could first nominally divide the population into two groups, lucky and unlucky. To increase agreement by a minimal step, we could take away a random good opinion about an unlucky player and exchange it with a bad opinion about a lucky player. That is, we decrease the average opinion about unlucky players and increase the average opinion about lucky players. We could continue to do so until lucky players have the average reputation \(r_{L}=1\) and unlucky players have the average reputation \(r_{U}=0\), hence both the agreement about lucky players \(a_{L}\) and the agreement about unlucky players \(a_{U}\) is 1. We can do this only if the number of good opinions about unlucky players \(n_{GU}\) is equal to the number of bad opinions about the lucky players \(n_{BL}\). This is the case if the size of the lucky group (or the percentage of lucky players) is equal to \(r\), hence the size of the unlucky group is \(1-r\). With this assumption, we may define a single value \(d\), which is both the percentage of good opinions about unlucky players stolen and the percentage of bad opinions about lucky players replaced (see figure 1c).

\[r_{L}=r+d(1-r)\quad\&\quad r_{U}=r-dr,\quad 0\leq d\leq 1. \tag{2}\]

With this model we define the two-dimensional space with just two variables. We assume a specific distribution of reputations, namely that there are exactly two groups (in general there could be up to \(N\) groups in a population of \(N\) players). The goal of our model is not to represent the exact distribution, just to represent \(a\) as well as \(r\), for which the two-group scenario provides a minimal model. It allows us to model all agreement states in all reputation states with a single parameter \(d\). Both \(r\) and \(d\) can range between 0 and 1 (whereas the minimum of \(a\) depends on the current \(r\)). Note that \(d\) is undefined for \(r=0\) and \(r=1\) (see equation 3), but it does not need to be. In the case of \(r=0\), \(d\) vanishes from \(r_{U}\) and the size of the lucky group is zero, hence \(n_{GU}\) is zero for any \(d\). The opposite is true for \(r=1\), respectively. Besides these extreme cases, \(d\) is given by

\[d=\frac{2^{1/2}(r(1-r)(-2r^{2}+2r+a-1))^{1/2}}{2r(1-r)}=\left(1-\frac{1-a}{2r(1-r)}\right)^{1/2},\quad 0<r<1. \tag{3}\]

This closed form of \(d\) is derived by solving equation (4) below for \(d\), after replacing \(r_{L}\) and \(r_{U}\) with the expressions in (2). The global average agreement \(a\) is given by the sum of the agreements about members of each group weighted by the group size, i.e. \(a=ra_{L}+(1-r)a_{U}\). The agreement for a group is derived from its average reputation, that is, \(a_{L}=r_{L}^{2}+(1-r_{L})^{2}\) and \(a_{U}=r_{U}^{2}+(1-r_{U})^{2}\). Thus,

\[a=r(r_{L}^{2}+(1-r_{L})^{2})+(1-r)(r_{U}^{2}+(1-r_{U})^{2}). \tag{4}\]
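A quick numerical check (again with our own helper names) confirms that the closed form (3) indeed inverts equations (2) and (4):

```python
import numpy as np

def agreement(r, d):
    """Global agreement of the two-group model, equations (2) and (4)."""
    rL, rU = r + d * (1 - r), r - d * r
    return r * (rL**2 + (1 - rL)**2) + (1 - r) * (rU**2 + (1 - rU)**2)

for r, d in [(0.6, 0.0), (0.6, 0.5), (0.3, 0.9)]:
    a = agreement(r, d)
    d_back = np.sqrt(1 - (1 - a) / (2 * r * (1 - r)))  # equation (3)
    print(f"r={r}, d={d}: a={a:.4f}, recovered d={d_back:.4f}")
```

For \(d=0\) this reproduces the baseline \(\widehat{a}\), and for \(d=1\) it returns full agreement, as expected.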
We will now use \(r\), \(r_{L}\) and \(r_{U}\) to predict the probabilities of events of our simulation, assuming an infinite population. To model a time step of the simulation, we need to distinguish between onetime events and repeated events. There are three events that happen only once per time step: picking a donor, picking a recipient and the decision of the donor. For these we need to know three kinds of probabilities for later computation. First, a pair of probabilities \(q\) about the donor's group affiliation: \(q_{L}=r\) (lucky group) and \(q_{U}=1-r\) (unlucky group). Second, four probabilities \(p\) representing the combinations of the donor's choice and the recipient's group. For example, \(p_{CL}\), the probability that the donor cooperates and the recipient is in the lucky group (where \(\beta\) is the set of the donor's action rules and \(1-r_{L}\) is the chance that the donor has a bad opinion of the recipient from the lucky group), is given as follows

\[p_{CL}=r\Big{(}r_{L}\beta_{G}+(1-r_{L})\beta_{B}\Big{)} \tag{5}\]

or in general

\[\begin{split} p_{jk}=q_{k}\Big{(}r_{k}c_{jG}+(1-r_{k})c_{jB} \Big{)},\\ \text{where }j\in\{C,D\},\ k\in\{L,U\}\text{ and }c_{Cx}=\beta_{x},\ c_{Dx}=1-\beta_{x}. \end{split} \tag{6}\]

Third, a 2-by-4 matrix \(G\) showing how likely the opinion about the donor is good after assessment. It depends on the previous reputation of the donor's group \(r_{i}\) and the four assessment rules \(\alpha\) (see equation 1). For example, since staying judges cooperation with a bad recipient as neutral (\(\alpha_{CB}=0\)), the probability of being considered good equals the previous reputation; if the donor belongs to the lucky group,

\[G_{L,CB}=r_{L}, \tag{7}\]

or in general

\[G_{i,mn}=\begin{cases}1&\text{if }\alpha_{mn}=1\\ r_{i}&\text{if }\alpha_{mn}=0\\ 0&\text{if }\alpha_{mn}=-1\end{cases}\qquad\text{where }i\in\{L,U\},\ m\in\{C,D\},\ n\in\{G,B\}. \tag{8}\]

These are the probabilities that depend on onetime events. There are also probabilities of repeated events, namely observations, since there can be many observers. Each observation includes the action the observer perceives and the opinion it had about the recipient. Hence there can be four kinds of observations, and we compute their probability \(O\) for each of the four combinations of onetime events \(p\) described in equation 6. For example, in the event that the donor cooperated with a lucky recipient, we can compute the probability \(O_{CL,CG}\) that the observer has a good opinion about this lucky recipient and that the observer perceives the cooperation without a perception error \(\epsilon\):

\[O_{CL,CG}=r_{L}(1-\epsilon), \tag{9}\]

or in general

\[O_{jk,mn}=r_{k}^{*}\,e,\quad\text{with}\quad r_{k}^{*}=\begin{cases}r_{k}&\text{if }n=G\\ 1-r_{k}&\text{if }n=B\end{cases}\quad\text{and}\quad e=\begin{cases}1-\epsilon&\text{if }m=j\\ \epsilon&\text{if }m\neq j\end{cases} \tag{10}\]

where \(j,m\in\{C,D\}\), \(k\in\{L,U\}\), \(n\in\{G,B\}\). We then take into account the observation rate \(\psi\) and cognitive errors \(\mu\), which are also repeated events. Each observation might not take place, leaving opinions unchanged. When it takes place, the resulting opinion may be altered by a cognitive error, and its value would be reversed. We therefore adjust \(G\) to \(G^{*}\):

\[G^{*}_{i,mn}=(1-\psi)r_{i}+\psi(G_{i,mn}(1-\mu)+(1-G_{i,mn})\mu). \tag{11}\]

With these probabilities we can now compute the probability of the eight possible interactions, that is, the combination of all three onetime events, as follows

\[\Pi_{ijk}=q_{i}p_{jk}, \tag{12}\]

and the expected reputation of the donor for each of these combinations

\[\Gamma_{ijk}=\sum_{m,n}G^{*}_{i,mn}O_{jk,mn}. \tag{13}\]

Finally, we can now compute the expected change of reputation

\[\Delta_{r}=\Big{(}\sum_{i,j,k}\Gamma_{ijk}\Pi_{ijk}\Big{)}-r \tag{14}\]

and the expected change of agreement

\[\Delta_{a}=\Big{(}\sum_{i,j,k}((\Gamma_{ijk})^{2}+(1-\Gamma_{ijk})^{2})\Pi_{ ijk}\Big{)}-a. \tag{15}\]

Note that, as when computing agreement in the simulation, the instances of interaction have to be treated separately: each of the eight possible interactions predicts a specific agreement about the donor's image, and we compute each instance before taking the expected value of \(\Delta_{a}\). Note also that we do not assume that the donor can be attributed to the lucky or unlucky group after assessment. These groups are simplifications of the real image matrix that we use to describe its state with only two parameters. Information about the exact expected reputations of the donor, and hence the agreement about it, for the eight possible interactions is lost. Our model does not give exact predictions. To validate our analytical model, we create a numerical algorithm to find the stable equilibrium points in the \(r\times a\) space (see **SI** for a detailed description). We use it to find the equilibria for all cases which we simulated and compare the results. For additional comparison we also use an analytical approach that models agreement always as \(\widehat{a}\), aka an R-model (Perret et al., 2021), and compare the fit of both predictive models.
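For concreteness, the whole prediction pipeline of equations (2)-(15) can be condensed into a few lines. The following Python sketch (function and variable names chosen by us for readability; the equilibrium-search algorithm itself is described in the **SI**) returns the expected changes \(\Delta_{r}\) and \(\Delta_{a}\) for a given state \((r,a)\):

```python
import numpy as np

def ar_model_step(r, a, alpha, beta, psi, eps, mu):
    """Expected changes (Delta_r, Delta_a) of eqs. (14)-(15)."""
    # Eq. (3): recover d from (r, a); eq. (2): group reputations; group sizes q.
    d = np.sqrt(max(0.0, 1 - (1 - a) / (2 * r * (1 - r)))) if 0 < r < 1 else 0.0
    rep = {"L": r + d * (1 - r), "U": r - d * r}
    q = {"L": r, "U": 1 - r}
    c = {"C": (beta[0], beta[1]), "D": (1 - beta[0], 1 - beta[1])}  # eq. (6)
    rule = {("C", "G"): alpha[0], ("D", "G"): alpha[1],
            ("C", "B"): alpha[2], ("D", "B"): alpha[3]}

    delta_r, delta_a = -r, -a              # the "- r" / "- a" of eqs. (14)-(15)
    for i in "LU":                         # donor's group
        for j in "CD":                     # donor's decision
            for k in "LU":                 # recipient's group
                p_jk = q[k] * (rep[k] * c[j][0] + (1 - rep[k]) * c[j][1])  # eq. (6)
                Pi = q[i] * p_jk                                           # eq. (12)
                Gamma = 0.0
                for m in "CD":             # perceived action
                    for n in "GB":         # observer's opinion of the recipient
                        G = {1: 1.0, 0: rep[i], -1: 0.0}[rule[(m, n)]]     # eq. (8)
                        G_star = (1 - psi) * rep[i] \
                            + psi * (G * (1 - mu) + (1 - G) * mu)          # eq. (11)
                        r_star = rep[k] if n == "G" else 1 - rep[k]        # eq. (10)
                        e = (1 - eps) if m == j else eps
                        Gamma += G_star * r_star * e                       # eq. (13)
                delta_r += Gamma * Pi                                      # eq. (14)
                delta_a += (Gamma**2 + (1 - Gamma)**2) * Pi                # eq. (15)
    return delta_r, delta_a
```

Iterating \((r,a)\leftarrow(r+\Delta_{r},a+\Delta_{a})\) from interior starting points gives a rough picture of the flow that is visualized later in figure 4.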
## 3 Results

As described above, we studied 171 strategies with unique behavior. Each homogeneous population of 100 players is first tested for the observation rate \(\psi=1\) in 15 conditions. The conditions span an exhaustive range of perception errors \(\epsilon\in\{0,0.001,0.01,0.05,0.1,0.2,0.4\}\) and a reasonable range of cognitive errors \(\mu\in\{0.001,0.01,0.05\}\). Note that this full observation is the natural opposite of the limiting assumption of the R-model, where \(\psi\to 0\). Figure 2(a) shows the results of our simulations as reddish dots. They are depicted separately for the three kinds of action rules: Coco, AllC and AllD. For AllC and AllD, agreement is always virtually minimal. But for Coco strategies, many points lie far above. Sometimes agreement is almost maximal (\(a\to 1\)), while \(\widehat{a}\) predicted it to be the smallest possible value (\(a=0.5\) at \(r=0.5\)). For Cocos, 25.2% of the measured agreements were at least 0.05 higher than expected (indicated by the broken black line). Note that we removed cases in which results differed significantly across runs. Averaging them could give a false impression of the average agreement. For example, two runs with \(r_{1}=0.3\) and \(r_{2}=0.7\) and each minimal \(a_{i}\to\widehat{a}=0.58\) would average to \(r=0.5\) and \(a=0.58\). For this averaged reputation, the minimal agreement would be \(\widehat{a}=0.5\), so the average measurement would be \(0.08\) higher than that. In other words, although the actual agreement in each run was minimal, the averaged agreement would indicate additional agreement above the threshold. We exclude cases in which the range of \(r\) in a single condition is above \(0.1\) (\(2.8\%\) of cases in condition \(\psi=1\), see **SI** for examples of excluded cases and an alternative figure with these cases included). It is clear that our findings are not caused by such artifacts and that for \(\psi=1\) results differed substantially from the prediction \(\widehat{a}\), with more agreement than expected for many conditional cooperators. Next, we show that these results can be replicated for \(\psi=1\) by our A-R-model, see the blue dots in figure 2(b). Excluded are again cases with multiple equilibria (given the much higher precision of the analytical approach, we excluded all cases in which the total distance in the agreement-reputation space, similar to equation 16, exceeded 0.01; see **SI** for examples of excluded cases). The general patterns of the simulation could be replicated. AllCs and AllDs have virtually no additional agreement whereas Cocos do. The qualitative fit between simulation and model is striking.

**Figure 2. Comparing results of simulations and analytical predictions.** Values for 171 unique strategies in 15 conditions (namely, \(\epsilon\in\{0,0.001,0.01,0.05,0.1,0.2,0.4\}\) and \(\mu\in\{0.001,0.01,0.05\}\)). Reputation \(r\) (x-axis) can range between \(0\) and \(1\). Agreement \(a\) can range between \(a=\widehat{a}\) and \(a=1\) (solid black lines). Values above \(a_{th}=\widehat{a}+0.05\) (broken black line) are considered as significant additional agreement.

Figure 3: **Results for Cocos with other observation rates \(\psi\).** On the left are the simulation results, and on the right, the predictions of the A-R-model. Values for all 81 unique conditional cooperators, with conditions and excluding criteria the same as those given in figure 2.

Before we quantitatively describe the fit, we expand our investigation to other observation rates. We show only results for Cocos, since these are the most interesting cases, in figure 3. We studied three intermediate observation rates \(\psi\in\{0.75,0.5,0.25\}\) as well as a special case where every round has exactly one observer. A single observer is the smallest amount of meaningful observation possible. No observation would simply not change the image matrix and would have no impact on reputation or agreement. It is the closest one can get to the limiting assumption of the R-model \(\psi\to 0\).\({}^{3}\)

Footnote 3: Even taking a very small probabilistic observation rate such as \(\psi=0.0001\) would have actually increased the number of observers. All rounds in which no observation takes place are simply ignored. The rest may have one or more observers. In addition, simulating many rounds just to be ignored would increase the computational demand unnecessarily for no benefit.

Both simulation and prediction show a steady decline in agreement, until \(a\) is virtually minimal when there is only a single observer.
This qualitatively fits what the R-model assumes, and adjusting the observation rate in the A-R-model replicates it. For all observation rates, the qualitative fit is still high. And, similarly to full observation, intermediate observation rates still show significant additional agreement. We quantified the fit of simulation and predictions in two ways. Our A-R-model makes predictions about agreement \(a_{p}\) as well as reputation \(r_{p}\). We therefore compute the absolute distance to the simulated agreement \(a_{s}\) and reputation \(r_{s}\) by

\[\Delta=\sqrt{(a_{p}-a_{s})^{2}+(r_{p}-r_{s})^{2}}. \tag{16}\]

We excluded values not shown in figures 2 and 3 (if either simulation or prediction had to be excluded, we excluded the entire pair). Results for all observation rates are shown in the left of table 1. For both maximal and intermediate observation rates, the fits are excellent. The median of the absolute distance between the simulations and the model is 0.003. For 92.7-95.7% of predictions, the deviation is smaller than 0.01. For the minimal observation rate, fits are slightly worse. The median is 0.005, and only 72.3% of predictions are within 0.01 of the target. We also compared the predictions of our A-R-model with the predictions of the R-model. Since the R-model can only predict reputation \(r_{o}\), we compared the two models in that regard. The difference in deviation is shown in the right of table 1. Keep in mind that the R-model was designed only for \(\psi\to 0\), which is most closely met in the single observer condition. The new A-R-model is not often better than the R-model in this case, and at least for one case it is much worse, as indicated by the range. However, the A-R-model is superior for all high and intermediate observation rates \(\psi\). Its predictions are better in \(4.6\%\) to \(11.5\%\) of all cases (AllD, AllC and Cocos), and the accuracy is increased by as much as \(0.5\), which is half of the maximum range of \(r\). Higher accuracy is especially prevalent for Cocos, to which all evolutionarily successful strategies belong, such as image scoring (Nowak and Sigmund, 1998a) and the leading-eight (Ohtsuki and Iwasa, 2006). Here, for \(\psi=1\), the A-R-model is better in \(24\%\) of cases. Considering an independently changing average agreement seems to have improved the prediction of the average reputation substantially.

## 4 Discussion

We now summarize our findings and their immediate implications, before highlighting remaining issues and limitations. We further compare our A-R-model with the recent model of Fujimoto and Ohtsuki (2022) to highlight the differences and common ground. We then discuss some important potential extensions of the A-R-model, which would enable it to implement recently proposed enhancements to IR strategies, such as generous assessment (Schmid et al., 2021a) and pleasing (Krellner and Han, 2021). We will close with possible real-world implications of the discovered additional agreement. In the first part of our report, we showed that additional agreement can emerge in indirect reciprocity under private assessment. For high or intermediate observation rates, opinions about a person were shared much more often than by mere chance.
Therefore, the assumption of the minimal agreement \(\widehat{a}\), on which most previous models (R-models) relied, cannot be generalized to these circumstances.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{A-R-model} & \multicolumn{3}{c}{Difference A-R-model minus R-model} \\ & \multicolumn{3}{c}{absolute deviation} & \multicolumn{3}{c}{deviation in reputation} \\ & median & \(<.01\) & \(<.05\) & \(<-.01\) & \(>.01\) & range \\ & & & & (A-R better) & (A-R worse) & \\ \hline \(\psi=1\) & 0.003 & 92.7\% & 99\% & 11.5\% & 0\% & -0.498:0.006 \\ \(\psi=0.75\) & 0.003 & 95.7\% & 99\% & 9.7\% & 0.1\% & -0.497:0.051 \\ \(\psi=0.5\) & 0.003 & 95.4\% & 98.9\% & 7.8\% & 0.2\% & -0.495:0.053 \\ \(\psi=0.25\) & 0.003 & 94.7\% & 98.4\% & 4.6\% & 0.1\% & -0.491:0.014 \\ single observer & 0.005 & 72.3\% & 86\% & 1.2\% & 0.4\% & -0.034:0.334 \\ \hline \hline \end{tabular} \end{table} Table 1: Deviations between Model Predictions and Simulation

We could, on the other hand, confirm that the assumption is reasonable for very low observation rates, such as a single observer for each interaction. But we showed that results about evolutionary success with solitary observers (Okada, 2020, Okada et al., 2018, Perret et al., 2021) cannot yet be generalized to other observation rates. The second part of our report makes an important step towards that goal. Previous R-models can only model average reputation and therefore must rely on the assumption of minimal agreement. Our A-R-model is able to represent average levels of agreement and reputation independently. It predicts both with astounding accuracy and does so for any intermediate or high observation rate. It outperforms the predictions of R-models in these circumstances by a large margin. Making precise measurements or predictions of reputation in particular is the basis for studies on the evolutionary stability of any reputation-based IR strategies (Okada, 2020a). An individual's reputation directly corresponds to how much cooperation they receive; hence it determines their payoffs and even the payoffs of others (from whom they may receive cooperation, hence causing costs to them). Parallel to this work, other researchers have discovered another way to predict opinion dynamics more accurately than by average reputation alone. Fujimoto and Ohtsuki (2022) were able to predict precise distributions of reputations, e.g. 80% of players who would have a reputation of about 0.8, 15% of 0.2, and 5% of 0.25. This was another hugely important step for analytical models of IR under private assessment and is, of course, closely related to the current paper. Our model assumes a simplified distribution of reputations. We treat the populations as if there were at most two groups, and as if their sizes were given by the average reputation. Fujimoto and Ohtsuki (2022) show that this is often not the case, that there can be more groups or other compositions. Their analytical approach is exact and uses no simplification. However, as the fit of our model shows, this level of detail may not be necessary. And the level of detail in their model seems to come at a cost, as their analysis is limited to only four strategies. It seems reasonable, however, that their framework could model all strategies that care about the action as well as the reputation of the recipient but apply only binary assessment rules (i.e. assess each action as either good or bad, but not as neutral).
This would cover 64 strategies (mirror images included) compared to the 324 of our study. Their model also covers only a single observation rate \(\psi=1\). The authors themselves state that being able to model arbitrary observation rates would be a very important extension of their approach. Our A-R-model is already able to do that. And it seems at least possible that their approach is not capable of dealing with intermediate observation rates, i.e. \(\psi<1\), because their model currently relies on the fact that the new reputation of the donor is entirely independent of their current reputation. If some players do not observe, their new opinions depend (entirely) on their current opinions, since their opinions are just kept as they are. Their model could no longer be based on one dimension (the current reputation of recipients) but would have to incorporate an entirely new dimension (the current reputation of the donor). This increase in complexity might be a serious problem. These problems for arbitrary observation rates also concern the modelling of more complex, so-called 3rd-order strategies (Santos et al., 2021), which also use the current reputation of the donor in their assessment. In contrast, in our A-R-model, the current reputation of the donor is already incorporated. It will be straightforward to extend our model to consider all 2048 3rd-order strategies (mirror images included) (Ohtsuki, 2004). We envisage the following additional extensions of our model. First, we can include strategies that do not use deterministic rules but probabilistic ones (Schmid et al., 2021a,b). Instead of always assessing defection as bad, observers may only do so 80% of the time (in other words, they are sometimes generous in their judgment). The same could be applied to their action rules. They may want to cooperate even with bad individuals about 10% of the time. Second, we can include pleasing (Krellner and Han, 2020, 2021). Instead of granting or refusing help only based on the donor's own opinion (which can be easily disturbed by perception errors), the donor pools some of the opinions of others and acts as the majority would decide. Some of these extensions are as easy as replacing the -1, 0 or 1 in the action and assessment rules of this paper with probabilistic values such as -0.8 or 0.1. This paper is the first to study agreement in such detail. Agreement is the key feature that distinguishes previous research on IR under public assessment from the more recent research on private assessment. Public assessment fixes agreement at its maximum (all agree all the time), but private assessment does not fix it at the opposite state, i.e. its minimum \(\widehat{a}\). Public and private are not opposite sides of the same coin. Rather, private assessment allows agreement to vary, opening a new dimension of complexity in the dynamics of reputation-based IR.

Figure 4: **Detailed comparison between models for the staying norm** (see equation 1). Parameters are \(\psi=1\), \(\epsilon=0.1\) and \(\mu=0.01\). The bottom graph shows the two-dimensional space of the A-R-model. Dark blue areas indicate likely changes in either agreement or reputation or both, whereas light yellow areas indicate little to no change. Arrows indicate the direction of the change. The precise prediction of the model is indicated by a blue x.
Simulations are shown in red; the averages of single runs are shown as red circles and their total average as a red +. The top graph shows the one-dimensional R-model in comparison. The change in reputation and its direction is depicted on the y-axis in addition to the arrows. The stable point (green x) is where the x-axis is crossed with a downward slope. The white line in the A-R-model indicates a shift from reputation decrease (below) to reputation increase (above).

Our A-R-model allows one to study agreement and reputation in even more detail than was reported within the results of this paper. We focused on the stable states of the population, but one could study the direction and likelihood of change in each state, as seen in figure 4 for the example of the staying strategy (see also **SI** for other important strategies, such as image scoring (Nowak and Sigmund, 1998a) and the leading-eight (Ohtsuki and Iwasa, 2006)). One important insight this grants is about the stability of other regions. For staying, there exist somewhat stable areas with reputations lower than \(r=0.3\) and minimal agreement. The existence of additional relatively stable regions has large implications for the short-term behavior of the population. Seeing the direction of change in the entire space also allows us to make educated guesses about another dynamic. The white line in the A-R-model of figure 4 indicates a shift from reputation decrease (below) to reputation increase (above). This highlights how the trend in reputation depends on the state of agreement. It seems reasonable that increasing agreement could increase reputation as well. An increase in agreement could be achieved by a form of opinion synchronization, for example by some players gossiping about their observations or opinions to bring other opinions into line with their own. Hilbe et al. (2018) suggested that any IR strategy would profit from, or indeed rely on, some sort of opinion synchronization to maintain stable cooperation under private assessment. With the A-R-model, we can visualize which strategies can actually profit from such mechanisms. Some are not affected, e.g. image scoring, and some actually show the opposite pattern, such as "GKGB" (Okada, 2020b), which seems stable only under private assessment, but not under public assessment (see **SI** for examples). In the future, we can seek a way to incorporate a probability for opinion synchronization in the A-R-model, to predict new stable points under various synchronization regimes. Related to that is a last alteration of the model. In the current investigation, we considered private assessment in accordance with most of the literature (Okada, 2020a): every individual or player in the population observes independently. But we can imagine a situation where only a few players observe, who then share their assessment with many others. Consider for example the extreme case of a single observer who shares with the entire population. This would correspond to public assessment (Ohtsuki and Iwasa, 2004), where no disagreement is possible (\(a=1\)). In a more realistic scenario, multiple observers are at least possible, and they may judge the interaction differently due to individual errors or different previously held opinions. The information of their judgments may also fail to reach some players. In such scenarios, disagreement can exist. However, all players who got the news from the same observer will most likely have the same opinion (even if information transfer is noisy).
It is a form of built-in opinion synchronization. We can therefore expect agreement to be higher than in the scenario of the current paper. The amount of agreement reported here might only be the lower end for private assessment scenarios. Lastly, we look at the reasons additional agreement emerges. As discussed above, there can be no additional agreement if every player has the same reputation. Additional agreement requires that at least two groups exist (but up to \(N\)-many), with different average reputations. Such groups can only emerge if players with the same strategy face different fates. For example, a donor of strategy \(\alpha=(1,-1,1,-1)\) and \(\beta=(1,0)\) (image scoring) can get lucky by meeting a recipient they believe to be good. Hence the donor cooperates and earns a high reputation (everybody who observes, and does not make an error, believes the donor to be good). But another donor with the same strategy might get unlucky, because they believe the recipient to be bad. Hence they defect and most observers will now think this donor is bad. In general, groups can emerge if different onetime events cause different expected reputations. Such onetime events are which recipient is met or which decision is made. In the image scoring example, if errors happen with probability \(0.1\), we expect the reputation of a donor to be either \(0.1\) if they defected or \(0.9\) if they cooperated. Different decisions as donor seem indeed to be the most important factor for additional agreement. That is why only conditional cooperators (Coco) show additional agreement, but unconditional cooperators (AllC) or defectors (AllD) show none. Because some Cocos cooperate and some defect, they can form groups with different reputations and therefore have additional agreement. Is additional agreement a good thing? Our results for strategies such as staying show that it can increase reputations. The reputations studied here correspond to how often a player finds another player of the same strategy worthy of cooperation (and Cocos also act on it). More agreement can increase reputations, hence increase cooperation rates within the strategy, hence increase the stability of that strategy. This could keep defectors at bay and increase cooperation in general. Evolutionary investigations need to confirm these assumptions, but in that regard, agreement might be a good thing. However, we also showed another consequence of agreement. It is connected with differences in reputations. Such differences inevitably lead to short-term inequality between players. Over the long run, each player of the same strategy will alternate between being lucky and being unlucky, so each will earn the same pay-offs. But some players may earn much less than others if only a few consecutive interactions are considered. This can be a problem if the game (or life) requires a player to earn a minimal amount to sustain themselves. It may also be a problem if such inequality itself has bad consequences for individuals or society. Research on IR seems to continue to converge (Krellner and Han, 2022; Okada, 2020a). However, there still seems to be a lot of potential to deepen our understanding of the processes, as demonstrated in Fujimoto and Ohtsuki (2022) and the current paper. The understanding of agreement is central for any analytical model of IR under private assessment. It is central in understanding what benefits or problems opinion synchronization or different observation rates may bring. 
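As a short worked illustration (our addition, using the per-player agreement formula \(a_{i}=r_{i}^{2}+(1-r_{i})^{2}\) from the SI): in the image scoring example above, suppose half of the donors end up with reputation \(0.9\) and half with \(0.1\). The average reputation is then \(\bar{r}=0.5\), so the baseline agreement is \(\widehat{a}=0.5^{2}+0.5^{2}=0.5\), whereas the realized average agreement is \[a=\tfrac{1}{2}\big{(}0.9^{2}+0.1^{2}\big{)}+\tfrac{1}{2}\big{(}0.1^{2}+0.9^{2}\big{)}=0.82,\] i.e. the two reputation groups create additional agreement of \(0.82-0.5=0.32\).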
And it is as central to all research on reputation-based IR under private assessment, since agreement can change independently of reputation and can significantly alter the latter as well. Our A-R-model provides the means to study the evolutionary dynamics of indirect reciprocity under private assessment for yet the largest strategy space and widest range of conditions. ## References * Brandt and Sigmund (2004) Brandt, H. and Sigmund, K. (2004). The logic of reprobation: Assessment and action rules for indirect reciprocation. _Journal of Theoretical Biology_, 231(4):475-486. * Fujimoto and Ohtsuki (2022) Fujimoto, Y. and Ohtsuki, H. (2022). Reputation structure in indirect reciprocity under noisy and private assessment. _Scientific Reports_, 12(1):1-13. * Hilbe et al. (2018) Hilbe, C., Schmid, L., Tkadlec, J., Chatterjee, K., and Nowak, M. A. (2018). Indirect reciprocity with private, noisy, and incomplete information. _Proceedings of the National Academy of Sciences_, page 201810565. * Krellner and Han (2020) Krellner, M. and Han, T. A. (2020). Pleasing enables indirect reciprocity under private assessments. In _The 2020 Conference on Artificial Life_, pages 402-410, Cambridge, MA. MIT Press. * Krellner and Han (2021) Krellner, M. and Han, T. A. (2021). Pleasing Enhances Indirect Reciprocity-Based Cooperation Under Private Assessment. _Artificial Life_, pages 1-31. * Krellner and Han (2022) Krellner, M. and Han, T. A. (2022). Recent Findings on the Feasibility of Indirect Reciprocity under Private Assessment. In _The 2022 Conference on Artificial Life_, volume 1, Cambridge, MA. MIT Press. * Leimar and Hammerstein (2001) Leimar, O. and Hammerstein, P. (2001). Evolution of cooperation through indirect reciprocity. _Proceedings of the Royal Society of London. Series B: Biological Sciences_, 268(1468):745-753. * Nowak and Sigmund (1998a) Nowak, M. A. and Sigmund, K. (1998a). Evolution of indirect reciprocity by image scoring. _Nature_, 393(6685):573-577. * Nowak and Sigmund (1998b) Nowak, M. A. and Sigmund, K. (1998b). The dynamics of indirect reciprocity. _Journal of Theoretical Biology_, 194(4):561-574. * Nowak and Sigmund (2005) Nowak, M. A. and Sigmund, K. (2005). Evolution of indirect reciprocity. _Nature_, 437(7063):1291-1298. * Ohtsuki (2004) Ohtsuki, H. (2004). Reactive strategies in indirect reciprocity. _Journal of Theoretical Biology_, 227(3):299-314. * Ohtsuki and Iwasa (2004) Ohtsuki, H. and Iwasa, Y. (2004). How should we define goodness? Reputation dynamics in indirect reciprocity. _Journal of Theoretical Biology_, 231(1):107-120. * Ohtsuki and Iwasa (2006) Ohtsuki, H. and Iwasa, Y. (2006). The leading eight: Social norms that can maintain cooperation by indirect reciprocity. _Journal of Theoretical Biology_, 239(4):435-444. * Okada (2020a) Okada, I. (2020a). A Review of Theoretical Studies on Indirect Reciprocity. _Games_, 11(3):27. * Okada (2020b) Okada, I. (2020b). Two ways to overcome the three social dilemmas of indirect reciprocity. _Scientific Reports_, 10(1). * Okada et al. (2017) Okada, I., Sasaki, T., and Nakai, Y. (2017). Tolerant indirect reciprocity can boost social welfare through solidarity with unconditional cooperators in private monitoring. _Scientific Reports_, 7(1):9737. * Okada et al. (2018) Okada, I., Sasaki, T., and Nakai, Y. (2018). A solution for private assessment in indirect reciprocity using solitary observation. _Journal of Theoretical Biology_, 455:7-15. * Panchanathan and Boyd (2003) Panchanathan, K. and Boyd, R. (2003). A tale of two defectors: The importance of standing for evolution of indirect reciprocity. _Journal of Theoretical Biology_, 224(1):115-126. * Rand and Nowak (2013) Rand, D. G. and Nowak, M. A. (2013). Human cooperation. _Trends in Cognitive Sciences_, 17(8):413-425. * Santos et al. (2021) Santos, F. P., Pacheco, J. 
M., and Santos, F. C. (2021). The complexity of human cooperation under indirect reciprocity. _Philosophical Transactions of the Royal Society B: Biological Sciences_, 376(1838). * Sasaki et al. (2017) Sasaki, T., Okada, I., and Nakai, Y. (2017). The evolution of conditional moral assessment in indirect reciprocity. _Scientific Reports_, 7(1):41870. * Schmid et al. (2021a) Schmid, L., Chatterjee, K., Hilbe, C., and Nowak, M. A. (2021a). A unified framework of direct and indirect reciprocity. _Nature Human Behaviour_, 5(10):1292-1302. * Schmid et al. (2022) Schmid, L., Hilbe, C., Chatterjee, K., and Nowak, M. A. (2022). Direct reciprocity between individuals that use different strategy spaces. _PLOS Computational Biology_, 18(6):e1010149. * Schmid et al. (2021b) Schmid, L., Shati, P., Hilbe, C., and Chatterjee, K. (2021b). The evolution of indirect reciprocity under action and assessment generosity. _Scientific Reports_, 11(1):1-14. * Sigmund (2016) Sigmund, K. (2016). _The calculus of selfishness_. Princeton University Press. * Sugden (1986) Sugden, R. (1986). _The Economics of Rights, Co-operation and Welfare_. Basil Blackwell. * Uchida (2010) Uchida, S. (2010). Effect of private information on indirect reciprocity. _Physical Review E_, 82(3):036111. * Uchida and Sasaki (2013) Uchida, S. and Sasaki, T. (2013). Effect of assessment error and private information on stern judging in indirect reciprocity. _Chaos, Solitons & Fractals_, 56:175-180. We both think you did wrong - How agreement shapes and is shaped by indirect reciprocity - Supplementary Information (**SI**) M. Krellner\({}^{1}\) and T. A. Han\({}^{1}\) \({}^{1}\)School of Computing, Engineering and Digital Technologies, Teesside University, UK [email protected] ## S1 What mirror images of strategies were excluded Each strategy of our framework has a mirror image, which behaves equivalently in regard to cooperation. If we were to label opinion states as blue and red (instead of good and bad), a mirror image would cooperate in the exact same way with the same partners, but would think of them as blue instead of red or vice versa. We obtain such a mirror image of a strategy in three steps (see table S1). We exchange the symbols +1 and -1 in the assessment rules (a) and flip them for good and bad recipients (b), transforming \(\alpha=(x_{1},x_{2},x_{3},x_{4})\) to \(\alpha_{m}=(-x_{3},-x_{4},-x_{1},-x_{2})\). We also flip the values of the action rules (c), transforming \(\beta=xy\) to \(\beta_{m}=yx\). For the strategy staying, see its mirror image in table S1 (d). Sometimes the transformation of the assessment rules creates the same assessment rules again (e). If the action rule is either (0,0) or (1,1), such a strategy is its own mirror image (f). Deciding which mirror images to keep is somewhat arbitrary. We decided to keep all \(\beta=10\) strategies and excluded their mirror images, which eliminated the action rule \(\beta=01\) entirely from the population of strategies. This action rule suggests cooperating with bad players and defecting towards good players, which seems intuitively wrong. We were left with only three possible action rules: (1,1): unconditional cooperators or AllC; (1,0): conditional cooperators or Coco; and (0,0): unconditional defectors or AllD. ## S2 Adaptive algorithm to run simulations until precise average values are reached We break our simulations into multiple parts and after each decide whether to continue or not. The smallest unit is a segment consisting of \(10^{3}\) time steps. 
After a segment, we compute the reputations and agreements about all individual players at all time steps, and then average them over time and population. The average agreement \(a_{i}(t)\) about a player \(i\) at time \(t\) is computed by a similar formula as before, \(a_{i}(t)=r_{i}^{2}(t)+(1-r_{i}(t))^{2}\), i.e. it is given by their reputation at that time \(r_{i}(t)\). They are computed individually, since averaging reputation first would destroy the information about any agreement above \(\widehat{a}\). After computing \(r\) and \(a\) of the segment, we average the values of the last half (rounded down to the closest integer) of segments to estimate the long-term \(r\) and \(a\) (i.e., we include the middle segment if their number is uneven, and the first estimate is just the first value). Only after at least 10 segments do we compute the standard deviations of the last 10 estimates. We stop the simulation if the standard deviations of both estimates are below 0.001. This constitutes a run, and the last estimate will be saved as its result. For the next run, we initiate the image matrix with new random opinions. Since we do not know whether the populations will show the same trends for each run, we again continue the simulations until we reach a confident estimate. After each run we compute a new estimate of \(r\) and \(a\) by averaging all runs (not just half). If, after at least 10 runs are done, the standard deviation of the last 10 estimates is below 0.001, we stop further runs and continue with the next condition. The results of each run of a condition and their averages are saved. For example, the simulations for condition \(\psi=1\) ran on average for 19.9 segments each run (i.e. 19900 time steps) and were repeated for 13.4 runs on average until they reached a satisfying accuracy. ## S3 Why Donor and Recipient do not update their opinion about the Donor Indirect reciprocity (IR) is often referred to as a different mechanism from direct reciprocity (DR) (Rand and Nowak, 2013). However, as was shown by Schmid et al. (2021a), both can exist in the same framework. In their model, DR is just a special case of IR, where the probability to use information outside one's own interactions is zero. They assumed that observer and recipient always observe their own interactions. Hence, if they do not observe any other interactions, they judge others only by how they were treated themselves. The strategy which evolves closely resembles generous tit-for-tat (Nowak and Sigmund, 1992). Two problems arise if we let donors and recipients judge under private assessment. First, a donor judges itself systematically differently from the average of how it is judged by others. Others may have a different opinion about the recipient than the donor (agreement is almost never perfect). Hence some will agree with the donor and judge it like the donor judges itself; but others will not. The donor is systematically biased about itself compared to the rest of the population. (For strategies such as "staying", where the donor plays the optimal action rule for its norms, the donor will tend to have a better opinion about itself than the rest of the population has.) Every player has this self-bias, since they only ever judge themselves while being a donor. Because of this bias, judgments by recipients are biased as well. A recipient judges actions according to its opinion about itself. Having a systematically different opinion about oneself causes systematically different opinions about donors while being a recipient. 
For our investigation in particular, this would have been a problem, since the impact of this bias systematically differs between finite and infinite populations, which we directly compared. In infinite populations, self-bias is not a factor, since any finite number of opinions has no impact on the total average of opinions. However, it can have an impact in finite populations. Furthermore, there is the problem that the impact is not constant for different observation rates \(\psi\). In finite populations, where recipient and donor always judge, but only one other player does, the impact of the judgements of donor and recipient will be much higher than if 98 other players judge at the same time. This concerns the impact of self-bias, but also the impact of direct reciprocity (Schmid et al., 2021a). We considered both to be undesirable for the current study, and therefore excluded donor and recipient from judging. Judgements as a donor are only meaningful if they are later used in judgments as a recipient anyway. Our approach excluded self-bias and DR, and allowed the dynamics of finite and infinite populations to resemble each other even for very small observation rates. ## S4 Numerical algorithm to find equilibria of A-R and R-models Given the definitions above, we can define a function \(f\) which takes as inputs the current reputation \(r\) and the current agreement \(a\), as well as the description of the strategy, \(\alpha\) and \(\beta\), and the two error parameters \(\epsilon\) and \(\mu\), and which outputs the expected changes \(\Delta_{r}\) and \(\Delta_{a}\). So we can determine whether reputation and agreement are expected to increase or decrease for a current state. We now look for states where both expected changes are zero, i.e. equilibrium points. These states are candidates for stable points, which predict the state of the system after a long period of time. Finding equilibria for our models is difficult. The above-mentioned function cannot be solved symbolically, since the degree of the polynomials in general is rather large. We created a two-level approach to find the equilibria numerically. On the lower level, we use the "solve" routine of Matlab R2022b. The routine has strengths and weaknesses. It uses a deterministic algorithm and is very fast and precise. However, it requires the input of a starting point and may fail to find the equilibrium from this point if it runs into local minima or, as is the case for our function, if it runs into undefined states (the function is not defined outside of \(0\leq r\leq 1\) and \(r^{2}+(1-r)^{2}\leq a\leq 1\)). We therefore add a second layer. We try multiple starting points and weed out failed attempts after the fact. We have chosen enough starting points (namely, 101) to find at least one equilibrium for each case, and use the same starting points for all of them. This would sometimes yield multiple equilibria. Such results fall into two categories. Either the points lie closely together (total distance < 0.01), which indicates a single true equilibrium that could not be computed with the same precision from all directions; in that case we look for the result with the smallest output of \(f\) and discard the others. In the second category, there are indeed multiple numerically derived equilibria, some of which may not be stable. This is the case for just 3 out of the 171 unique strategies. To simplify further analysis, we exclude these from the comparison with the simulations.
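The same two-level search is easy to reproduce outside of MATLAB. The sketch below is our own re-implementation in Python (added for illustration; the study itself used MATLAB's solver). Here `delta_ra` is a placeholder for the model's expected-change function \(f\), and the grid of starting points, tolerances, and merge distance mirror the description above.

```python
import numpy as np
from scipy.optimize import fsolve

def find_equilibria(delta_ra, n_starts=101, tol=1e-8, merge_dist=0.01):
    """Two-level equilibrium search: try many starting points, then
    weed out failed attempts and merge near-duplicate solutions.

    delta_ra: callable (r, a) -> (dr, da), the expected changes.
    """
    def residual(x):
        r, a = x
        return delta_ra(r, a)

    equilibria = []
    # grid of starting points inside the admissible region
    for r0 in np.linspace(0.01, 0.99, n_starts):
        a0 = min(1.0, r0**2 + (1 - r0)**2 + 0.05)  # just above the minimum a
        x, info, ok, _ = fsolve(residual, (r0, a0), full_output=True)
        r, a = x
        # discard failed runs and solutions outside the defined region
        if ok != 1 or not (0.0 <= r <= 1.0 and r**2 + (1 - r)**2 <= a <= 1.0):
            continue
        if max(abs(v) for v in delta_ra(r, a)) > tol:
            continue
        # merge with an existing equilibrium if within merge_dist,
        # keeping the candidate with the smaller residual
        for k, (re, ae) in enumerate(equilibria):
            if abs(r - re) + abs(a - ae) < merge_dist:
                if np.hypot(*delta_ra(r, a)) < np.hypot(*delta_ra(re, ae)):
                    equilibria[k] = (r, a)
                break
        else:
            equilibria.append((r, a))
    return equilibria
```

A toy stand-in such as `lambda r, a: (0.5 - r, r**2 + (1 - r)**2 - a)` recovers its single equilibrium at \((r,a)=(0.5,0.5)\).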
2310.16176
Correction with Backtracking Reduces Hallucination in Summarization
Abstractive summarization aims at generating natural language summaries of a source document that are succinct while preserving the important elements. Despite recent advances, neural text summarization models are known to be susceptible to hallucinating (or more correctly confabulating), that is to produce summaries with details that are not grounded in the source document. In this paper, we introduce a simple yet efficient technique, CoBa, to reduce hallucination in abstractive summarization. The approach is based on two steps: hallucination detection and mitigation. We show that the former can be achieved through measuring simple statistics about conditional word probabilities and distance to context words. Further, we demonstrate that straight-forward backtracking is surprisingly effective at mitigation. We thoroughly evaluate the proposed method with prior art on three benchmark datasets for text summarization. The results show that CoBa is effective and efficient in reducing hallucination, and offers great adaptability and flexibility. Code can be found at https://github.com/zhenzhel/CoBa.
Zhenzhen Liu, Chao Wan, Varsha Kishore, Jin Peng Zhou, Minmin Chen, Kilian Q. Weinberger
2023-10-24T20:48:11Z
http://arxiv.org/abs/2310.16176v3
# Correction with Backtracking Reduces Hallucination in Summarization ###### Abstract Abstractive summarization aims at generating natural language summaries of a source document that are succinct while preserving the important elements. Despite recent advances, neural text summarization models are known to be susceptible to hallucinating (or more correctly confabulating), that is to produce summaries with details that are not grounded in the source document. In this paper, we introduce a simple yet efficient technique, CoBa, to reduce hallucination in abstractive summarization. The approach is based on two steps: hallucination detection and mitigation. We show that the former can be achieved through measuring simple statistics about conditional word probabilities and distance to context words. Further, we demonstrate that straight-forward backtracking is surprisingly effective at mitigation. We thoroughly evaluate the proposed method with prior art on three benchmark datasets for text summarization. The results show that CoBa is effective and efficient in reducing hallucination, and offers great adaptability and flexibility. ## 1 Introduction Recent summarization methods, based on neural sequence-to-sequence and language models (LM), are able to produce high-quality summaries (Zhang et al., 2020; Chung et al., 2022; Touvron et al., 2023). However, despite their impressive capabilities, these summarization models are prone to hallucinations, a phenomenon where models make statements that seem plausible but are not grounded in the source document (Pagnoni et al., 2021; Maynez et al., 2020; Zhao et al., 2020). Hallucinations compromise the accuracy and trustworthiness of the generated summaries. We hypothesize that one reason for hallucination is that sometimes after an LM generates partial text, there is no completion that is grounded in the source text. An illustration of this situation is shown in Figure 1. Although the partial sentence _I live in_ is highly plausible, it forces the LM to specify where the person lives, even though this is not specified in the source document. Such situations can often be _detected_ by intrinsic properties of hallucinated text: (1) the first word of a hallucinated sequence tends to have low conditional probability, (2) hallucinations are not supported by words in the context, and therefore have a large distance to context words. Returning to our previous example, if the language model continues the sentence _I live in_ without any support from the context, _Munich_ might be just as plausible as _New York_, or _Penn State_. None of the locations would have particularly high probability, therefore triggering condition (1). Further, if none of the cities are mentioned in the context, all would have large word distances to the context words, triggering condition (2). Once the beginning of a hallucination is detected, we _backtrack_ and re-generate the _preceding_ words that "cornered" the LM into a position without a faithful continuation. In our example, we replace the token _in_ by the token _with_; consequently, based on the context, the generated sentence can be completed with _my dog_. Our method, _Correction with Backtracking (CoBa)_, is a simple inference-time method that requires no additional model training and is compatible with most decoding methods. We evaluate CoBa on three established document summarization datasets and measure the faithfulness of generated summaries. 
We show that it is highly effective and efficient for detecting and mitigating hallucinations. CoBa is also orthogonal to many existing hallucination reduction techniques and can be used in conjunction with those. ## 2 Background and Related Work We adopt the definition of hallucination for abstractive summarization from Maynez et al. (2020): _The summary \(\mathcal{S}\) for a context document \(\mathcal{C}\) contains hallucinations if there exists a span in \(\mathcal{S}\) which is not supported by \(\mathcal{C}\)_. Hallucinations exhibit task-specific characteristics in various Natural Language Generation (NLG) tasks. For instance, in Machine Translation, hallucination is often observed in the output when the input source undergoes specific perturbation (Lee et al., 2018). In Question Answering (QA), one common manifestation is semantic drift, where the generated answers deviate from the topic of the question (Li et al., 2021). Additionally, in retrieval-based QA, the retrieval model may introduce additional sources of hallucination (Ji et al., 2023). Various existing works seek to understand how hallucination happens, and have identified several factors. In various datasets, human-generated ground truth summaries can contain additional information not present in the corresponding input texts (Dhingra et al., 2019; Wang et al., 2020). Training on such data may increase a model's tendency to hallucinate. During generation, hallucination may occur when the model attends to irrelevant parts of the input context (Tian et al., 2019), or utilizes knowledge acquired during training that is not grounded in the context (Longpre et al., 2021). Additionally, the decoding method also impacts the faithfulness of generation. Past work has observed that sampling-based decoding can lead to increased hallucination (Dziri et al., 2021; Lee et al., 2022; Wan et al., 2023). ### 2.1 Methods for Reducing Hallucination Depending on the task and problem setup, various methods have been developed to detect and mitigate hallucinations. Existing approaches can be broadly categorized into training time mitigation and generation time mitigation. Training Time Mitigation. Noise in the pre-training corpus is shown to be a significant source of hallucination for language models (Zhou et al., 2023). Some past work has focused on applying simple mechanisms to filter training data, many of which are already used in training large language models (Touvron et al., 2023; Penedo et al., 2023; Li et al., 2023). Data curation is not only done in the pre-training stage but also can happen during supervised finetuning (SFT). Research in this area focuses on using high-quality, human-curated, or domain-specific data (Elaraby et al., 2023) for SFT and has shown that this can lead to improved faithfulness (Zhou et al., 2023; Chen et al., 2023; Lee et al., 2023; Cao et al., 2023). Generation Time Mitigation. Recent publications have also explored how to enhance the faithfulness of generation during inference time (Zhang et al., 2023). One line of work performs post-editing by training specialized models (Cao et al., 2020; Chen et al., 2021; Dong et al., 2020) or by directly prompting the models (Varshney et al., 2023; Mundler et al., 2023). Figure 1: Schematic illustration of CoBa (using only token probability as the detection metric with threshold \(0.2\)). After the partial summary "_I live_", the token "_in_" has a higher probability than "_with_". However, "_I live in_" will pressure the model into hallucinating a place. We detect this because all the next tokens have a probability lower than our threshold \(0.2\). Backtracking enables the model to find an alternative continuation that avoids hallucination down the line. 
Others modify the decoding algorithm. Lee et al. (2022) proposes to gradually decrease the value of \(p\) in top-\(p\) sampling (i.e. nucleus sampling), to reduce hallucinations introduced by randomness. Li et al. (2023) modifies attention to encourage more factual generations. Shi et al. (2023) proposes _Context-Aware Decoding (CAD)_ to suppress hallucinations arising from the model's prior knowledge; they adjust the context-conditional token logits with the unconditional logits. Wan et al. (2023) proposes _Lookahead_: At each decoding step, it rolls out future summaries for the top \(k\) tokens with the highest probabilities, adjusts their probabilities with BS-Fact, and picks the token with the highest adjusted probability. They also show that the performance can be further improved by ranking multiple candidates with a composite faithfulness score, or by distilling student models with the generated summaries. In contrast to these methods, CoBa does not tamper with token probabilities. Instead, it detects hallucinated tokens and fixes them through backtracking and local edits (see Figure 1). Most similar to our work is arguably King et al. (2022), a publication that we were not aware of until after the completion of this paper. While we do have distinct design choices and evaluations, we acknowledge that the two methods are rather similar and expect them to perform similarly under our setting. ## 3 Problem Setup Let \(\mathcal{M}_{\theta}\) be an autoregressive summarization model with parameters \(\theta\), and let \(\Sigma\) be its vocabulary. Given a context document \(\mathcal{C}=(c_{1},\cdots,c_{m})\) as input, \(\mathcal{M}_{\theta}\) produces a summary \(\mathcal{S}=(s_{1},\cdots,s_{n})\): \[\mathcal{M}_{\theta}(\mathcal{C})=\mathcal{S}\] where \(c_{1},\cdots,c_{m},s_{1},\cdots,s_{n}\in\Sigma\); \(m\) and \(n\) are the lengths of the context and the summary respectively. In practice, \(\mathcal{M}_{\theta}\) can either be a specialized summarization model like PEGASUS (Zhang et al., 2020), or a general language model capable of zero-shot summarization like Flan-T5 (Chung et al., 2022). If \(\mathcal{M}_{\theta}\) requires prompting, we add a prompt like "summarize: " to the context as input. Model \(\mathcal{M}_{\theta}\) generates the summary autoregressively. At each step, given a partially generated summary \(\mathcal{S}_{<t}\) up to token \(s_{t-1}\), it outputs a distribution \(p_{\theta}(s_{t}|\mathcal{C},\mathcal{S}_{<t})\) for the next token \(s_{t}\) over the vocabulary \(\Sigma\). The probability of generating the summary \(\mathcal{S}\) is thus \[p(\mathcal{S})=\prod_{t=1}^{|\mathcal{S}|}p_{\theta}(s_{t}|\mathcal{C}, \mathcal{S}_{<t})\] ## 4 Reducing Hallucination at Inference We present a detection-correction approach for reducing hallucination at decoding time. The main idea is illustrated in Figure 1: If a hallucination occurs, the problem typically originates already with its preceding tokens. The partially decoded summary can "corner" the model such that there is no faithful next token. For example, in Figure 1, the natural continuation for the partial summary "I live in" is a name of a place. The source context, however, does not mention any places. 
Figure 2: **Average token probability (top) and token-to-context distance (bottom) around the hallucination span. Token offset 0 stands for the token where hallucination starts, negative offsets stand for the tokens before hallucination, and positive ones are for the hallucinated tokens. On average, the token which starts the hallucination has the lowest probability and is the furthest away from the context tokens compared to surrounding ones.** We design strategies to detect such occurrences, and use backtracking (Tarjan, 1972) to find alternative phrases that prevent hallucinations down the line. ### 4.1 Hallucination Detection We investigate different properties of hallucinated text and devise two strategies for detecting text that is not grounded in the context. #### 4.1.1 Uncertainty-based Detection The intuition behind uncertainty-based detection is that hallucination is likely to occur if the model is unsure about what it should generate next conditioning on the input. The conditional probability of a token is one way of measuring uncertainty, and prior work has shown that the token-wise probability of autoregressive language models is well-calibrated (Kadavath et al., 2022). Petryk et al. (2023) also use a similar technique for evaluating and ranking the correctness of image captions. We validate that token probabilities are effective for identifying hallucinated tokens in summaries by computing probabilities on an annotated hallucination dataset from Maynez et al. (2020). The dataset contains generated summaries from different summarization models, such as finetuned BERT (Devlin et al., 2018), the Pointer-Generator Model (See et al., 2017), and several more, with human annotations for hallucination spans. Figure 2 presents the conditional token probabilities of Flan-T5 XL around the hallucination span. Offset 0 represents where the hallucination starts, the negative offsets represent preceding tokens, and the positive offsets represent successive tokens. In the figure, we observe a significant drop in token confidence at the start of hallucination. The average probability is only 0.2, in contrast with 0.5-0.6 for non-hallucinated tokens. The distribution of the probabilities is noisy (shown as a wide standard deviation in the figure) because of annotation noise and because some generated summaries can contain unnatural segments. Therefore, measuring conditional token probability is one way of detecting the beginning of hallucinations during the decoding process: when all possible next tokens have low probability, it suggests the absence of a suitable candidate and potentially signals the onset of hallucination. Formally, at step \(t\), we flag the token if the following condition holds: \[p_{\theta}(s_{t}|\mathcal{C},\mathcal{S}_{<t})<\delta\] where \(\mathcal{C}\) is the context document, \(\mathcal{S}_{<t}\) is the partially generated summary, and \(\delta\) is the token-level conditional probability threshold for hallucination. 
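In code, this test is a one-liner. The sketch below is our own illustration of the criterion (an addition for this write-up, not the authors' released implementation); `logits` is assumed to be the model's next-token logits over the vocabulary.

```python
import torch

def flag_by_uncertainty(logits: torch.Tensor, token_id: int,
                        delta: float = 0.2) -> bool:
    """True if the proposed token's conditional probability
    p(s_t | C, S_<t) falls below the threshold delta (Sec. 4.1.1)."""
    probs = torch.softmax(logits, dim=-1)  # normalize over the vocabulary
    return bool(probs[token_id] < delta)
```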
#### 4.1.2 Similarity-based Detection Another intuitive way of detecting hallucination is to find tokens in the generated summary that are not supported by the context, i.e., tokens that are not "close" to any part of the context document. One method of measuring closeness is by computing cosine distance in the embedding space of a language model. More concretely, given a proposed token, we compute the distance between its embedding and the embeddings of all tokens in the context and flag the token as a potential hallucination if the minimum distance is above a certain threshold. The detection criterion in this case is: \[d(v,\mathcal{C})=\min_{c_{i}\in\mathcal{C}}cos\_dist\big{(}\text{Emb}(v), \text{Emb}(c_{i})\big{)}>\varphi\] where \(v\) is the proposed token, \(\mathcal{C}\) is the context document, and \(\varphi\) is the distance threshold. Figure 2 presents the minimum token-to-context distance computed over the annotated dataset from Maynez et al. (2020) with embeddings from Flan-T5 XL (the results are averaged over 5000 samples). The average token distance at the first word in a hallucination span is significantly higher than at words in other positions, as expected. ### 4.2 Hallucination Mitigation After detecting potential hallucination during decoding using the techniques described in subsection 4.1, we perform a local intervention to prevent the generation of hallucinated phrases. Specifically, we introduce a process similar to depth-first search. We eliminate the last generated token \(s_{t}\) and try to propose an alternative token \(s^{\prime}_{t}\) that does _not_ satisfy the hallucination criteria. We keep track of the eliminations given a partial sequence \(\mathcal{S}_{<t}\) and context \(\mathcal{C}\) to avoid repetitive proposals. If \(s^{\prime}_{t}\) can be found, we add it to the generation and continue the forward decoding. We also continue if the partial sequence \(\mathcal{S}_{<t}\) only contains the start-of-sequence token [SOS]. Otherwise, we backtrack again, i.e. eliminate the current last token \(s_{t-1}\) and repeat the process (see Figure 1 for a pictorial description). Admittedly, sometimes the model is unable to find a good solution, and this is signaled by backtracking too many times. We therefore introduce an upper bound \(L\) for the number of decoding steps (both forward and backtracking) that can be performed. We pick \(L=10T\), where \(T\) is the maximum generation length for our model \(\mathcal{M}_{\theta}\). If an acceptable summary cannot be generated in \(L\) steps, we turn off the backtracking mechanism and adopt greedy decoding to generate the summary. We empirically observe that, with moderate threshold choices, less than 3% of the generations exceed the upper bound \(L\). 
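To make the whole procedure concrete, the following is a minimal sketch of the decoding loop described in subsections 4.1 and 4.2. It is our own illustration, not the authors' released implementation: `next_token_probs` and `embed` are placeholder callables standing in for model-specific code, and the fallback to plain greedy decoding once the step budget \(L\) is exhausted is left to the caller.

```python
import torch
import torch.nn.functional as F

def coba_greedy(next_token_probs, embed, context_ids, eos_id,
                delta=0.2, phi=0.5, max_len=128):
    """Greedy decoding with CoBa-style detection and backtracking.

    next_token_probs(summary_ids) -> 1-D tensor of probabilities
        p(s_t | C, S_<t) over the vocabulary (context is closed over).
    embed(token_id) -> 1-D embedding tensor for the distance test.
    """
    ctx_emb = torch.stack([embed(t) for t in context_ids])
    summary = []                 # generated token ids (after [SOS])
    banned = {}                  # prefix tuple -> token ids rejected there
    budget = 10 * max_len        # upper bound L = 10T on total steps

    while len(summary) < max_len and budget > 0:
        budget -= 1
        probs = next_token_probs(summary).clone()
        for t in banned.get(tuple(summary), ()):
            probs[t] = 0.0       # never re-propose an eliminated token
        vals, idxs = torch.sort(probs, descending=True)
        chosen = None
        for p, t in zip(vals.tolist(), idxs.tolist()):
            if p < delta:        # Sec. 4.1.1: every remaining candidate is too unlikely
                break
            # Sec. 4.1.2: cosine distance to the closest context token
            sim = F.cosine_similarity(ctx_emb, embed(t).unsqueeze(0))
            if 1.0 - sim.max().item() <= phi:
                chosen = t
                break
        if chosen is None:
            if summary:          # hallucination onset: drop and ban the last token
                last = summary.pop()
                banned.setdefault(tuple(summary), set()).add(last)
                continue
            chosen = int(idxs[0])  # at [SOS] we always continue forward
        summary.append(chosen)
        if chosen == eos_id:
            break
    return summary
```

Keying the `banned` sets by the whole prefix mirrors the depth-first-search bookkeeping described above, so a token rejected under one prefix can still be proposed later under a different prefix.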
## 5 Experiments ### 5.1 Datasets and Models We consider two models: Flan-T5 XL (Chung et al., 2022) and LLaMA (Touvron et al., 2023). We use the pretrained models without any further finetuning on individual datasets. We consider three datasets: Newsroom (Grusky et al., 2018), CNN/Dailymail (Nallapati et al., 2016), and XSUM (Narayan et al., 2018). We report numbers on the full test set of CNN/Dailymail and XSUM, and randomly sample a subset of size 5000 from the Newsroom test set. The XSUM dataset uses the first sentence of the original article as the ground truth summary, and the rest of the article as the context document. Consequently, core information is sometimes missing from the context. To improve the completeness of the context and enable a more meaningful comparison with the ground truth, we adopt a similar approach as Wang et al. (2020) and prepend the ground truth summary back to the articles before performing summarization. ### 5.2 Baselines and Implementation Details We examine four baseline decoding methods: greedy decoding, nucleus sampling, Lookahead (Wan et al., 2023) (see section 2), and CAD (Shi et al., 2023). Note that Lookahead takes a long time to roll out future summaries and compute BS-Fact for each of the rollouts (for instance, generating 5000 Newsroom samples takes 108 hours). One natural way of increasing the speed of this method is to perform "lookahead" once every \(l\) tokens instead of after every token. Thus, we consider four choices of \(l\) for Lookahead: \(l=1\) (the original version), \(l=2\), \(l=4\) and \(l=8\). Additional implementation details can be found in subsection A.1 in the Appendix. We consider two versions of CoBa: (1) CoBa that only uses the conditional word probabilities for detection, which we refer to as CoBa in the tables; (2) CoBa that uses both the conditional word probability and the token-context distance, which we refer to as CoBa-d. We use probability threshold \(\delta=0.2\) and distance threshold \(\varphi=0.5\) for Flan-T5, and \(\delta=0.3\) and \(\varphi=0.9\) for LLaMA. We evaluate CoBa's performance with greedy decoding and nucleus sampling. Since CoBa is complementary to most decoding methods, we can also use CoBa in conjunction with some of the baselines. We report results of using CoBa and CAD together. We do not evaluate using CoBa with Lookahead due to the high computational cost. ### 5.3 Metrics To evaluate faithfulness, we compare the generated summaries with their _source documents_. We use the following metrics: **AlignScore** (Zha et al., 2023) and **FactCC** (Kryscinski et al., 2019), both of which employ learned models to score faithfulness; **BS-Fact**, which measures the BERTScore (Zhang et al., 2019) precision of a generated summary with respect to its context document; **ROUGE-L** (Lin, 2004), which measures the longest common subsequence between the generation and reference. These metrics align relatively well with human judgement (Pagnoni et al., 2021) and have reasonable runtime. We also report standard summarization metrics, including **ROUGE-L**, **BERTScore F1** and **Bleurt** (Sellam et al., 2020), computed between the generated summaries and the datasets' _ground truth summaries_. It should be noted that the models are used in a zero-shot manner. The quality of the generated summaries depends on the models' capabilities, and they may have different styles compared to the ground truth. Therefore, this comparison may not always yield informative results. ### 5.4 Results We report the faithfulness performance of Flan-T5 on the different datasets in Table 1, and the performance of LLaMA in Table 2. Note that all metrics are computed between the source document and the generated summary. We report the metrics between the generated and ground truth summaries in Table 4 and Table 5 in the Appendix. For Flan-T5, both Greedy with CoBa and Lookahead at every token are competitive across datasets and metrics. Lookahead is slightly better according to BS-Fact and ROUGE-L, but is significantly slower, as seen in Figure 3. Greedy with CoBa is comparable to Lookahead every 4 tokens and is still much faster. For LLaMA, CoBa also attains a performance gain. The improvement is smaller, as LLaMA produces more faithful summaries than Flan-T5. \begin{table} \begin{tabular}{l|l c c c c} \hline \hline & **Method** & **AlignScore\(\uparrow\)** & **FactCC\(\uparrow\)** & **BS-Fact\(\uparrow\)** & **Rouge-L\(\uparrow\)** \\ \hline \multirow{9}{*}{**Baselines**} & Greedy & 0.765 & 0.604 & 0.919 & 0.131 \\ & + Lookahead (every 8 tok.) & 0.768 & 0.607 & 0.920 & 0.133 \\ & + Lookahead (every 4 tok.) & 0.774 & 0.607 & 0.922 & 0.136 \\ & + Lookahead (every 2 tok.) & 0.811 & 0.662 & 0.931 & 0.153 \\ & + Lookahead (every tok.) 
& 0.816 & 0.662 & 0.933 & 0.159 \\ & + CAD & 0.746 & 0.490 & 0.916 & 0.145 \\ & + CoBa & 0.821 & 0.674 & 0.923 & 0.138 \\ & + CoBa-d & 0.865 & 0.709 & 0.926 & 0.145 \\ & + CoBa + CAD & 0.773 & 0.515 & 0.919 & 0.149 \\ & + CoBa-d + CAD & 0.820 & 0.560 & 0.922 & 0.161 \\ \cline{2-6} & Nucleus & 0.636 & 0.482 & 0.902 & 0.101 \\ & + CAD & 0.694 & 0.430 & 0.907 & 0.117 \\ & + CoBa & 0.800 & 0.645 & 0.920 & 0.128 \\ & + CoBa-d & 0.857 & 0.692 & 0.923 & 0.139 \\ & + CoBa + CAD & 0.767 & 0.505 & 0.917 & 0.139 \\ & + CoBa-d + CAD & 0.817 & 0.552 & 0.921 & 0.154 \\ \hline \multirow{9}{*}{**Baselines**} & Greedy & 0.723 & 0.485 & 0.919 & 0.096 \\ & + Lookahead (every 8 tok.) & 0.727 & 0.486 & 0.919 & 0.096 \\ & + Lookahead (every 4 tok.) & 0.733 & 0.487 & 0.920 & 0.097 \\ & + Lookahead (every 2 tok.) & 0.756 & 0.514 & 0.925 & 0.101 \\ & + Lookahead (every tok.) & 0.767 & 0.524 & 0.926 & 0.102 \\ & + CAD & 0.694 & 0.383 & 0.919 & 0.094 \\ & + CoBa & 0.752 & 0.504 & 0.920 & 0.096 \\ & + CoBa-d & 0.791 & 0.523 & 0.921 & 0.104 \\ & + CoBa + CAD & 0.707 & 0.398 & 0.919 & 0.094 \\ & + CoBa-d + CAD & 0.735 & 0.414 & 0.923 & 0.103 \\ \cline{2-6} & Nucleus & 0.545 & 0.364 & 0.902 & 0.082 \\ & + CAD & 0.621 & 0.317 & 0.911 & 0.088 \\ & + CoBa & 0.730 & 0.489 & 0.917 & 0.093 \\ & + CoBa-d & 0.772 & 0.499 & 0.920 & 0.101 \\ & + CoBa + CAD & 0.695 & 0.373 & 0.918 & 0.093 \\ & + CoBa-d + CAD & 0.728 & 0.392 & 0.922 & 0.102 \\ \hline \multirow{9}{*}{**Baselines**} & Greedy & 0.840 & 0.506 & 0.922 & 0.146 \\ & + Lookahead (every 8 tok.) & 0.843 & 0.511 & 0.923 & 0.147 \\ & + Lookahead (every 4 tok.) & 0.848 & 0.514 & 0.925 & 0.149 \\ & + Lookahead (every 2 tok.) & 0.866 & 0.546 & 0.930 & 0.157 \\ & + Lookahead (every tok.) & 0.874 & 0.561 & 0.932 & 0.162 \\ & + CAD & 0.828 & 0.301 & 0.917 & 0.173 \\ & + CoBa & 0.869 & 0.554 & 0.924 & 0.149 \\ & + CoBa-d & 0.884 & 0.570 & 0.925 & 0.151 \\ & + CoBa + CAD & 0.836 & 0.312 & 0.918 & 0.174 \\ & + CoBa-d + CAD & 0.849 & 0.330 & 0.919 & 0.178 \\ \cline{2-6} & Nucleus & 0.706 & 0.310 & 0.907 & 0.122 \\ & + CAD & 0.777 & 0.232 & 0.911 & 0.157 \\ & + CoBa & 0.857 & 0.521 & 0.922 & 0.142 \\ & + CoBa-d & 0.872 & 0.533 & 0.923 & 0.145 \\ & + CoBa + CAD & 0.828 & 0.291 & 0.916 & 0.169 \\ & + CoBa-d + CAD & 0.841 & 0.313 & 0.918 & 0.174 \\ \hline \hline \end{tabular} \end{table} Table 1: **Faithfulness of the summaries generated with various decoding methods using Flan-T5. All the metrics are computed between the context document and the generated summary; higher is better.** It is important to note that the absolute values of FactCC are smaller for LLaMA, because LLaMA produces much longer summaries than Flan-T5, while FactCC has a negative correlation with the summary length. We report the distribution of generated summary lengths in Figure 6 in the Appendix, to show that the performance gain is not caused by producing shorter summaries. In Figure 5, we present two qualitative examples comparing greedy decoding vs. CoBa and CoBa-d. In the first example, greedy decoding produces the summary "The Boston Globe's review of "Looper" by John Sutter." with a name that does not appear in the source document. Backtracking successfully replaces it with the correct name. In the second example, although the extended name of the soccer club can include "United" based on real-world knowledge, the document itself only refers to the soccer club as "Scunthorpe". CoBa-d is able to detect this and remove "United". ### 5.5 Analysis **Token Probability Threshold**. 
We examine the effects of using different values for the token confidence threshold, and present the results in Figure 4. We use the Newsroom dataset and the Flan-T5 XL model. To better capture faithfulness, all the metrics are computed between the source document and the generated summary. Higher values are better for all metrics. For AlignScore and BS-Fact, the improvement saturates at a threshold of 0.2-0.25, while FactCC continues to improve. **Embedding Distance Threshold**. We perform ablation studies on the choice of the embedding distance threshold. Intuitively, the smaller the distance threshold is, the more similar the generated summaries are to their original documents. Results are presented in Table 3. "N/A" represents not applying the embedding distance threshold. We use token probability threshold 0.2, the Newsroom dataset, and the Flan-T5 XL model for the ablation experiments. Decreasing the threshold improves the performance, and the improvement saturates around threshold 0.5. **Document:** Dear God, what have they done to Joseph Gordon-Levitt's face? In Rian Johnson's time-travel action drama, "Looper," the star has been prosthetically altered with a fresh set of eyebrows, a snubbed nose, green contact lenses, and what appear to be new lips. He's supposed to look like the young Bruce Willis. What he resembles, mostly, is mid-period Dove. The imposture is hardly convincing, but "Looper" is fast enough, weird enough, and just about smart enough to make you forget about that. The movie wants to mess with your head, depositing us in a decrepit, overcrowded 2044 and then sprinkling on time-loop... This is an article preview. The full story is available to BostonGlobe.com subscribers. Greedy Decoding: The Boston Globe's review of "Looper" by **John Sutter**. CoBa: The Boston Globe reviews "Looper" by Rian Johnson. **Document:** Scunthorpe midfielder Neal Bishop has signed a one-year contract extension. The 35-year-old joined the Iron from Blackpool in 2013 and has made 119 league appearances for the League One side. He helped them to a third-placed finish this season, before they were beaten by Millwall in the play-off semi-finals. Bishop told the club website: "With the way the season finished, it's a sense of unfinished business and it was disappointing for all of us." Greedy Decoding: Scunthorpe United midfielder Neal Bishop has signed a new one-year contract with the Iron. CoBa-d: Scunthorpe have signed midfielder Neal Bishop on a one-year contract extension. ## 6 Limitations and Future Work In this study, we propose a method for reducing hallucinations in text summarization by backtracking. Our method consists of two steps: detection and backtracking. We employ two signals, token-level conditional probabilities and the distance between generated tokens and context tokens, to detect hallucinations. Both of these are effective ways of detecting hallucinated text, but there could be other complementary metrics that could improve detection. We defer the exploration of alternative metrics to future research endeavors. While our primary focus in this paper is summarization models, our method can easily be extended to other applications where generating factual text is paramount. For instance, in question-answering systems which first retrieve relevant documents and then generate an answer, we can define the retrieved documents to be the context and employ CoBa to produce factually correct answers. 
## 7 Conclusion Current decoding methods do not explicitly allow a model to re-generate some part of the generated text when there is no highly probable completion of the partial text. Such a scenario can lead to hallucinations, because the model is uncertain about how to complete the sentence and will sample a low-probability word. We show that there is a relatively simple solution to mitigate hallucination, which we refer to as Correction with Backtracking (CoBa). CoBa is an inference-time method that requires no additional models, is computationally efficient, and can be directly applied to diverse summarization models without retraining. CoBa detects hallucinations by using conditional probabilities of the generated tokens and measuring the distance between the generated text and the context. To correct the hallucinated text, it applies backtracking to before the hallucination and re-generates text to avoid ending up in positions with only low-scoring token options. We empirically verify that CoBa is able to identify and rectify hallucinated tokens during autoregressive decoding, and we show that CoBa produces more factual summaries for various datasets. Our future work includes exploring other detection strategies as well as extending CoBa to more diverse tasks. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Dist. Thresh** & **AlignScore\(\uparrow\)** & **FactCC\(\uparrow\)** & **BS-Fact\(\uparrow\)** & **Rouge-L\(\uparrow\)** \\ \hline N/A & 0.821 & 0.674 & 0.923 & 0.138 \\ 0.9 & 0.825 & 0.677 & 0.924 & 0.139 \\ 0.7 & 0.859 & 0.699 & 0.925 & 0.143 \\ 0.5 & 0.865 & 0.709 & 0.926 & 0.145 \\ 0.3 & 0.867 & 0.718 & 0.920 & 0.146 \\ 0.1 & 0.867 & 0.720 & 0.920 & 0.146 \\ \hline \hline \end{tabular} \end{table} Table 3: **Ablation on the threshold on token embedding distance.** We use token confidence threshold \(\delta=0.2\) while varying the distance threshold \(\varphi\) for all the experiments in this table. Figure 5: **Qualitative examples of greedy decoding vs. CoBa and CoBa-d.** The hallucinated content is marked in red and the corrected details are marked in green. CoBa and CoBa-d correctly remove the hallucinated content by triggering backtracking at the corresponding positions and generate summaries with more faithful details. ## Acknowledgement This research is supported by a gift from the Simons Foundation and grants from the National Science Foundation NSF (IIS-2107161, III-1526012, IIS-1149882, and IIS-1724282), the Natural Sciences and Engineering Research Council of Canada (NSERC 567916), the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875), LinkedIn, and NewYork-Presbyterian Hospital.
2310.11619
Determining the Betti numbers of $R/(x^{p^e},y^{p^e},z^{p^e})$ for most even degree hypersurfaces in odd characteristic
Let $k$ be a field of odd characteristic $p$. Fix an even number $d<p+1$ and a power $q\geq d+3$ of $p$. For most choices of degree $d$ standard graded hypersurfaces $R=k[x,y,z]/(f)$ with homogeneous maximal ideal $\mathfrak{m}$, we can determine the graded Betti numbers of $R/\mathfrak{m}^{[q]}$. In fact, given two fixed powers $q_0,q_1\geq d+3$, for most choices of $R$ the graded Betti numbers in high homological degree of $R/\mathfrak{m}^{[q_0]}$ and $R/\mathfrak{m}^{[q_1]}$ are the same up to a constant shift. This thesis shows this fact by combining our results with the work of Miller, Rahmati, and R.G. on link-$q$-compressed polynomials and the Betti numbers of the Frobenius powers of the maximal ideal over certain hypersurfaces. We show that link-$q$-compressed polynomials are indeed fairly common in many polynomial rings.
Heath Camphire
2023-10-17T23:09:21Z
http://arxiv.org/abs/2310.11619v1
Determining the Betti numbers of \(R/(x^{p^{e}},y^{p^{e}},z^{p^{e}})\) for most even degree hypersurfaces in odd characteristic ###### Abstract Let \(k\) be a field of odd characteristic \(p\). Fix an even number \(d<p+1\) and a power \(q\geq d+3\) of \(p\). For most choices of degree \(d\) standard graded hypersurfaces \(R=k[x,y,z]/(f)\) with homogeneous maximal ideal \(\mathfrak{m}\), we can determine the graded Betti numbers of \(R/\mathfrak{m}^{[q]}\). In fact, given two fixed powers \(q_{0},q_{1}\geq d+3\), for most choices of \(R\) the graded Betti numbers in high homological degree of \(R/\mathfrak{m}^{[q_{0}]}\) and \(R/\mathfrak{m}^{[q_{1}]}\) are the same up to a constant shift. This thesis shows this fact by combining our results with the work of Miller, Rahmati, and R.G. on link-\(q\)-compressed polynomials in [11]. We show that link-\(q\)-compressed polynomials are indeed fairly common in many polynomial rings. ## 1 Introduction Kustin and Ulrich wrote in [10] about a peculiar phenomenon they observed: given a Noetherian graded algebra \((R,\mathfrak{m})\) over a field \(k\) of characteristic \(p>0\), for certain choices of \(\mathfrak{m}\)-primary ideal \(J\) of \(R\), the tail of the resolution of \(R/J^{[p^{e}]}\) appeared to be constant as a function of \(e\) (up to a graded shift). This is unexpected, as the first several syzygy modules often have generators of vastly different degrees for different \(e\); however, it is possible to find examples of \(J\) such that the \(\dim R\)-th syzygy module of \(R/J^{[p^{e}]}\) is independent of \(e\). There have been a few results so far related to this phenomenon. Kustin and Vraciu in [11] found conditions that would show that \(R/J\) and \(R/J^{[p^{e}]}\) both have finite projective dimension, and so the tails of their resolutions are the same because they are both zero. Also in [10], Kustin and Ulrich found conditions under which the tails of the resolutions of \(R/J\) and \(R/J^{[p^{e}]}\) are isomorphic (up to a graded shift) as graded modules, though not necessarily as complexes with differential. Kustin, Rahmati, and Vraciu explicitly determined the resolutions of \(R/J^{[p^{e}]}\) in [11], where \(J=\mathfrak{m}^{[N]}\) for some \(N>0\), \(\mathfrak{m}\) is the homogeneous maximal ideal, and \(R\) is a diagonal hypersurface in two or three variables. From this they determined that the tail of the resolution of \(R/J^{[p^{e}]}\) is a periodic (not necessarily constant) function of \(e\) up to a graded shift. This is true even if the tail of the resolution is treated as a complex with differential, not just as a graded module. Kustin, R.G., and Vraciu studied diagonal hypersurfaces \(R\) in four indeterminates. In [11] they discovered that the Betti numbers in fixed homological degree of \(R/\mathfrak{m}^{[N]}\) (where \(\mathfrak{m}\) is the homogeneous maximal ideal) increase rather than stay constant as \(N\) increases. In fact, they determined in [11] that the tail of the resolution of \(R/\mathfrak{m}^{[N]}\) is the direct sum of resolutions of \(R/\mathfrak{m}^{[N^{\prime}]}\) for certain \(N^{\prime}\leq N\). Compared to three indeterminates, the four-indeterminate case has much more complicated resolutions and Betti numbers. Our results show that for \(R=k[x,y,z]/\left(\left(xy-z^{2}\right)^{D}\right)\), the \(R\)-free resolution of \(R/\mathfrak{m}^{[p^{e}]}\) has its tail independent of \(e\) if \(p>2D-1\) and \(p\) is odd; see Corollary 5.8. 
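For orientation, we recall the bracket notation with a concrete instance (an elementary illustration added here; \(\mathfrak{m}^{[q]}\) denotes the Frobenius power, the ideal generated by the \(q\)-th powers of the generators of \(\mathfrak{m}\)): \[R=k[x,y,z]/\big{(}(xy-z^{2})^{D}\big{)},\qquad\mathfrak{m}=(x,y,z),\qquad\mathfrak{m}^{[p^{e}]}=\big{(}x^{p^{e}},y^{p^{e}},z^{p^{e}}\big{)},\] so for \(p=5\), \(e=1\), and \(D=2\) (where indeed \(p>2D-1=3\)), Corollary 5.8 concerns the tail of the \(R\)-free resolution of \(R/(x^{5},y^{5},z^{5})\).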
We prove this with the method used in [10] while proving our main results, which we then combine with the work by Miller, Rahmati, and R.G. in [11]. Miller, Rahmati, and R.G. developed a property of algebras called \(\mathfrak{c}\)_-compressed_ (see Definition 2.15) in [11] as a stronger version of the relatively compressed property, which originally appeared in [13, 14] and only required the Hilbert function to reach a maximal value among graded Artinian algebras, but not necessarily their theoretical maximum. A homogeneous polynomial \(f\in k[x,y,z]\) is _link-\(q\)-compressed_ if \(\mathfrak{m}^{[q]}:f\) is \(\mathfrak{m}^{[q]}\)-compressed, where \(\mathfrak{m}=(x,y,z)\) is the homogeneous maximal ideal (see Definition 2.17), and Miller, Rahmati, and R.G. showed that if \(f\in k[x,y,z]\) is link-\(q\)-compressed for two powers \(q=q_{0},q_{1}\) of \(p\), then the graded Betti numbers for the resolutions of \(R/\mathfrak{m}^{[q_{0}]}\) and \(R/\mathfrak{m}^{[q_{1}]}\) eventually agree up to a constant shift (see the exact results in Theorem 2.24). Their work also gives formulas for algebraic invariants like the Hilbert-Kunz function of \(R\) and the Castelnuovo-Mumford regularity of \(R/\mathfrak{m}^{[q]}\) when \(f\) is link-\(q\)-compressed. In [11] it is shown that link-\(p^{e}\)-compressed is a _Zariski-open_ condition on the coefficients of \(f\) for any given values of \(e\) and \(d\). This means that the condition holds for general choices of \(f\) as long as it holds for at least one choice of \(f\). We discuss this further in Remark 2.27. The main result of this thesis is Theorem 5.11, which is the following: **Theorem A**.: _Let \(k\) be a field of odd characteristic \(p\). Fix a number \(D<\frac{p+1}{2}\) and a power \(q>1\) of \(p\). The polynomial \(\big{(}xy-z^{2}\big{)}^{D}\in k[x,y,z]\) is link-\(q\)-compressed._ The set of link-\(q\)-compressed polynomials is Zariski open as shown in [11], and our results show that it is also nonempty. This means that a general choice of polynomial \(f\) is link-\(q\)-compressed, and thus the conclusions in theorems about link-\(q\)-compressed polynomials in [11] apply to \(f\). We summarize these results as follows, which is Theorem 5.13: **Theorem B**.: _Let \(k\) be a field of odd characteristic \(p\). Fix an even number \(d<p+1\) and a power \(q\geq d+3\) of \(p\). 
For a general choice of degree \(d\) standard graded hypersurface \(k\)-algebra \(R\) over three indeterminates with homogeneous maximal ideal \(\mathfrak{m}\), the following hold:_

* _The minimal graded_ \(R\)_-free resolution of_ \(R/\mathfrak{m}^{[q]}\) _has the following eventually 2-periodic form_ \[...\xrightarrow{\boldsymbol{\partial}_{4}}R^{2d}\xrightarrow{\boldsymbol{ \partial}_{3}}R^{2d}\xrightarrow{\boldsymbol{\partial}_{4}}R^{2d}\xrightarrow{ \boldsymbol{\partial}_{3}}R^{2d}\xrightarrow{\boldsymbol{\partial}_{2}}R^{3} \xrightarrow{\boldsymbol{\partial}_{1}}R\to 0\] _whose differentials are maps of pure graded degrees_ \[\deg(\boldsymbol{\partial}_{1})=q,\;\deg(\boldsymbol{\partial}_{2})=\frac{1}{ 2}(q+d-1),\;\deg(\boldsymbol{\partial}_{3})=1,\;\deg(\boldsymbol{\partial}_{4 })=d-1.\] * _The Castelnuovo-Mumford regularity is given by_ \(\operatorname{reg}(R/\mathfrak{m}^{[q]})=\frac{3}{2}q+\frac{1}{2}d-\frac{5}{2}\)_._ * _The Hilbert-Kunz function of_ \(R\) _at_ \(q\) _is_ \(HK_{R}(q)=\frac{3}{4}dq^{2}-\frac{1}{12}(d^{3}-d)\)_._ This thesis has the following structure: Section 2 starts with Subsection 2.1, which covers matrix notation and results on Pfaffians, an invariant of skew-symmetric matrices analogous to determinants. The rest of Section 2 gives prior results related to the link-\(q\)-compressed condition, beginning with its original definition in Definitions 2.15 and 2.17. Sections 3 and 4 detail intermediate calculations used in our main results. Specifically, Section 3 covers relevant number theory results, while Section 4 lists determinant calculations related to the matrix \(\boldsymbol{M}\) in Notation 4.1. Section 5 includes our main results. We prove the first major result, Theorem 5.11, which we described above as Theorem A. We combine this result with the results in [11] using their theory of link-\(q\)-compressed polynomials to prove Theorem 5.13. At the end of the section we list other examples and theorems related to the link-\(q\)-compressed condition. ## 2 Background ### 2.1 Pfaffians Here we give the definition of a Pfaffian, as well as list several properties, formulas, and notations that we use throughout this thesis. More information on Pfaffians, including proofs for the properties to come, can be found in sources such as [1] and [13]. Much of our work involves calculating determinants and Pfaffians of matrices. We introduce notation used throughout our paper when performing those calculations: **Notation 2.1**.: * For a matrix \(\boldsymbol{T}\) of size \(M\times N\), the entry of \(\boldsymbol{T}\) in the \(i\)th row and \(j\)th column is denoted \(\boldsymbol{T}_{i,j}\). * We also let \(\boldsymbol{T}_{(i_{1}\cdots i_{m}),(j_{1}\cdots j_{n})}\) denote the matrix obtained by removing rows \(1\leq i_{1},\ldots,i_{m}\leq M\) and columns \(1\leq j_{1},\ldots,j_{n}\leq N\) from \(\boldsymbol{T}\). * If we don't remove any rows (resp. columns) we use \((-)\) in place of \((i_{1}\cdots i_{m})\) (resp. \((j_{1}\cdots j_{n})\)), which looks like \(\boldsymbol{T}_{(-),(j_{1}\cdots j_{n})}\) (resp. \(\boldsymbol{T}_{(i_{1}\cdots i_{m}),(-)}\)). * To make notation more compact, we use \(\boldsymbol{T}_{i_{1}\cdots i_{m}}\) as a shorthand for \(\boldsymbol{T}_{(i_{1}\cdots i_{m}),(i_{1}\cdots i_{m})}\). 
* Here we also note that all matrices in this thesis are denoted in boldface (which looks like \(\boldsymbol{T}\)), and column vectors have a vector arrow above them (which looks like \(\vec{T}\)), with row vectors written with a vector arrow and a transpose symbol (which looks like \(\vec{T}^{\top}\)).
* We also use \(\vec{T}_{i_{1}\cdots i_{m}}\) to denote \(\vec{T}_{(i_{1}\cdots i_{m}),(-)}\) if \(\vec{T}\) is a single column, and \(\vec{T}^{\top}_{j_{1}\cdots j_{n}}\) to denote \(\big{(}\vec{T}^{\top}\big{)}_{(-),(j_{1}\cdots j_{n})}\) if \(\vec{T}^{\top}\) is a single row.

**Definition 2.2** ([13, Corollary 2.2]).: If a skew-symmetric matrix \(\boldsymbol{A}\) is size \(s\times s\) where \(s>0\) is even, then its _Pfaffian_ can be calculated by using an analog of the cofactor expansion formula:

\[\operatorname{Pf}\boldsymbol{A}=\sum_{i=1}^{s}(-1)^{i+j+H(j-i)}\boldsymbol{A}_{i,j}\operatorname{Pf}\left(\boldsymbol{A}_{\hat{i}\hat{j}}\right),\]

where \(j\) is any fixed index from \(1\) to \(s\) and \(H(x)=\begin{cases}1,&x\geq 0\\ 0,&x<0\end{cases}\) is the Heaviside step function. If \(s=0\), we define

\[\operatorname{Pf}\boldsymbol{A}=1.\]

If \(s\) is odd, then we set

\[\operatorname{Pf}\boldsymbol{A}=0.\]

_Remark 2.3_.: The summation formula doesn't change if we fix \(i\) and sum over \(j\) instead (see [13, Corollary 4.3]). This formula is especially useful when most entries in the \(i\)-th row or \(j\)-th column of \(\boldsymbol{A}\) are zero.

_Remark 2.4_.: Pfaffians have a close relation to determinants: if \(\boldsymbol{A}\) is skew-symmetric, then \(\det\boldsymbol{A}=(\operatorname{Pf}\boldsymbol{A})^{2}\) (see [1, Theorem 2.3]). This should be no surprise, given that our definition is an analog of the cofactor expansion formula for determinants.

_Remark 2.5_.: If \(\boldsymbol{M}\) is a \(n\times n\) matrix, then

\[\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M}\\ -\boldsymbol{M}^{\top}&\boldsymbol{0}\end{bmatrix}=(-1)^{n(n-1)/2}\det\boldsymbol{M},\]

where \(\boldsymbol{M}^{\top}\) is the transpose of \(\boldsymbol{M}\) (see [13, Lemma 3.1]).

**Lemma 2.6**.: _If \(\mathbf{M}\) is a non-square matrix, then \(\operatorname{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}=0\)._

Proof.: Fix \(\ell>0\). To prove that \(\operatorname{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}=0\), we assume first that \(\mathbf{M}\) has size \((n+\ell)\times n\) and then that it has size \(n\times(n+\ell)\), and use induction on \(n\) both times. In either of these cases, the matrix \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}\) has size \(s\times s\) where \(s=(n+\ell)+n=2n+\ell\).

First we assume that \(\mathbf{M}\) has \(\ell\) more rows than columns (so it has size \((n+\ell)\times n\)). If \(\mathbf{M}\) is a \(\ell\times 0\) matrix, then \(\operatorname{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}=\operatorname{Pf}\mathbf{0}=0\).

Let \(n\geq 0\) and assume that \(\operatorname{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}=0\) for any matrix \(\mathbf{M}\) of size \((n+\ell)\times n\). Let \(\mathbf{M}\) be a \(\big{(}(n+1)+\ell\big{)}\times(n+1)\) matrix. Fix \(\beta=\big{(}(n+1)+\ell\big{)}+(n+1)=2(n+1)+\ell\) and \(j=n+1=\beta-\big{(}(n+1)+\ell\big{)}\). If \(1\leq\alpha\leq(n+1)+\ell\), let \(i=\alpha\); then we have

\[\operatorname{Pf}\left(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}_{\hat{\alpha}\hat{\beta}}\right)=\operatorname{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}_{(i),(j)}\\ -\left(\mathbf{M}_{(i),(j)}\right)^{\top}&\mathbf{0}\end{bmatrix}=0\]

by the induction hypothesis, because \(\mathbf{M}_{(i),(j)}\) has size \((n+\ell)\times n\). 
If instead \((n+1)+\ell+1\leq\alpha\leq 2(n+1)+\ell\), let \(i=\alpha-((n+1)+\ell)\); then \((\alpha,\beta)\) are coordinates for the bottom right \(\mathbf{0}\) block of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}\), and so

\[\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}_{\alpha,\beta}=\mathbf{0}_{i,j}=0.\]

In either case, \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}_{\alpha,\beta}\operatorname{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}_{\hat{\alpha}\hat{\beta}}=0\), and thus by Definition 2.2,

\[\operatorname{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}=\sum_{\alpha=1}^{2(n+1)+\ell}(-1)^{\alpha+\beta+H(\beta-\alpha)}\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}_{\alpha,\beta}\operatorname{Pf}\left(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}_{\hat{\alpha}\hat{\beta}}\right)=\sum_{\alpha=1}^{2(n+1)+\ell}(-1)^{\alpha+\beta+H(\beta-\alpha)}(0)=0.\]

Therefore, by induction we have that \(\operatorname{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}=0\) for any \(\mathbf{M}\) of size \((n+\ell)\times n\) with \(\ell>0\). The argument for \(n\times(n+\ell)\) matrices is similar.

**Definition 2.7** ([12, Equation 1.16]).: If a skew-symmetric matrix \(\mathbf{A}\) is size \(s\times s\) and \(s\) is odd, we take Pfaffians of submatrices of \(\mathbf{A}\) in order to still gain some information about \(\mathbf{A}\). For any index \(\ell\) from \(1\) to \(s\), we define

\[\operatorname{Pf}_{\ell}\mathbf{A}=(-1)^{\ell+1}\operatorname{Pf}\left(\mathbf{A}_{\hat{\ell}}\right).\]

**Definition 2.8**.: We denote the _classical adjoint_ of a matrix \(\mathbf{M}\) as \(\overline{\mathbf{M}}\), meaning \(\mathbf{M}\overline{\mathbf{M}}=\overline{\mathbf{M}}\mathbf{M}=(\det\mathbf{M})\mathbf{I}\). This matrix is given by the formula

\[\overline{\mathbf{M}}_{i,j}=(-1)^{j+i}\det\left(\mathbf{M}_{(j),(i)}\right).\]

**Definition 2.9** ([14, Definition 1.17 and Observation 1.18]).: The Pfaffian version of the classical adjoint of a skew-symmetric matrix \(\mathbf{A}\) is denoted \(\mathbf{A}^{\vee}\), where \(\mathbf{A}\mathbf{A}^{\vee}=\mathbf{A}^{\vee}\mathbf{A}=(\operatorname{Pf}\mathbf{A})\mathbf{I}\). This matrix is given by the formula

\[(\mathbf{A}^{\vee})_{i,j}=(-1)^{j+i+H(i-j)}\operatorname{Pf}\left(\mathbf{A}_{\hat{j}\hat{i}}\right).\]

**Lemma 2.10**.: _If \(\mathbf{M}\) is a \(n\times n\) matrix, then_

\[\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}=(-1)^{n(n-1)/2}\begin{bmatrix}\mathbf{0}&-\overline{\mathbf{M}}^{\top}\\ \overline{\mathbf{M}}&\mathbf{0}\end{bmatrix}.\]

Proof.: The matrix \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\) is a \(2\times 2\) block matrix consisting of \(n\times n\) blocks. Let \(1\leq\alpha,\beta\leq 2n\).

Assume \(1\leq\alpha,\beta\leq n\). Let \(i=\alpha\) and \(j=\beta\). The \((\alpha,\beta)\) coordinate of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\) is the \((i,j)\) coordinate of the top left block of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\). 
By Definition 2.9, we have

\[\begin{split}\left(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\right)_{\alpha,\beta}=&(-1)^{\beta+\alpha+H(\alpha-\beta)}\operatorname{Pf}\left(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}_{\hat{\beta}\hat{\alpha}}\right)\\ =&(-1)^{j+i+H(i-j)}\operatorname{Pf}\begin{bmatrix}\mathbf{0}_{(j\,i),(j\,i)}&\mathbf{M}_{(j\,i),(-)}\\ \left(-\mathbf{M}^{\top}\right)_{(-),(j\,i)}&\mathbf{0}\end{bmatrix}\\ =&(-1)^{j+i+H(i-j)}\operatorname{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}_{(j\,i),(-)}\\ -\left(\mathbf{M}_{(j\,i),(-)}\right)^{\top}&\mathbf{0}\end{bmatrix}\\ =&(-1)^{j+i+H(i-j)}\cdot 0=0,\end{split}\tag{1}\]

where (1) comes from Lemma 2.6 because \(\mathbf{M}_{(j\,i),(-)}\) is size \((n-2)\times n\). This means that the top left block of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\) is \(\mathbf{0}\).

Assume \(1\leq\alpha\leq n<\beta\leq 2n\). Let \(i=\alpha\) and \(j=\beta-n\). The \((\alpha,\beta)\) coordinate of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\) is the \((i,j)\) coordinate of the top right block of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\). By Definition 2.9, we have

\[\begin{split}\left(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\right)_{\alpha,\beta}=&(-1)^{\beta+\alpha+H(\alpha-\beta)}\operatorname{Pf}\left(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}_{\hat{\beta}\hat{\alpha}}\right)\\ =&(-1)^{(n+j)+i}\operatorname{Pf}\left(\begin{bmatrix}\mathbf{0}_{(i),(i)}&\mathbf{M}_{(i),(j)}\\ -\left(\mathbf{M}_{(i),(j)}\right)^{\top}&\mathbf{0}_{(j),(j)}\end{bmatrix}\right)\\ =&(-1)^{n}(-1)^{i+j}\left((-1)^{(n-1)((n-1)-1)/2}\det\left(\mathbf{M}_{(i),(j)}\right)\right)\qquad(2)\\ =&(-1)^{n}(-1)^{(n-1)(n-2)/2}\overline{\mathbf{M}}_{j,i}\qquad(3)\\ =&-(-1)^{n-1}(-1)^{(n-1)n/2-(n-1)}\left(\overline{\mathbf{M}}^{\top}\right)_{i,j}\\ =&-(-1)^{n(n-1)/2}\left(\overline{\mathbf{M}}^{\top}\right)_{i,j},\end{split}\]

where (2) comes from Remark 2.5 because \(\mathbf{M}_{(i),(j)}\) is size \((n-1)\times(n-1)\) and (3) comes from Definition 2.8. This means that the top right block of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\) is \(-(-1)^{n(n-1)/2}\overline{\mathbf{M}}^{\top}\).

Assume \(1\leq\beta\leq n<\alpha\leq 2n\). Let \(i=\alpha-n\) and \(j=\beta\). The \((\alpha,\beta)\) coordinate of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\) is the \((i,j)\) coordinate of the bottom left block of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\). 
By Definition 2.9, we have

\[\begin{split}\left(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\right)_{\alpha,\beta}=&(-1)^{\beta+\alpha+H(\alpha-\beta)}\operatorname{Pf}\left(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}_{\hat{\beta}\hat{\alpha}}\right)\\ =&(-1)^{j+(n+i)+1}\operatorname{Pf}\left(\begin{bmatrix}\mathbf{0}_{(j),(j)}&\mathbf{M}_{(j),(i)}\\ -\left(\mathbf{M}_{(j),(i)}\right)^{\top}&\mathbf{0}_{(i),(i)}\end{bmatrix}\right)\\ =&(-1)^{n+1}(-1)^{j+i}\left((-1)^{(n-1)((n-1)-1)/2}\det\left(\mathbf{M}_{(j),(i)}\right)\right)\qquad(4)\\ =&(-1)^{n-1}(-1)^{(n-1)(n-2)/2}\overline{\mathbf{M}}_{i,j}\qquad(5)\\ =&(-1)^{n-1}(-1)^{(n-1)n/2-(n-1)}\overline{\mathbf{M}}_{i,j}\\ =&(-1)^{n(n-1)/2}\overline{\mathbf{M}}_{i,j},\end{split}\]

where (4) comes from Remark 2.5 because \(\mathbf{M}_{(j),(i)}\) is size \((n-1)\times(n-1)\) and (5) comes from Definition 2.8. This means that the bottom left block of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\) is \((-1)^{n(n-1)/2}\overline{\mathbf{M}}\).

Assume \(n+1\leq\alpha,\beta\leq 2n\). Let \(i=\alpha-n\) and \(j=\beta-n\). The \((\alpha,\beta)\) coordinate of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\) is the \((i,j)\) coordinate of the bottom right block of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\). By Definition 2.9, we have

\[\begin{split}\left(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\right)_{\alpha,\beta}=&(-1)^{\beta+\alpha+H(\alpha-\beta)}\operatorname{Pf}\left(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}_{\hat{\beta}\hat{\alpha}}\right)\\ =&(-1)^{j+i+H(i-j)}\operatorname{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}_{(-),(j\,i)}\\ -\left(\mathbf{M}_{(-),(j\,i)}\right)^{\top}&\mathbf{0}\end{bmatrix}\\ =&(-1)^{j+i+H(i-j)}\cdot 0=0,\end{split}\tag{6}\]

where (6) also comes from Lemma 2.6 because \(\mathbf{M}_{(-),(j\,i)}\) is size \(n\times(n-2)\). This means that the bottom right block of \(\begin{bmatrix}\mathbf{0}&\mathbf{M}\\ -\mathbf{M}^{\top}&\mathbf{0}\end{bmatrix}^{\vee}\) is \(\mathbf{0}\). Combining the four blocks gives the claimed formula.

The following is used in our final results.

**Lemma 2.11** ([16, Observation 1.22]).: _Let \(\mathbf{\theta}_{2}\) be an \((m+3)\times(m+3)\) skew-symmetric matrix with \(m\) even. If \(\mathbf{\theta}_{2}\) is partitioned into submatrices_

\[\mathbf{\theta}_{2}=\begin{bmatrix}\mathbf{\varphi}&\mathbf{\psi}\\ -\mathbf{\psi}^{\top}&\mathbf{\Phi}\end{bmatrix},\]

_where \(\mathbf{\varphi}\) is an \(m\times m\) skew-symmetric matrix, \(\mathbf{\Phi}\) is a \(3\times 3\) skew-symmetric matrix, and \(\mathbf{\psi}\) is a \(m\times 3\) matrix, then for each index \(\ell\), with \(1\leq\ell\leq 3\),_

\[\operatorname{Pf}_{m+\ell}(\mathbf{\theta}_{2})=\operatorname{Pf}_{\ell}\left(\mathbf{\psi}^{\top}\mathbf{\varphi}^{\vee}\mathbf{\psi}+\operatorname{Pf}(\mathbf{\varphi})\mathbf{\Phi}\right).\]

_Remark 2.12_.: This version of [16, Observation 1.22] replaces \(\boldsymbol{\psi}\) in the original version with \(\boldsymbol{\psi}^{\top}\) so that it matches later notation in this thesis.

The last note we make here regards the degree of determinants. 
_Remark 2.13_.: If a \(n\times n\) matrix \(\boldsymbol{M}\) has entries in a graded ring \(R\) that are all in the same graded component \(R_{d}\) (meaning \(\boldsymbol{M}\) is a map of pure graded degree \(d\)), then \(\det\boldsymbol{M}\in R_{nd}\). The Leibniz formula for the determinant of \(\boldsymbol{M}\) is

\[\det\boldsymbol{M}=\sum_{\sigma\in S_{n}}\left(\operatorname{sgn}\sigma\prod_{h=1}^{n}\boldsymbol{M}_{h,\sigma(h)}\right),\]

where \(S_{n}\) is the \(n\)th symmetric group and \(\operatorname{sgn}\) is the signature function. Since \(\boldsymbol{M}_{h,\sigma(h)}\in R_{d}\) for each \(h\) and \(\sigma\), each term \(\operatorname{sgn}\sigma\prod_{h=1}^{n}\boldsymbol{M}_{h,\sigma(h)}\) in this sum belongs to \(R_{\sum_{h=1}^{n}d}=R_{nd}\), and so \(\det\boldsymbol{M}\in R_{nd}\).

### The link-\(q\)-compressed condition

This thesis revolves around homogeneous polynomials that are link-\(q\)-compressed. Here we give the technical definition of link-\(q\)-compressed. After this, we introduce an equivalent condition to use as an alternate definition.

**Notation 2.14**.: For a graded algebra \(B\), let \(H_{i}\) denote the Hilbert function, defined by \(H_{i}(B)=\dim_{k}B_{i}\).

**Definition 2.15** ([14, Definition 2.3]).: Let \(\mathfrak{c}\subseteq P\) be a homogeneous complete intersection ideal such that \(P/\mathfrak{c}\) is Artinian. Let \(J\subseteq P\) be a homogeneous Gorenstein ideal containing \(\mathfrak{c}\). We say that the algebra \(A=P/J\), or equivalently the ideal \(J\), is _\(\mathfrak{c}\)-compressed_ if, for every \(i\), \(H_{i}(P/J)\) takes on the maximum possible value, i.e.,

\[H_{i}(P/J)=\min\{H_{i}(P/\mathfrak{c}),H_{s-i}(P/\mathfrak{c})\}\]

where \(s\) is the degree of the socle of \(A\).

**Notation 2.16**.: Let \(P=k[x_{1},\ldots,x_{n}]\) be a standard graded polynomial ring over a characteristic \(p>0\) field \(k\). Define the homogeneous maximal ideal \(\mathfrak{m}=(x_{1},\ldots,x_{n})\). If \(q=p^{e}\) is a power of \(p\), define the \(q\)th Frobenius power of the maximal ideal as \(\mathfrak{m}^{[q]}=(x_{1}^{q},\ldots,x_{n}^{q})\).

**Definition 2.17** ([14, Definition 2.9]).: Let \(f\in P\) be homogeneous, and \(q\) be a power of \(p\). We say that \(f\) is _link-\(q\)-compressed_ if the ideal \(\mathfrak{m}^{[q]}:f\) is \(\mathfrak{m}^{[q]}\)-compressed.

In order to connect this definition to the alternate condition in Lemma 2.20, we first discuss Macaulay's inverse systems. Macaulay discovered in 1918 a one-to-one correspondence between Artinian Gorenstein algebras \(P/I\) (where \(P\) is the polynomial ring we've been using) and cyclic \(P\)-submodules of the inverse power algebra \(D=k[x_{1}^{-1},\ldots,x_{n}^{-1}]\); see [11, Section IV]. Here \(x_{i}^{-1}\) has degree \(1\) in \(D\), and the \(P\)-module action on \(D\) is defined by the \(k\)-linear action where

\[(x_{1}^{a_{1}}\cdots x_{n}^{a_{n}})(x_{1}^{-b_{1}}\cdots x_{n}^{-b_{n}})=\begin{cases}x_{1}^{-(b_{1}-a_{1})}\cdots x_{n}^{-(b_{n}-a_{n})}&\forall i\ b_{i}\geq a_{i}\\ 0&\text{else}\end{cases}\]

for all \(a_{1},b_{1},\ldots,a_{n},b_{n}\geq 0\). This algebra \(D\) is also isomorphic to the injective hull of \(k\) (as proved by Northcott in [13]). We write \(D\) for the ring of inverse polynomials and set \(S=P\), since these are respectively isomorphic to the divided power \(k\)-algebra and the symmetric \(k\)-algebra of a \(k\)-vector space with basis \(x_{1},\ldots,x_{n}\) (see [13, Appendix A]). 
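To make the module action concrete, here is a minimal sketch in plain Python (our own illustration, not from any cited source; the function name `act` is ours) of the action rule displayed above, with monomials of \(P\) and inverse monomials of \(D\) stored as exponent tuples.

```python
# A small illustration (ours) of the P-module action on D described above.
# A monomial x_1^{a_1} ... x_n^{a_n} of P is stored as the tuple (a_1, ..., a_n),
# and an inverse monomial x_1^{-b_1} ... x_n^{-b_n} of D as (b_1, ..., b_n).

def act(a, b):
    """Apply the monomial with exponents a to the inverse monomial with
    exponents b.  Returns the exponents of the resulting inverse monomial,
    or None when some b_i < a_i, in which case the action gives 0."""
    if all(bi >= ai for ai, bi in zip(a, b)):
        return tuple(bi - ai for ai, bi in zip(a, b))
    return None

# In three variables: x*y^2 acting on x^{-1} y^{-3} z^{-2} gives y^{-1} z^{-2},
# while x^2 kills it because the exponent b_1 = 1 is smaller than a_1 = 2.
print(act((1, 2, 0), (1, 3, 2)))  # -> (0, 1, 2)
print(act((2, 0, 0), (1, 3, 2)))  # -> None
```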
The correspondence between homogeneous Artinian Gorenstein ideals \(I\) of \(P\) and cyclic graded submodules \((\varphi)\) of \(D\) is shown in [13, Lemma 2.12], and is as follows: For any homogeneous element \(\varphi\) of \(D\), let \(I(\varphi)=(0\,{:}_{S}\,\varphi)=\{r\in S\mid r\varphi=0\}\) be an ideal of \(S=P\), which is a graded Artinian ideal. Also, set \(s=\deg\varphi\), where \(s\) is the socle degree of \(P/I(\varphi)\). In the other direction: for any homogeneous Artinian Gorenstein ideal \(I\) of \(P\), \(I^{\perp}=(0\,{:}_{D}\,I)=\{\varphi\in D\mid I\varphi=0\}\) is a cyclic \(S\)-submodule \((\varphi)\) of \(D\) because \(I\) is Artinian Gorenstein. This element \(\varphi\) is called an inverse system for \(I\).

Given a homogeneous inverse polynomial \(\varphi\) in \(D\), we have a corresponding map \(\Phi:S\to D\) given by \(\Phi(g)=g\varphi\) for any \(g\in S\). This map then restricts by degree to a family of maps \(\Phi_{i}:=\Phi|_{S_{i}}:S_{i}\to D_{s-i}\), where \(s=\deg\varphi\). Note that \(I(\varphi)=\ker\Phi\) by definition.

The following gives us a way to calculate \(\varphi\) when \(J\) contains a Frobenius power of the maximal ideal of \(P\):

**Lemma 2.18** ([14, Lemma 2.7]).: _For any Gorenstein ideal \(J\subseteq P\) that contains \(\mathfrak{c}=\mathfrak{m}^{[q]}\), there exists a homogeneous \(f\in P\) of degree \(d\) with \(1<d<q\) such that \(J\) is of the form_

\[J=\mathfrak{m}^{[q]}:f.\]

_The inverse polynomial of \(J\) is_

\[\varphi=f\cdot x_{1}^{-(q-1)}\cdots x_{n}^{-(q-1)},\]

_where multiplication on the left by \(f\) is via the \(S\)-module structure on \(D\). Note that \(\varphi\) can also be written as the formal fraction_

\[\varphi=\frac{f}{x_{1}^{q-1}\cdots x_{n}^{q-1}}.\]

_In particular, its degree is_

\[\deg\varphi=s=n(q-1)-d\quad\text{where}\quad d=\deg f.\]

_Conversely, any \(\varphi\) that is a linear combination of inverse power monomials of degree \(s<n(q-1)\) with each variable having power strictly less than \(q\) can be written in the form above and so provides an inverse system whose associated ideal \(J\) is as above._

Combining this lemma with our understanding of the \(\Phi\) map from earlier, we have \(\ker\Phi=I(\varphi)=J=\mathfrak{m}^{[q]}:f\).

**Lemma 2.19** (cf. [14, Lemma 2.5]).: _Let \(\mathfrak{c}=\mathfrak{m}^{[q]}=(x_{1}^{q},\ldots,x_{n}^{q})\subseteq P\) and let \(C=(X_{1}^{q},\ldots,X_{n}^{q})\) be the lift of \(\mathfrak{c}\) to the symmetric algebra \(S\) obtained by replacing each \(x_{i}\) by \(X_{i}\). Let \(J\subseteq P\) be a homogeneous Gorenstein ideal containing \(\mathfrak{c}\) with inverse polynomial \(\varphi\) and let \(s\) be the degree of the socle of \(P/J\). The following conditions are equivalent._

1. \(J\) _is_ \(\mathfrak{c}\)_-compressed._
2. _For_ \(i\leq s/2\) _the kernel of the map_ \(\Phi_{i}:S_{i}\to D_{s-i}\) _is generated by the monomials_ \(X_{1}^{a_{1}}\cdots X_{n}^{a_{n}}\) _in_ \(S_{i}\) _with_ \(a_{j}\geq q\) _for some_ \(1\leq j\leq n\)_._

For the purpose of making our later arguments more transparent, we expand on this as follows:

**Lemma 2.20**.: _Let \(f\in P\) be a homogeneous polynomial with degree \(d=\deg f\). Then \(f\) is link-\(q\)-compressed if and only if the nonzero elements of \((\mathfrak{m}^{[q]}:f)/\mathfrak{m}^{[q]}\) all have degree \(>s/2\), where the socle degree \(s\) of \(P/(\mathfrak{m}^{[q]}:f)\) is \(n(q-1)-d\)._

Proof.: Define \(\varphi=\frac{f}{x_{1}^{q-1}\cdots x_{n}^{q-1}}\) as in Lemma 2.18. 
Also by Lemma 2.18, \[\ker\Phi=I(\varphi)=J=((x_{1}^{q},\ldots,x_{n}^{q}):f),\] which means \[\ker\Phi_{i}=S_{i}\cap\ker\Phi=((x_{1}^{q},\ldots,x_{n}^{q}):f)_{i}=(\mathfrak{ m}^{[q]}:f)_{i}\] for any \(i\). Given any \(1\leq j\leq n\), \((x_{j}^{q})\) is generated as a \(k\)-vector space by the set of \(x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\) with \(a_{j}\geq q\), so \(\mathfrak{m}^{[q]}=\sum_{j=1}^{n}(x_{j}^{q})\) is the ideal generated by the set of monomials \(x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\) with \(a_{j}\geq q\) for some \(1\leq j\leq n\). Thus for any \(i\), \((\mathfrak{m}^{[q]})_{i}\) is the set generated by the monomials \(x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\in P_{i}\) with \(a_{j}\geq q\) for some \(1\leq j\leq n\). From Lemma 2.19 we know \(\mathfrak{m}^{[q]}:f\) is \(\mathfrak{m}^{[q]}\)-compressed (i.e., \(f\) is link-\(q\)-compressed) if and only if for all \(i\leq s/2\) the kernel of the map \(\Phi_{i}:S_{i}\to D_{s-i}\) is generated by the monomials \(X_{1}^{a_{1}}\cdots X_{n}^{a_{n}}\) in \(S_{i}\) (which correspond to \(x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\) in \(P_{i}\)) with \(a_{j}\geq q\) for some \(1\leq j\leq n\). We showed that \(\ker\Phi_{i}=(\mathfrak{m}^{[q]}:f)_{i}\), and that \((\mathfrak{m}^{[q]})_{i}\) is the set generated by the monomials \(x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\in P_{i}\) with \(a_{j}\geq q\) for some \(1\leq j\leq n\), meaning this condition is equivalent to requiring \((\mathfrak{m}^{[q]}:f)_{i}=(\mathfrak{m}^{[q]})_{i}\) for all \(i\leq s/2\). Since \((\mathfrak{m}^{[q]}:f)\supseteq\mathfrak{m}^{[q]}\), we could rephrase this condition further as \(\big{(}(\mathfrak{m}^{[q]}:f)/\mathfrak{m}^{[q]}\big{)}_{i}=0\) for all \(i\leq s/2\), meaning every element of \((\mathfrak{m}^{[q]}:f)/\mathfrak{m}^{[q]}\) with degree \(\leq s/2\) is zero, or equivalently every nonzero element has degree \(>s/2\). ### Prior link-\(q\)-compressed results Here we discuss more facts about link-\(q\)-compressed polynomials. These facts all come from Miller, Rahmati, and R.G.'s results in [10]. **Notation 2.21**.: Let \(P=k[x,y,z]\) be a standard graded polynomial ring over a field \(k\) of characteristic \(p>0\) with homogeneous maximal ideal \(\mathfrak{m}=(x,y,z)\). Let \(R=P/(f)\) with homogeneous \(f\in P\) with \(d=\deg f\geq 2\). Let \(q\) be a power of \(p\). In the following theorems we work with graded ideals of the polynomial ring \(P\): the graded complete intersection ideal \[\mathfrak{c}=(c_{1},c_{2},c_{3}),\] and the linked ideals \(\mathfrak{c}+(f)\) and \(\mathfrak{c}:f=\mathfrak{c}:(\mathfrak{c}+(f))\). Since \(\mathfrak{c}+(f)\) is an almost complete intersection ideal, \(P/(\mathfrak{c}:f)\) is a Gorenstein ring (see [1, Remark 2.7] for this and more results on linkage). We study these to specialize to the case \[\mathfrak{c}=\mathfrak{m}^{[q]}=(x^{q},y^{q},z^{q})\] and the linked ideals \[\mathfrak{c}+(f)=\mathfrak{m}^{[q]}+(f)=(x^{q},y^{q},z^{q},f)\quad\text{and} \quad\mathfrak{c}:f=\mathfrak{m}^{[q]}:f.\] Assume \(c_{1},c_{2},c_{3}\) are part of a minimal generating set for \(\mathfrak{c}:f\). Fix a set of minimal homogeneous generators for \(\mathfrak{c}:f\) as follows \[\mathfrak{c}:f=(c_{1},c_{2},c_{3},w_{1},\ldots,w_{m})\qquad\text{so that }m=\mu( \mathfrak{c}:f)-3.\] Given \(\mathfrak{c}\) and \(f\) we also have matrices \(\boldsymbol{\psi}\) and \(\boldsymbol{\varphi}\) with entries in \(P\), where \(\boldsymbol{\psi}\) is \(m\times 3\) and \(\boldsymbol{\varphi}\) is \(m\times m\) and skew-symmetric. 
These matrices appear in the resolution of \(P/(\mathfrak{c}:f)\) from the Buchsbaum-Eisenbud structure theorem in [1], as well as the following resolutions (see [12, Lemma 5.2] for the form of the structure theorem these maps come from). The following propositions show the structure of the \(P\)-free and \(R\)-free resolutions of \(P/(\mathfrak{c}+(f))=R/\mathfrak{c}\). These results are refined in Theorem 2.24 where we assume \(\mathfrak{c}=\mathfrak{m}^{[q]}\) and \(f\) is link-\(q\)-compressed.

**Proposition 2.22** ([12, Proposition 5.6]).: _Assume that \(\deg c_{i}>d\) for \(i=1,2,3\). The minimal homogeneous resolution of \(P/(\mathfrak{c}+(f))\) over \(P\) is of the form_

\[0\gets P\xleftarrow{\left[\begin{matrix}\vec{c}^{\top}&uf\end{matrix}\right]}P^{3}\oplus P\xleftarrow{\left[\begin{matrix}\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}&uf\boldsymbol{I}\\ -\vec{w}^{\top}&-\vec{c}^{\top}\end{matrix}\right]}P^{m}\oplus P^{3}\xleftarrow{\left[\begin{matrix}\boldsymbol{\varphi}\\ -\boldsymbol{\psi}^{\top}\end{matrix}\right]}P^{m}\gets 0\]

_with \(\operatorname{Pf}(\boldsymbol{\varphi})=uf\) for some unit \(u\in k\), \(\vec{c}^{\top}=\left[\begin{matrix}c_{1}&c_{2}&c_{3}\end{matrix}\right]\) and \(\vec{w}^{\top}=\left[\begin{matrix}w_{1}&\cdots&w_{m}\end{matrix}\right]\)._

**Proposition 2.23** ([12, Proposition 5.8]).: _Assume that \(\deg c_{i}>d\) for \(i=1,2,3\). The \(R\)-free resolution of \(R/\mathfrak{c}=P/(\mathfrak{c}+(f))\) is_

\[...\xrightarrow{\boldsymbol{\varphi}}R^{m}\xrightarrow{\boldsymbol{\varphi}^{\vee}}R^{m}\xrightarrow{\boldsymbol{\varphi}}R^{m}\xrightarrow{\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}}R^{3}\xrightarrow{\vec{c}^{\top}}R\to 0\]

_where by \(\boldsymbol{\varphi}\), \(\boldsymbol{\varphi}^{\vee}\), \(\boldsymbol{\psi}^{\top}\), and \(\vec{c}^{\top}\) above we mean the images in \(R\) of these \(P\)-matrices._

_In addition, the homogeneous skew-symmetric matrix \(\boldsymbol{\varphi}\) has Pfaffian equal to \(uf\) for some unit \(u\in k\), and the pair \((\boldsymbol{\varphi},\boldsymbol{\varphi}^{\vee})\) of matrices over \(P\) is the matrix factorization of \(uf\) over \(P\) associated to the periodic portion of this resolution._

The following shows that link-\(q\)-compressed polynomials have nice graded Betti numbers for \(R/\mathfrak{m}^{[q]}\).

**Theorem 2.24** ([12, Theorem 5.10 and Theorem A]).: _Let \(R=k[x,y,z]/(f)\) be a standard graded hypersurface ring over a field \(k\) of characteristic \(p>0\) with homogeneous maximal ideal \(\mathfrak{m}\). Suppose that \(p\) and \(d=\deg f\) have opposite parity. Let \(q\geq d+3\) be a power of \(p\)._

_Assume further that \(f\) is link-\(q\)-compressed. Then the following hold._

1. _The matrix_ \(\boldsymbol{\varphi}\) _in the matrix factorization_ \((\boldsymbol{\varphi},\boldsymbol{\varphi}^{\vee})\) _is a_ \(2d\times 2d\) _linear matrix with Pfaffian equal to_ \(uf\) _for some unit_ \(u\in k\)_, and its Pfaffian adjoint_ \(\boldsymbol{\varphi}^{\vee}\) _is a_ \(2d\times 2d\) _matrix with entries of degree_ \(d-1\)_. The matrix_ \(\boldsymbol{\psi}\) _is_ \(2d\times 3\) _with entries of degree_ \(\frac{1}{2}(q-d+1)\)_. 
In particular, the minimal graded resolution of_ \(R/\mathfrak{m}^{[q]}\) _over_ \(R\) _has the following eventually 2-periodic form_

\[...\xrightarrow{\boldsymbol{\varphi}}R^{2d}(-b-d)\xrightarrow{\boldsymbol{\varphi}^{\vee}}R^{2d}(-b-1)\xrightarrow{\boldsymbol{\varphi}}R^{2d}(-b)\xrightarrow{\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}}R^{3}(-q)\xrightarrow{\vec{c}^{\top}}R\to 0\]

_where_ \(b=\frac{3}{2}q+\frac{1}{2}d-\frac{1}{2}\)_._

_As a result, for any two such values of_ \(q\)_, say_ \(q_{1}>q_{0}\geq d+3\)_, whenever_ \(f\) _is link-_\(q_{i}\)_-compressed for_ \(i=0,1\)_, the graded Betti numbers in high homological degree of the_ \(R\)_-modules_ \(R/\mathfrak{m}^{[q_{0}]}\) _and_ \(R/\mathfrak{m}^{[q_{1}]}\) _are the same, up to a constant shift of_ \(\frac{3}{2}(q_{1}-q_{0})\)_._

2. _The minimal graded resolution over_ \(P=k[x,y,z]\) _of_ \(P/(\mathfrak{m}^{[q]}+(f))=R/\mathfrak{m}^{[q]}\) _has the form_

\[0\to P^{2d}(-b-1)\xrightarrow{\begin{bmatrix}\boldsymbol{\varphi}\\ -\boldsymbol{\psi}^{\top}\end{bmatrix}}P^{2d}(-b)\oplus P^{3}(-q-d)\xrightarrow{\begin{bmatrix}\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}&uf\boldsymbol{I}\\ -\vec{w}^{\top}&-\vec{c}^{\top}\end{bmatrix}}P^{3}(-q)\oplus P(-d)\xrightarrow{\begin{bmatrix}\vec{c}^{\top}&uf\end{bmatrix}}P\to 0.\]

_In particular, the Castelnuovo-Mumford regularity is given by_

\[\operatorname{reg}(R/\mathfrak{m}^{[q]})=\frac{3}{2}q+\frac{1}{2}d-\frac{5}{2}.\]

**Notation 2.25**.: For a finitely generated graded \(k\)-algebra \(R\) with homogeneous maximal ideal \(\mathfrak{m}\), the Hilbert-Kunz function of \(R\) is defined as

\[HK_{R}(q)=\dim_{k}R/\mathfrak{m}^{[q]}.\]

**Theorem 2.26** ([14, Theorem 5.11]).: _Suppose that \(p\) and \(d\) have opposite parity. If \(f\in k[x,y,z]\) is link-\(q\)-compressed for some \(q\) then the Hilbert-Kunz function of \(R\) at \(q\) is_

\[HK_{R}(q)=\frac{3}{4}dq^{2}-\frac{1}{12}(d^{3}-d).\]

In order to show that link-\(q\)-compressed is a common condition for polynomials, we note the following.

_Remark 2.27_.: For fixed values of \(e\) and \(d=\deg f\), a polynomial \(f\) being link-\(p^{e}\)-compressed is a Zariski open condition on the coefficients of \(f\), as was shown in [14, Remark 2.10]. If the set of \(f\) with this condition is shown to also be nonempty, we say it holds for general choices of \(f\). It is a stronger condition for a given \(f\) to be link-\(p^{e}\)-compressed for all \(e>0\); this would mean the coefficients of \(f\) lie in a countable intersection of Zariski open sets. Such a choice of \(f\) is called _very general_.

The following tells us about the generators of \(\mathfrak{m}^{[q]}:f\) when \(f\) is link-\(q\)-compressed.

**Corollary 2.28** (cf. [14, Corollary 4.4]).: _Let \(P=k[x,y,z]\), and assume that \(f\) is link-\(q\)-compressed for some \(q\) with \(q\geq d+3\) (so that \(q\leq\frac{s}{2}\))._

_Then the minimal homogeneous generators for \(\mathfrak{m}^{[q]}:f\) in degrees \(>\frac{s}{2}\) lie in degree \(\frac{s}{2}+1\) when \(s\) is even._

**Proposition 2.29** (cf. [14, Proposition 4.5]).: _Let \(P=k[x,y,z]\), and assume that \(f\) is link-\(q\)-compressed for some \(q\) with \(q\geq d+3\) (so that \(q\leq\frac{s}{2}\)). Then \(x^{q},y^{q},z^{q}\) are part of a minimal set of generators for \(\mathfrak{m}^{[q]}:f\), and if \(s\) is even then \(\mathfrak{m}^{[q]}:f\) has exactly \(2d\) additional minimal generators in degree \(\frac{s}{2}+1\) and no others of higher degree._

We have a similar result for the socle of \(R/\mathfrak{m}^{[q]}\). 
**Theorem 2.30** (cf. [14, Theorem 4.1]).: _Let \(R=k[x,y,z]/(f)\) be a standard graded hypersurface ring over a field \(k\) with homogeneous maximal ideal \(\mathfrak{m}\). Set_

\[q=p^{e},\quad d=\deg f,\quad\text{and}\quad s=3(q-1)-d.\]

_Assume that \(q\geq d+3\). Assume further that \(f\) is link-\(q\)-compressed. Then the following hold for the socle module \(\operatorname{soc}(R/\mathfrak{m}^{[q]})\)._

1. _Its generators lie in degree_ \(s_{2}=\frac{1}{2}(3(q-1)+d-2)\) _if_ \(s\) _is even._
2. _If_ \(s\) _is even, its dimension satisfies_ \(\dim_{k}\operatorname{soc}\left(R/\mathfrak{m}^{[q]}\right)_{s_{2}}=2d\)_._

## 3 Relevant number theory

The following sequence of coefficients is vital for our results. This section is dedicated to proving theorems about these coefficients.

**Notation 3.1**.: For any integer \(t\geq 0\), we define the dyadic rational \(\lambda_{t}=2^{-2t}\binom{2t}{t}\in\mathbb{Z}[2^{-1}]\).

_Remark 3.2_.: For any \(t\geq 0\), \(\lambda_{t}\) is well defined in any \(\mathbb{Z}[2^{-1}]\)-algebra, including fields whose characteristic is not equal to \(2\).

_Remark 3.3_.: The Maclaurin expansion of the function \((1-x)^{-1/2}\) uses the \(\lambda_{t}=2^{-2t}\binom{2t}{t}\) coefficients:

\[\frac{1}{\sqrt{1-x}}=\sum_{t=0}^{\infty}\lambda_{t}x^{t}\]

for all \(-1\leq x<1\).

The following are important facts we need for several results:

**Lemma 3.4**.: _For any \(t\geq 0\), the following hold in any ring for which they are well-defined:_

1. \((2t)!=2^{t}t!\prod_{h=1}^{t}(2h-1)\)_,_
2. \(2^{t}\prod_{h=1}^{t}(2h-1)=t!\binom{2t}{t}\)_,_
3. \(\prod_{h=1}^{t}(2h-1)=2^{t}t!\lambda_{t}\)_, and_
4. \(\left(\prod_{h=1}^{t}(2h-1)\right)^{2}=(2t)!\lambda_{t}\)_._

Proof.: Let \(t\geq 0\).

1. By separating the even and odd factors of \((2t)!\), we have \[(2t)!=\prod_{h=1}^{2t}h=\prod_{h=1}^{t}(2h)\prod_{h=1}^{t}(2h-1)=\prod_{h=1}^{t}2\prod_{h=1}^{t}h\prod_{h=1}^{t}(2h-1)=2^{t}t!\prod_{h=1}^{t}(2h-1).\]
2. We also have \(t!\binom{2t}{t}=\frac{(2t)!}{t!}=2^{t}\prod_{h=1}^{t}(2h-1)\) using \(1\).
3. From \(2\) we can see that \[2^{t}t!\lambda_{t}=2^{t}t!\left(2^{-2t}\binom{2t}{t}\right)=2^{-t}t!\binom{2t}{t}=\prod_{h=1}^{t}(2h-1).\]
4. Finally, \[(2t)!\lambda_{t}=\left(2^{t}t!\prod_{h=1}^{t}(2h-1)\right)\lambda_{t}=\left(\prod_{h=1}^{t}(2h-1)\right)^{2}.\qed\]

_Remark 3.5_.: Note that \(1\) and \(2\) of this previous Lemma are true in any ring since only integers are involved. On the other hand, \(3\) and \(4\) are only true in \(\mathbb{Z}[2^{-1}]\)-algebras because they involve \(\lambda_{t}\in\mathbb{Z}[2^{-1}]\), as well as integers.

In order to prove a vital lemma about \(\lambda_{t}\), we recall two well-known theorems about integers modulo a prime:

**Theorem** (Fermat's little theorem).: _Let \(p\) be a prime number. Then for any integer \(a\), \(a^{p}\equiv a\mod p\). By applying this repeatedly, we also see that \(a^{q}\equiv a\mod p\) if \(q\) is any power of \(p\)._

**Theorem** (Lucas's theorem).: _Let \(p\) be a prime number. Let \(a,b\geq 0\) be integers and let_

\[a=a_{n}p^{n}+a_{n-1}p^{n-1}+\cdots+a_{0}\qquad\text{and}\qquad b=b_{n}p^{n}+b_{n-1}p^{n-1}+\cdots+b_{0}\]

_be the base \(p\) expansions of \(a\) and \(b\). Then_

\[\binom{a}{b}\equiv\prod_{i=0}^{n}\binom{a_{i}}{b_{i}}\mod p,\]

_where we use the convention that \(\binom{m}{n}=0\) if \(m<n\)._

The following lemma is the reason the \(\lambda_{t}\) coefficients are vital to our results:

**Lemma 3.6**.: _Let \(p\) be an odd prime and \(q=p^{e}\) where \(e>0\). Set \(\pi=(q-1)/2\). 
Then, in any ring of characteristic \(p>2\),_

\[(-1)^{t}\binom{\pi}{t}=2^{-2t}\binom{2t}{t}=\lambda_{t}\]

_for all \(0\leq t<q\)._

Proof.: We prove a statement in the integers to prove the lemma. The following work shows that \(2^{2t}\left((-1)^{t}\binom{\pi}{t}\right)\equiv\binom{2t}{t}\mod p\) for all \(0\leq t<q\).

Define \(\pi_{0}=(p-1)/2\). Let \(0\leq t_{0}\leq p-1\). Then we have

\[\begin{split} t_{0}!\left(2^{2t_{0}}\left((-1)^{t_{0}}\binom{\pi_{0}}{t_{0}}\right)\right)=&\left((-1)^{t_{0}}2^{2t_{0}}\right)\left(t_{0}!\binom{\pi_{0}}{t_{0}}\right)\\ =&2^{t_{0}}(-2)^{t_{0}}\prod_{h=1}^{t_{0}}((\pi_{0}+1)-h)\\ =&2^{t_{0}}\left(\prod_{h=1}^{t_{0}}(-2)\prod_{h=1}^{t_{0}}((\pi_{0}+1)-h)\right)\\ =&2^{t_{0}}\prod_{h=1}^{t_{0}}-2((\pi_{0}+1)-h)\\ =&2^{t_{0}}\prod_{h=1}^{t_{0}}((2h-1)-(2\pi_{0}+1))\\ =&2^{t_{0}}\prod_{h=1}^{t_{0}}((2h-1)-p)\\ \equiv&2^{t_{0}}\prod_{h=1}^{t_{0}}(2h-1)\mod p\\ =&t_{0}!\binom{2t_{0}}{t_{0}},\end{split}\]

where the last line holds by Lemma 3.4. Thus

\[2^{2t_{0}}\left((-1)^{t_{0}}\binom{\pi_{0}}{t_{0}}\right)\equiv\binom{2t_{0}}{t_{0}}\mod p\]
(*)

since \(t_{0}!\not\equiv 0\mod p\) and \(p\) is prime.

Let \(0\leq t<q\). Write the base \(p\) expansion of \(t\) as \(t=\sum_{i=0}^{e-1}t_{i}p^{i}\), so \(0\leq t_{i}<p\) for all \(0\leq i\leq e-1\). Also note that

\[\sum_{i=0}^{e-1}\pi_{0}p^{i}=\frac{p-1}{2}\sum_{i=0}^{e-1}p^{i}=\frac{p^{e}-1}{2}=\pi.\]

Then we have

\[\begin{split} 2^{2t}\left((-1)^{t}\binom{\pi}{t}\right)=&(-4)^{\sum_{i=0}^{e-1}t_{i}p^{i}}\binom{\sum_{i=0}^{e-1}\pi_{0}p^{i}}{\sum_{i=0}^{e-1}t_{i}p^{i}}\\ =&\left(\prod_{i=0}^{e-1}(-4)^{t_{i}p^{i}}\right)\binom{\sum_{i=0}^{e-1}\pi_{0}p^{i}}{\sum_{i=0}^{e-1}t_{i}p^{i}}\\ =&\left(\prod_{i=0}^{e-1}\left((-4)^{p^{i}}\right)^{t_{i}}\right)\binom{\sum_{i=0}^{e-1}\pi_{0}p^{i}}{\sum_{i=0}^{e-1}t_{i}p^{i}}\\ \equiv&\left(\prod_{i=0}^{e-1}\left((-4)^{p^{i}}\right)^{t_{i}}\right)\left(\prod_{i=0}^{e-1}\binom{\pi_{0}}{t_{i}}\right)\mod p\qquad(1)\\ =&\prod_{i=0}^{e-1}\left(\left((-4)^{p^{i}}\right)^{t_{i}}\binom{\pi_{0}}{t_{i}}\right)\\ \equiv&\prod_{i=0}^{e-1}\left((-4)^{t_{i}}\binom{\pi_{0}}{t_{i}}\right)\mod p\qquad(2)\\ =&\prod_{i=0}^{e-1}\left(2^{2t_{i}}\left((-1)^{t_{i}}\binom{\pi_{0}}{t_{i}}\right)\right)\\ \equiv&\prod_{i=0}^{e-1}\binom{2t_{i}}{t_{i}}\mod p,\qquad(3)\end{split}\]

where (1) holds by Lucas's theorem, (2) holds by Fermat's little theorem, and (3) holds by Equation (*).

If \(t_{i}\leq\pi_{0}\) for all \(i\), this means that \(0\leq 2t_{i}<p\), and thus \(2t=\sum_{i=0}^{e-1}(2t_{i})p^{i}\) is the base \(p\) expansion of \(2t\), which means by Lucas's theorem we have

\[2^{2t}\left((-1)^{t}\binom{\pi}{t}\right)\equiv\prod_{i=0}^{e-1}\binom{2t_{i}}{t_{i}}\equiv\binom{\sum_{i=0}^{e-1}(2t_{i})p^{i}}{\sum_{i=0}^{e-1}t_{i}p^{i}}=\binom{2t}{t}\mod p.\]

On the other hand, if \(t\) has at least one digit greater than \(\pi_{0}\), we show that both \(2^{2t}\left((-1)^{t}\binom{\pi}{t}\right)\) and \(\binom{2t}{t}\) are congruent to \(0\mod p\). Let \(0\leq I\leq e-1\) be the smallest index such that \(t_{I}>\pi_{0}\). Then \(0\leq 2t_{i}<p\) for \(i<I\) and \(0\leq 2t_{I}-p<p\), so \(2t=\sum_{i=0}^{I-1}(2t_{i})p^{i}+(2t_{I}-p)p^{I}+\cdots\) is the base \(p\) expansion of \(2t\), up to the \(I\)th degree. Note that because \(t_{I}>\pi_{0}\), \(\binom{\pi_{0}}{t_{I}}=0\), and thus \(\binom{2t_{I}}{t_{I}}\equiv 2^{2t_{I}}\left((-1)^{t_{I}}\binom{\pi_{0}}{t_{I}}\right)=0\mod p\) by Equation (*). Note that we also have \(\binom{2t_{I}-p}{t_{I}}=0\) because \(t_{I}>2t_{I}-p\), which follows from \(t_{I}<p\). 
Thus, we have

\[\binom{2t}{t}=\binom{\sum_{i=0}^{I-1}(2t_{i})p^{i}+(2t_{I}-p)p^{I}+\cdots}{\sum_{i=0}^{I-1}t_{i}p^{i}+t_{I}p^{I}+\cdots}\equiv\prod_{i=0}^{I-1}\binom{2t_{i}}{t_{i}}\binom{2t_{I}-p}{t_{I}}(\cdots)=\prod_{i=0}^{I-1}\binom{2t_{i}}{t_{i}}(0)(\cdots)=0\mod p\]

because of Lucas's theorem and \(\binom{2t_{I}-p}{t_{I}}=0\), and

\[2^{2t}\left((-1)^{t}\binom{\pi}{t}\right)\equiv\prod_{i=0}^{e-1}\binom{2t_{i}}{t_{i}}=\prod_{i=0}^{I-1}\binom{2t_{i}}{t_{i}}\binom{2t_{I}}{t_{I}}\prod_{i=I+1}^{e-1}\binom{2t_{i}}{t_{i}}\equiv\prod_{i=0}^{I-1}\binom{2t_{i}}{t_{i}}(0)\prod_{i=I+1}^{e-1}\binom{2t_{i}}{t_{i}}=0\mod p\]

because \(\binom{2t_{I}}{t_{I}}\equiv 0\mod p\). Therefore \(2^{2t}\left((-1)^{t}\binom{\pi}{t}\right)\equiv\binom{2t}{t}\mod p\) for all \(0\leq t<q\).

**Lemma 3.7**.: _Let \(p\) be an odd prime, with \(q>1\) a power of \(p\). Define \(\pi=\frac{q-1}{2}\). In \(S[x,y,z]\), where \(S\) is any characteristic \(p\) ring, we have_

\[z^{q}=z\sum_{t=0}^{\pi}\lambda_{t}(xy)^{\pi-t}(xy-z^{2})^{t}.\]

Proof.: We see that

\[z^{q}=z(z^{2})^{\pi}=z(xy-(xy-z^{2}))^{\pi}=z\sum_{t=0}^{\pi}(-1)^{t}\binom{\pi}{t}(xy)^{\pi-t}(xy-z^{2})^{t}=z\sum_{t=0}^{\pi}\lambda_{t}(xy)^{\pi-t}(xy-z^{2})^{t},\]

where the third equality follows from binomial expansion and the fourth equality follows from Lemma 3.6.

## 4 Determinant calculations

The following matrices play such a vital role in our results that we dedicate this entire section to calculating certain determinants of them. The matrices and determinants we describe in this section exist in \(\mathbb{Z}[x,y,z]\), so they make sense in \(S[x,y,z]\) where \(S\) is any \(\mathbb{Z}\)-algebra. Occasionally we will have to assume these matrices and determinants exist in \(\mathbb{Z}[2^{-1}][x,y,z]\), in which case they make sense in \(S[x,y,z]\) where \(S\) is any \(\mathbb{Z}[2^{-1}]\)-algebra. We note the few times we must make this additional assumption when they happen. In either case, \(S\) can be a field of odd characteristic.

**Notation 4.1**.: Let \(d\) be a positive integer. Define the \(d\times d\) matrix

\[\mathbf{M}=\begin{bmatrix}-(d-1)z&1x&&&&\\ -(d-1)y&-(d-3)z&2x&&&\\ &-(d-2)y&-(d-5)z&3x&&&\\ &&-(d-3)y&\ddots&\ddots&&\\ &&&\ddots&\ddots&(d-2)x&&\\ &&&&-2y&(d-3)z&(d-1)x\\ &&&&&-1y&(d-1)z\end{bmatrix}.\]

The entries of \(\mathbf{M}\) are in \(\mathbb{Z}[x,y,z]\). We also define a generalization of \(\mathbf{M}\): for every \(n\geq 0\), let \(\mathbf{L}_{n}\) denote the \(n\times n\) matrix

\[\begin{bmatrix}-(d-1)z&1x&&\\ -(d-1)y&-(d-3)z&2x&&\\ &-(d-2)y&\ddots&\ddots&\\ &&\ddots&\ddots&((n-1)-1)x&\\ &&&-((d+1)-(n-1))y&(2(n-1)-(d+1))z&(n-1)x\\ &&&&-((d+1)-n)y&(2n-(d+1))z\end{bmatrix}\]

with entries in \(\mathbb{Z}[x,y,z]\). Explicitly, the entries of both \(\mathbf{M}\) and \(\mathbf{L}_{n}\) are:

* \((2i-(d+1))z=(2j-(d+1))z\) on the main diagonal \((i=j)\),
* \(ix=(j-1)x\) on the upper diagonal \((i=j-1)\),
* \(-((d+1)-i)y=-(d-j)y\) on the lower diagonal \((i=j+1)\), and
* \(0\) everywhere else.

From this it is clear that \(\mathbf{M}=\mathbf{L}_{d}\), and in fact \(\mathbf{L}_{n}\) consists of the first \(n\) rows and columns of \(\mathbf{M}\) for any \(n\leq d\). Besides the determinant of \(\mathbf{M}\) itself, we need to know \(\det\left(\mathbf{M}_{(\hat{d}),(\hat{1})}\right)\), \(\det\left(\mathbf{M}_{(\hat{1}),(\hat{d})}\right)\), and \(\det\left(\mathbf{M}_{(\hat{1}),(\hat{1})}\right)\). 
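Before computing these determinants in closed form, here is a short sketch (our own aid, assuming the sympy library; the name `build_M` is ours) that constructs \(\mathbf{M}\) directly from the entrywise description above and evaluates its determinant for one small even \(d\), anticipating Proposition 4.16.

```python
# A sketch (ours, assuming sympy) building the matrix M of Notation 4.1
# from its entrywise description, for a chosen small value of d.
import sympy as sp

x, y, z = sp.symbols('x y z')

def build_M(d):
    """The d x d tridiagonal matrix M, with 1-indexed entries as listed above."""
    def entry(i, j):
        if i == j:
            return (2*i - (d + 1)) * z      # main diagonal
        if i == j - 1:
            return i * x                    # upper diagonal
        if i == j + 1:
            return -((d + 1) - i) * y       # lower diagonal
        return sp.Integer(0)
    return sp.Matrix(d, d, lambda a, b: entry(a + 1, b + 1))

M = build_M(4)
print(sp.factor(M.det()))  # 9*(x*y - z**2)**2, matching Proposition 4.16 for d = 4
```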
We find \(\det\left(\mathbf{M}_{(\hat{d}),(\hat{1})}\right)\) and \(\det\left(\mathbf{M}_{(\hat{1}),(\hat{d})}\right)\) here:

**Proposition 4.2**.: _We have \(\det\left(\mathbf{M}_{(\hat{d}),(\hat{1})}\right)=(d-1)!x^{d-1}\) and \(\det\left(\mathbf{M}_{(\hat{1}),(\hat{d})}\right)=(-1)^{d-1}(d-1)!y^{d-1}\)._

Proof.: When we remove the first column and last row from the matrix \(\mathbf{M}\), we get the matrix

\[\mathbf{M}_{(\hat{d}),(\hat{1})}=\begin{bmatrix}1x&&&&\\ -(d-3)z&2x&&&\\ -(d-2)y&-(d-5)z&3x&&\\ &-(d-3)y&\ddots&\ddots&&\\ &&\ddots&\ddots&(d-2)x&\\ &&&-2y&(d-3)z&(d-1)x\end{bmatrix}.\]

Notice that this is a lower triangular matrix, with \(hx\) the \(h\)th diagonal entry. This means that

\[\det\left(\mathbf{M}_{(\hat{d}),(\hat{1})}\right)=\prod_{h=1}^{d-1}(hx)=(d-1)!x^{d-1}.\]

We can see that \(\mathbf{M}_{(\hat{1}),(\hat{d})}\) is

\[\begin{bmatrix}-(d-1)y&-(d-3)z&2x&&&\\ &-(d-2)y&-(d-5)z&3x&&\\ &&-(d-3)y&\ddots&\ddots&\\ &&&\ddots&\ddots&(d-2)x\\ &&&&-2y&(d-3)z\\ &&&&&-1y\end{bmatrix},\]

which is upper triangular with \(-(d-h)y\) as the \(h\)th diagonal entry. Thus we have

\[\det\left(\mathbf{M}_{(\hat{1}),(\hat{d})}\right)=\prod_{h=1}^{d-1}(-(d-h)y)=(-1)^{d-1}(d-1)!y^{d-1}.\qed\]

We show that \(A_{n}\) in the following Notation is a formula for \(\det\mathbf{L}_{n}\):

**Notation 4.3**.: For any \(t\) and \(N\) with \(0\leq t\leq N\), and any \(\nu\in\{0,1\}\), define

\[a_{t,N,\nu}=(-1)^{t+\nu}\left(\prod_{h=1}^{N+t+\nu}\left(d-(2h-1)\right)\right)\left(\prod_{h=t+1}^{N}\left(d-2h\right)\right)\binom{N}{t}\in\mathbb{Z}.\]

Note that these products are well-defined since \(0\leq t\leq N\). Set \(F=xy-z^{2}\). Given \(n\geq 0\), set \(n=2N+\nu\) with \(N\geq 0\) and \(\nu\in\{0,1\}\). Then we define

\[A_{n}=A_{2N+\nu}=z^{\nu}\sum_{t=0}^{N}a_{t,N,\nu}(xy)^{N-t}F^{t}=z^{\nu}\sum_{t=0}^{N}(-1)^{t+\nu}\left(\prod_{h=1}^{N+t+\nu}(d-(2h-1))\right)\left(\prod_{h=t+1}^{N}(d-2h)\right)\binom{N}{t}(xy)^{N-t}F^{t}\in\mathbb{Z}[x,y,z].\]

We prove that \(\det\mathbf{L}_{n}=A_{n}\) for all \(n\geq 0\) by showing that both sequences satisfy the following homogeneous linear recurrence relation and initial values:

* \(u_{0}=1\),
* \(u_{1}=-(d-1)z\), and
* \(u_{n}=(n-1)(d-(n-1))xyu_{n-2}-(d-(2n-1))zu_{n-1}\) for all \(n\geq 2\). 
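Before the formal proofs, here is a quick numeric check (our own, assuming sympy; the helper name `corner_L` is ours) that \(\det(\mathbf{L}_{n})\) does satisfy this recurrence for one small choice of \(d\).

```python
# A sketch (ours, assuming sympy) checking the recurrence for det(L_n).
import sympy as sp

x, y, z = sp.symbols('x y z')
d = 6  # any small value works here

def corner_L(n):
    """The n x n upper-left corner L_n of M, built entrywise (1-indexed)."""
    def entry(i, j):
        if i == j:
            return (2*i - (d + 1)) * z
        if i == j - 1:
            return i * x
        if i == j + 1:
            return -((d + 1) - i) * y
        return sp.Integer(0)
    return sp.Matrix(n, n, lambda a, b: entry(a + 1, b + 1))

u = [corner_L(n).det() for n in range(d + 1)]
assert u[0] == 1 and sp.expand(u[1] + (d - 1)*z) == 0
for n in range(2, d + 1):
    rhs = (n - 1)*(d - (n - 1))*x*y*u[n - 2] - (d - (2*n - 1))*z*u[n - 1]
    assert sp.expand(u[n] - rhs) == 0
print("recurrence verified for all n <=", d)
```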
We begin with \(\det\mathbf{L}_{n}\):

**Lemma 4.4**.: _The following are true regarding the sequence \(\det(\mathbf{L}_{n})\):_

* \(\det(\mathbf{L}_{0})=1\)_,_
* \(\det(\mathbf{L}_{1})=-(d-1)z\)_, and_
* \(\det(\mathbf{L}_{n})=(n-1)(d-(n-1))xy\det(\mathbf{L}_{n-2})-(d-(2n-1))z\det(\mathbf{L}_{n-1})\) _for all_ \(n\geq 2\)_._

Proof.: For any \(n\geq 2\), we apply the cofactor expansion formula repeatedly over the last columns to show the following:

\[\det(\mathbf{L}_{n})=\det\begin{bmatrix}-(d-1)z&1x\\ -(d-1)y&\ddots&\ddots\\ &\ddots&\ddots&(n-3)x\\ &&-((d+3)-n)y&(2n-(d+5))z&(n-2)x\\ &&&-((d+2)-n)y&(2n-(d+3))z&(n-1)x\\ &&&&-((d+1)-n)y&(2n-(d+1))z\end{bmatrix}\]

\[=(-1)^{n+n}(2n-(d+1))z\det\begin{bmatrix}-(d-1)z&1x\\ -(d-1)y&\ddots&\ddots\\ &\ddots&\ddots&(n-3)x\\ &&-((d+3)-n)y&(2n-(d+5))z&(n-2)x\\ &&&-((d+2)-n)y&(2n-(d+3))z\end{bmatrix}\]

\[+(-1)^{n-1+n}(n-1)x\det\begin{bmatrix}-(d-1)z&1x\\ -(d-1)y&\ddots&\ddots\\ &\ddots&\ddots&(n-3)x\\ &&-((d+3)-n)y&(2n-(d+5))z&(n-2)x\\ &&&0&-((d+1)-n)y\end{bmatrix}\]

\[=-(d-(2n-1))z\det(\mathbf{L}_{n-1})-(n-1)x(-1)^{(n-1)+(n-1)}(-((d+1)-n)y)\det\begin{bmatrix}-(d-1)z&1x\\ -(d-1)y&\ddots&\ddots\\ &\ddots&\ddots&(n-3)x\\ &&-((d+3)-n)y&(2n-(d+5))z\end{bmatrix}\]

\[=-(d-(2n-1))z\det(\mathbf{L}_{n-1})+(n-1)(d-(n-1))xy\det(\mathbf{L}_{n-2}).\]

This gives us a recursive formula:

\[\det(\mathbf{L}_{n})=(n-1)(d-(n-1))xy\det(\mathbf{L}_{n-2})-(d-(2n-1))z\det(\mathbf{L}_{n-1})\]

for all \(n\geq 2\). We also have \(\det(\mathbf{L}_{0})=1\) because \(\mathbf{L}_{0}\) is a \(0\times 0\) matrix, and \(\det(\mathbf{L}_{1})=\det\left[-(d-1)z\right]=-(d-1)z\).

For \(A_{n}\), we start with the initial conditions:

**Lemma 4.5**.: _The following are true regarding the sequence \(A_{n}\):_

* \(A_{0}=1\) _and_
* \(A_{1}=-(d-1)z\)_._

Proof.: For \(\nu\in\{0,1\}\) we have

\[\begin{split} A_{\nu}=A_{2(0)+\nu}=&z^{\nu}\sum_{t=0}^{0}(-1)^{t+\nu}\left(\prod_{h=1}^{0+t+\nu}(d-(2h-1))\right)\left(\prod_{h=t+1}^{0}(d-2h)\right)\binom{0}{t}(xy)^{0-t}F^{t}\\ =&z^{\nu}(-1)^{0+\nu}\left(\prod_{h=1}^{0+0+\nu}(d-(2h-1))\right)\left(\prod_{h=0+1}^{0}(d-2h)\right)\binom{0}{0}(xy)^{0-0}F^{0}\\ =&(-1)^{\nu}z^{\nu}\prod_{h=1}^{\nu}(d-(2h-1))\end{split}\]

by Notation 4.3, and thus we have

\[A_{0}=(-1)^{0}z^{0}\prod_{h=1}^{0}(d-(2h-1))=1\qquad\text{and}\qquad A_{1}=(-1)^{1}z^{1}\prod_{h=1}^{1}(d-(2h-1))=-(d-1)z.\qed\]

Next we show that \(A_{n}\) satisfies the recursive formula, in both the cases where \(n\) is even and \(n\) is odd.

**Notation 4.6**.: For any \(N\geq 0\), any \(t\in\mathbb{Z}\), and any \(\nu\in\{0,1\}\), define

\[\tilde{a}_{t,N,\nu}=(-1)^{t+\nu}\left(\prod_{h=1}^{N+t+\nu}(\tilde{d}-(2h-1))\right)\left(\prod_{h=t+1}^{N}(\tilde{d}-2h)\right)\binom{N}{t}.\]

Here \(\tilde{d}\) is an indeterminate, so \(\tilde{a}_{t,N,\nu}\) is in \(\mathbb{Z}(\tilde{d})\). We use the conventions that \(\binom{\beta}{\alpha}=0\) unless \(0\leq\alpha\leq\beta\), and that \(\prod_{h=\alpha}^{\beta}f(h)=\left(\prod_{h=\beta+1}^{\alpha-1}f(h)\right)^{-1}\) for \(\beta<\alpha-1\).

_Remark 4.7_.: Unless \(0\leq t\leq N\), \(\tilde{a}_{t,N,\nu}=0\) because \(\binom{N}{t}=0\). 
Also, unless \(0\leq t\leq N\), one of the products \(\prod_{h=1}^{N+t+\nu}(\tilde{d}-(2h-1))\) or \(\prod_{h=t+1}^{N}(\tilde{d}-2h)\) might need to be inverted (using the convention mentioned above, that \(\prod_{h=\alpha}^{\beta}f(h)=\left(\prod_{h=\beta+1}^{\alpha-1}f(h)\right)^{-1}\) for \(\beta<\alpha-1\)); this is the reason we introduce \(\tilde{d}\) and define \(\tilde{a}_{t,N,\nu}\) as an element of the field of rational fractions \(\mathbb{Z}(\tilde{d})\). Even though \(\tilde{a}_{t,N,\nu}\) is \(0\) unless \(0\leq t\leq N\), these changes are necessary to make all factors of \(\tilde{a}_{t,N,\nu}\) well-defined for all \(t\in\mathbb{Z}\). On the other hand, if \(0\leq t\leq N\), then \(\tilde{a}_{t,N,\nu}\in\mathbb{Z}[\tilde{d}]\), and in fact \(\tilde{a}_{t,N,\nu}\) with \(\tilde{d}\) mapped to \(d\) is \(a_{t,N,\nu}\), the coefficient of \((xy)^{N-t}F^{t}\) in the formula for \(A_{2N+\nu}\) from Notation 4.3.

**Notation 4.8**.: Given \(n\geq 0\), set \(n=2N+\nu\) with \(N\geq 0\) and \(\nu\in\{0,1\}\). Then we define

\[\tilde{A}_{n}=\tilde{A}_{2N+\nu}:=z^{\nu}\sum_{t=0}^{N}\tilde{a}_{t,N,\nu}(xy)^{N-t}F^{t}\]

as an element of \(\mathbb{Z}[\tilde{d}][x,y,z]\).

_Remark 4.9_.: If we map \(\tilde{d}\) to \(d\), then \(\tilde{A}_{2N+\nu}\) becomes \(A_{2N+\nu}\) from Notation 4.3. Also, if we treat \(\tilde{A}_{n}\) as an element of \(\mathbb{Z}(\tilde{d})(x,y,z)\), we write

\[\tilde{A}_{2N+\nu}=(xy)^{N}z^{\nu}\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N,\nu}\left(\frac{F}{xy}\right)^{t},\]

since \(\tilde{a}_{t,N,\nu}=0\) unless \(0\leq t\leq N\).

First we prove the odd case:

**Lemma 4.10**.: _For all odd \(n\geq 3\), we have_

\[A_{n}=(n-1)(d-(n-1))xyA_{n-2}-(d-(2n-1))zA_{n-1}.\]

Proof.: Fix \(N\geq 1\). We first prove that

\[\tilde{a}_{t,N,1}=2N(\tilde{d}-2N)\tilde{a}_{t,N-1,1}-(\tilde{d}-(4N+1))\tilde{a}_{t,N,0}\]
(*)

for all \(t\in\mathbb{Z}\). To prove this, we note that

* \(\tilde{a}_{t,N,1}=(-1)^{t+1}\prod_{h=1}^{N+t+1}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N}(\tilde{d}-2h)\binom{N}{t}\),
* \(\tilde{a}_{t,N-1,1}=(-1)^{t+1}\prod_{h=1}^{N+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\binom{N-1}{t}\), and
* \(\tilde{a}_{t,N,0}=(-1)^{t}\prod_{h=1}^{N+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N}(\tilde{d}-2h)\binom{N}{t}\)

for any \(t\in\mathbb{Z}\) by Notation 4.6. Now we see that

\[\begin{split} 2N(\tilde{d}-2N)&\tilde{a}_{t,N-1,1}-(\tilde{d}-(4N+1))\tilde{a}_{t,N,0}\\ =&2N(\tilde{d}-2N)\left((-1)^{t+1}\prod_{h=1}^{N+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\binom{N-1}{t}\right)\\ &-(\tilde{d}-(4N+1))\left((-1)^{t}\prod_{h=1}^{N+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N}(\tilde{d}-2h)\binom{N}{t}\right)\\ =&(-1)^{t+1}\prod_{h=1}^{N+t}(\tilde{d}-(2h-1))\left(2N\prod_{h=t+1}^{N}(\tilde{d}-2h)\binom{N-1}{t}+(\tilde{d}-(4N+1))\prod_{h=t+1}^{N}(\tilde{d}-2h)\binom{N}{t}\right)\\ =&(-1)^{t+1}\prod_{h=1}^{N+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N}(\tilde{d}-2h)\left(2(N-t)+(\tilde{d}-(4N+1))\right)\binom{N}{t}\\ =&(-1)^{t+1}\prod_{h=1}^{N+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N}(\tilde{d}-2h)\,(\tilde{d}-(2(N+t+1)-1))\binom{N}{t}\\ =&(-1)^{t+1}\prod_{h=1}^{N+t+1}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N}(\tilde{d}-2h)\binom{N}{t}\\ =&\tilde{a}_{t,N,1}.\end{split}\]

Here we made use of the fact that \(N\binom{N-1}{t}=(N-t)\binom{N}{t}\) for all \(t\in\mathbb{Z}\). 
Using Remark 4.9, we write

* \(\tilde{A}_{2N+1}=(xy)^{N}z\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N,1}\left(\frac{F}{xy}\right)^{t}\),
* \(\tilde{A}_{2N}=(xy)^{N}\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N,0}\left(\frac{F}{xy}\right)^{t}\), and
* \(\tilde{A}_{2(N-1)+1}=(xy)^{N-1}z\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N-1,1}\left(\frac{F}{xy}\right)^{t}\).

Using Equation (*), which we proved earlier, we see that

\[\begin{split} 2N(\tilde{d}-2N)&xy\tilde{A}_{2(N-1)+1}-(\tilde{d}-(4N+1))z\tilde{A}_{2N}\\ =&2N(\tilde{d}-2N)xy\left((xy)^{N-1}z\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N-1,1}\left(\frac{F}{xy}\right)^{t}\right)-(\tilde{d}-(4N+1))z\left((xy)^{N}\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N,0}\left(\frac{F}{xy}\right)^{t}\right)\\ =&(xy)^{N}z\sum_{t=-\infty}^{\infty}\left(2N(\tilde{d}-2N)\tilde{a}_{t,N-1,1}-(\tilde{d}-(4N+1))\tilde{a}_{t,N,0}\right)\left(\frac{F}{xy}\right)^{t}\\ =&(xy)^{N}z\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N,1}\left(\frac{F}{xy}\right)^{t}\\ =&\tilde{A}_{2N+1},\end{split}\]

which proves that

\[\tilde{A}_{2N+1}=2N(\tilde{d}-2N)xy\tilde{A}_{2(N-1)+1}-(\tilde{d}-(4N+1))z\tilde{A}_{2N}\]

for \(N\geq 1\). When we set \(\tilde{d}\) to \(d\), this proves that

\[A_{2N+1}=2N(d-2N)xyA_{2(N-1)+1}-(d-(4N+1))zA_{2N}.\]

If we write \(n=2N+1\), then this proves the lemma.

And now we prove the even case:

**Lemma 4.11**.: _For all even \(n\geq 2\), we have_

\[A_{n}=(n-1)(d-(n-1))xyA_{n-2}-(d-(2n-1))zA_{n-1}.\]

Proof.: Fix \(N\geq 1\). We first want to prove that

\[\tilde{a}_{t,N,0}=(2N-1)(\tilde{d}-(2N-1))\tilde{a}_{t,N-1,0}-(\tilde{d}-(4N-1))(\tilde{a}_{t,N-1,1}-\tilde{a}_{t-1,N-1,1})\]
(*)

for all \(t\in\mathbb{Z}\). To prove this, we see that

* \(\tilde{a}_{t,N,0}=(-1)^{t}\prod_{h=1}^{N+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N}(\tilde{d}-2h)\binom{N}{t}\),
* \(\tilde{a}_{t,N-1,0}=(-1)^{t}\prod_{h=1}^{N-1+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\binom{N-1}{t}\),
* \(\tilde{a}_{t,N-1,1}=(-1)^{t+1}\prod_{h=1}^{N+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\binom{N-1}{t}\), and
* \(\tilde{a}_{t-1,N-1,1}=(-1)^{t}\prod_{h=1}^{N+t-1}(\tilde{d}-(2h-1))\prod_{h=t}^{N-1}(\tilde{d}-2h)\binom{N-1}{t-1}\)

for any \(t\in\mathbb{Z}\) by Notation 4.6. Now we see that

\[\begin{split}&\tilde{a}_{t,N-1,1}-\tilde{a}_{t-1,N-1,1}\\ =&(-1)^{t+1}\prod_{h=1}^{N+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\binom{N-1}{t}-(-1)^{t}\prod_{h=1}^{N+t-1}(\tilde{d}-(2h-1))\prod_{h=t}^{N-1}(\tilde{d}-2h)\binom{N-1}{t-1}\\ =&-(-1)^{t}\prod_{h=1}^{N+t-1}(\tilde{d}-(2h-1))\left((\tilde{d}-(2(N+t)-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\binom{N-1}{t}+\prod_{h=t}^{N-1}(\tilde{d}-2h)\binom{N-1}{t-1}\right)\\ =&-(-1)^{t}\prod_{h=1}^{N+t-1}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\left((\tilde{d}-2t)\binom{N-1}{t}-(2N-1)\binom{N-1}{t}+(\tilde{d}-2t)\binom{N-1}{t-1}\right)\\ =&-(-1)^{t}\prod_{h=1}^{N+t-1}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\left((\tilde{d}-2t)\binom{N}{t}-(2N-1)\binom{N-1}{t}\right),\end{split}\]

where the last line holds because \(\binom{N}{t}=\binom{N-1}{t}+\binom{N-1}{t-1}\) for all \(t\in\mathbb{Z}\). 
We use this to show that

\[\begin{split}&(2N-1)(\tilde{d}-(2N-1))\tilde{a}_{t,N-1,0}-(\tilde{d}-(4N-1))\big{(}\tilde{a}_{t,N-1,1}-\tilde{a}_{t-1,N-1,1}\big{)}\\ =&(2N-1)(\tilde{d}-(2N-1))\left((-1)^{t}\prod_{h=1}^{N-1+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\binom{N-1}{t}\right)\\ &-(\tilde{d}-(4N-1))\left(-(-1)^{t}\prod_{h=1}^{N+t-1}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\left((\tilde{d}-2t)\binom{N}{t}-(2N-1)\binom{N-1}{t}\right)\right)\\ =&(-1)^{t}\prod_{h=1}^{N+t-1}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\\ &\quad\cdot\left((2N-1)(\tilde{d}-(2N-1))\binom{N-1}{t}+(\tilde{d}-(4N-1))\left((\tilde{d}-2t)\binom{N}{t}-(2N-1)\binom{N-1}{t}\right)\right)\\ =&(-1)^{t}\prod_{h=1}^{N+t-1}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\left(2N(2N-1)\binom{N-1}{t}+(\tilde{d}-(4N-1))(\tilde{d}-2t)\binom{N}{t}\right)\\ =&(-1)^{t}\prod_{h=1}^{N+t-1}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\left(2(2N-1)(N-t)+(\tilde{d}-(4N-1))(\tilde{d}-2t)\right)\binom{N}{t}\qquad(1)\\ =&(-1)^{t}\prod_{h=1}^{N+t-1}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N-1}(\tilde{d}-2h)\,(\tilde{d}-(2(N+t)-1))(\tilde{d}-2N)\binom{N}{t}\qquad(2)\\ =&(-1)^{t}\prod_{h=1}^{N+t}(\tilde{d}-(2h-1))\prod_{h=t+1}^{N}(\tilde{d}-2h)\binom{N}{t}\\ =&\tilde{a}_{t,N,0},\end{split}\]

where line (1) holds because \(N\binom{N-1}{t}=(N-t)\binom{N}{t}\) for any \(N\geq 1\) and any \(t\in\mathbb{Z}\), and line (2) holds because

\[\begin{split} 2(2N-1)(N-t)+(\tilde{d}-(4N-1))(\tilde{d}-2t)=&2(2N-1)(N-t)+(\tilde{d}-(4N-1))((\tilde{d}-2N)+2(N-t))\\ =&2((2N-1)+(\tilde{d}-(4N-1)))(N-t)+(\tilde{d}-(4N-1))(\tilde{d}-2N)\\ =&2(\tilde{d}-2N)(N-t)+(\tilde{d}-(4N-1))(\tilde{d}-2N)\\ =&(2(N-t)+(\tilde{d}-(4N-1)))(\tilde{d}-2N)\\ =&(\tilde{d}-(2(N+t)-1))(\tilde{d}-2N).\end{split}\]

Using Remark 4.9, we write

* \(\tilde{A}_{2N}=(xy)^{N}\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N,0}\left(\frac{F}{xy}\right)^{t}\),
* \(\tilde{A}_{2(N-1)+1}=(xy)^{N-1}z\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N-1,1}\left(\frac{F}{xy}\right)^{t}\), and
* \(\tilde{A}_{2(N-1)}=(xy)^{N-1}\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N-1,0}\left(\frac{F}{xy}\right)^{t}\).

We can see that \(z^{2}=xy-F=xy\left(1-\frac{F}{xy}\right)\), and with this we prove that

\[\begin{split} z\tilde{A}_{2(N-1)+1}=&z\left((xy)^{N-1}z\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N-1,1}\left(\frac{F}{xy}\right)^{t}\right)\\ =&(xy)^{N-1}z^{2}\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N-1,1}\left(\frac{F}{xy}\right)^{t}\\ =&(xy)^{N}\left(1-\frac{F}{xy}\right)\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N-1,1}\left(\frac{F}{xy}\right)^{t}\\ =&(xy)^{N}\left(\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N-1,1}\left(\frac{F}{xy}\right)^{t}-\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N-1,1}\left(\frac{F}{xy}\right)^{t+1}\right)\\ =&(xy)^{N}\left(\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N-1,1}\left(\frac{F}{xy}\right)^{t}-\sum_{t=-\infty}^{\infty}\tilde{a}_{t-1,N-1,1}\left(\frac{F}{xy}\right)^{t}\right)\qquad(**)\\ =&(xy)^{N}\sum_{t=-\infty}^{\infty}\left(\tilde{a}_{t,N-1,1}-\tilde{a}_{t-1,N-1,1}\right)\left(\frac{F}{xy}\right)^{t},\end{split}\]

where (**) holds because the sum on the right is reindexed (\(t\) is replaced with \(t-1\)). 
Using Equation (*), we see that

\[\begin{split}&(2N-1)(\tilde{d}-(2N-1))xy\tilde{A}_{2(N-1)}-(\tilde{d}-(4N-1))z\tilde{A}_{2(N-1)+1}\\ =&(2N-1)(\tilde{d}-(2N-1))xy\left((xy)^{N-1}\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N-1,0}\left(\frac{F}{xy}\right)^{t}\right)\\ &-(\tilde{d}-(4N-1))\left((xy)^{N}\sum_{t=-\infty}^{\infty}(\tilde{a}_{t,N-1,1}-\tilde{a}_{t-1,N-1,1})\left(\frac{F}{xy}\right)^{t}\right)\\ =&(xy)^{N}\sum_{t=-\infty}^{\infty}(2N-1)(\tilde{d}-(2N-1))\tilde{a}_{t,N-1,0}\left(\frac{F}{xy}\right)^{t}\\ &-(xy)^{N}\sum_{t=-\infty}^{\infty}(\tilde{d}-(4N-1))(\tilde{a}_{t,N-1,1}-\tilde{a}_{t-1,N-1,1})\left(\frac{F}{xy}\right)^{t}\\ =&(xy)^{N}\sum_{t=-\infty}^{\infty}\left((2N-1)(\tilde{d}-(2N-1))\tilde{a}_{t,N-1,0}-(\tilde{d}-(4N-1))(\tilde{a}_{t,N-1,1}-\tilde{a}_{t-1,N-1,1})\right)\left(\frac{F}{xy}\right)^{t}\\ =&(xy)^{N}\sum_{t=-\infty}^{\infty}\tilde{a}_{t,N,0}\left(\frac{F}{xy}\right)^{t}\\ =&\tilde{A}_{2N},\end{split}\]

which proves that

\[\tilde{A}_{2N}=(2N-1)(\tilde{d}-(2N-1))xy\tilde{A}_{2(N-1)}-(\tilde{d}-(4N-1))z\tilde{A}_{2(N-1)+1}\]

for \(N\geq 1\). When we set \(\tilde{d}\) to \(d\), this proves that

\[A_{2N}=(2N-1)(d-(2N-1))xyA_{2(N-1)}-(d-(4N-1))zA_{2(N-1)+1}.\]

If we write \(n=2N\), then this proves the lemma.

**Theorem 4.12**.: _Let \(F=xy-z^{2}\). Given \(n\geq 0\), set \(n=2N+\nu\) with \(N\geq 0\) and \(\nu\in\{0,1\}\). Then we have the following:_

\[\det(\mathbf{L}_{n})=\det(\mathbf{L}_{2N+\nu})=A_{2N+\nu}=z^{\nu}\sum_{t=0}^{N}(-1)^{t+\nu}\left(\prod_{h=1}^{N+t+\nu}(d-(2h-1))\right)\left(\prod_{h=t+1}^{N}(d-2h)\right)\binom{N}{t}(xy)^{N-t}F^{t}.\]

Proof.: Both \(\det(\mathbf{L}_{n})\) and \(A_{n}\) satisfy the same following homogeneous linear recurrence relation and initial values:

* \(u_{0}=1\),
* \(u_{1}=-(d-1)z\),
* and \(u_{n}=(n-1)(d-(n-1))xyu_{n-2}-(d-(2n-1))zu_{n-1}\) for all \(n\geq 2\).

We have proved this with the following lemmas:

* \(\det(\mathbf{L}_{0})=1\) by Lemma 4.4 and \(A_{0}=1\) by Lemma 4.5,
* \(\det(\mathbf{L}_{1})=-(d-1)z\) by Lemma 4.4 and \(A_{1}=-(d-1)z\) by Lemma 4.5,
* and finally, \(\det(\mathbf{L}_{n})=(n-1)(d-(n-1))xy\det(\mathbf{L}_{n-2})-(d-(2n-1))z\det(\mathbf{L}_{n-1})\) for all \(n\geq 2\) by Lemma 4.4 and \(A_{n}=(n-1)(d-(n-1))xyA_{n-2}-(d-(2n-1))zA_{n-1}\) for all \(n\geq 2\) by Lemmas 4.10 and 4.11.

Therefore, \(\det(\mathbf{L}_{n})=A_{n}\) for all \(n\geq 0\).

We only need the case where \(d\) is even: when \(d\) is odd, the determinant is trivial.

**Corollary 4.13**.: _If \(d\) is odd, then \(\det\mathbf{M}=0\)._

Proof.: Set \(d=2D+1\). We defined \(\mathbf{M}\) and \(\mathbf{L}_{n}\) in Notation 4.1 in a way such that \(\mathbf{M}=\mathbf{L}_{d}\). By setting \(n=d\) in the formula from Theorem 4.12, we know that

\[\det(\mathbf{M})=\det(\mathbf{L}_{d})=\det(\mathbf{L}_{2D+1})=z^{1}\sum_{t=0}^{D}(-1)^{t+1}\prod_{h=1}^{D+t+1}(d-(2h-1))\prod_{h=t+1}^{D}(d-2h)\binom{D}{t}(xy)^{D-t}F^{t}.\]

We see that \(\prod_{h=1}^{D+t+1}(d-(2h-1))=0\) if \(d-(2h-1)=0\) for some \(1\leq h\leq D+t+1\). This is the case because if \(h=D+1\), then \(d-(2h-1)=(2D+1)-(2(D+1)-1)=0\) and \(1\leq h\leq D+t+1\) for any \(0\leq t\leq D\). Therefore,

\[\det(\mathbf{M})=z^{1}\sum_{t=0}^{D}(-1)^{t+1}\prod_{h=1}^{D+t+1}(d-(2h-1))\prod_{h=t+1}^{D}(d-2h)\binom{D}{t}(xy)^{D-t}F^{t}=z^{1}\sum_{t=0}^{D}(-1)^{t+1}(0)\prod_{h=t+1}^{D}(d-2h)\binom{D}{t}(xy)^{D-t}F^{t}=z^{1}\sum_{t=0}^{D}0=0.\qed\]

We now assume that \(d\) is even for the rest of this thesis, and write \(d=2D\). 
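Before continuing, here is a short sketch (our own, assuming sympy; the helper names `L_det` and `A` are ours) that verifies the closed formula of Theorem 4.12 against the determinants directly for small \(d\), including the vanishing of Corollary 4.13 when \(d\) is odd.

```python
# A sketch (ours, assuming sympy) verifying Theorem 4.12 and Corollary 4.13
# for small d by comparing det(L_n) with the closed formula A_n.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = x*y - z**2

def L_det(d, n):
    def entry(i, j):  # 1-indexed entries of L_n, as in Notation 4.1
        if i == j:
            return (2*i - (d + 1)) * z
        if i == j - 1:
            return i * x
        if i == j + 1:
            return -((d + 1) - i) * y
        return sp.Integer(0)
    return sp.Matrix(n, n, lambda a, b: entry(a + 1, b + 1)).det()

def A(d, n):
    N, nu = n // 2, n % 2  # write n = 2N + nu
    total = sp.Integer(0)
    for t in range(N + 1):
        c = sp.Integer(-1)**(t + nu) * sp.binomial(N, t)
        c *= sp.prod([d - (2*h - 1) for h in range(1, N + t + nu + 1)])
        c *= sp.prod([d - 2*h for h in range(t + 1, N + 1)])
        total += c * (x*y)**(N - t) * F**t
    return z**nu * total

for d in range(2, 6):
    assert all(sp.expand(L_det(d, n) - A(d, n)) == 0 for n in range(d + 1))
    if d % 2 == 1:
        assert sp.expand(L_det(d, d)) == 0  # Corollary 4.13
print("Theorem 4.12 checked for 2 <= d <= 5")
```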
**Corollary 4.14**.: _If \(d=2D\) is even, then_ \[\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)=-(d-1)!z\sum_{t=0}^{D-1} \lambda_{t}(xy)^{(D-1)-t}F^{t}\] _as an element of \(\mathbb{Z}[2^{-1}][x,y,z]\)._ Proof.: Recall that \(\lambda_{t}=2^{-2t}\binom{2t}{t}\) from Notation 3.1 is a dyadic rational, which is why we require 2 to be invertible. We defined \(\boldsymbol{L}_{n}\) in Notation 4.1 so that it consists of the first \(n\) rows and columns of \(\boldsymbol{M}\) if \(n\leq d\), so \(\boldsymbol{M}_{(\hat{d}),(\hat{d})}=\boldsymbol{L}_{d-1}\) since \(\boldsymbol{M}\) has size \(d\times d\). If we set \(n=d-1\) in the formula from Theorem 4.12, then \[\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)= \det(\boldsymbol{L}_{d-1})=\det(\boldsymbol{L}_{2(D-1)+1})\] \[= z^{1}\sum_{t=0}^{D-1}(-1)^{t+1}\prod_{h=1}^{D-1+t+1}(d-(2h-1)) \prod_{h=t+1}^{D-1}(d-2h)\binom{D-1}{t}(xy)^{(D-1)-t}F^{t}\] \[= -z\sum_{t=0}^{D-1}(-1)^{t}\prod_{h=1}^{D+t}(d-(2h-1))\prod_{h=t+1 }^{D-1}(d-2h)\binom{D-1}{t}(xy)^{(D-1)-t}F^{t}.\] Further, \[(-1)^{t}\prod_{h=1}^{D+t}(d-(2h-1))\prod_{h=t+1}^{D-1}(d-2h)\binom {D-1}{t}\] \[= (-1)^{t}\left(\prod_{h=1}^{D}(2D-(2h-1))\prod_{h=D+1}^{D+t}(2D-(2 h-1))\right)\prod_{h=t+1}^{D-1}(2D-2h)\binom{D-1}{t}\] \[= (-1)^{t}\prod_{h=1}^{D}(2h-1)\prod_{h=1}^{t}(-(2h-1))\prod_{h=1}^ {(D-1)-t}(2h)\binom{D-1}{t}\] \[= \prod_{h=1}^{D}(2h-1)\prod_{h=1}^{t}(2h-1)\left(2^{(D-1)-t}((D-1) -t)!\right)\binom{D-1}{t}\] \[= \prod_{h=1}^{D}(2h-1)\left(2^{t}t!\lambda_{t}\right)2^{(D-1)-t}( (D-1)-t)!\binom{D-1}{t} \tag{1}\] \[= 2^{D-1}\left(t!((D-1)-t)!\binom{D-1}{t}\right)\prod_{h=1}^{D}(2h -1)\lambda_{t}\] \[= 2^{D-1}(D-1)!\prod_{h=1}^{D}(2h-1)\lambda_{t}\] \[= \left(2^{D-1}(D-1)!\prod_{h=1}^{D-1}(2h-1)\right)(2D-1)\lambda_{t}\] \[= (2(D-1))!(2D-1)\lambda_{t}\] (2) \[= (d-1)!\lambda_{t},\] where we use \(\prod_{h=1}^{t}(2h-1)=2^{t}t!\lambda_{t}\) from Lemma 3.4 on line (1) and \(2^{t}t!\prod_{h=1}^{t}(2h-1)=(2t)!\) also from Lemma 3.4 on line (2). Therefore, we have \[\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)= -z\sum_{t=0}^{D-1}(-1)^{t}\prod_{h=1}^{D+t}(d-(2h-1))\prod_{h=t+1}^{ D-1}(d-2h)\binom{D-1}{t}(xy)^{(D-1)-t}F^{t}\] \[= -z\sum_{t=0}^{D-1}((d-1)!\lambda_{t})(xy)^{(D-1)-t}F^{t}\] \[= -(d-1)!z\sum_{t=0}^{D-1}\lambda_{t}(xy)^{(D-1)-t}F^{t}.\qed\] **Proposition 4.15**.: _If \(d=2D\) is even, then_ \[\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{1})}\right)=(d-1)!z\sum_{t=0}^{D-1} \lambda_{t}(xy)^{(D-1)-t}F^{t}\] _as an element of \(\mathbb{Z}[2^{-1}][x,y,z]\)._ Proof.: Recall that \(\lambda_{t}=2^{-2t}\binom{2t}{t}\) from Notation 3.1 is a dyadic rational, which is why we require 2 to be invertible. 
In order to turn \(\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)\) into \(\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{1})}\right)\), we reverse the order of all the columns and rows of \(\boldsymbol{M}\) (which does not affect the determinant since \(\boldsymbol{M}\) has the same number of rows and columns), in order to obtain
\[\begin{bmatrix}(d-1)z&-1y\\ (d-1)x&(d-3)z&-2y\\ &(d-2)x&(d-5)z&-3y\\ &&(d-3)x&\ddots&\ddots\\ &&&\ddots&\ddots&-(d-3)y\\ &&&&3x&-(d-5)z&-(d-2)y\\ &&&&&2x&-(d-3)z&-(d-1)y\\ &&&&&&1x&-(d-1)z\end{bmatrix}.\]
Since
\[\boldsymbol{M}=\begin{bmatrix}-(d-1)z&1x\\ -(d-1)y&-(d-3)z&2x\\ &-(d-2)y&-(d-5)z&3x\\ &&-(d-3)y&\ddots&\ddots\\ &&&\ddots&\ddots&(d-3)x\\ &&&&-3y&(d-5)z&(d-2)x\\ &&&&&-2y&(d-3)z&(d-1)x\\ &&&&&&-1y&(d-1)z\end{bmatrix},\]
we can also obtain this matrix by applying an invertible linear transformation to \(\boldsymbol{M}\) sending \(x\mapsto-y\), \(y\mapsto-x\), and \(z\mapsto-z\). Therefore, \(\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{1})}\right)\) is equal to \(\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)\) with this transformation applied. We know from Corollary 4.14 that
\[\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)=-(d-1)!z\sum_{t=0}^{D-1}\lambda_{t}(xy)^{(D-1)-t}F^{t};\]
note that \(xy\) and \(F=xy-z^{2}\), and thus \(\sum_{t=0}^{D-1}\lambda_{t}(xy)^{(D-1)-t}F^{t}\), aren't affected by the simultaneous transformations \(x\mapsto-y\), \(y\mapsto-x\), and \(z\mapsto-z\). Therefore
\[\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{1})}\right)=(d-1)!z\sum_{t=0}^{D-1}\lambda_{t}(xy)^{(D-1)-t}F^{t}.\qed\]

**Proposition 4.16**.: _If \(d=2D\) is even, then \(\det\boldsymbol{M}=\left(\prod_{h=1}^{D}(2h-1)^{2}\right)F^{D}\)._

Proof.: We defined \(\boldsymbol{M}\) and \(\boldsymbol{L}_{n}\) in Notation 4.1 in a way such that \(\boldsymbol{M}=\boldsymbol{L}_{d}\). By setting \(n=d\) in the formula from Theorem 4.12, we get
\[\begin{split}\det(\boldsymbol{M})=&\ \det(\boldsymbol{L}_{d})=\det(\boldsymbol{L}_{2D+0})\\
=&\ z^{0}\sum_{t=0}^{D}(-1)^{t+0}\prod_{h=1}^{D+t+0}(d-(2h-1))\prod_{h=t+1}^{D}(d-2h)\binom{D}{t}(xy)^{D-t}F^{t}\\
=&\ \sum_{t=0}^{D}(-1)^{t}\prod_{h=1}^{D+t}(d-(2h-1))\prod_{h=t+1}^{D}(d-2h)\binom{D}{t}(xy)^{D-t}F^{t}.\end{split}\]
If we focus on the \(\prod_{h=t+1}^{D}(d-2h)\) factor of the coefficients, we see that \(\prod_{h=t+1}^{D}(d-2h)=1\) if \(t=D\), but \(\prod_{h=t+1}^{D}(d-2h)=0\) if \(t<D\) because \(d-2h=0\) when \(h=D\). Therefore,
\[\det(\boldsymbol{M})=(-1)^{D}\prod_{h=1}^{D+D}(d-(2h-1))\binom{D}{D}(xy)^{D-D}F^{D}=(-1)^{D}\prod_{h=1}^{2D}(d-(2h-1))F^{D}.\]
If we look at \(\prod_{h=1}^{2D}(d-(2h-1))\), we can see that its factors split nicely between \(h\leq D\) and \(h\geq D+1\):
\[\begin{split}\prod_{h=1}^{2D}(d-(2h-1))=&\ \prod_{h=1}^{D}(2D-(2h-1))\prod_{h=D+1}^{2D}(2D-(2h-1))\\
=&\ \prod_{h=1}^{D}(2h-1)\prod_{h=1}^{D}(-(2h-1))\\
=&\ (-1)^{D}\prod_{h=1}^{D}(2h-1)^{2}.\end{split}\]
Thus \(\det(\boldsymbol{M})=\prod_{h=1}^{D}(2h-1)^{2}F^{D}\).

## 5 Main results

We use the results discussed thus far to prove in this section that the polynomial \((xy-z^{2})^{D}\) is link-\(q\)-compressed in \(k[x,y,z]\) for all powers \(q\) of \(p\), the odd characteristic of the field \(k\), as long as \(p>2D-1\). We then use this fact to show that most choices of degree \(2D\) homogeneous polynomials in \(k[x,y,z]\) are link-\(q\)-compressed, and thus the conclusions listed in Subsection 2.3 hold for these choices of polynomials. This section lists and proves these statements.

To prove our results, we make use of the following lemma:

**Lemma 5.1** (cf.
[12, Lemma 2.3]).: _Let \(P\) be a commutative Noetherian ring, \(\bar{x}^{\top}=\begin{bmatrix}x_{1}&x_{2}&x_{3}\end{bmatrix}\) such that \(x_{1},x_{2},x_{3}\) generate a perfect grade 3 ideal in \(P\), \(\boldsymbol{\varphi}\) a skew-symmetric matrix in \(P\) of size \(m\times m\) (where \(m\) is even), \(\boldsymbol{\psi}\) an \(m\times 3\) matrix in \(P\), and \(u\) a unit in \(P\). Define \(f=u^{-1}\operatorname{Pf}\boldsymbol{\varphi}\) and \(\boldsymbol{X}=\begin{bmatrix}0&x_{3}&-x_{2}\\ -x_{3}&0&x_{1}\\ x_{2}&-x_{1}&0\end{bmatrix}\). Assume that the entries of \(u\boldsymbol{X}-\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}\) are in the ideal \((f)P\). Define_
\[\boldsymbol{\Phi}=(u\boldsymbol{X}-\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi})/(uf)\text{ and define }\boldsymbol{\partial}_{2}=\begin{bmatrix}\boldsymbol{\varphi}&\boldsymbol{\psi}\\ -\boldsymbol{\psi}^{\top}&\boldsymbol{\Phi}\end{bmatrix}.\]
_Then the ideal \(I_{1}(\bar{x}^{\top}):f\) is generated by the maximal order Pfaffians of \(\boldsymbol{\partial}_{2}\)._

_Also,_
\[0\to P^{m}\xrightarrow{\begin{bmatrix}\boldsymbol{\varphi}\\ -u\boldsymbol{\psi}^{\top}\end{bmatrix}}\bigoplus_{P^{3}}^{P^{m}}\xrightarrow{\begin{bmatrix}u\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}&uf\boldsymbol{I}\\ -\bar{b}^{\top}&-\bar{x}^{\top}\end{bmatrix}}\bigoplus_{P}^{P^{3}}\xrightarrow{\begin{bmatrix}\bar{x}^{\top}&uf\end{bmatrix}}P\]
_is a free \(P\)-resolution of \(P/(f,I_{1}(\bar{x}^{\top}))\), where \(\bar{b}^{\top}=\begin{bmatrix}\operatorname{Pf}_{1}\boldsymbol{\partial}_{2}&\cdots&\operatorname{Pf}_{m}\boldsymbol{\partial}_{2}\end{bmatrix}\)._

_Lastly, if \(R=P/(f)\), then_
\[\cdots\to R^{m}\xrightarrow{\boldsymbol{\varphi}}R^{m}\xrightarrow{\boldsymbol{\varphi}^{\vee}}R^{m}\xrightarrow{\boldsymbol{\varphi}}R^{m}\xrightarrow{\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}}R^{3}\xrightarrow{\bar{x}^{\top}}R\]
_is a free \(R\)-resolution of \(R/I_{1}(\bar{x}^{\top})\)._

_Remark 5.2_.: We make only two significant changes from the original lemma. The first is that \(\boldsymbol{\psi}\) is replaced with its transpose (the original paper had \(\boldsymbol{\psi}\) as a \(3\times m\) matrix instead of \(m\times 3\)). The second is that every \(f\) is replaced with \(uf\) (the original paper had \(\operatorname{Pf}\boldsymbol{\varphi}=f\)). The statement as given here is equivalent to its original formulation.

We establish the following in order to apply Lemma 5.1:

**Notation 5.3**.: Let \(k\) be a field with \(\operatorname{char}k=p\), an odd prime, and let \(P=k[x,y,z]\) with homogeneous maximal ideal \(\mathfrak{m}=(x,y,z)\). Define the polynomial \(f=F^{D}\) in \(P\), where \(F=xy-z^{2}\) and \(D\geq 1\). Let \(d=\deg f=(\deg F)D=2D\). Also let \(R=P/(f)\).

Let \(q>1\) be a power of \(p\). Since \(p\) is odd, \(q\) is odd as well; define \(\pi=(q-1)/2\).

Now set \(x_{1}=x^{q}\), \(x_{2}=y^{q}\), \(x_{3}=z^{q}\). This sets \(\bar{x}^{\top}=\begin{bmatrix}x^{q}&y^{q}&z^{q}\end{bmatrix}\), \(\boldsymbol{X}=\begin{bmatrix}0&z^{q}&-y^{q}\\ -z^{q}&0&x^{q}\\ y^{q}&-x^{q}&0\end{bmatrix}\), and \(\mathfrak{m}^{[q]}=(x^{q},y^{q},z^{q})=I_{1}(\bar{x}^{\top})\).

We have
\[z^{q}=z\sum_{t=0}^{\pi}\lambda_{t}(xy)^{\pi-t}F^{t}=z\left((xy)^{\pi-(D-1)}g+fG\right),\]
where the first equality is from Lemma 3.7, and the second equality is clear if we define \(g:=\sum_{t=0}^{D-1}\lambda_{t}(xy)^{(D-1)-t}F^{t}\) and \(G:=\sum_{t=D}^{\pi}\lambda_{t}(xy)^{\pi-t}F^{t-D}\). Here \(\lambda_{t}\) is from Notation 3.1.
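As a quick aside, the displayed expression for \(z^{q}\) is a characteristic-\(p\) identity, so it can be spot-checked by verifying that every coefficient of \(z^{q}-z\sum_{t=0}^{\pi}\lambda_{t}(xy)^{\pi-t}F^{t}\), computed over \(\mathbb{Q}\), has numerator divisible by \(p\). A minimal SymPy sketch for \(p=5\) (our own naming):

```python
# Check z^q = z * sum_{t=0}^{pi} lambda_t (xy)^{pi-t} F^t in characteristic p
# by verifying all rational coefficients of the difference vanish mod p.
from sympy import symbols, Rational, binomial, Poly

x, y, z = symbols('x y z')
F = x*y - z**2
p = 5
lam = lambda t: Rational(binomial(2*t, t), 4**t)   # lambda_t = 2^{-2t} C(2t,t)
for q in (5, 25):
    pi = (q - 1)//2
    diff = z*sum(lam(t)*(x*y)**(pi - t)*F**t for t in range(pi + 1)) - z**q
    # denominators are powers of 2, so divisibility of numerators suffices
    assert all(Rational(c).p % p == 0 for c in Poly(diff, x, y, z).coeffs())
print("z^q identity verified mod 5 for q = 5 and q = 25")
```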
We define the \(2d\times 3\) block matrix
\[\boldsymbol{\psi}=\begin{bmatrix}d\lambda_{D}y^{\pi-(D-1)}\vec{e}_{1}&\bar{0}&d\lambda_{D}x^{\pi-(D-1)}\vec{e}_{d}\\ \bar{0}&-x^{\pi-(D-1)}\vec{e}_{1}&y^{\pi-(D-1)}\vec{e}_{d}\end{bmatrix},\]
where we use the length \(d\) elementary column vectors \(\vec{e}_{1}=\begin{bmatrix}1\\ 0\\ \vdots\\ 0\end{bmatrix}\) and \(\vec{e}_{d}=\begin{bmatrix}0\\ \vdots\\ 0\\ 1\end{bmatrix}\), and also define the \(2d\times 2d\) block matrix of pure graded degree \(1\)
\[\boldsymbol{\varphi}=\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M}\\ -\boldsymbol{M}^{\top}&\boldsymbol{0}\end{bmatrix},\]
where we use the \(d\times d\) tridiagonal matrix of pure graded degree \(1\)
\[\boldsymbol{M}=\begin{bmatrix}-(d-1)z&1x\\ -(d-1)y&-(d-3)z&2x\\ &-(d-2)y&-(d-5)z&3x\\ &&-(d-3)y&\ddots&\ddots\\ &&&\ddots&\ddots&(d-3)x\\ &&&&-3y&(d-5)z&(d-2)x\\ &&&&&-2y&(d-3)z&(d-1)x\\ &&&&&&-1y&(d-1)z\end{bmatrix}.\]
Recall that we defined this matrix in Notation 4.1.

Define \(\boldsymbol{\Phi}=zG\begin{bmatrix}0&1&0\\ -1&0&0\\ 0&0&0\end{bmatrix}\); we prove in Lemma 5.6 that \(u\boldsymbol{X}-\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}=uf\boldsymbol{\Phi}\), where \(u=(-1)^{D}d!\lambda_{D}\). Also define
\[\boldsymbol{\partial}_{2}=\begin{bmatrix}\boldsymbol{\varphi}&\boldsymbol{\psi}\\ -\boldsymbol{\psi}^{\top}&\boldsymbol{\Phi}\end{bmatrix}=\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M}&d\lambda_{D}y^{\pi-(D-1)}\vec{e}_{1}&\bar{0}&d\lambda_{D}x^{\pi-(D-1)}\vec{e}_{d}\\ -\boldsymbol{M}^{\top}&\boldsymbol{0}&\bar{0}&-x^{\pi-(D-1)}\vec{e}_{1}&y^{\pi-(D-1)}\vec{e}_{d}\\ -d\lambda_{D}y^{\pi-(D-1)}\vec{e}_{1}^{\top}&\bar{0}^{\top}&0&zG&0\\ \bar{0}^{\top}&x^{\pi-(D-1)}\vec{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}\vec{e}_{d}^{\top}&-y^{\pi-(D-1)}\vec{e}_{d}^{\top}&0&0&0\end{bmatrix}.\]
Our goal is to apply Lemma 5.1 to the objects we've established. This means we must prove that:

* \(u=(-1)^{D}d!\lambda_{D}\) is a unit,
* \(\operatorname{Pf}\boldsymbol{\varphi}=uf\), and
* the entries of \(u\boldsymbol{X}-\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}\) are in the ideal \((f)P\), which we'll prove by showing that \(u\boldsymbol{X}-\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}=uf\boldsymbol{\Phi}\).

Once we apply Lemma 5.1, we will know the maximal order Pfaffians of \(\boldsymbol{\partial}_{2}\) are generators of the ideal \(I_{1}(\bar{x}^{\top}):f=\mathfrak{m}^{[q]}:f\). When we show these Pfaffians have degree greater than \(s/2\) except for those in \(\mathfrak{m}^{[q]}\), this proves that \(f\) is link-\(q\)-compressed. Recall from Lemma 2.18 that \(s=3(q-1)-\deg f=2(3\pi-D)\) is the socle dimension of \(R/\mathfrak{m}^{[q]}\).

**Lemma 5.4**.: _Assume \(d=2D\) is even. The element \(u:=(-1)^{D}d!\lambda_{D}=(-1)^{D}\left(\prod_{h=1}^{D}(2h-1)\right)^{2}\) of a field \(k\) with odd characteristic \(p>d-1\) is a unit._

Proof.: We established in Lemma 3.4 that \((2t)!\lambda_{t}=\left(\prod_{h=1}^{t}(2h-1)\right)^{2}\) for any \(t\geq 0\), and so we have
\[u=(-1)^{D}(2D)!\lambda_{D}=(-1)^{D}\left(\prod_{h=1}^{D}(2h-1)\right)^{2}.\]
This means \(u\) is an integer that is a product of odd numbers at most \(2D-1=d-1\). Since \(\operatorname{char}k>d-1\), this means each factor of \(u\) is invertible, and thus so is \(u\).

**Lemma 5.5**.: _Assume \(d=2D\) is even. Then \(\operatorname{Pf}\boldsymbol{\varphi}=uf\)._

Proof.: We define \(u\), \(f\), and \(\boldsymbol{\varphi}\) as in Notation 5.3.
To find \(\operatorname{Pf}\boldsymbol{\varphi}\), we use the formula from Remark 2.5 to see
\[\operatorname{Pf}\boldsymbol{\varphi}=\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M}\\ -\boldsymbol{M}^{\top}&\boldsymbol{0}\end{bmatrix}=(-1)^{d(d-1)/2}\det\boldsymbol{M}=(-1)^{D(2D-1)}\det\boldsymbol{M}=(-1)^{D}\det\boldsymbol{M},\]
and then since \(\det\boldsymbol{M}=\left(\prod_{h=1}^{D}(2h-1)\right)^{2}F^{D}=\left(\prod_{h=1}^{D}(2h-1)\right)^{2}f\) by Proposition 4.16 and the definition of \(f\) in Notation 5.3, we have \(\operatorname{Pf}\boldsymbol{\varphi}=(-1)^{D}\left(\prod_{h=1}^{D}(2h-1)\right)^{2}f=uf\) using Lemma 5.4.

**Lemma 5.6**.: _We have \(\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}+uf\boldsymbol{\Phi}=u\boldsymbol{X}\), which means that the entries of \(u\boldsymbol{X}-\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}\) are in the ideal \((f)P\)._

Proof.: We define \(u\), \(f\), \(\boldsymbol{\varphi}\), \(\boldsymbol{\psi}\), \(\boldsymbol{\Phi}\), and \(\boldsymbol{X}\) in Notation 5.3.

We use Lemma 2.10 to see that
\[\boldsymbol{\varphi}^{\vee}=\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M}\\ -\boldsymbol{M}^{\top}&\boldsymbol{0}\end{bmatrix}^{\vee}=(-1)^{d(d-1)/2}\begin{bmatrix}\boldsymbol{0}&-\overline{\boldsymbol{M}}^{\top}\\ \overline{\boldsymbol{M}}&\boldsymbol{0}\end{bmatrix}=(-1)^{D}\begin{bmatrix}\boldsymbol{0}&-\overline{\boldsymbol{M}}^{\top}\\ \overline{\boldsymbol{M}}&\boldsymbol{0}\end{bmatrix},\]
where the blocks are \(d\times d\) matrices. From this we can calculate
\[\begin{split}&\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}\\
=&\ \begin{bmatrix}d\lambda_{D}y^{\pi-(D-1)}\bar{e}_{1}^{\top}&\bar{0}^{\top}\\ \bar{0}^{\top}&-x^{\pi-(D-1)}\bar{e}_{1}^{\top}\\ d\lambda_{D}x^{\pi-(D-1)}\bar{e}_{d}^{\top}&y^{\pi-(D-1)}\bar{e}_{d}^{\top}\end{bmatrix}(-1)^{D}\begin{bmatrix}\boldsymbol{0}&-\overline{\boldsymbol{M}}^{\top}\\ \overline{\boldsymbol{M}}&\boldsymbol{0}\end{bmatrix}\boldsymbol{\psi}\\
=&\ (-1)^{D}\begin{bmatrix}\bar{0}^{\top}&-d\lambda_{D}y^{\pi-(D-1)}\bar{e}_{1}^{\top}\overline{\boldsymbol{M}}^{\top}\\ -x^{\pi-(D-1)}\bar{e}_{1}^{\top}\overline{\boldsymbol{M}}&\bar{0}^{\top}\\ y^{\pi-(D-1)}\bar{e}_{d}^{\top}\overline{\boldsymbol{M}}&-d\lambda_{D}x^{\pi-(D-1)}\bar{e}_{d}^{\top}\overline{\boldsymbol{M}}^{\top}\end{bmatrix}\begin{bmatrix}d\lambda_{D}y^{\pi-(D-1)}\bar{e}_{1}&\bar{0}&d\lambda_{D}x^{\pi-(D-1)}\bar{e}_{d}\\ \bar{0}&-x^{\pi-(D-1)}\bar{e}_{1}&y^{\pi-(D-1)}\bar{e}_{d}\end{bmatrix}\\
=&\ (-1)^{D}\begin{bmatrix}0&d\lambda_{D}(xy)^{\pi-(D-1)}\bar{e}_{1}^{\top}\overline{\boldsymbol{M}}^{\top}\bar{e}_{1}&-d\lambda_{D}(y^{2})^{\pi-(D-1)}\bar{e}_{1}^{\top}\overline{\boldsymbol{M}}^{\top}\bar{e}_{d}\\ -d\lambda_{D}(xy)^{\pi-(D-1)}\bar{e}_{1}^{\top}\overline{\boldsymbol{M}}\bar{e}_{1}&0&-d\lambda_{D}(x^{2})^{\pi-(D-1)}\bar{e}_{1}^{\top}\overline{\boldsymbol{M}}\bar{e}_{d}\\ d\lambda_{D}(y^{2})^{\pi-(D-1)}\bar{e}_{d}^{\top}\overline{\boldsymbol{M}}\bar{e}_{1}&d\lambda_{D}(x^{2})^{\pi-(D-1)}\bar{e}_{d}^{\top}\overline{\boldsymbol{M}}^{\top}\bar{e}_{1}&0\end{bmatrix}\\
=&\ (-1)^{D}d\lambda_{D}\begin{bmatrix}0&(xy)^{\pi-(D-1)}\overline{\boldsymbol{M}}_{1,1}&-y^{q-(d-1)}\overline{\boldsymbol{M}}_{d,1}\\ -(xy)^{\pi-(D-1)}\overline{\boldsymbol{M}}_{1,1}&0&-x^{q-(d-1)}\overline{\boldsymbol{M}}_{1,d}\\ y^{q-(d-1)}\overline{\boldsymbol{M}}_{d,1}&x^{q-(d-1)}\overline{\boldsymbol{M}}_{1,d}&0\end{bmatrix}.\end{split}\]
Note that we used the fact that \(2(\pi-(D-1))=q-(d-1)\). We write the following using Definition 2.8:
\[\overline{\boldsymbol{M}}_{1,1}=(-1)^{1+1}\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{1})}\right)=\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{1})}\right),\]
\[\overline{\boldsymbol{M}}_{1,d}=(-1)^{d+1}\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{1})}\right)=-\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{1})}\right),\]
and
\[\overline{\boldsymbol{M}}_{d,1}=(-1)^{1+d}\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{d})}\right)=-\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{d})}\right).\]
This lets us write
\[\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}=(-1)^{D}d\lambda_{D}\begin{bmatrix}0&(xy)^{\pi-(D-1)}\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{1})}\right)&y^{q-(d-1)}\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{d})}\right)\\ -(xy)^{\pi-(D-1)}\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{1})}\right)&0&x^{q-(d-1)}\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{1})}\right)\\ -y^{q-(d-1)}\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{d})}\right)&-x^{q-(d-1)}\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{1})}\right)&0\end{bmatrix}.\]
Using Proposition 4.2, we have
\[\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{1})}\right)=(d-1)!x^{d-1}\]
and
\[\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{d})}\right)=(-1)^{d-1}(d-1)!y^{d-1}=-(d-1)!y^{d-1},\]
as well as
\[\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{1})}\right)=(d-1)!z\left(\sum_{t=0}^{D-1}\lambda_{t}(xy)^{(D-1)-t}F^{t}\right)=(d-1)!zg\]
by Proposition 4.15. Recall that \(g=\sum_{t=0}^{D-1}\lambda_{t}(xy)^{(D-1)-t}F^{t}\) from Notation 5.3. We put these together to see that
\[\begin{split}\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}=&\ (-1)^{D}d\lambda_{D}\begin{bmatrix}0&(xy)^{\pi-(D-1)}\left((d-1)!zg\right)&y^{q-(d-1)}\left(-(d-1)!y^{d-1}\right)\\ -(xy)^{\pi-(D-1)}\left((d-1)!zg\right)&0&x^{q-(d-1)}\left((d-1)!x^{d-1}\right)\\ -y^{q-(d-1)}\left(-(d-1)!y^{d-1}\right)&-x^{q-(d-1)}\left((d-1)!x^{d-1}\right)&0\end{bmatrix}\\
=&\ (-1)^{D}d!\lambda_{D}\begin{bmatrix}0&(xy)^{\pi-(D-1)}zg&-y^{q}\\ -(xy)^{\pi-(D-1)}zg&0&x^{q}\\ y^{q}&-x^{q}&0\end{bmatrix}\\
=&\ u\begin{bmatrix}0&(xy)^{\pi-(D-1)}zg&-y^{q}\\ -(xy)^{\pi-(D-1)}zg&0&x^{q}\\ y^{q}&-x^{q}&0\end{bmatrix}.\end{split}\]
Recall that \(z^{q}=z\left((xy)^{\pi-(D-1)}g+fG\right)\) from Notation 5.3. Thus, we have
\[\begin{split}u\boldsymbol{X}=&\ u\begin{bmatrix}0&z^{q}&-y^{q}\\ -z^{q}&0&x^{q}\\ y^{q}&-x^{q}&0\end{bmatrix}\\
=&\ u\begin{bmatrix}0&(xy)^{\pi-(D-1)}zg&-y^{q}\\ -(xy)^{\pi-(D-1)}zg&0&x^{q}\\ y^{q}&-x^{q}&0\end{bmatrix}+u\begin{bmatrix}0&fzG&0\\ -fzG&0&0\\ 0&0&0\end{bmatrix}\\
=&\ \boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}+ufzG\begin{bmatrix}0&1&0\\ -1&0&0\\ 0&0&0\end{bmatrix}\\
=&\ \boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}+uf\boldsymbol{\Phi}.\end{split}\]
Therefore, \(u\boldsymbol{X}-\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}=uf\boldsymbol{\Phi}\), which has entries in \((f)P\).

**Theorem 5.7**.: _Let \(D\geq 1\) and let \(k\) be a field with odd characteristic \(p>2D-1\). Let \(q>1\) be a power of \(p\).
Set \(f=\left(xy-z^{2}\right)^{D}\) as an element of \(P=k[x,y,z]\)._

_Then the ideal \(I_{1}(\bar{x}^{\top}):f=\mathfrak{m}^{[q]}:f\) is generated by the maximal order Pfaffians of \(\boldsymbol{\partial}_{2}\)._

_Also,_
\[0\to P^{2d}\xrightarrow{\begin{bmatrix}\boldsymbol{\varphi}\\ -u\boldsymbol{\psi}^{\top}\end{bmatrix}}\bigoplus_{P^{3}}^{P^{2d}}\xrightarrow{\begin{bmatrix}u\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}&uf\boldsymbol{I}\\ -\bar{b}^{\top}&-\bar{x}^{\top}\end{bmatrix}}\bigoplus_{P}^{P^{3}}\xrightarrow{\begin{bmatrix}\bar{x}^{\top}&uf\end{bmatrix}}P\]
_is a free \(P\)-resolution of \(P/(f,\mathfrak{m}^{[q]})\), where \(\bar{b}^{\top}=\begin{bmatrix}\operatorname{Pf}_{1}\boldsymbol{\partial}_{2}&\cdots&\operatorname{Pf}_{2d}\boldsymbol{\partial}_{2}\end{bmatrix}\)._

_Lastly, if \(R=P/(f)=k[x,y,z]/\left(\left(xy-z^{2}\right)^{D}\right)\), then_
\[\cdots\to R^{2d}\xrightarrow{\boldsymbol{\varphi}}R^{2d}\xrightarrow{\boldsymbol{\varphi}^{\vee}}R^{2d}\xrightarrow{\boldsymbol{\varphi}}R^{2d}\xrightarrow{\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}}R^{3}\xrightarrow{\bar{x}^{\top}}R\]
_is a free \(R\)-resolution of \(R/\mathfrak{m}^{[q]}\). Further, these maps are of pure graded degrees_
\[\deg(\bar{x}^{\top})=q,\;\deg(\boldsymbol{\psi})=\frac{1}{2}(q-(d-1)),\;\deg(\boldsymbol{\varphi})=1,\;\deg(\boldsymbol{\varphi}^{\vee})=d-1.\]

Proof.: Lemmas 5.4, 5.5 and 5.6 prove that the objects from Notation 5.3 satisfy the hypotheses of Lemma 5.1.

With this, we have shown that \(R/\mathfrak{m}^{[q]}\) with \(R=k[x,y,z]/\left(\left(xy-z^{2}\right)^{D}\right)\) is an example of the phenomenon detailed in [11], where the tail of the free resolution of \(R/\mathfrak{m}^{[q]}\) is independent of \(q\):

**Corollary 5.8**.: _The tail end of the \(R\)-free resolutions of \(R/\mathfrak{m}^{[q]}\) is independent of \(q\), since \(\boldsymbol{\varphi}\) and \(\boldsymbol{\varphi}^{\vee}\) do not depend on \(q\)._

In order to show that \(\left(xy-z^{2}\right)^{D}\) is link-\(q\)-compressed using Lemma 2.20, we prove that the maximal order Pfaffians of \(\boldsymbol{\partial}_{2}\), which generate \(\mathfrak{m}^{[q]}:f\) as we established above, are either in the ideal \(\mathfrak{m}^{[q]}\) or have degree greater than \(\frac{s}{2}=\frac{3(q-1)-d}{2}=\frac{3(2\pi)-(2D)}{2}=3\pi-D\) (in fact, they are all of degree \(\frac{s}{2}+1=3\pi-(D-1)\)).

**Proposition 5.9**.: _The Pfaffians \(\operatorname{Pf}_{\ell}(\boldsymbol{\partial}_{2})\) for \(1\leq\ell\leq 2d\) are all homogeneous of degree \(\frac{s}{2}+1=3\pi-(D-1)\)._

Proof.: We treat the Pfaffians of \(\boldsymbol{\partial}_{2}\) in the following pairs:

* \(\operatorname{Pf}_{1}\boldsymbol{\partial}_{2}\) and \(\operatorname{Pf}_{d+1}\boldsymbol{\partial}_{2}\),
* \(\operatorname{Pf}_{d}\boldsymbol{\partial}_{2}\) and \(\operatorname{Pf}_{2d}\boldsymbol{\partial}_{2}\), and
* \(\operatorname{Pf}_{\ell}\boldsymbol{\partial}_{2}\) and \(\operatorname{Pf}_{d+\ell}\boldsymbol{\partial}_{2}\) for \(1<\ell<d\).

In order to prove that \(f\) is link-\(q\)-compressed using Lemma 2.20, we need only show that the remaining Pfaffians (which generate \(\mathfrak{m}^{[q]}:f\) by Lemma 5.1) all have degree \(s/2+1=3\pi-(D-1)\). This is the degree they must have anyway if \(f\) is link-\(q\)-compressed, by Corollary 2.28. These degree claims, together with Lemmas 5.5 and 5.6, can also be verified computationally in a small case; see the sketch below.
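The following SymPy sketch carries out that machine check in the smallest case \(D=1\), \(p=q=5\): it verifies Lemma 5.5 exactly, Lemma 5.6 modulo \(p\), the homogeneity and degree claim above for \(\operatorname{Pf}_{\ell}(\boldsymbol{\partial}_{2})\) with \(1\leq\ell\leq 2d\), and, modulo \(p\), the values \(ux^{q},uy^{q},uz^{q}\) of the last three Pfaffians computed in Proposition 5.10 below. The helper `pf()` and all names are ours; `pf()` implements the Pfaffian by expansion along the first row.

```python
# End-to-end check of Notation 5.3 for D = 1, p = q = 5.
from sympy import (symbols, Rational, binomial, factorial, Matrix, BlockMatrix,
                   zeros, eye, expand, Poly, S)

x, y, z = symbols('x y z')
p, q, D = 5, 5, 1
d, pi = 2*D, (q - 1)//2
F = x*y - z**2
f = F**D
lam = lambda t: Rational(binomial(2*t, t), 4**t)            # lambda_t
u = (-1)**D * factorial(d) * lam(D)                          # Lemma 5.4
G = sum(lam(t)*(x*y)**(pi - t)*F**(t - D) for t in range(D, pi + 1))

M = zeros(d, d)                                              # Notation 4.1
for i in range(1, d + 1):
    M[i-1, i-1] = (2*i - 1 - d)*z
    if i < d:
        M[i-1, i] = i*x
        M[i, i-1] = -(d - i)*y
phi = Matrix(BlockMatrix([[zeros(d, d), M], [-M.T, zeros(d, d)]]).as_explicit())
adjM = M.adjugate()                                          # M-bar
phiv = (-1)**(d*(d-1)//2) * Matrix(BlockMatrix(              # Lemma 2.10
    [[zeros(d, d), -adjM.T], [adjM, zeros(d, d)]]).as_explicit())

a = pi - (D - 1)
e1, ed = eye(d)[:, 0], eye(d)[:, d-1]
psi = Matrix(BlockMatrix([[d*lam(D)*y**a*e1, zeros(d, 1), d*lam(D)*x**a*ed],
                          [zeros(d, 1), -x**a*e1, y**a*ed]]).as_explicit())
Phi = z*G*Matrix([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
X = Matrix([[0, z**q, -y**q], [-z**q, 0, x**q], [y**q, -x**q, 0]])
d2 = Matrix(BlockMatrix([[phi, psi], [-psi.T, Phi]]).as_explicit())

def pf(A):
    # Pfaffian by expansion along the first row (standard sign convention)
    n = A.shape[0]
    if n == 0:
        return S.One
    out = S.Zero
    for c in range(1, n):
        if A[0, c] != 0:
            B = A.copy(); B.row_del(c); B.col_del(c); B.row_del(0); B.col_del(0)
            out += (-1)**(c + 1) * A[0, c] * pf(B)
    return expand(out)

def pf_l(A, l):   # Pf_l(A) = (-1)^(l+1) * Pf(A with row and column l deleted)
    B = A.copy(); B.row_del(l - 1); B.col_del(l - 1)
    return (-1)**(l + 1) * pf(B)

def zero_mod_p(expr):   # every rational coefficient has numerator divisible by p
    return expr == 0 or all(Rational(c).p % p == 0 for c in Poly(expr, x, y, z).coeffs())

assert pf(phi) == expand(u*f)                                 # Lemma 5.5
assert (phi*phiv - u*f*eye(2*d)).expand() == zeros(2*d, 2*d)  # phi*phi^vee = Pf(phi)*I
assert all(zero_mod_p(e) for e in (u*X - psi.T*phiv*psi - u*f*Phi).expand())  # Lemma 5.6
for l in range(1, 2*d + 1):                                   # Proposition 5.9
    v = pf_l(d2, l)
    assert v == 0 or Poly(v, x, y, z).homogeneous_order() == 3*pi - (D - 1)
for l, mon in [(1, x), (2, y), (3, z)]:                       # Proposition 5.10 (mod p)
    assert zero_mod_p(expand(pf_l(d2, 2*d + l) - u*mon**q))
print("all checks pass for D = 1, p = q = 5")
```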
We recall that \(\boldsymbol{\partial}_{2}\) is a \((2d+3)\times(2d+3)\) block matrix with the structure
\[\boldsymbol{\partial}_{2}=\left[\begin{array}{cc|ccc}\boldsymbol{0}&\boldsymbol{M}&d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}&\bar{0}&d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}\\ -\boldsymbol{M}^{\top}&\boldsymbol{0}&\bar{0}&-x^{\pi-(D-1)}\tilde{e}_{1}&y^{\pi-(D-1)}\tilde{e}_{d}\\ \hline-d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}^{\top}&\bar{0}^{\top}&0&zG&0\\ \bar{0}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{\top}&0&0&0\end{array}\right]\]
from Notation 5.3. Also note the size of each component of \(\boldsymbol{\partial}_{2}\):

* \(\boldsymbol{M}\) and \(\boldsymbol{0}\) are \(d\times d\) matrices, and
* \(\tilde{e}_{1}\), \(\tilde{e}_{d}\), and \(\bar{0}\) are length \(d\) column vectors.

The remaining components of \(\boldsymbol{\partial}_{2}\) are scalars. Thus, we have

* \(\begin{bmatrix}\boldsymbol{0}\\ -\boldsymbol{M}^{\top}\\ -d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}^{\top}\\ \bar{0}^{\top}\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}\end{bmatrix}\) is columns \(1\) through \(d\) of \(\boldsymbol{\partial}_{2}\),
* \(\begin{bmatrix}\boldsymbol{M}\\ \boldsymbol{0}\\ \bar{0}^{\top}\\ x^{\pi-(D-1)}\tilde{e}_{1}^{\top}\\ -y^{\pi-(D-1)}\tilde{e}_{d}^{\top}\end{bmatrix}\) is columns \(d+1\) through \(2d\) of \(\boldsymbol{\partial}_{2}\),
* \(\begin{bmatrix}d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}\\ \bar{0}\\ 0\\ -zG\\ 0\end{bmatrix}\) is column \(2d+1\) of \(\boldsymbol{\partial}_{2}\),
* \(\begin{bmatrix}\bar{0}\\ -x^{\pi-(D-1)}\tilde{e}_{1}\\ zG\\ 0\\ 0\end{bmatrix}\) is column \(2d+2\) of \(\boldsymbol{\partial}_{2}\), and
* \(\begin{bmatrix}d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}\\ y^{\pi-(D-1)}\tilde{e}_{d}\\ 0\\ 0\\ 0\end{bmatrix}\) is column \(2d+3\) of \(\boldsymbol{\partial}_{2}\).

We start with the calculation of the degree of \(\operatorname{Pf}_{1}\boldsymbol{\partial}_{2}\).
Using Definition 2.7, we write
\[\begin{split}&\operatorname{Pf}_{1}\boldsymbol{\partial}_{2}\\
=&\ \operatorname{Pf}_{1}\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M}&d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}&\bar{0}&d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}\\ -\boldsymbol{M}^{\top}&\boldsymbol{0}&\bar{0}&-x^{\pi-(D-1)}\tilde{e}_{1}&y^{\pi-(D-1)}\tilde{e}_{d}\\ -d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}^{\top}&\bar{0}^{\top}&0&zG&0\\ \bar{0}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{\top}&0&0&0\end{bmatrix}\\
=&\ (-1)^{(1)+1}\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}_{(\hat{1}),(\hat{1})}&\boldsymbol{M}_{(\hat{1}),(-)}&d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{1}}&\bar{0}_{\hat{1}}&d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -\left(\boldsymbol{M}_{(\hat{1}),(-)}\right)^{\top}&\boldsymbol{0}&\bar{0}&-x^{\pi-(D-1)}\tilde{e}_{1}&y^{\pi-(D-1)}\tilde{e}_{d}\\ -d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{1}}^{\top}&\bar{0}^{\top}&0&zG&0\\ \bar{0}_{\hat{1}}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{\top}&0&0&0\end{bmatrix}\\
=&\ \operatorname{Pf}\begin{bmatrix}\boldsymbol{0}_{(\hat{1}),(\hat{1})}&\boldsymbol{M}_{(\hat{1}),(-)}&\bar{0}_{\hat{1}}&\bar{0}_{\hat{1}}&d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -\left(\boldsymbol{M}_{(\hat{1}),(-)}\right)^{\top}&\boldsymbol{0}&\bar{0}&-x^{\pi-(D-1)}\tilde{e}_{1}&y^{\pi-(D-1)}\tilde{e}_{d}\\ \bar{0}_{\hat{1}}^{\top}&\bar{0}^{\top}&0&zG&0\\ \bar{0}_{\hat{1}}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{\top}&0&0&0\end{bmatrix}.\end{split}\]
Note that we replaced every instance of \(\left(d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}\right)_{(\hat{1})}\) with \(\bar{0}_{\hat{1}}\) in the last line because the first entry in \(d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}\) is the only nonzero entry. To make the calculations easier to read, all cofactor expansions are performed over fixed columns (fixed \(j\) values).
Using the cofactor expansion formula in Definition 2.2, we further write
\[\begin{split}\operatorname{Pf}_{1}\boldsymbol{\partial}_{2}=&\ \operatorname{Pf}\begin{bmatrix}\boldsymbol{0}_{(\hat{1}),(\hat{1})}&\boldsymbol{M}_{(\hat{1}),(-)}&\bar{0}_{\hat{1}}&\bar{0}_{\hat{1}}&d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -\left(\boldsymbol{M}_{(\hat{1}),(-)}\right)^{\top}&\boldsymbol{0}&\bar{0}&-x^{\pi-(D-1)}\tilde{e}_{1}&y^{\pi-(D-1)}\tilde{e}_{d}\\ \bar{0}_{\hat{1}}^{\top}&\bar{0}^{\top}&0&zG&0\\ \bar{0}_{\hat{1}}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{\top}&0&0&0\end{bmatrix}\\
=&\ (-1)^{(2d+1)+(2d)+H((2d)-(2d+1))}\left(-zG\right)\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}_{(\hat{1}),(\hat{1})}&\boldsymbol{M}_{(\hat{1}),(-)}&d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -\left(\boldsymbol{M}_{(\hat{1}),(-)}\right)^{\top}&\boldsymbol{0}&y^{\pi-(D-1)}\tilde{e}_{d}\\ -d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{\top}&0\end{bmatrix}\\
=&\ (zG)(-1)^{(d-1)+(2d)+H((2d)-(d-1))}\left(d\lambda_{D}x^{\pi-(D-1)}\right)\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}_{(\hat{1}\hat{d}),(\hat{1}\hat{d})}&\boldsymbol{M}_{(\hat{1}\hat{d}),(-)}\\ -\left(\boldsymbol{M}_{(\hat{1}\hat{d}),(-)}\right)^{\top}&\boldsymbol{0}\end{bmatrix}\\
&+(zG)(-1)^{(2d-1)+(2d)+H((2d)-(2d-1))}\left(y^{\pi-(D-1)}\right)\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}_{(\hat{1}),(\hat{1})}&\boldsymbol{M}_{(\hat{1}),(\hat{d})}\\ -\left(\boldsymbol{M}_{(\hat{1}),(\hat{d})}\right)^{\top}&\boldsymbol{0}_{(\hat{d}),(\hat{d})}\end{bmatrix}\\
=&\ (zG)(-1)^{(d-1)+(2d)+H((2d)-(d-1))}\left(d\lambda_{D}x^{\pi-(D-1)}\right)(0)\\
&+(zG)(-1)^{(2d-1)+(2d)+H((2d)-(2d-1))}\left(y^{\pi-(D-1)}\right)\left((-1)^{(d-1)(d-2)/2}\det\left(\boldsymbol{M}_{(\hat{1}),(\hat{d})}\right)\right)\qquad\text{(i)}\\
=&\ (zG)(y^{\pi-(D-1)})(-1)^{(2D-1)(2D-2)/2}\left((-1)^{d-1}(d-1)!y^{d-1}\right)\qquad\text{(ii)}\\
=&\ (zG)(y^{\pi-(D-1)})(-1)^{D-1}(-1)^{2D-1}(d-1)!y^{2D-1}\\
=&\ (-1)^{D}(d-1)!y^{\pi+D}zG,\end{split}\]
where line (i) is true by Lemma 2.6 and Remark 2.5, and line (ii) is true by Corollary 4.2.

Recall that \(G=\sum_{t=D}^{\pi}\lambda_{t}(xy)^{\pi-t}F^{t-D}\). Note that \((xy)^{\pi-t}F^{t-D}\) has degree \(2(\pi-t)+2(t-D)=2(\pi-D)\) for all \(D\leq t\leq\pi\), so \(G\in P_{2(\pi-D)}\). Thus we have
\[\operatorname{Pf}_{1}\boldsymbol{\partial}_{2}=(-1)^{D}(d-1)!y^{\pi+D}zG\in P_{(\pi+D)+1+2(\pi-D)}=P_{3\pi-(D-1)}=P_{s/2+1},\]
and so we see that \(\operatorname{Pf}_{1}\boldsymbol{\partial}_{2}\) is either \(0\) or degree \(s/2+1\) as we desired. We now show that this is true for the remaining Pfaffians.

We calculate the degree of \(\operatorname{Pf}_{d+1}\boldsymbol{\partial}_{2}\) by a similar method to our previous calculation.
Using Definition 2.7 and the cofactor expansion formula in Definition 2.2, we write \[\operatorname{Pf}_{d+1}\boldsymbol{\partial}_{2}\] \[= \operatorname{Pf}_{d+1}\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M} &d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}&\bar{0}&d\lambda_{D}x^{\pi-(D-1)} \tilde{e}_{d}\\ -\boldsymbol{M}^{\top}&\boldsymbol{0}&\bar{0}&-x^{\pi-(D-1)}\tilde{e}_{1}&y^{ \pi-(D-1)}\tilde{e}_{d}\\ -d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}^{\top}&\bar{0}^{\top}&0&zG&0\\ \bar{0}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{ \top}&0&0&0\end{bmatrix}\] \[= (-1)^{(d+1)+1}\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}& \boldsymbol{M}_{(-),(\bar{1})}&d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}&\bar{0} &d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}\\ -\boldsymbol{M}_{(-),(\bar{1})}\end{bmatrix}\] \[= (-1)^{(d+1)+1}\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}& \boldsymbol{M}_{(-),(\bar{1})}&d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}\\ -\boldsymbol{M}_{(-),(\bar{1})}\end{bmatrix}\] \[= (-1)^{(2d)+(2d+1)+H((2d+1)-(2d))}\,(zG)\] \[\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M}_{ (-),(\bar{1})}&d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}\\ -\boldsymbol{M}_{(-),(\bar{1})}\end{bmatrix}\] \[= (zG)(-1)^{(d)+(2d)+H((2d)-(d))}\,(d\lambda_{D}x^{\pi-(D-1)}) \operatorname{Pf}\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M}_{(-),(\bar{1} \bar{d})}\\ -\boldsymbol{M}_{(-),(\bar{1}\bar{d})}\end{bmatrix}\] \[+(zG)(-1)^{(2d-1)+(2d)+H((2d)-(2d-1))}\,(y^{\pi-(D-1)}) \operatorname{Pf}\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M}_{(-),(\bar{1} \bar{d})}\\ -\boldsymbol{M}_{(-),(\bar{1}\bar{d})}\end{bmatrix}\] \[= (zG)(-1)^{(d)+(2d)+H((2d)-(d))}\,(d\lambda_{D}x^{\pi-(D-1)}) \left((-1)^{D-1}\det\boldsymbol{M}_{(\bar{d}),(\bar{1})}\right)\] (iii) \[+(zG)(-1)^{(2d-1)+(2d)+H((2d)-(2d-1))}\,(y^{\pi-(D-1)})(0)\] \[= -(zG)(d\lambda_{D}x^{\pi-(D-1)})\left((-1)^{D-1}\left((d-1)!x^{2 D-1}\right)\right)\] (iv) \[= (-1)^{D}d!\lambda_{D}x^{\pi+D}zG,\] where line (iii) is true by Lemma 2.6 and Remark 2.5, and line (iv) is true by Corollary 4.2. Thus we have \[\operatorname{Pf}_{d+1}\boldsymbol{\partial}_{2}=(-1)^{D}d!\lambda_{D}x^{\pi +D}zG\in P_{(\pi+D)+1+2(\pi-D)}=P_{s/2+1},\] and so we see that \(\operatorname{Pf}_{d+1}\boldsymbol{\partial}_{2}\) is either \(0\) or degree \(s/2+1\) as we desired. We now calculate the degrees of \(\operatorname{Pf}_{d}\boldsymbol{\partial}_{2}\) and \(\operatorname{Pf}_{2d}\boldsymbol{\partial}_{2}\) together because they have a common factor. 
Using Definition 2.7 and the cofactor expansion formula in Definition 2.2, we write \[\begin{split}&\text{Pf}_{d}\,\mathbf{\theta}_{2}\\ =&\text{Pf}_{d}\begin{bmatrix}\mathbf{0}&\mathbf{M}&d\lambda_{D}y^{ \pi-(D-1)}\tilde{e}_{1}&\vec{0}&d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}\\ -\mathbf{M}^{\top}&\mathbf{0}&\vec{0}&-x^{\pi-(D-1)}\tilde{e}_{1}&y^{\pi-(D-1)} \tilde{e}_{d}\\ -d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}^{\top}&\vec{0}^{\top}&0&zG&0\\ \vec{0}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{ \top}&0&0&0\end{bmatrix}\\ \\ =&(-1)^{(d)+1}\\ &\text{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{d}),(\hat{d})}&\mathbf{M}_{(\hat{d}),(-)} &d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}&\vec{0}_{\hat{d}}&\vec{0} _{\hat{d}}\\ -\left(\mathbf{M}_{(\hat{d}),(-)}\right)^{\top}&\vec{0}&\vec{0}&-x^{\pi-(D-1)} \tilde{e}_{1}&y^{\pi-(D-1)}\tilde{e}_{d}\\ -d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&\vec{0}&zG&0\\ \vec{0}_{\hat{d}}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ \vec{0}_{\hat{d}}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{\top}&0&0&0\end{bmatrix} \\ \\ =&-(-1)^{(2d-1)+(2d+2)+H((2d+2)-(2d-1))}y^{\pi-(D-1)}\\ &\text{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{d}),(\hat{d})}&\mathbf{M}_{(\hat{d}),( \hat{d})}&d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}&\vec{0}_{\hat{d} }\\ -\left(\mathbf{M}_{(\hat{d}),(\hat{d})}\right)^{\top}&\mathbf{0}_{(\hat{d}),(\hat{d} )}&\vec{0}_{\hat{d}}&-x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}\\ -d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&\vec{0}_{\hat{d} }^{\top}&0&zG\\ \vec{0}_{\hat{d}}^{\top}&x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&-zG&0 \end{bmatrix}.\end{split}\] We also have \[\begin{split}&\operatorname{Pf}_{2d}\boldsymbol{\partial}_{2}\\ &=\operatorname{Pf}_{2d}\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M}&d \lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}&\tilde{0}&d\lambda_{D}x^{\pi-(D-1)} \tilde{e}_{d}\\ -\boldsymbol{M}^{\top}&\boldsymbol{0}&\tilde{0}&-x^{\pi-(D-1)}\tilde{e}_{1} &y^{\pi-(D-1)}\tilde{e}_{d}\\ -d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}^{\top}&\tilde{0}^{\top}&0&zG&0\\ \tilde{0}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{ \top}&0&0&0\end{bmatrix}\end{split}\] \[\begin{split}&=(-1)^{(2d)+1}\\ &\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}&\boldsymbol{M}_{(-),(\hat{d} )}&d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}&\tilde{0}&d\lambda_{D}x^{\pi-(D-1 )}\tilde{e}_{d}\\ -\left(\boldsymbol{M}_{(-),(\hat{d})}\right)^{\top}&\boldsymbol{0}_{(\hat{d}),( \hat{d})}&\tilde{0}_{\hat{d}}&-x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}& \tilde{0}_{\hat{d}}\\ -d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}^{\top}&\tilde{0}^{\top}&0&zG&0\\ \tilde{0}^{\top}&x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&\tilde{0}^{\top}&0&0&0\\ \end{bmatrix}\end{split}\] \[\begin{split}&=-(-1)^{(d)+(2d+2)+H((2d+2)-(d))}(d\lambda_{D}x^ {\pi-(D-1)})\\ &\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}_{(\hat{d}),(\hat{d} )}&\boldsymbol{M}_{(\hat{d}),(\hat{d})}&d\lambda_{D}y^{\pi-(D-1)}(\tilde{e }_{1})_{\hat{d}}&\tilde{0}_{\hat{d}}\\ -\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)^{\top}&\boldsymbol{0}_{(\hat{d}),( \hat{d})}&\tilde{0}_{\hat{d}}&-x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}\\ -d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&\tilde{0}^{\top}&0&zG\\ \tilde{0}^{\top}_{\hat{d}}&x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&-zG&0 \end{bmatrix}\end{split}.\] To 
finish off both of these cases, we now calculate the degree of \[\operatorname{Pf}\begin{bmatrix}\boldsymbol{0}_{(\hat{d}),(\hat{d})}& \boldsymbol{M}_{(\hat{d}),(\hat{d})}&d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1 })_{\hat{d}}&\tilde{0}_{\hat{d}}\\ -\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)^{\top}&\boldsymbol{0}_{(\hat{d}),(\hat{d})}&\tilde{0}_{\hat{d}}&-x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}\\ -d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&\tilde{0}^{\top}&0&zG\\ \tilde{0}^{\top}_{\hat{d}}&x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&-zG&0 \end{bmatrix}\] by first applying more cofactor expansion, allowing us to write \[\Pr\begin{bmatrix}\mathbf{0}_{(\hat{d}),(\hat{d})}&\boldsymbol{M}_{(\hat{d}),( \hat{d})}&d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}&\bar{0}_{\hat{d}} \\ -\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)^{\top}&\mathbf{0}_{(\hat{d}),(\hat{d})}&\bar{0}_{\hat{d}}&-x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}\\ -d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&0&zG\\ \bar{0}_{\hat{d}}^{\top}&x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&-zG&0 \end{bmatrix}\] \[= (-1)^{(d)+(2d)+H((2d)-(d))}\left(-x^{\pi-(D-1)}\right)\] \[\Pr\begin{bmatrix}\mathbf{0}_{(\hat{d}),(\hat{d})}&\boldsymbol{M }_{(\hat{d}),(\hat{1}\hat{d})}&d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat {d}}\\ -\left(\boldsymbol{M}_{(\hat{d}),(\hat{1}\hat{d})}\right)^{\top}&\mathbf{0}_{ (\hat{1}\hat{d}),(\hat{1}\hat{d}}&\bar{0}_{\hat{1}\hat{d}}\\ -d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&\bar{0}_{\hat{1} \hat{d}}^{\top}&0\end{bmatrix}\] \[+(-1)^{(2d-1)+(2d)+H((2d)-(2d-1))}\left(zG\right)\] \[\Pr\begin{bmatrix}\mathbf{0}_{(\hat{d}),(\hat{d})}&\boldsymbol{M }_{(\hat{d}),(\hat{d})}\\ -\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)^{\top}&\mathbf{0}_{(\hat{ d}),(\hat{d})}\end{bmatrix}\] \[= x^{\pi-(D-1)}(-1)^{(1)+(2d-2)+H((2d-2)-(1))}\left(d\lambda_{D}y^{\pi-(D-1)} \right)\Pr\begin{bmatrix}\mathbf{0}_{(\hat{1}\hat{d}),(\hat{1}\hat{d})}& \boldsymbol{M}_{(\hat{1}\hat{d}),(\hat{1}\hat{d})}\\ -\left(\boldsymbol{M}_{(\hat{1}\hat{d}),(\hat{1}\hat{d})}\right)^{\top}& \mathbf{0}_{(\hat{1}\hat{d}),(\hat{1}\hat{d})}\end{bmatrix}\] \[+(-1)^{(2d-1)+(2d)+H((2d)-(2d-1))}\left(zG\right)\Pr\begin{bmatrix} \mathbf{0}_{(\hat{d}),(\hat{d})}&\boldsymbol{M}_{(\hat{d}),(\hat{d})}\\ -\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)^{\top}&\mathbf{0}_{(\hat {d}),(\hat{d})}\end{bmatrix}\] \[= x^{\pi-(D-1)}(-1)^{(1)+(2d-2)+H((2d-2)-(1))}\left(d\lambda_{D}y^ {\pi-(D-1)}\right)\left((-1)^{(d-2)(d-3)/2}\det\left(\boldsymbol{M}_{(\hat{1 }\hat{d}),(\hat{1}\hat{d})}\right)\right)\] \[+(-1)^{(2d-1)+(2d)+H((2d)-(2d-1))}\left(zG\right)\left((-1)^{D-1 }\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{d})}\right)\right)\] \[= x^{\pi-(D-1)}(d\lambda_{D}y^{\pi-(D-1)})(-1)^{(2D-2)(2D-3)/2}\det \left(\boldsymbol{M}_{(\hat{1}\hat{d}),(\hat{1}\hat{d})}\right)+(zG)(-1)^{D-1 }(-(d-1)!zg)\] (vi) \[= (-1)^{D-1}d\lambda_{D}(xy)^{\pi-(D-1)}\det\left(\boldsymbol{M}_{ (\hat{1}\hat{d}),(\hat{1}\hat{d})}\right)+(-1)^{D}(d-1)!z^{2}gG.\] Here line (v) is a result of Remark 2.5, and line (vi) is true by Corollary 4.14. Recall that \(g=\sum_{t=0}^{D-1}\lambda_{t}(xy)^{(D-1)-t}F^{t}\). Note that \((xy)^{(D-1)-t}F^{t}\) has degree \(2((D-1)-t)+2t=2(D-1)\) for all \(0\leq t\leq D-1\), so \(g\in P_{2(D-1)}\). We also have \(\det\left(\boldsymbol{M}_{(\hat{1}\hat{d}),(\hat{1}\hat{d})}\right)\in P_{(d-2) 1}=P_{2(D-1)}\) by Remark 2.13, since \(\boldsymbol{M}\) is size \(d\times d\) and pure graded degree \(1\). 
Thus we have both \[(xy)^{\pi-(D-1)}\det\left(\boldsymbol{M}_{(\hat{1}\hat{d}),(\hat{1}\hat{d})} \right)\in P_{2(\pi-(D-1))+2(D-1)}=P_{2\pi}\] and \[z^{2}gG\in P_{2+2(D-1)+2(\pi-D)}=P_{2\pi},\] so we see that \[\Pr\begin{bmatrix}\mathbf{0}_{(\hat{d}),(\hat{d})}&\mathbf{M}_{(\hat{d}),(\hat{d})}&d \lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}&\tilde{0}_{\hat{d}}\\ -\begin{pmatrix}\mathbf{M}_{(\hat{d}),(\hat{d})}\end{pmatrix}^{\top}&\mathbf{0}_{(\hat {d}),(\hat{d})}&\tilde{0}_{\hat{d}}&-x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}\\ -d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&\tilde{0}_{\hat{d} }^{\top}&0&zG\\ \tilde{0}_{\hat{d}}^{\top}&x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&-zG&0 \end{bmatrix}\] From this we have both \[\Pr_{d}\mathbf{\partial}_{2}\] \[= -(-1)^{(2d-1)+(2d+2)+H((2d+2)-(2d-1))}y^{\pi-(D-1)}\] \[\Pr\begin{bmatrix}\mathbf{0}_{(\hat{d}),(\hat{d})}&\mathbf{M}_{(\hat{d}), (\hat{d})}&d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}&\tilde{0}_{\hat{ d}}\\ -\begin{pmatrix}\mathbf{M}_{(\hat{d}),(\hat{d})}\end{pmatrix}^{\top}&\mathbf{0}_{( \hat{d}),(\hat{d})}&\tilde{0}_{\hat{d}}&-x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d} }\\ -d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&\tilde{0}_{\hat{d} }^{\top}&0&zG\\ \tilde{0}_{\hat{d}}^{\top}&x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&-zG&0 \end{bmatrix}\] \[\in P_{(\pi-(D-1))+2\pi}=P_{s/2+1}\] and \[\Pr_{2d}\mathbf{\partial}_{2}\] \[= -(-1)^{(d)+(2d+2)+H((2d+2)-(d))}(d\lambda_{D}x^{\pi-(D-1)})\] \[\Pr\begin{bmatrix}\mathbf{0}_{(\hat{d}),(\hat{d})}&\mathbf{M}_{(\hat{d}), (\hat{d})}&d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}&\tilde{0}_{\hat {d}}\\ -\begin{pmatrix}\mathbf{M}_{(\hat{d}),(\hat{d})}\end{pmatrix}^{\top}&\mathbf{0}_{( \hat{d}),(\hat{d})}&\tilde{0}_{\hat{d}}&-x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d }}\\ -d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&\tilde{0}_{\hat{d} }^{\top}&0&zG\\ \tilde{0}_{\hat{d}}^{\top}&x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{d}}^{\top}&-zG& 0\end{bmatrix}\] \[\in P_{(\pi-(D-1))+2\pi}=P_{s/2+1},\] which means \(\Pr_{d}\mathbf{\partial}_{2}\) and \(\Pr_{2d}\mathbf{\partial}_{2}\) have degree \(s/2+1\). Now that we have calculated the degrees of \(\Pr_{1}\mathbf{\partial}_{2}\), \(\Pr_{d}\mathbf{\partial}_{2}\), \(\Pr_{d+1}\mathbf{\partial}_{2}\), and \(\Pr_{2d}\mathbf{\partial}_{2}\), we only need to calculate the degree of \(\Pr_{\ell}\mathbf{\partial}_{2}\) and \(\Pr_{d+\ell}\mathbf{\partial}_{2}\) for \(1<\ell<d\). Let \(1<\ell<d\). 
Using Definition 2.7 and the cofactor expansion formula in Definition 2.2, we write \[\begin{array}{l}\mathrm{Pf}_{\ell}\,\mathbf{\partial}_{2}\\ \\ =\mathrm{Pf}_{\ell}\begin{bmatrix}\mathbf{0}&\mathbf{M}&d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_ {1}&\bar{0}&d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}\\ -\mathbf{M}^{\top}&\mathbf{0}&\bar{0}&-x^{\pi-(D-1)}\tilde{e}_{1}&y^{\pi-(D-1)}\tilde{e }_{d}\\ -d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}^{\top}&\bar{0}^{\top}&0&zG&0\\ \bar{0}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{ \top}&0&0&0\end{bmatrix}\end{array}\] \[\begin{array}{l}\\ =(-1)^{(\ell)+1}\\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{\ell}),(\hat{\ell})}&\mathbf{M}_{(\hat{ \ell}),(-)}&d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{\ell}}&\bar{0}_{ \hat{\ell}}&d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{\ell}}\\ -\left(\mathbf{M}_{(\hat{\ell}),(-)}\right)^{\top}&\mathbf{0}&\bar{0}&-x^{\pi-(D-1)} \tilde{e}_{1}&y^{\pi-(D-1)}\tilde{e}_{d}\\ -d\lambda_{D}y^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{\ell}}^{\top}&\bar{0}^{\top}& 0&zG&0\\ \bar{0}_{\hat{\ell}}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{\ell}}^{\top}&-y^{\pi-(D-1)} \tilde{e}_{d}^{\top}&0&0&0\end{bmatrix}\end{array}\] \[\begin{array}{l}\\ =-(-1)^{\ell}(-1)^{(1)+(2d)+H((2d)-(1))}(d\lambda_{D}y^{\pi-(D-1)})\\ \\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}\hat{\ell}),(\hat{1}\hat{\ell})}& \mathbf{M}_{(\hat{1}\hat{\ell}),(-)}&\bar{0}_{\hat{1}\hat{\ell}}&d\lambda_{D}x^{ \pi-(D-1)}(\tilde{e}_{d})_{\hat{1}\hat{\ell}}\\ -\left(\mathbf{M}_{(\hat{\ell}),(-)}\right)^{\top}&\mathbf{0}&-x^{\pi-(D-1)}\tilde{e}_ {1}&y^{\pi-(D-1)}\tilde{e}_{d}\\ \bar{0}_{\hat{1}\hat{\ell}}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}\hat{\ell}}^{\top}&-y^{\pi-( D-1)}\tilde{e}_{d}^{\top}&0&0\end{bmatrix}\end{array}\] \[\begin{array}{l}\\ =-(-1)^{\ell}d\lambda_{D}y^{\pi-(D-1)}(-1)^{(d-1)+(2d-1)+H((2d-1)-(d-1))}(-x^ {\pi-(D-1)})\\ \\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}\hat{\ell}),(\hat{1}\hat{\ell})}& \mathbf{M}_{(\hat{1}\hat{\ell}),(1)}&d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{ \hat{1}\hat{\ell}}\\ -\left(\mathbf{M}_{(\hat{1}\hat{\ell}),(1)}\right)^{\top}&\mathbf{0}&y^{\pi-(D-1)} \tilde{e}_{d}\\ -d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{\ell}}^{\top}&-y^{\pi-(D-1)} \tilde{e}_{d}^{\top}&0\end{bmatrix}\end{array}\] \[= -(-1)^{\ell}d\lambda_{D}(xy)^{\pi-(D-1)}(-1)^{(d-2)+(2d-2)+H((2d-2)-(d-2) )}\left(d\lambda_{D}x^{\pi-(D-1)}\right)\] \[\operatorname{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}\hat{\ell}),( \hat{1}\hat{d})}&\boldsymbol{M}_{(\hat{1}\hat{\ell}),(\hat{1}\hat{d})},(\hat{1 })\\ -\left(\boldsymbol{M}_{(\hat{1}\hat{\ell}\hat{d}),(\hat{1})}\right)^{\top}& \mathbf{0}_{(\hat{1}),(\hat{1})}\end{bmatrix}\] \[-(-1)^{\ell}d\lambda_{D}(xy)^{\pi-(D-1)}(-1)^{(2d-3)+(2d-2)+H((2d- 2)-(2d-3))}\left(y^{\pi-(D-1)}\right)\] \[\operatorname{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}\hat{\ell}),( \hat{1}\hat{d})}&\boldsymbol{M}_{(\hat{1}\hat{\ell}),(\hat{1}\hat{d})}\\ -\left(\boldsymbol{M}_{(\hat{1}\hat{\ell}),(\hat{1}\hat{d})}\right)^{\top}& \mathbf{0}_{(\hat{1}\hat{d}),(\hat{1}\hat{d})}\end{bmatrix}\] \[-(-1)^{\ell}zG(-1)^{(d-1)+(2d)+H((2d)-(d-1))}(d\lambda_{D}x^{\pi -(D-1)})\operatorname{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{\ell}\hat{d}),( \hat{\ell}\hat{d})}&\boldsymbol{M}_{(\hat{\ell}\hat{d}),(-)}\\ -\left(\boldsymbol{M}_{(\hat{\ell}\hat{d}),(-)}\right)^{\top}&\mathbf{0} \end{bmatrix}\] 
\[-(-1)^{\ell}zG(-1)^{(2d-1)+(2d)+H((2d)-(2d-1))}\left(y^{\pi-(D-1 )}\right)\operatorname{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{\ell}),(\hat{\ell} )}&\boldsymbol{M}_{(\hat{\ell}),(\hat{d})}\\ -\left(\boldsymbol{M}_{(\hat{\ell}),(\hat{d})}\right)^{\top}&\mathbf{0}_{( \hat{d}),(\hat{d})}\end{bmatrix}\] \[= -(-1)^{\ell}d\lambda_{D}(xy^{2})^{\pi-(D-1)}\left(\left(-1\right) ^{D-1}\det\left(\boldsymbol{M}_{(\hat{1}\hat{\ell}),(\hat{1}\hat{d})}\right)\right)\] (vii) \[-(-1)^{\ell}y^{\pi-(D-1)}zG\left(\left(-1\right)^{D-1}\det\left( \boldsymbol{M}_{(\hat{\ell}),(\hat{d})}\right)\right)\] \[= (-1)^{\ell+D}y^{\pi-(D-1)}\left(d\lambda_{D}(xy)^{\pi-(D-1)}\det \left(\boldsymbol{M}_{(\hat{1}\hat{\ell}),(\hat{1}\hat{d})}\right)+zG\det \left(\boldsymbol{M}_{(\hat{\ell}),(\hat{d})}\right)\right),\] where line (vii) is true by Lemma 2.6 and Remark 2.5. By Remark 2.13, \(\det\left(\boldsymbol{M}_{(\hat{1}\hat{\ell}),(\hat{1}\hat{d})}\right)\in P_{2D -2}\) and \(\det\left(\boldsymbol{M}_{(\hat{\ell}),(\hat{d})}\right)\in P_{2D-1}\), which means \[(xy)^{\pi-(D-1)}\det\left(\boldsymbol{M}_{(\hat{1}\hat{\ell}),(\hat{1}\hat{d })}\right)\in P_{2(\pi-(D-1))+(2D-2)}=P_{2\pi}\] and \[zG\det\left(\boldsymbol{M}_{(\hat{\ell}),(\hat{d})}\right)\in P_{1+2(\pi-D)+( 2D-1)}=P_{2\pi}.\] Therefore \[\operatorname{Pf}_{\ell}\boldsymbol{\partial}_{2}\] \[= (-1)^{\ell+D}y^{\pi-(D-1)}\left(d\lambda_{D}(xy)^{\pi-(D-1)}\det \left(\boldsymbol{M}_{(\hat{1}\hat{\ell}),(\hat{1}\hat{d})}\right)+zG\det \left(\boldsymbol{M}_{(\hat{\ell}),(\hat{d})}\right)\right)\] \[\in P_{(\pi-(D-1))+2\pi}=P_{s/2+1},\] so \(\operatorname{Pf}_{\ell}\boldsymbol{\partial}_{2}\) has degree \(s/2+1\). Let \(0<\ell<d\). Using Definition 2.7 and the cofactor expansion formula in Definition 2.2, we write \[\begin{array}{l}\mathrm{Pf}_{d+\ell}\,\mathbf{\partial}_{2}\\ \\ =\mathrm{Pf}_{d+\ell}\begin{bmatrix}\mathbf{0}&\mathbf{M}&d\lambda_{D}y^{\pi-(D-1)}\tilde{e }_{1}&\bar{0}&d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}\\ -\mathbf{M}^{\top}&\mathbf{0}&\bar{0}&-x^{\pi-(D-1)}\tilde{e}_{1}&y^{\pi-(D-1)}\tilde{e }_{d}\\ -d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}^{\top}&\bar{0}^{\top}&0&zG&0\\ \bar{0}^{\top}&x^{\pi-(D-1)}\tilde{e}_{1}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}\tilde{e}_{d}^{ \top}&0&0&0\end{bmatrix}\\ \\ =(-1)^{(d+\ell)+1}\\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}_{(-),(\hat{\ell})}&d\lambda_{D}y^{ \pi-(D-1)}\tilde{e}_{1}&\bar{0}&d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}\\ -\left(\mathbf{M}_{(-),(\hat{\ell})}\right)^{\top}&\mathbf{0}_{(\hat{\ell}),(\hat{ \ell})}&\bar{0}_{\hat{\ell}}&-x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{\ell}}&y^{ \pi-(D-1)}(\tilde{e}_{d})_{\hat{\ell}}\\ -d\lambda_{D}y^{\pi-(D-1)}\tilde{e}_{1}^{\top}&\bar{0}^{\top}&0&zG&0\\ \bar{0}^{\top}&x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{\ell}}^{\top}&-zG&0&0\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}(\tilde{e}_{d})_ {\hat{\ell}}^{\top}&0&0&0\end{bmatrix}\\ \\ =-(-1)^{\ell}(-1)^{(1)+(2d)+H((2d)-(1))}(d\lambda_{D}y^{\pi-(D-1)})\\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}),(\hat{1})}&\mathbf{M}_{(\hat{1}),( \hat{\ell})}&\bar{0}_{\hat{1}}&d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat {1}}\\ -\left(\mathbf{M}_{(\hat{1}),(\hat{\ell})}\right)^{\top}&\mathbf{0}_{(\hat{\ell}),( \hat{\ell})}&-x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{\ell}}&y^{\pi-(D-1)}(\tilde{ e}_{d})_{\hat{\ell}}\\ \bar{0}_{\hat{1}}^{\top}&x^{\pi-(D-1)}(\tilde{e}_{1})_{\hat{\ell}}^{\top}&0&0 \\ -d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}^{\top}&-y^{\pi-(D-1)}( 
\tilde{e}_{d})_{\hat{\ell}}^{\top}&0&0\end{bmatrix}\\ \\ -(-1)^{\ell}(-1)^{(2d+1)+(2d)+H((2d)-(2d+1))}(-zG)\\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}_{(-),(\hat{\ell})}&d\lambda_{D}x^{ \pi-(D-1)}\tilde{e}_{d}\\ -\left(\mathbf{M}_{(-),(\hat{\ell})}\right)^{\top}&\mathbf{0}_{(\hat{\ell}),(\hat{ \ell})}&y^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{\ell}}\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}(\tilde{e}_{d})_ {\hat{\ell}}^{\top}&0\end{bmatrix}\\ \\ =-(-1)^{\ell}d\lambda_{D}y^{\pi-(D-1)}(-1)^{(d)+(2d-1)+H((2d)-(d))}(-x^{\pi-(D- 1)})\\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}),(\hat{1})}&\mathbf{M}_{(1),(\hat{1})}& d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -\left(\mathbf{M}_{(\hat{1}),(\hat{1})}\right)^{\top}&\mathbf{0}_{(\hat{1}),(\hat{ \ell})}&y^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}\hat{\ell}}\\ -d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}^{\top}&-y^{\pi-(D-1)}( \tilde{e}_{d})_{1\hat{\ell}}^{\top}&0\end{bmatrix}\\ \\ -(-1)^{\ell}(-1)^{(2d+1)+(2d)+H((2d)-(2d+1))}(-zG)\\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}&\mathbf{M}_{(-),(\hat{\ell})}&d\lambda_{D}x^{ \pi-(D-1)}\tilde{e}_{d}\\ -\left(\mathbf{M}_{(-),(\hat{\ell})}\right)^{\top}&\mathbf{0}_{(\hat{\ell}),(\hat{ \ell})}&y^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{\ell}}\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}(\tilde{e}_{d})_ {\hat{\ell}}^{\top}&0\end{bmatrix},\\ \\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}),(\hat{1})}&\mathbf{M}_{(1),(\hat{1})}&d \lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -\left(\mathbf{M}_{(-),(\hat{\ell})}\right)^{\top}&\mathbf{0}_{(\hat{\ell}),(\hat{ \ell})}&y^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}\hat{\ell}}\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}(\tilde{e}_{d})_ {\hat{\ell}}^{\top}&0\end{bmatrix}\\ \\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}),(\hat{1})}&\mathbf{M}_{(1),(\hat{1})}&d \lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -\left(\mathbf{M}_{(-),(\hat{\ell})}\right)^{\top}&\mathbf{0}_{(\hat{\ell}),(\hat{ \ell})}&y^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}(\tilde{e}_{d})_ {\hat{\ell}}^{\top}&0\end{bmatrix}\\ \\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}),(\hat{1})}&\mathbf{M}_{(1),(\hat{1})}&d \lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -\left(\mathbf{M}_{(-),(\hat{\ell})}\right)^{\top}&\mathbf{0}_{(\hat{\ell}),(\hat{ \ell})}&y^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}(\tilde{e}_{d})_ {\hat{\ell}}^{\top}&0\end{bmatrix}\\ \\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}),(\hat{1})}&\mathbf{M}_{(1),(\hat{1})}&d \lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -\left(\mathbf{M}_{(-),(\hat{\ell})}\right)^{\top}&\mathbf{0}_{(\hat{\ell}),(\hat{ \ell})}&y^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -d\lambda_{D}x^{\pi-(D-1)}\tilde{e}_{d}^{\top}&-y^{\pi-(D-1)}(\tilde{e}_{d})_ {\hat{\ell}}^{\top}&0\end{bmatrix}\\ \\ \mathrm{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}),(\hat{1})}&\mathbf{M}_{(1),(\hat{ \ell})}&\mathbf{0}_{(\hat{1})}&d\lambda_{D}x^{\pi-(D-1)}(\tilde{e}_{d})_{\hat{1}}\\ -\left(\mathbf{M}_{(-),(\hat{\ell})}\right)^{\top}&\mathbf{0}_{(\hat{\ell}),(\hat{ \ell})}&y^{\pi-(D-1)}(\tilde{e}_{d})_{\ \[= (-1)^{\ell}d\lambda_{D}(xy)^{\pi-(D-1)}(-1)^{(d-1)+(2d-2)+H((2d-2)-(d -1))}(d\lambda_{D}x^{\pi-(D-1)})\] \[\operatorname{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}d),(\hat{1}d) }&\boldsymbol{M}_{(\hat{1}d),(\hat{1}\hat{\ell})}\\ -\left(\boldsymbol{M}_{(\hat{1}\hat{d}),(\hat{1}\hat{\ell})}\right)^{\top}& 
\mathbf{0}_{(\hat{1}\hat{\ell}),(\hat{1}\hat{\ell})}\end{bmatrix}\] \[+(-1)^{\ell}d\lambda_{D}(xy)^{\pi-(D-1)}(-1)^{(2d-3)+(2d-2)+H((2d- 2)-(2d-3))}(y^{\pi-(D-1)})\] \[\operatorname{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{1}),(\hat{1}d) }&\boldsymbol{M}_{(\hat{1}),(\hat{1}\hat{\ell})}\\ -\left(\boldsymbol{M}_{(\hat{1}),(\hat{1}\hat{\ell})}\right)^{\top}&\mathbf{0 }_{(\hat{1}\hat{\ell}),(\hat{1}\hat{\ell})}\end{bmatrix}\] \[-(-1)^{\ell}zG(-1)^{(d)+(2d)+H((2d)-(d))}(d\lambda_{D}x^{\pi-(D- 1)})\operatorname{Pf}\begin{bmatrix}\mathbf{0}_{(\hat{d}),(\hat{d})}& \boldsymbol{M}_{(\hat{d}),(\hat{\ell})}\\ -\left(\boldsymbol{M}_{(\hat{d}),(\hat{\ell})}\right)^{\top}&\mathbf{0}_{( \hat{\ell}),(\hat{\ell})}\end{bmatrix}\] \[-(-1)^{\ell}zG(-1)^{(2d-1)+(2d)+H((2d)-(2d-1))}(y^{\pi-(D-1)}) \operatorname{Pf}\begin{bmatrix}\mathbf{0}&\boldsymbol{M}_{(-),(\hat{\ell} \hat{d})}\\ -\left(\boldsymbol{M}_{(-),(\hat{\ell}\hat{d})}\right)^{\top}&\mathbf{0}_{( \hat{\ell}\hat{d}),(\hat{\ell}\hat{d})}\end{bmatrix}\] \[= (-1)^{\ell}(d\lambda_{D})^{2}(x^{2}y)^{\pi-(D-1)}\left((-1)^{D-1} \det\left(\boldsymbol{M}_{(\hat{1}\hat{d}),(\hat{1}\hat{\ell})}\right)\right)\] (viii) \[+(-1)^{\ell}d\lambda_{D}x^{\pi-(D-1)}zG\left((-1)^{D-1}\det \left(\boldsymbol{M}_{(\hat{d}),(\hat{\ell})}\right)\right)\] \[= (-1)^{\ell+D-1}d\lambda_{D}x^{\pi-(D-1)}\left(d\lambda_{D}(xy)^{ \pi-(D-1)}\det\left(\boldsymbol{M}_{(\hat{1}\hat{d}),(\hat{1}\hat{\ell})} \right)+zG\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{\ell})}\right)\right),\] where line (viii) is true by Lemma 2.6 and Remark 2.5. By Remark 2.13, \(\det\left(\boldsymbol{M}_{(\hat{1}\hat{d}),(\hat{1}\hat{\ell})}\right)\in P_{2D -2}\) and \(\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{\ell})}\right)\in P_{2D-1}\), which means \[(xy)^{\pi-(D-1)}\det\left(\boldsymbol{M}_{(\hat{1}\hat{d}),(\hat{1}\hat{\ell })}\right)\in P_{2(\pi-(D-1))+(2D-2)}=P_{2\pi}\] and \[zG\det\left(\boldsymbol{M}_{(d),(\hat{\ell})}\right)\in P_{1+2(\pi-D)+(2D-1)}= P_{2\pi}.\] Therefore \[\operatorname{Pf}_{d+\ell}\boldsymbol{\partial}_{2}\] \[= (-1)^{\ell+D-1}d\lambda_{D}x^{\pi-(D-1)}\left(d\lambda_{D}(xy)^{ \pi-(D-1)}\det\left(\boldsymbol{M}_{(\hat{1}\hat{d}),(\hat{1}\hat{\ell})} \right)+zG\det\left(\boldsymbol{M}_{(\hat{d}),(\hat{\ell})}\right)\right)\] \[\in P_{(\pi-(D-1))+2\pi}=P_{s/2+1},\] so \(\operatorname{Pf}_{d+\ell}\boldsymbol{\partial}_{2}\) has degree \(s/2+1\). **Proposition 5.10**.: _The last three maximal order Pfaffians of \(\boldsymbol{\partial}_{2}\) are as follows:_ * \(\operatorname{Pf}_{2d+1}(\boldsymbol{\partial}_{2})=ux^{q}\)_,_ * \(\operatorname{Pf}_{2d+2}(\boldsymbol{\partial}_{2})=uy^{q}\)_, and_ * \(\operatorname{Pf}_{2d+3}(\boldsymbol{\partial}_{2})=uz^{q}\)_._ Proof.: We have \[\operatorname{Pf}_{2d+\ell}(\boldsymbol{\partial}_{2})= \operatorname{Pf}_{\ell}\left(\boldsymbol{\psi}^{\top}\boldsymbol{ \varphi}^{\vee}\boldsymbol{\psi}+\operatorname{Pf}(\boldsymbol{\varphi}) \boldsymbol{\Phi}\right) \tag{1}\] \[=\operatorname{Pf}_{\ell}\left(\boldsymbol{\psi}^{\top}\boldsymbol {\varphi}^{\vee}\boldsymbol{\psi}+(uf)\boldsymbol{\Phi}\right)\] (2) \[=\operatorname{Pf}_{\ell}(u\boldsymbol{X}) \tag{3}\] for \(\ell=1,2,3\), where line (1) follows from Lemma 2.11, \(\operatorname{Pf}\boldsymbol{\varphi}=uf\) in line (2) by Lemma 5.5, and \(\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}\boldsymbol{\psi}+uf \boldsymbol{\Phi}=u\boldsymbol{X}\) for line (3) by Lemma 5.6. 
Therefore, * \(\operatorname{Pf}_{2d+1}(\boldsymbol{\partial}_{2})=\operatorname{Pf}_{1} \left(u\begin{bmatrix}0&z^{q}&-y^{q}\\ -z^{q}&0&x^{q}\\ y^{q}&-x^{q}&0\end{bmatrix}\right)=(-1)^{(1)+1}\operatorname{Pf}\left( \begin{bmatrix}0&ux^{q}\\ -ux^{q}&0\end{bmatrix}\right)=ux^{q}\), * \(\operatorname{Pf}_{2d+2}(\boldsymbol{\partial}_{2})=\operatorname{Pf}_{2} \left(u\begin{bmatrix}0&z^{q}&-y^{q}\\ -z^{q}&0&x^{q}\\ y^{q}&-x^{q}&0\end{bmatrix}\right)=(-1)^{(2)+1}\operatorname{Pf}\left(u \begin{bmatrix}0&-y^{q}\\ y^{q}&0\end{bmatrix}\right)=uy^{q}\), and * \(\operatorname{Pf}_{2d+3}(\boldsymbol{\partial}_{2})=\operatorname{Pf}_{3} \left(u\begin{bmatrix}0&z^{q}&-y^{q}\\ -z^{q}&0&x^{q}\\ y^{q}&-x^{q}&0\end{bmatrix}\right)=(-1)^{(3)+1}\operatorname{Pf}\left(u \begin{bmatrix}0&z^{q}\\ -z^{q}&0\end{bmatrix}\right)=uz^{q}\). **Theorem 5.11**.: _If \(\operatorname{char}k=p\), where \(p>2D-1\) is an odd prime, then \(f=\left(xy-z^{2}\right)^{D}\) is link-\(q\)-compressed for all powers \(q>1\) of \(p\)._ Proof.: Fix a power \(q>1\) of \(p\). Theorem 5.7 showed that \(\mathfrak{m}^{[q]}:f\) is generated by the maximal order Pfaffians of \(\boldsymbol{\partial}_{2}\), and so by Propositions 5.9 and 5.10, \((\mathfrak{m}^{[q]}:f)/\mathfrak{m}^{[q]}\) is generated by \(x^{q},y^{q},z^{q}\) (which all become zero) and polynomials of degree \(\frac{8}{2}+1\). By Lemma 2.20, this means that \(f\) is link-\(q\)-compressed. Since the set of link-q-compressed polynomials is Zariski open, most polynomials are link-\(q\)-compressed. We use the terminology from Remark 2.27 to write the following: **Theorem 5.12**.: _Let \(P=k[x,y,z]\) be a standard graded polynomial ring over \(k\), a field of odd prime characteristic \(p\). Let \(d<p+1\) be an even number._ * _A general choice of degree_ \(d\) _homogeneous polynomial_ \(f\in P\) _is link-_\(q\)_-compressed for a fixed power (or finitely many powers)_ \(q>1\) _of_ \(p\)_._ * _A very general choice of degree_ \(d\) _homogeneous polynomial_ \(f\in P\) _is link-_\(q\)_-compressed for all fixed powers_ \(q>1\) _of_ \(p\)_._ Proof.: For a power \(q>1\) of \(p\), consider the set \(S_{q}\) of all homogeneous degree \(d\) link-\(q\)-compressed polynomials in \(P\). In Remark 2.27, we established that \(S_{q}\) is Zariski open for any \(q\), and in Theorem 5.11 we showed that \((xy-z^{2})^{d/2}\in S_{q}\) no matter what \(q\) is, so \(S_{q}\neq\emptyset\). This means that for a fixed \(q\), a general choice of \(f\in P\) that is homogeneous and of degree \(d\) is link-\(q\)-compressed. In fact for any fixed finite set \(Q\) of values of \(q\), \((xy-z^{2})^{d/2}\in\bigcap_{q<Q}S_{q}\), and so \(\bigcap_{q>1}S_{q}\neq\emptyset\). Thus a general choice of \(f\in P\) that is homogeneous and of degree \(d\) is link-\(q\)-compressed for all \(q\in Q\). Lastly, we have \((xy-z^{2})^{d/2}\in\bigcap_{q>1}S_{q}\), and so \(\bigcap_{q>1}S_{q}\neq\emptyset\). Therefore a very general choice of \(f\in P\) that is homogeneous and of degree \(d\) is link-\(q\)-compressed for all \(q>1\). We use Theorem 5.12 and the results for link-\(q\)-compressed polynomials in Section 2.3 from [10] to conclude the following: **Theorem 5.13**.: _Let \(P=k[x,y,z]\) be a standard graded polynomial ring over a field \(k\) of odd prime characteristic \(p\), with \(\mathfrak{m}=(x,y,z)\) the homogeneous maximal ideal of \(P\). Let \(d<p+1\) be an even number._ _Fix a power \(q\geq d+3\) of \(p\). 
For a general choice of homogeneous \(f\in P\) with \(\deg f=d\), the following hold:_ * _The minimal graded_ \(R=P/(f)\)_-free resolution of_ \(R/\mathfrak{m}^{[q]}\) _has the following eventually 2-periodic form_ \[...\xrightarrow{\boldsymbol{\varphi}}R^{2d}(-b-d)\xrightarrow{\boldsymbol{\varphi}^{\vee}}R^{2d}(-b-1)\xrightarrow{\boldsymbol{\varphi}}R^{2d}(-b)\xrightarrow{\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}}R^{3}(-q)\xrightarrow{\vec{c}^{\top}}R\to 0\] _where_ \(\boldsymbol{\varphi}\) _is a_ \(2d\times 2d\) _linear skew-symmetric matrix with Pfaffian equal to_ \(uf\) _for some unit_ \(u\in k\)_,_ \(\boldsymbol{\psi}\) _is a_ \(2d\times 3\) _matrix with entries of degree_ \(\frac{1}{2}(q-d+1)\)_,_ \(\vec{c}^{\top}=\begin{bmatrix}x^{q}&y^{q}&z^{q}\end{bmatrix}\)_, and_ \(b=\frac{1}{2}(3q+d-1)\)_._ * _The minimal graded resolution over_ \(P=k[x,y,z]\) _of_ \(P/(\mathfrak{m}^{[q]}+(f))=R/\mathfrak{m}^{[q]}\) _has the form_ \[0\to P^{2d}(-b-1)\xrightarrow{\begin{bmatrix}\boldsymbol{\varphi}\\ -\boldsymbol{\psi}^{\top}\end{bmatrix}}P^{2d}(-b)\oplus P^{3}(-q-d)\xrightarrow{\begin{bmatrix}\boldsymbol{\psi}^{\top}\boldsymbol{\varphi}^{\vee}&uf\boldsymbol{I}\\ -\vec{w}^{\top}&-\vec{c}^{\top}\end{bmatrix}}P^{3}(-q)\oplus P(-d)\xrightarrow{\begin{bmatrix}\vec{c}^{\top}&uf\end{bmatrix}}P\to 0\] _where_ \(\vec{w}^{\top}=\begin{bmatrix}w_{1}&w_{2}&\ldots&w_{2d}\end{bmatrix}\) _consists of_ \(2d\) _many degree_ \(\frac{s}{2}+1\) _elements of_ \(P\)_, which together with_ \(x^{q},y^{q},z^{q}\) _generate_ \(\mathfrak{m}^{[q]}:f\)_._ * _The Castelnuovo-Mumford regularity is given by_ \(\operatorname{reg}(R/\mathfrak{m}^{[q]})=\frac{1}{2}(3q+d-5)\)_._ * _The Hilbert-Kunz function of_ \(R\) _at_ \(q\) _is_ \(HK_{R}(q)=\frac{3}{4}dq^{2}-\frac{1}{12}(d^{3}-d)\)_._ * _The socle module_ \(\operatorname{soc}\left(R/\mathfrak{m}^{[q]}\right)\) _has generators that lie only in degree_ \(s_{2}=\frac{1}{2}(3q+d-5)\) _and has dimension_ \(\dim_{k}\operatorname{soc}\left(R/\mathfrak{m}^{[q]}\right)_{s_{2}}=2d\)_._ Proof.: We know that a general choice of \(f\) is link-\(q\)-compressed by Theorem 5.12. For link-\(q\)-compressed polynomials, the results (from [10]) of Propositions 2.23 and 2.22, Theorems 2.24 and 2.26, Corollary 2.28, Proposition 2.29, and Theorem 2.30, all hold. Some of these require that \(3(q-1)-d\) be even, which holds because \(d\) is even and \(q\) (being a power of an odd number) is odd. A general choice of \(f\) is in fact link-\(q\)-compressed for multiple values of \(q\), which allows us to note the following: **Corollary 5.14**.: _Let \(P=k[x,y,z]\) be a standard graded polynomial ring over a field \(k\) of odd prime characteristic \(p\), with \(\mathfrak{m}=(x,y,z)\) the homogeneous maximal ideal of \(P\). Also let \(d<p+1\) be an even number._ _Fix two powers \(q_{1}>q_{0}\geq d+3\) of \(p\). For a general choice of homogeneous \(f\in P\) with \(\deg f=d\), the graded Betti numbers in homological degrees \(2\) and higher of the \(R=P/(f)\)-modules \(R/\mathfrak{m}^{[q_{0}]}\) and \(R/\mathfrak{m}^{[q_{1}]}\) are the same up to a constant shift of \(\frac{3}{2}(q_{1}-q_{0})\)._ We list here some examples of polynomials, one that is never link-\(q\)-compressed (meaning link-\(q\)-compressed for no \(q\) values) and one that is sometimes link-\(q\)-compressed (meaning link-\(q\)-compressed for some \(q\) values and not others). 
While Theorem 5.12 shows that being always link-\(q\)-compressed (meaning link-\(q\)-compressed for all \(q\) values) holds for very general choices of polynomials, these examples show that this is not true of all polynomials. _Example 5.15_.: Let \(d\geq 2\). The polynomial \(x_{1}^{q-d}\) generates \((\mathfrak{m}^{[q]}:x_{1}^{d})/\mathfrak{m}^{[q]}\) and has degree \(q-d\). Thus by Lemma 2.20, \(x_{1}^{d}\in k[x_{1},\ldots,x_{n}]\) is link-\(q\)-compressed if and only if \(q-d>\frac{n(q-1)-d}{2}\), which is true if and only if \(d<n-(n-2)q\). So if \(n\geq 2\), \(x_{1}^{d}\) is never link-\(q\)-compressed, because \(n-(n-2)q\leq n-(n-2)=2\) while we assume that \(d=\deg f\geq 2\). _Example 5.16_.: In \(\mathbb{Z}/(3)[x,y,z]\), \(f=x^{4}+x^{3}y+x^{3}z+y^{2}z^{2}\) is link-\(q\)-compressed for \(q=9\), but not \(q=27\). The ideal \((\mathfrak{m}^{[27]}:f)/\mathfrak{m}^{[27]}\) has four generators, two of degree \(38\) and two of degree \(37=\frac{3(27-1)-4}{2}=\frac{s}{2}\), and thus \(f\) is not link-\(27\)-compressed. This can be shown using Macaulay2; a brute-force cross-check in Python is sketched at the end of this section. This example indicates that link-\(p^{e}\)-compressed polynomials aren't necessarily link-\(p^{e+1}\)-compressed. After determining whether a polynomial \(f\) is link-\(q\)-compressed or not, we know that all polynomials of the form \(u\overline{T}(f)\) share that property, where \(u\) is a unit and \(\overline{T}\) is a linear isomorphism: **Theorem 5.17**.: _Let \(P=k[x_{1},\ldots,x_{n}]\) with \(k\) a field of characteristic \(p>0\). Fix a power \(q\) of \(p\)._ _The link-\(q\)-compressed property is unaffected by invertible scaling and linear isomorphism. In other words, a homogeneous polynomial \(f\in P\) is link-\(q\)-compressed if and only if \(u\overline{T}(f)\) is link-\(q\)-compressed, where \(u\in k\) is nonzero and \(\overline{T}\) is a linear isomorphism on \(k[x_{1},\ldots,x_{n}]\), meaning it is generated by mappings \(x_{i}\mapsto\sum_{j=1}^{n}\boldsymbol{T}_{i,j}x_{j}\) for \(1\leq i\leq n\) where \(\boldsymbol{T}\) is an invertible \(n\times n\) matrix with entries in \(k\)._ Proof.: We can investigate invertible scaling and linear isomorphisms separately. Let \(f\) be a homogeneous polynomial. Let \(u\in k\) be nonzero. Since \((f)=(uf)\), \(\mathfrak{m}^{[q]}:(f)=\mathfrak{m}^{[q]}:(uf)\), and thus they have the same generators. By Lemma 2.20, \(f\) will be link-\(q\)-compressed if and only if \(uf\) is. Let \(\overline{T}\) be a linear isomorphism on \(P\), corresponding to mappings \(x_{i}\mapsto\sum_{j=1}^{n}\boldsymbol{T}_{i,j}x_{j}\) for \(1\leq i\leq n\), where \(\boldsymbol{T}\) is an invertible \(n\times n\) matrix with entries in \(k\). These mappings correspond to the matrix equation \(\begin{bmatrix}\overline{T}(x_{1})\\ \vdots\\ \overline{T}(x_{n})\end{bmatrix}=\boldsymbol{T}\begin{bmatrix}x_{1}\\ \vdots\\ x_{n}\end{bmatrix}\). Note that applying \(\overline{T}\) to any element of \(P\) preserves degree. For any \(1\leq i\leq n\), we have \[\overline{T}(x_{i}^{q})=\left(\overline{T}(x_{i})\right)^{q}=\left(\sum_{j=1}^{n}\boldsymbol{T}_{i,j}x_{j}\right)^{q}=\sum_{j=1}^{n}\left(\boldsymbol{T}_{i,j}x_{j}\right)^{q}=\sum_{j=1}^{n}\boldsymbol{T}_{i,j}^{q}x_{j}^{q},\] where the third equality holds because the Frobenius map \(a\mapsto a^{q}\) is additive in characteristic \(p\). If we define the \(n\times n\) matrix \(\boldsymbol{T}^{[q]}\) entry-wise as \(\left(\boldsymbol{T}^{[q]}\right)_{i,j}=\boldsymbol{T}_{i,j}^{q}\) for all \(i,j\), then this means \(\begin{bmatrix}\overline{T}(x_{1}^{q})\\ \vdots\\ \overline{T}(x_{n}^{q})\end{bmatrix}=\boldsymbol{T}^{[q]}\begin{bmatrix}x_{1}^{q}\\ \vdots\\ x_{n}^{q}\end{bmatrix}\). 
Note that \(\det\left(\boldsymbol{T}^{[q]}\right)=(\det\boldsymbol{T})^{q}\): \[(\det\boldsymbol{T})^{q}= \left(\sum_{\sigma}\left(\operatorname*{sgn}\sigma\prod_{i=1}^{n}\boldsymbol{T}_{i,\sigma(i)}\right)\right)^{q}\] \[= \sum_{\sigma}\left(\operatorname*{sgn}\sigma\prod_{i=1}^{n}\boldsymbol{T}_{i,\sigma(i)}\right)^{q}\] \[= \sum_{\sigma}\left((\operatorname*{sgn}\sigma)^{q}\left(\prod_{i=1}^{n}\boldsymbol{T}_{i,\sigma(i)}\right)^{q}\right)\] \[= \sum_{\sigma}\left(\operatorname*{sgn}\sigma\prod_{i=1}^{n}\boldsymbol{T}_{i,\sigma(i)}^{q}\right)\] (*) \[= \sum_{\sigma}\left(\operatorname*{sgn}\sigma\prod_{i=1}^{n}\left(\boldsymbol{T}^{[q]}\right)_{i,\sigma(i)}\right)\] \[= \det\left(\boldsymbol{T}^{[q]}\right).\] Here the sums are over all permutations \(\sigma\) on the set \(\{1,\ldots,n\}\). The second equality again uses the additivity of the Frobenius map \(a\mapsto a^{q}\) in characteristic \(p\), and for (*) we have \((\operatorname{sgn}\sigma)^{q}=\operatorname{sgn}\sigma\) by Fermat's little theorem (indeed, \(\operatorname{sgn}\sigma=\pm 1\) and \(q\) is odd). Since \(\det\boldsymbol{T}\neq 0\) because \(\boldsymbol{T}\) is invertible, we have \(\det\big{(}\boldsymbol{T}^{[q]}\big{)}=(\det\boldsymbol{T})^{q}\neq 0\), thus \(\boldsymbol{T}^{[q]}\) is invertible. Because \(\boldsymbol{T}^{[q]}\) is invertible, \(\overline{T}(x_{1}^{q}),\ldots,\overline{T}(x_{n}^{q})\) and \(x_{1}^{q},\ldots,x_{n}^{q}\) are linear combinations of each other, and therefore we have \(\overline{T}(\mathfrak{m}^{[q]})=(\overline{T}(x_{1}^{q}),\ldots,\overline{T}(x_{n}^{q}))=(x_{1}^{q},\ldots,x_{n}^{q})=\mathfrak{m}^{[q]}\). Set \(g=\overline{T}(f)\). Let \(a\in\mathfrak{m}^{[q]}:(f)\). Then \(af\in\mathfrak{m}^{[q]}\), and thus \[\overline{T}(a)g=\overline{T}(a)\overline{T}(f)=\overline{T}(af)\in\overline{T}(\mathfrak{m}^{[q]})=\mathfrak{m}^{[q]}.\] This means \(\overline{T}(a)\in\mathfrak{m}^{[q]}:(g)\). Thus \(\overline{T}(\mathfrak{m}^{[q]}:(f))\subseteq\mathfrak{m}^{[q]}:(g)\). Let \(b\in\mathfrak{m}^{[q]}:(g)\). Then \[\overline{T}\Big{(}f\ \overline{T}^{-1}(b)\Big{)}=\overline{T}(f)\ \overline{T}\Big{(}\overline{T}^{-1}(b)\Big{)}=g\ b\in\mathfrak{m}^{[q]}=\overline{T}(\mathfrak{m}^{[q]}),\] and so \(f\ \overline{T}^{-1}(b)\in\mathfrak{m}^{[q]}\) because \(\overline{T}\) is invertible. Thus \(\overline{T}^{-1}(b)\in\mathfrak{m}^{[q]}:(f)\), which means that \(b\in\overline{T}(\mathfrak{m}^{[q]}:(f))\). Therefore \(\mathfrak{m}^{[q]}:(g)\subseteq\overline{T}(\mathfrak{m}^{[q]}:(f))\) as well, so \(\overline{T}(\mathfrak{m}^{[q]}:(f))=\mathfrak{m}^{[q]}:(g)\). Since \(\overline{T}\) preserves degree, the degrees of elements of \((\mathfrak{m}^{[q]}:(f))/\mathfrak{m}^{[q]}\) are the same as those of \(\overline{T}((\mathfrak{m}^{[q]}:(f))/\mathfrak{m}^{[q]})=(\mathfrak{m}^{[q]}:(g))/\mathfrak{m}^{[q]}\). Therefore, by Lemma 2.20, \(f\) is link-\(q\)-compressed if and only if \(g\) is. **Corollary 5.18**.: _If \(k\) is a field of odd characteristic \(p\), the polynomial \(x^{2}-y^{2}-z^{2}\) in \(P=k[x,y,z]\) is link-\(q\)-compressed for all powers \(q>1\) of \(p\). If \(k\) includes an element \(i\) such that \(i^{2}=-1\), the polynomial \(x^{2}+y^{2}+z^{2}\) in \(P=k[x,y,z]\) is link-\(q\)-compressed for all powers \(q>1\) of \(p\)._ Proof.: Let \(q>1\) be a power of \(p\). If \(\overline{T}_{1}:P\to P\) is the linear isomorphism induced by \(x\mapsto x+y\), \(y\mapsto x-y\), and \(z\mapsto z\), then \(\overline{T}_{1}(xy-z^{2})=x^{2}-y^{2}-z^{2}\). By Theorem 5.17, \(x^{2}-y^{2}-z^{2}\) is link-\(q\)-compressed because \(xy-z^{2}\) is link-\(q\)-compressed. 
If \(\overline{T}_{2}:P\to P\) is the linear isomorphism induced by \(x\mapsto x\), \(y\mapsto iy\), and \(z\mapsto iz\), then \(\overline{T}_{2}(x^{2}-y^{2}-z^{2})=x^{2}+y^{2}+z^{2}\). By Theorem 5.17, \(x^{2}+y^{2}+z^{2}\) is link-\(q\)-compressed because \(x^{2}-y^{2}-z^{2}\) is link-\(q\)-compressed. This corollary aligns with results from [10], which state that if \(R=k[x,y,z]/(x^{2}+y^{2}+z^{2})\), then either \(\operatorname{pd}_{R}(R/\mathfrak{m}^{[q]})=\infty\) for all \(q\) or \(\operatorname{pd}_{R}(R/\mathfrak{m}^{[q]})<\infty\) for all \(q\), depending on \(k\).
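As a computational companion to Theorem 5.11 and Examples 5.15-5.16 (which the text verifies in Macaulay2), the following is a brute-force sketch in Python. It takes the criterion of Lemma 2.20 in the form used above: \(f\) is link-\(q\)-compressed iff \((\mathfrak{m}^{[q]}:f)/\mathfrak{m}^{[q]}\) has no nonzero element of degree at most \(s/2\), with \(s=3(q-1)-d\) (the number of variables \(n=3\) is hardcoded). The helper names and the linear-algebra encoding are ours, not from the paper or from [10].

```python
import numpy as np

def monos(e):
    """All exponent triples (a, b, c) with a + b + c == e."""
    return [(a, b, e - a - b) for a in range(e + 1) for b in range(e - a + 1)]

def rank_mod_p(A, p):
    """Rank of an integer matrix over GF(p), by Gaussian elimination."""
    A = A.copy() % p
    r = 0
    for c in range(A.shape[1]):
        piv = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        A[r] = (A[r] * pow(int(A[r, c]), p - 2, p)) % p   # scale pivot to 1
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] = (A[i] - A[i, c] * A[r]) % p
        r += 1
        if r == A.shape[0]:
            break
    return r

def is_link_q_compressed(f_terms, d, q, p):
    """f_terms: dict {exponent triple: coefficient} of a degree-d form."""
    s = 3 * (q - 1) - d
    for e in range(s // 2 + 1):
        # degree-e and degree-(e+d) pieces of P/m^[q]: monomials with all exps < q
        src = [m for m in monos(e) if max(m) < q]
        tgt = {m: i for i, m in enumerate(t for t in monos(e + d) if max(t) < q)}
        A = np.zeros((len(tgt), len(src)), dtype=np.int64)
        for j, m in enumerate(src):
            for t, coef in f_terms.items():
                prod = tuple(a + b for a, b in zip(m, t))
                if max(prod) < q:             # otherwise the product lies in m^[q]
                    A[tgt[prod], j] += coef
        if rank_mod_p(A, p) < len(src):       # nonzero kernel: colon-ideal element of degree e
            return False
    return True

p = 3
f = {(1, 1, 0): 1, (0, 0, 2): -1}             # xy - z^2, d = 2 (Theorem 5.11)
print(is_link_q_compressed(f, 2, 9, p))       # True
g = {(4, 0, 0): 1, (3, 1, 0): 1, (3, 0, 1): 1, (0, 2, 2): 1}   # Example 5.16
print(is_link_q_compressed(g, 4, 9, p))       # True
# is_link_q_compressed(g, 4, 27, p) returns False (degree-37 elements); slower run.
```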
2301.10246
Compact representation for electroweak lepton sector
A new representation of the electroweak lepton sector is proposed. It consists of two Weyl spinors per lepton family. It is shown that the proposed representation is fully equivalent to the conventional left-handed iso-doublet. A new type of plane wave solutions can be found under certain additional assumptions.
Peter Porshnev
2023-01-24T16:57:08Z
http://arxiv.org/abs/2301.10246v1
# Compact representation for electroweak lepton sector ###### Abstract A new representation of the electroweak lepton sector is proposed. It consists of two Weyl spinors per lepton family. It is shown that the proposed representation is fully equivalent to the conventional left-handed iso-doublet. A new type of plane wave solutions can be found under certain additional assumptions. keywords: Electroweak left-handed Lagrangian, Dirac and Majorana masses, seesaw relation ## 1 Introduction Sometimes a new representation of well-established formalisms might lead to new insights or calculational benefits. A remarkable example is the spinor-helicity method [1; 2], which greatly simplifies calculations of scattering amplitudes. In this work, we propose a new representation which combines two Weyl spinors, one for a left-handed charged lepton and one for its neutrino. Both our and the Dirac representations are reducible, since they include two Weyl spinors each. Out of three Weyl spinors per lepton family, the Dirac one combines both electron spinors into one quantity, while we combine two SU2 spinors into one quantity \(\psi\), exactly as the iso-doublet \(L=(\psi_{\nu},\psi_{L})\) does. The only difference is that we removed the trivial zeros from \(L\); hence our construct \(\psi\) is as artificial as the iso-doublet \(L\) itself. In any case, these alternative combinations are equivalent to one another if proper mapping and corresponding representation-dependent operators are used. We have been inspired to come up with this new representation by two sources. Specifically, quoting the first source - [3, p. 704] - _"Since the left- and right-handed fermions live in different representations of the fundamental gauge group, it is often useful to think of these components as distinct particles, which are mixed by the fermion mass terms"_. Similarly, modeling neutrinos as Majorana particles and quoting [4] - _"If lepton number is not conserved, one can treat the left-handed neutrino and right-handed antineutrino as two different helicity states of one particle, and combine them to make a massive spin 1/2 particle"_. However, if the lepton number is not conserved, then the neutrino and antineutrino cannot be viewed as two different states of one particle [4]. Taking into account these ideas and extending them, we combine both the left-handed electron and its right-handed antineutrino into our \(\psi\), since they live in the iso-SU2 gauge representations (\(L\) and \(\bar{L}\)), as opposed to their corresponding particles with opposite chiralities which live in iso-singlets. From this angle, two very different particles from the electroweak iso-doublets \(L\) and \(\bar{L}\), the left-handed electron and its right-handed antineutrino, must be somewhat "closer" to one another than left- and right-handed electrons are 1. This work intends to clarify what such a "closeness" might mean from the physical point of view. A practical benefit is also that this new representation is more compact compared to the conventional weak iso-doublets. Footnote 1: The second relevant quote from [3, p. 
704] says that "The solution of this problem will reinforce the idea that the left- and right-handed fermion fields are fundamentally independent entities, mixed to form massive fermions by some subsidiary process." Inspired by these insights, we rewrite the conventional left-handed part of the EW formalism of massless leptons into an alternative, but fully equivalent, formalism by switching to the bispinor representation that combines two SU2 spinors of one lepton family. We show that no feature of the conventional EW model related to iso-doublets is lost. We then discuss how a new type of plane wave solutions can be obtained under certain additional assumptions. In this study, we focus only on the lepton sector. However, the proposed method is really a gateway to constructing a broader framework which can be applied to quarks and bosons; possible extensions of this model will be reviewed in a future paper. The extension to quarks is straightforward but lengthier, while the boson sector requires some additional assumptions; this is the reason they will not be covered here. We use the chiral representation of gamma matrices in this paper. The terminology and notations follow [3] and [5]. ## 2 Compact EW formalism ### Lagrangian and equations of motion In this subsection, we briefly state the conventional definitions which will be used later on. The lepton-boson part of the EW Lagrangian without right-handed particle states is given by \[\left(\mathcal{L}_{int}\right)^{L}=\frac{g}{2\sqrt{2}}(J_{+}W_{+}+J_{-}W_{-})-eAJ_{e}+\frac{g}{2\cos\theta}\Big{(}J_{n}-J_{e}\cos 2\theta\Big{)}Z\,, \tag{1}\] where the four currents are defined as \[\begin{array}{ccc}J_{+}&=2\bar{\psi}_{\nu}\gamma^{\mu}\psi_{L}\,,&J_{e}&=\bar{\psi}_{L}\gamma^{\mu}\psi_{L}\,,\\ J_{-}&=2\bar{\psi}_{L}\gamma^{\mu}\psi_{\nu}\,,&J_{n}&=\bar{\psi}_{\nu}\gamma^{\mu}\psi_{\nu}\,.\end{array} \tag{2}\] Since the electromagnetic current \(J_{e}=J_{em}^{L}\) in this work is always given by its left-handed part, we simplified its notation by dropping the superscript \(L\). The first three currents couple only to their corresponding potentials, meaning that there seems to be a one-to-one correspondence between three EW currents and three EW potentials. The \(Z\)-field however couples to both \(J_{n}\) and \(J_{e}\) with different but comparable strength (\(\cos 2\theta\approx 0.5\), where \(\theta\) is the mixing angle). Two bispinors 2 that are used in (2) are given by Footnote 2: Alternatively, a Dirac bispinor \(\psi\) is called a four-component Dirac spinor [6, p. 6]. \[\psi_{L}=\begin{pmatrix}u_{L}\\ 0\end{pmatrix},\qquad\qquad\psi_{\nu}=\begin{pmatrix}v_{L}\\ 0\end{pmatrix}, \tag{3}\] which correspond to the massless charged lepton and its neutrino, described by the two Weyl spinors \(u_{L}\) and \(v_{L}\) respectively. The covariant derivative for the massless left-handed doublet \(L\) of electron and its neutrino is \[D_{\mu}L=(\partial_{\mu}-\frac{i}{2}g^{\prime}YB_{\mu}-igt\mathbf{\bar{W}}_{\mu})L=(\partial_{\mu}+\frac{i}{2}g^{\prime}B_{\mu}-i\frac{g}{2}\sigma_{i}W_{\mu}^{i})L\,, \tag{4}\] where the corresponding weak charge \(t=1/2\) and hypercharge \(Y=-1\). 
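As a quick symbolic cross-check of the iso-spin term in (4) and of the neutral-boson recombination used in the expansion below, here is a short sympy sketch. The Weinberg-rotation conventions (\(W^{3}=\sin\theta\,A+\cos\theta\,Z\), \(B=\cos\theta\,A-\sin\theta\,Z\), \(g^{\prime}=g\tan\theta\), \(e=g\sin\theta\)) are the standard ones and are our assumption here, since the text does not spell them out.

```python
import sympy as sp

g, th = sp.symbols('g theta', positive=True)
gp = g * sp.tan(th)                      # assumed relation g' = g tan(theta)
W1, W2, W3, B, A, Z = sp.symbols('W1 W2 W3 B A Z')

# Pauli matrices; the iso-spin term sigma_i W^i of eq. (4)
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])
print(s1*W1 + s2*W2 + s3*W3)             # [[W3, W1 - I*W2], [W1 + I*W2, -W3]]

# Weinberg rotation (assumed standard conventions)
rot = {W3: sp.sin(th)*A + sp.cos(th)*Z, B: sp.cos(th)*A - sp.sin(th)*Z}
e = g * sp.sin(th)

nu = sp.trigsimp((gp*B - g*W3).subs(rot))    # neutrino combination in (6)
el = sp.trigsimp((gp*B + g*W3).subs(rot))    # electron combination in (6)
print(sp.simplify(nu + g*Z/sp.cos(th)))                        # 0
print(sp.simplify(el - 2*e*A - g*sp.cos(2*th)*Z/sp.cos(th)))   # 0
```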
Expanding now the doublet \(L\), we obtain \[D_{\mu}\begin{pmatrix}\psi_{\nu}\\ \psi_{L}\end{pmatrix}=\begin{pmatrix}\partial_{\mu}\psi_{\nu}+\frac{i}{2}g^{\prime}B_{\mu}\psi_{\nu}\\ \partial_{\mu}\psi_{L}+\frac{i}{2}g^{\prime}B_{\mu}\psi_{L}\end{pmatrix}-i\frac{g}{2}\begin{pmatrix}W_{\mu}^{3}&W_{\mu}^{1}-iW_{\mu}^{2}\\ W_{\mu}^{1}+iW_{\mu}^{2}&-W_{\mu}^{3}\end{pmatrix}\begin{pmatrix}\psi_{\nu}\\ \psi_{L}\end{pmatrix}\\ =\begin{pmatrix}\partial_{\mu}\psi_{\nu}+\frac{i}{2}g^{\prime}B_{\mu}\psi_{\nu}-i\frac{g}{2}W_{\mu}^{3}\psi_{\nu}\\ \partial_{\mu}\psi_{L}+\frac{i}{2}g^{\prime}B_{\mu}\psi_{L}+i\frac{g}{2}W_{\mu}^{3}\psi_{L}\end{pmatrix}-i\frac{g}{2}\begin{pmatrix}W_{\mu}^{+}\psi_{L}\\ W_{\mu}^{-}\psi_{\nu}\end{pmatrix}. \tag{5}\] where the last term mixes the electron and neutrino contributions. Using the orthogonal combinations of two neutral bosons \(W_{\mu}^{3}\) and \(B_{\mu}\), the covariant derivatives are re-cast as \[D_{\mu}\psi_{\nu}=\partial_{\mu}\psi_{\nu}+\frac{i}{2}\big{(}g^{\prime}B_{\mu}-gW_{\mu}^{3}\big{)}\psi_{\nu}-i\frac{g}{2}W_{\mu}^{+}\psi_{L}\\ =\partial_{\mu}\psi_{\nu}-\frac{i}{2}\frac{g}{\cos\theta}Z_{\mu}\psi_{\nu}-i\frac{g}{2}W_{\mu}^{+}\psi_{L}\,,\] \[D_{\mu}\psi_{L}=\partial_{\mu}\psi_{L}+\frac{i}{2}\big{(}g^{\prime}B_{\mu}+gW_{\mu}^{3}\big{)}\psi_{L}-i\frac{g}{2}W_{\mu}^{-}\psi_{\nu}\] \[=\partial_{\mu}\psi_{L}+\frac{i}{2}\bigg{[}2eA_{\mu}+\frac{g\cos 2\theta}{\cos\theta}Z_{\mu}\bigg{]}\psi_{L}-i\frac{g}{2}W_{\mu}^{-}\psi_{\nu}\,. \tag{6}\] Therefore, two equations of motion are given by \[\begin{array}{ll}i\not{D}\psi_{\nu}&=\Big{(}i\not{\partial}+\frac{g}{2\cos\theta}\not{Z}\Big{)}\psi_{\nu}+\frac{g}{2}\not{W}^{+}\psi_{L}&=0\,,\\ i\not{D}\psi_{L}&=\Big{(}i\not{\partial}-e\not{A}-\frac{g\cos 2\theta}{2\cos\theta}\not{Z}\Big{)}\psi_{L}+\frac{g}{2}\not{W}^{-}\psi_{\nu}&=0\,,\end{array} \tag{7}\] where we see again that the neutrino is not influenced by the EM potential while the EM term for electrons has the correct sign of electric charge. ### Redundancy A single lepton family is described with two Dirac bispinors in the Lagrangian (1) and the equations of motion (7). However, since the Lagrangian \((\mathcal{L}_{int})^{L}\) includes only left-handed electrons \(\psi_{L}\), two complex-valued components per charged lepton are all that is really needed here. Now taking into account that massless neutrinos are also described by the two-component spinor \(v_{L}\) \[\psi_{L}=P_{L}\psi=\begin{pmatrix}u_{L}\\ 0\end{pmatrix},\qquad\qquad\psi_{\nu}=P_{L}\psi^{\prime}=\begin{pmatrix}v_{L}\\ 0\end{pmatrix}, \tag{8}\] two general bispinors \(\psi\) and \(\psi^{\prime}\) have twice as many degrees of freedom as are really required in the conventional approach to describe the left-handed part of one lepton generation. Here, \(P_{L/R}=(1\pm\gamma^{5})/2\) are two chiral projectors. The use of Dirac bispinors in describing neutrinos is mostly for convenience [7, p. 114], though the accommodation of a nonzero neutrino mass might change this; we ignore the complication for now. The most economical way is to directly use two-component spinors \((u_{L},v_{L})\) for all lepton types, see examples in [6] and [8]. We will, however, consider an alternative approach. Having used only the left-handed part in \(P_{L}\psi\) to describe a charged lepton, we are left with two extra degrees of freedom in \(\psi\) which otherwise would be associated with the right-handed charged lepton. 
Instead of introducing the second bispinor \(\psi_{\nu}\), can we use these extra degrees of freedom in \(\psi\) to describe neutrinos? Specifically, we would like to connect the neutrino wave function \(\psi_{\nu}\) with a properly transformed bottom component of \(\psi\) \[\psi=\begin{pmatrix}u_{L}\\ u_{R}\end{pmatrix},\qquad\qquad\psi_{\nu}=U\,P_{R}\psi=\begin{pmatrix}u_{R}^{\prime}\\ 0\end{pmatrix}, \tag{9}\] where \(U\) is some transformation operator, and \(u_{R}\) is the lower part of bispinor \(\psi\). It is important to clarify the following. No new physics which might stem from such an association is implied at this point. Instead, we focus on achieving the mathematical equivalency with the conventional left-handed part of EW theory while removing any redundancy from its description. Effectively, we search for a way to package the two parts of the EW iso-doublet into the single quantity \(\psi\) which will be the direct sum of the spinors \(u_{L}\) and \(v_{L}\) transformed in some way. The equivalency stems from the fact that such a procedure is fully reversible and is one-to-one. Examples of transformations that swap left- and right-handed states are well known [3, p. 44]. The Dirac bispinor representation is reducible, which is clearly seen in the chiral gamma basis, where the Lorentz generators are \(2\times 2\) block-diagonal. A general matrix \(U\) which swaps chirality states must then be of purely off-diagonal form in the chiral basis \[U\sim\begin{pmatrix}0&X\\ Y&0\end{pmatrix}\qquad\to\qquad\begin{cases}\gamma^{\mu}\\ \gamma^{\mu}\gamma^{5}\end{cases}\,, \tag{10}\] where \(X\) and \(Y\) are some \(2\times 2\) matrices. There are eight basis matrices \((\gamma^{\mu},\gamma^{\mu}\gamma^{5})\) that are purely off-diagonal in the chiral basis. Since we are dealing with chiral states only and the matrix \(\gamma^{5}\) is absorbed by chiral projectors \[\gamma^{5}P_{R/L}=\pm P_{R/L}\,, \tag{11}\] we need to consider only \(\gamma^{\mu}\) matrices. Therefore, a chirality-swapping transformation can be taken in the following form \[\psi^{\prime}_{R/L}=U\psi_{L/R}=c^{\mu}\gamma_{\mu}\psi^{*}_{L/R}\,, \tag{12}\] where \(c^{\mu}\) are some constant coefficients, and the complex conjugation might be also added. It is actually required, since the bottom spinor is associated with antiparticles which are right-handed. The charge conjugation transformation has exactly this form [7, p. 96] \[\psi^{C}(x)=i\gamma^{2}\psi^{*}(x)\,. \tag{13}\] Checking it out, we see that the neutrino spinor is obtained from \(\psi\) as \[\psi_{\nu}\sim P_{L}\psi^{C}=P_{L}i\gamma^{2}\binom{u_{L}}{u_{R}}^{*}=P_{L}\binom{i\sigma_{2}u_{R}^{*}}{-i\sigma_{2}u_{L}^{*}}\sim\binom{v_{L}}{0}\,, \tag{14}\] if \(v_{L}=i\sigma_{2}u_{R}^{*}\) is associated with the transformed right-handed part of bispinor \(\psi\). The conventional iso-doublet \(L\) is then replaced with the following bispinor \[L=\binom{\psi_{\nu}}{\psi_{L}}\qquad\leftrightarrow\qquad\psi=\binom{u_{L}}{i\sigma_{2}v_{L}^{*}}\,, \tag{15}\] where both have an equal number of nontrivial components. This representation is reversible and one-to-one, if the zero entries in \(L\) are ignored. 
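As a numerical sanity check of the identification (14), here is a small numpy sketch in the chiral basis. The conventions (upper two components left-handed, \(\gamma^{2}\) purely off-diagonal with \(\pm\sigma_{2}\) blocks) are our assumption of the standard chiral basis; it confirms that \(P_{L}\psi^{C}\) reproduces \((i\sigma_{2}u_{R}^{*},0)\) for a random bispinor.

```python
import numpy as np

# Assumed chiral-basis conventions: upper 2-spinor is left-handed,
# gamma^2 = [[0, s2], [-s2, 0]], and P_L keeps the upper block.
s2 = np.array([[0, -1j], [1j, 0]])
Z2 = np.zeros((2, 2))
g2 = np.block([[Z2, s2], [-s2, Z2]])
PL = np.diag([1.0, 1.0, 0.0, 0.0]).astype(complex)

rng = np.random.default_rng(0)
uL = rng.normal(size=2) + 1j * rng.normal(size=2)   # left-handed electron spinor
uR = rng.normal(size=2) + 1j * rng.normal(size=2)   # lower spinor of psi
psi = np.concatenate([uL, uR])

psi_C = 1j * g2 @ psi.conj()                        # charge conjugation, eq. (13)
v_L = 1j * s2 @ uR.conj()                           # identification made in eq. (14)
print(np.allclose(PL @ psi_C, np.concatenate([v_L, np.zeros(2)])))   # True
```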
Accordingly, all four EW currents (2) are unambiguously recovered as \[\begin{array}{llll}J_{+}&=2\bar{\psi}_{\nu}\gamma^{\mu}\psi_{L}&=\bar{\psi}^{C}\gamma^{\mu}\psi\,,\\ J_{-}&=2\bar{\psi}_{L}\gamma^{\mu}\psi_{\nu}&=(\bar{\psi}^{C}\gamma^{\mu}\psi)^{*}\,,\\ J_{e}&=\bar{\psi}_{L}\gamma^{\mu}\psi_{L}&=\frac{1}{2}(k^{\mu}-s^{\mu})&=\bar{\psi}\gamma^{\mu}P_{L}\psi\,,\\ J_{\nu}&=\bar{\psi}_{\nu}\gamma^{\mu}\psi_{\nu}&=\frac{1}{2}(k^{\mu}+s^{\mu})&=\bar{\psi}\gamma^{\mu}P_{R}\psi\,,\end{array} \tag{16}\] where the bilinears \(k_{\mu}=\bar{\psi}\gamma_{\mu}\psi\) and \(s_{\mu}=\bar{\psi}\gamma_{\mu}\gamma^{5}\psi\) are defined as usual. The conventional EW currents on the left side are equal to the currents on the right side identically, component-by-component. The electromagnetic and neutral currents are given as left- and right-handed parts of the total current \(k_{\mu}\) respectively if the representation (15) is used. Transformations:The key support for the compact formalism we develop here comes from the way such a combined quantity \(\psi\) transforms between inertial frames. The boost and rotation transformations are given by block-diagonal matrices \(S\) in the chiral representation of gammas \[S=\begin{pmatrix}s_{L}&0\\ 0&s_{R}\end{pmatrix}, \tag{17}\] which means that shifting from one inertial frame to another does not mix the left-handed electron states with right-handed antineutrino ones \[\psi^{\prime}=\begin{pmatrix}e^{\prime}\\ \bar{\nu}^{\prime}\end{pmatrix}=\begin{pmatrix}s_{L}&0\\ 0&s_{R}\end{pmatrix}\begin{pmatrix}e\\ \bar{\nu}\end{pmatrix}=\begin{pmatrix}s_{L}e\\ s_{R}\bar{\nu}\end{pmatrix}. \tag{18}\] This property is critical to avoid nonsensical results in the proposed framework. New representation:A word of caution should be shared. We have defined here the new representation \(\psi\) which corresponds to the left-handed EW iso-doublet \(L\). A problem might arise if one tries to apply the conventional operators, let us pick the charge conjugation operator \(C\) for example, which is typically defined in the Dirac representation, to our construct \(\psi\) which is defined in a different representation. Since any specific expression of \(C\) is representation-dependent, such a definition must be used in the representation it is defined for. It should not be a problem if one consistently uses the representation-dependent operators in their corresponding representations, since physical results must be representation-independent. Effectively, we have exploited the freedom in choosing a representation that is more convenient for the given task. The standard definition of \(C\) can still be applied to the chiral projections \(P_{L}\psi\) where \(\psi\) is defined in the new representation. The projection \(P_{L}\psi\) is the left-handed electron, which turns into the state with opposite charge upon applying the conventionally defined \(C\). Since our representation is one-to-one with two Weyl spinors, nothing is lost by combining two spinors into one \(\psi\). The electron and neutrino Weyl spinors can always be extracted from our \(\psi\) at any stage of calculations, and the standard charge conjugation operator can be applied to them individually. The price to pay for using our representation is that definitions for some operators now might look more cumbersome and include chiral projectors. However, there is always an identical mapping to the standard representation. 
That representation includes the two Weyl spinors from our \(\psi\) (the left-handed electron and neutrino of the SU2 iso-doublet), and one right-handed electron (SU2 iso-singlet) per lepton family. We did not consider the latter ones in our approach, which however does not conflict with right-handed electrons if terms with them are added to the Lagrangian. Both our and the Dirac representations are reducible, since they include two Weyl spinors each. Out of three Weyl spinors per lepton family, the Dirac one combines both electron spinors into one quantity, while we combine two SU2 spinors into one quantity, exactly as the iso-doublet \(L=(\psi_{\nu},\psi_{L})\) does. The only difference is that we removed the trivial zeros from \(L\); hence this construct \(\psi\) is as artificial as the iso-doublet \(L\) itself. In any case, these alternative combinations are equivalent to one another if proper mapping and corresponding representation-dependent operators are used. We have been inspired to come up with this new representation by the two sources [3, p. 704] and [4], as discussed in the introduction in detail. It motivated us to combine both the left-handed electron and its neutrino into our \(\psi\), since they live in the same iso-SU2 gauge representation. The benefit is that this new representation leads to a new type of plane waves under certain assumptions. Summarizing, the iso-doublet \(L=(\psi_{\nu},\psi_{L})\) is replaced with the fully equivalent bispinor \(\psi=(u_{L},\,i\sigma_{2}v_{L}^{*})\) which is the direct sum of left-handed \(u_{L}\) and right-handed \(i\sigma_{2}v_{L}^{*}\) spinors respectively. At this point, no new physics has been introduced, even if the association of the transformed left-handed neutrino with the right-handed part of \(\psi\) is suggestive. Simply, we compressed the left-handed iso-doublet \(L\), half of whose elements are zeros anyway, into the bispinor \(\psi\) which does not have trivial entries. In doing so, nothing major has been lost or gained yet. We might, though, have gained efficiency in describing the EW left-handed states by eliminating the redundant entries in the iso-doublet \(L\). ### Equation of motion and current conservation Can two conventional equations of motion (7) be written as an evolution equation for the single bispinor \(\psi\)? Remember that in both equations (7), only the left-handed parts \(\psi_{\nu}\) and \(\psi_{L}\) participate. Substituting the definition (14) into the first equation in (7), we obtain \[\Big{(}i\partial\!\!\!/+\frac{g}{2\cos\theta}\not{\!\!Z}\Big{)}(P_{L}i\gamma^{2}\psi^{*})+\frac{g}{2}\not{\!\!W}^{+}P_{L}\psi=0\,. \tag{19}\] Next, we complex conjugate it and multiply with \(i\gamma^{2}\) \[i\gamma^{2}\Big{[}-i(\gamma^{\mu})^{*}\partial_{\mu}+\frac{g}{2\cos\theta}(\gamma^{\mu})^{*}Z_{\mu}\Big{]}\left(P_{L}i\gamma^{2}\psi^{*}\right)^{*}+i\gamma^{2}\frac{g}{2}(\gamma^{\mu})^{*}W_{\mu}^{-}(P_{L})^{*}\psi^{*}\\ =\Big{[}i\gamma^{\mu}\partial_{\mu}-\frac{g}{2\cos\theta}\gamma^{\mu}Z_{\mu}\Big{]}i\gamma^{2}(P_{L}i\gamma^{2}\psi)-\frac{g}{2}\gamma^{\mu}W_{\mu}^{-}i\gamma^{2}P_{L}\psi^{*}\\ =\Big{[}i\gamma^{\mu}\partial_{\mu}-\frac{g}{2\cos\theta}\gamma^{\mu}Z_{\mu}\Big{]}P_{R}\psi-\frac{g}{2}\gamma^{\mu}W_{\mu}^{-}P_{R}i\gamma^{2}\psi^{*}\,. \tag{20}\] Two motion equations are then given as
\[\begin{array}{ll}&i\partial\!\!\!/(P_{R}\psi)-\frac{g}{2\cos\theta}\not{\!\!Z}(P_{R}\psi)-\frac{g}{2}\not{\!\!W}^{-}P_{R}\psi^{C}&=0\,,\\ &i\partial\!\!\!/(P_{L}\psi)-e\not{\!\!A}(P_{L}\psi)-\frac{g\cos 2\theta}{2\cos\theta}\not{\!\!Z}(P_{L}\psi)+\frac{g}{2}\not{\!\!W}^{-}P_{L}\psi^{C}&=0\,.\end{array} \tag{21}\] Now, the two motion equations are re-written for the left- and right-handed parts of the single bispinor \(\psi\). We see that these two parts evolve differently under the EW forces, as expected. Let us next try to get rid of chiral projections of derivatives to obtain a single motion equation for \(\psi\). For this purpose, we add the above equations together to obtain \[i\partial\!\!\!/\psi-e\not{\!\!A}P_{L}\psi-\frac{g}{2\cos\theta}\not{\!\!Z}(P_{R}+\cos 2\theta P_{L})\psi-\frac{g}{2}\not{\!\!W}^{-}\gamma^{5}\psi^{C}\\ =i\partial\!\!\!/\psi-\frac{g}{2\cos\theta}\not{\!\!Z}\psi-\bigg{(}e\not{\!\!A}-\frac{g\sin^{2}\theta}{\cos\theta}\not{\!\!Z}\bigg{)}P_{L}\psi-\frac{g}{2}\not{\!\!W}^{-}\gamma^{5}\psi^{C}=0\,. \tag{22}\] The last term makes this equation fundamentally different from the Dirac one. The neutral boson fields \(A_{\mu}\) and \(Z_{\mu}\) can be seen as influencing the particle momentum and spin, which is similar to the regular Dirac equation, since they do not mix the upper and lower spinors. The last term which includes \(\psi^{C}\) and charged EW bosons couples the left- and right-handed spinors. It is quite similar to the conventional mass term in this regard. Its role will be discussed in more detail in the next section. What happens if we subtract one equation from another instead of adding them together? We then obtain from (21) \[i\not{\partial}\gamma^{5}\psi+e\not{A}P_{L}\psi-\frac{g}{2\cos\theta}\not{Z}(P_{R}-\cos 2\theta P_{L})\psi-\frac{g}{2}\not{W}^{-}\psi^{C}\\ =\gamma^{5}\Big{[}-i\not{\partial}\psi+\gamma^{5}e\not{A}P_{L}\psi-\frac{g}{2\cos\theta}\gamma^{5}\not{Z}(P_{R}-\cos 2\theta P_{L})\psi-\frac{g}{2}\gamma^{5}\not{W}^{-}\psi^{C}\Big{]}\\ =\gamma^{5}\Big{[}-i\not{\partial}\psi+e\not{A}P_{L}\psi+\frac{g}{2\cos\theta}\not{Z}(P_{R}+\cos 2\theta P_{L})\psi+\frac{g}{2}\not{W}^{-}\gamma^{5}\psi^{C}\Big{]}\\ =\gamma^{5}\Big{[}-i\not{\partial}\psi+\frac{g}{2\cos\theta}\not{Z}\psi+\bigg{(}e\not{A}-\frac{g\sin^{2}\theta}{\cos\theta}\not{Z}\bigg{)}P_{L}\psi+\frac{g}{2}\not{W}^{-}\gamma^{5}\psi^{C}\Big{]}=0\,, \tag{23}\] which is identical to (22) since \(\gamma^{5}\) is non-singular. The equation (22) for \(\psi\) can be obtained from (21) in yet another way. Moving the chiral projectors in all terms to the left, we obtain \[P_{L}\Big{(}i\not{\partial}\psi-\frac{g}{2\cos\theta}\not{Z}\psi-\frac{g}{2}\not{W}^{-}\psi^{C}\Big{)} =0\,, \tag{24}\] \[P_{R}\Big{(}i\not{\partial}\psi-e\not{A}\psi-\frac{g\cos 2\theta}{2\cos\theta}\not{Z}\psi+\frac{g}{2}\not{W}^{-}\psi^{C}\Big{)} =0\,.\] Now taking into account the orthogonality of chiral projectors, we can make the round brackets identical \[P_{L}\Big{(}i\not{\partial}\psi-eP_{R}\not{A}\psi-[P_{L}+\cos 2\theta P_{R}]\frac{g\not{Z}\psi}{2\cos\theta}-\frac{g}{2}[P_{L}-P_{R}]\not{W}^{-}\psi^{C}\Big{)} =0\,, \tag{25}\] \[P_{R}\Big{(}i\not{\partial}\psi-eP_{R}\not{A}\psi-[P_{L}+\cos 2\theta P_{R}]\frac{g\not{Z}\psi}{2\cos\theta}-\frac{g}{2}[P_{L}-P_{R}]\not{W}^{-}\psi^{C}\Big{)} =0\,.\] Hence, the neutrino equation is obtained by the left projection of the common equation while the electron one is obtained with the right projection. This does not contradict the previous statement that the electrons and neutrinos are the left- and right-handed projections of \(\psi\) respectively. 
In all terms of both motion equations (25), the wave function is protected with \(\gamma^{\mu}\) from the left side; the projectors are flipped if moved across \(\gamma^{\mu}\). This consideration is helpful to avoid a possible confusion. Current conservation:Remarkably, the equation (22) satisfies the current conservation in exactly the same way as the regular Dirac equation does. Taking (22) and its conjugate version followed by multiplication with \(\bar{\psi}\) and \(\psi\) respectively \[i\bar{\psi}\gamma^{\mu}(\partial_{\mu}\psi)-\frac{g}{2\cos\theta}\bar{\psi}\not{\!\!Z}\psi-\bar{\psi}\bigg{(}e\not{\!\!A}-\frac{g\sin^{2}\theta}{\cos\theta}\not{\!\!Z}\bigg{)}P_{L}\psi-\frac{g}{2}\bar{\psi}\not{\!\!W}^{-}\gamma^{5}\psi^{C} =0\,,\] \[i(\partial^{\mu}\bar{\psi})\gamma^{\mu}\psi+\frac{g}{2\cos\theta}\bar{\psi}\not{\!\!Z}\psi+\bar{\psi}P_{R}\bigg{(}e\not{\!\!A}-\frac{g\sin^{2}\theta}{\cos\theta}\not{\!\!Z}\bigg{)}\psi+\frac{g}{2}\bar{\psi}^{C}\gamma^{5}\not{\!\!W}^{+}\psi =0\,, \tag{26}\] and then adding them together, we obtain \[i\partial_{\mu}(\bar{\psi}\gamma^{\mu}\psi)-\frac{g}{2}\big{(}W_{\mu}^{-}\bar{\psi}\gamma^{\mu}\gamma^{5}\psi^{C}-W_{\mu}^{+}\bar{\psi}^{C}\gamma^{5}\gamma^{\mu}\psi\big{)}=i\partial_{\mu}(\bar{\psi}\gamma^{\mu}\psi)=0\,. \tag{27}\] The only difference with the Dirac case is that we had to use the following identities \[P_{R}\gamma^{\mu}=\gamma^{\mu}P_{L}\,,\qquad\bar{\psi}\gamma^{\mu}\gamma^{5}\psi^{C}=\bar{\psi}^{C}\gamma^{5}\gamma^{\mu}\psi=0\,, \tag{28}\] which are universal in the sense that they do not depend on the representation of gammas, the EW potentials, and are valid for an arbitrary bispinor \(\psi\). Therefore, the current conservation strictly follows from the motion equation (22) without any additional constraints or assumptions. Summary:We have shown that the left-handed part of EW theory, which is defined by using two two-component spinors \(u_{L}\sim\psi_{L}\) and \(v_{L}\sim\psi_{\nu}\), can be re-written by using the single four-component bispinor \(\psi=(u_{L},u_{R})\), if we assign \(v_{L}=i\sigma_{2}u_{R}^{*}\). Two conventional equations of motion (7) are then translated into the single equation (22) which is convenient to give in the following form \[i\not{\partial}\psi-\frac{g}{2\cos\theta}\not{\!\!Z}\psi_{R}-\bigg{(}e\not{\!\!A}+\frac{g\cos 2\theta}{2\cos\theta}\not{\!\!Z}\bigg{)}\psi_{L}-\frac{g}{2}\not{\!\!W}^{-}\gamma^{5}\psi^{C}=0\,. \tag{29}\] It is fully equivalent to the left-handed part of the conventional EW formalism for leptons. This single equation describes the evolution of both electron and neutrino spinors. Its left-handed chiral projection gives the equation of motion for the transformed neutrino spinor \(i\sigma_{2}v_{L}^{*}\), while the right-handed projection describes the evolution of the electron spinor \(u_{L}\). ## 3 Plane waves The conventional (and trivial) plane wave solutions are obtained from equation (29) if the electroweak potentials \(Z_{\mu}\), \(A_{\mu}\), and \(W_{\mu}\) are set to zero. The equation (29) then becomes \(i\not{\partial}\psi=0\), whose solutions are two Weyl plane waves that are independent of each other; they describe the left-handed electron and its right-handed antineutrino respectively in the case of (29). It is, of course, in strict agreement with the conventional approach, as shown in the previous section. Less trivial solutions in the form of plane waves can be obtained from (29) if we assume that the electroweak potentials are nonzero even for free-moving leptons. 
A charged particle carries its Coulomb field (and corresponding charge) even in the absence of external fields. The conventional theory is somewhat controversial here. On the one hand, the conventional plane waves that are used in evaluating invariant amplitudes are obtained as solutions of Dirac or Weyl equations by setting electromagnetic fields to zero, so the self-fields of free-moving particles are effectively set to zero. On the other hand, the Coulomb or Uehling potentials that are carried by free particles are infinite at \(r\to 0\). Choosing the middle ground, we assume instead that the fields of free-moving leptons are finite and do not necessarily vanish, as they are taken to in the conventional approach. In the approximation of free-propagating plane waves that we use here, finite values of the electroweak potentials (\(A_{0}\), \(Z_{0}\), and \(W_{0}\)) at the location of the bare charge are just a set of numbers, with no space-time dependence 3. We will not speculate here what form a more fundamental theory (with fewer infinities and leading to finite self-potentials) might have; all we need for our phenomenological approach to proceed is the assumption that finite values of the four electroweak potentials at the bare charge origin exist and that they are not necessarily equal to zero. Footnote 3: Later, the superscript \(self\) will be added to these potentials to indicate that they are viewed as potentials of free-moving particles. One interesting class of solutions of (29) can be obtained under several additional constraints. First, even if two left-handed leptons from the same lepton family are combined into iso-doublets, physically they propagate as separate and distinct particles, per the conventional view. This class of solution is obtained from (29) by setting the fields to zero. Second, in the compact EW formalism we propose here, the iso-doublets are replaced with single bispinors whose left- and right-handed parts are associated with charged leptons and corresponding antineutrinos respectively. Extending the conventional case, we can attempt to find solutions as superpositions of upper and bottom spinors \[\psi=\sqrt{m_{e}}\binom{\chi}{0}+\delta\sqrt{m_{\nu}}\binom{0}{\xi}\,, \tag{30}\] where \(m_{e}\) and \(m_{\nu}\) are electron and neutrino masses respectively, \(\chi\) and \(\xi\) are two arbitrary spinors, and \(\delta\) is some small parameter. The linear composition (30) is given in the rest frame of both electron and antineutrino. It assumes that a hypothetical lepton state, which is described by (30), is left-handed at rest with an infinitesimally small amount of right-handed antineutrino component. Put differently, the state (30) is predominantly left-handed and negatively charged at rest. We have already shown in subsection 2.3 that the current which is originated by this combined quantity \(\psi\) is conserved absolutely, so there should not be any concern related to charge conservation. The form (30) is predominantly left-handed in the rest frame; it can however acquire an arbitrary chirality under boosts. The unique feature of our representation is the direct connection between the state chirality and its charge, which is given by the ratio of its upper and lower components. 
Boosting (30) in the \(z\)-direction, we obtain \[\psi^{\prime}=\begin{pmatrix}e^{\frac{\eta}{2}}P_{d}+e^{-\frac{\eta}{2}}P_{u}&0\\ 0&e^{\frac{\eta}{2}}P_{u}+e^{-\frac{\eta}{2}}P_{d}\end{pmatrix}\begin{pmatrix}\sqrt{m_{e}}\chi\\ \delta\sqrt{m_{\nu}}\xi\end{pmatrix}, \tag{31}\] where \(P_{u/d}=(1\pm\sigma_{3})/2\) are the spin \(z\)-projection projectors. Now, with the upper spinor in spin-down state \(\chi=\begin{pmatrix}0\\ 1\end{pmatrix}\), the upper component remains dominant if the rapidity \(\eta\to\infty\) independently of the neutrino spin orientation. However, if both upper and lower components are spin-up, then the left-handed charged component will become smaller than the right-handed neutral one at sufficiently high rapidity. Since the spin projection can also be changed by rotations, this behavior means that the superposition (30) does not have invariant values of chirality or charge. It is however possible to apply further refinements to (30) to ensure that its degree of chiral polarization and charge will remain unchanged under boosts and rotations. Let us consider a general boost \(S(\Lambda)\) in the chiral representation of gammas. We do not need to consider rotations, since they do not change the magnitudes of the (dotted and undotted) spinors. The bispinor \(\psi\) changes under the boost with rapidity \(\eta\) in direction \(\vec{\mathbf{m}}\) as \[\psi^{\prime}=S(\Lambda)\begin{pmatrix}\chi\\ \xi\end{pmatrix}=\begin{pmatrix}\cosh\frac{\eta}{2}-\sinh\frac{\eta}{2}\,\vec{\mathbf{m}}\cdot\boldsymbol{\sigma}&0\\ 0&\cosh\frac{\eta}{2}+\sinh\frac{\eta}{2}\,\vec{\mathbf{m}}\cdot\boldsymbol{\sigma}\end{pmatrix}\begin{pmatrix}\chi\\ \xi\end{pmatrix}\\ =\begin{pmatrix}\chi_{1}\cosh\frac{\eta}{2}-\sinh\frac{\eta}{2}(\chi_{1}m_{3}+\chi_{2}m_{1}-i\chi_{2}m_{2})\\ \chi_{2}\cosh\frac{\eta}{2}+\sinh\frac{\eta}{2}(\chi_{2}m_{3}-\chi_{1}m_{1}-i\chi_{1}m_{2})\\ \xi_{1}\cosh\frac{\eta}{2}+\sinh\frac{\eta}{2}(\xi_{1}m_{3}+\xi_{2}m_{1}-i\xi_{2}m_{2})\\ \xi_{2}\cosh\frac{\eta}{2}-\sinh\frac{\eta}{2}(\xi_{2}m_{3}-\xi_{1}m_{1}-i\xi_{1}m_{2})\end{pmatrix}. \tag{32}\] This representation is given in the frame where the bispinor components are given by \(\chi_{i}\) and \(\xi_{i}\) respectively. We must find certain spinor polarizations that do not change the chirality of \(\psi\) under boosts. Let us assume that in the given frame both \(\chi_{2}=\xi_{1}=0\), which turns (32) into \[\psi^{\prime}=\begin{pmatrix}\chi_{1}(\cosh\frac{\eta}{2}-\sinh\frac{\eta}{2}m_{3})\\ 0\\ 0\\ \xi_{2}(\cosh\frac{\eta}{2}-\sinh\frac{\eta}{2}m_{3})\end{pmatrix}, \tag{33}\] where we also set \(m_{1}=m_{2}=0\). It is then immediately clear that such a bispinor will not change its degree of chirality (the ratio of the magnitude of the upper spinor to that of the bottom one) independently of rapidity or boost direction. While the expression (33) is given for the boost in the \(z\)-direction, the fact that the magnitudes of the boosted upper and bottom spinors will not change relative to one another under a general boost can be easily seen from (32). Put differently, if a bispinor is given by such a form in one frame and it is predominantly left- or right-handed (\(|\chi_{1}|\gg|\xi_{2}|\) or \(|\chi_{1}|\ll|\xi_{2}|\) ), it will not change its chirality polarization under boosts or rotations. Ditto for the case \(\chi_{1}=\xi_{2}=0\). 
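This boost invariance of the chirality ratio is easy to check numerically. The sketch below (helper names are ours) applies random boosts of the block form in (32) to a bispinor with \(\chi_{2}=\xi_{1}=0\) and confirms that \(|\chi^{\prime}|/|\xi^{\prime}|\) stays fixed.

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]

def boost_blocks(eta, m):
    """s_L, s_R blocks of the boost S(Lambda) as in eq. (32)."""
    ms = sum(mi * si for mi, si in zip(m, sig))
    c, s = np.cosh(eta / 2), np.sinh(eta / 2)
    return c * np.eye(2) - s * ms, c * np.eye(2) + s * ms

rng = np.random.default_rng(1)
chi = np.array([rng.normal() + 1j * rng.normal(), 0.0])   # chi_2 = 0
xi = np.array([0.0, rng.normal() + 1j * rng.normal()])    # xi_1 = 0
ratio0 = np.linalg.norm(chi) / np.linalg.norm(xi)

for _ in range(5):
    eta = rng.uniform(0.0, 3.0)
    m = rng.normal(size=3)
    m /= np.linalg.norm(m)                                # unit boost direction
    sL, sR = boost_blocks(eta, m)
    ratio = np.linalg.norm(sL @ chi) / np.linalg.norm(sR @ xi)
    print(np.isclose(ratio, ratio0))                      # True for every boost
```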
The split of any bispinor (it is equivalent to the EW iso-doublet in our representation) into these two configurations of spinor components is given by the rank-two projectors \(S_{\pm}=(1\pm\gamma^{0}\gamma^{3})/2\) which filter spinor states in the chiral representation as \[\psi_{s_{z}=-1/2}=S_{-}\psi=\begin{pmatrix}0\\ \chi_{2}\\ \xi_{1}\\ 0\end{pmatrix},\qquad\psi_{s_{z}=+1/2}=S_{+}\psi=\begin{pmatrix}\chi_{1}\\ 0\\ 0\\ \xi_{2}\end{pmatrix}. \tag{34}\] Therefore, the pure lepton states \(\psi_{l}\) are obtained by applying the spin projector to the iso-doublet \(\psi\) \[\psi_{l}=S_{\pm}\psi\,. \tag{35}\] Depending on which spinor component is dominant, it could be associated with charged lepton-like or corresponding antineutrino-like states. Summarizing, a new class of plane waves in the proposed framework is given as the fully spin- and predominantly chirality-polarized state of the EW iso-doublet in some chosen frame; it is convenient to choose the rest frame for such a purpose to stay consistent with the conventional way of particle classification. Currents at rest:If a bispinor \(\psi\) is given by the forms (34) at rest, the four EW currents (16) that can be obtained from a single \(\psi\) are evaluated as \[\begin{array}{llll}J_{+}&=2\bar{\psi}_{\nu}\gamma^{\mu}\psi_{L}&=\bar{\psi}^{C}\gamma^{\mu}\psi&=2(i\chi\sigma_{2}\xi)\,V\,,\\ J_{-}&=2\bar{\psi}_{L}\gamma^{\mu}\psi_{\nu}&=(\bar{\psi}^{C}\gamma^{\mu}\psi)^{*}&=2(i\chi\sigma_{2}\xi)^{*}\,V\,,\\ J_{e}&=\bar{\psi}_{L}\gamma^{\mu}\psi_{L}&=\bar{\psi}\gamma^{\mu}P_{L}\psi&=(|\xi|^{2}+|\chi|^{2})\,V\,,\\ J_{\nu}&=\bar{\psi}_{\nu}\gamma^{\mu}\psi_{\nu}&=\bar{\psi}\gamma^{\mu}P_{R}\psi&=(|\xi|^{2}-|\chi|^{2})\,V\,,\end{array} \tag{36}\] where \(V^{\mu}=(1,0,0,-1)\) is a light-like vector. Hence, all four EW currents that are generated by a single free lepton state \(\psi\) are parallel and light-like. This consideration is important for at least two reasons. First, the four-momentum of a massive lepton at rest is given as \(p_{\mu}=(p_{0},0,0,0)\). Therefore, a free lepton-like state in our framework at rest ends up with only two independent four-vectors: the energy-momentum \(p_{\mu}\) and the spin vector \(s_{\mu}=(0,0,0,s_{3})\), which can be obtained as a linear combination of \(p_{\mu}\) with any current from (36). Keep in mind however that by choosing forms (34) we chose the rest frame with spin directed along the \(z\)-axis. No other independent non-null vectors can be obtained in this case. This is in full agreement with observations that massive fermions at rest possess only energy-momentum and spin. If the EW currents in (36) pointed in different directions, more independent non-null vectors could be obtained for a free lepton, which would lead to a clear contradiction with conventional theory and experiment. Second, it places certain restrictions on possible forms of the self-induced potentials \(A^{self}\), \(Z^{self}\), or \(W^{self}_{\pm}\). As we discussed before and in more detail in the Appendix, we do not assume that the EW potentials of free-moving leptons at charge origins are describable by the Maxwell or Klein-Gordon equations. Instead, we assume that in the framework of free-propagating plane waves, these self-potentials are either proportional to the particle momentum \(p_{\mu}\) or the particle spin \(s_{\mu}\), or are some combination of these two vectors. 
Therefore, potentials of free-moving leptons, let us pick \(Z^{self}_{\mu}\), could be given as \[Z^{self}_{\mu}=(Z_{0},0,0,Z_{3})\,, \tag{37}\] under the requirement that \(|Z_{0}|\neq|Z_{3}|\). Keep in mind again that by choosing forms (34) we chose the rest frame with spin directed along the \(z\)-axis 7. For example, the electromagnetic potential \(A_{\mu}\) of an electron at rest is traditionally given as \((A_{0},0,0,0)\) where \(A_{0}\) is the electric potential; it might still have a small \(A_{3}\) component which is related to the electron spin or an induced magnetic field. A similar consideration applies to both \(Z^{self}\) and \(W^{self}_{\pm}\). Footnote 7: Having filtered spin states as in (34) once, we can move into any other inertial frame by boosts and rotations. ### EW plane waves Assuming that the self-action fields \(A^{self}\), \(Z^{self}\), and \(W^{self}_{\pm}\) originate from and are carried by a free-moving lepton, the corresponding motion equation (29) can be given as \[\not{p}\psi-\frac{g}{2\cos\theta}\not{Z}^{self}\psi_{R}-\left(e\not{A}^{self}+\frac{g\cos 2\theta}{2\cos\theta}\not{Z}^{self}\right)\psi_{L}-\frac{g}{2}e^{2ipx}\not{W}^{self}_{-}(x)\gamma^{5}\psi^{C}=0\,, \tag{38}\] where the plane wave ansatz \(\psi(x)=\psi e^{-ipx}\) was used; here \(p_{\mu}\) is the phase momentum to be found. The last term is explicitly phase-dependent while all other terms are phase-independent. For consistency, we must require that the phase factor cancels out in the combination \(e^{2ipx}W^{self}_{-}(x)\). We cannot eliminate it by gauge-transforming the three other gauge potentials, since an extra term would then be generated by the derivative. This is a clear indication that the phases of the charged self-fields \(W^{self}_{\pm}(x)\) must be correlated with the phases of the corresponding charged currents (36), which are also phase-dependent. It must be \(J_{+}(x)\) in the case of \(W^{self}_{-}(x)\), since then its phase dependence offsets the phase factor in (38) \[J_{+}(x)=\bar{\psi}^{C}(x)\gamma^{\mu}\psi(x)=e^{-2ipx}\underbrace{\bar{\psi}^{C}\gamma^{\mu}\psi}_{J_{+}}\,. \tag{39}\] For the purposes of this study, we do not need to specify a functional dependence between the self-potentials \(W^{self}_{\pm}(x)\) and the corresponding charged currents \(J_{\mp}(x)\). All that is needed is the phase correlation between the charged potentials and charged currents, and the form (37), which defines the self-potentials in the chosen rest frame. A quick comment: after canceling the phase dependence (\(x\)-dependence in this case), the potential \(W_{-}\) is assumed to be spacetime-independent in the manipulations below. Expectations:As we discussed at the end of subsection 2.3, the equation (22) or (38) is defined for the quantity \(\psi=(u_{L},i\sigma_{2}v_{L}^{*})\). Remember that its two chiral projections \(P_{L}\psi=(u_{L},0)\) and \(P_{R}\psi=(0,i\sigma_{2}v_{L}^{*})\) correspond to the conventional left-handed charged lepton and its antineutrino respectively. If we accept that these chiral projections can propagate independently of each other, we end up with exactly the conventional case, with no new physics. It is straightforward to demonstrate that (22) or (38) describes the motion of the two parts of the EW iso-doublet in exact correspondence with the conventional case. Instead, we can accept the view that the two parts of the EW iso-doublet might not always be separable, in which case the iso-doublet (which is represented by \(\psi\) in our framework) propagates as a whole entity, with both nonzero upper and lower spinors. 
However, to match reality, it must then have two modes: neutrino-like and charged lepton-like. In solving the plane wave equation (38), the two types of solutions are distinguished by having either \(p^{2}=0\) or \(p^{2}=m^{2}\) respectively for the first and second modes. For the second (massive) mode, both upper and lower spinors in \(\psi=(\chi,\xi)\) must be nonzero and have the form (34) at rest. The mass or inertia is generated by the lepton-boson interaction terms that are included in (38). The key expectation is that the magnitude of the upper spinor is much larger than that of the lower one (\(|\chi|\gg|\xi|\)). For the neutrino-like mode, with zero or near-zero masses, we expect that \(|\chi|\ll|\xi|\). The corresponding eigenvalue for \(p^{2}\) must either be zero or be near zero. Our framework can easily accommodate nonzero neutrino masses, as will be shown below. ### Solutions The equation (38) is a system of linear algebraic equations for the components of \(\psi\). Finding a general solution is quite a daunting task, since it depends on four four-vector coefficients. Compare it with the conventional Dirac case of plane waves, for which the general form depends only on \(p_{\mu}\) and a unit spinor. We start by finding solutions in the rest frame. Having eliminated the phase factor from (38) for the plane wave motion, it is convenient to introduce the following notations \[M\psi=\not{p}\psi-\not{c}\,\psi_{R}-\not{b}\,\psi_{L}-\not{d}\,\gamma^{5}\psi^{C}=0\,, \tag{40}\] where the vector coefficients are defined as \[\begin{array}{ll}c_{\mu}&=\frac{g}{2\cos\theta}Z_{\mu}^{self}\,,\\ b_{\mu}&=eA_{\mu}^{self}+\frac{g\cos 2\theta}{2\cos\theta}Z_{\mu}^{self}\,,\\ d_{\mu}&=\frac{g}{2}W_{-\,\mu}^{self}\,,\end{array} \tag{41}\] where both \(c_{\mu}\) and \(b_{\mu}\) are real while \(d_{\mu}\) is complex-valued. Since the self-induced potentials are viewed as unknown, the four-vectors \(c_{\mu}\), \(b_{\mu}\), and \(d_{\mu}\) are unknown as well. Our goal is to check how they must be restricted to obtain lepton masses and physically acceptable solutions. If the charged boson field \(W_{-}=0\), then the equation (40) splits into two Weyl-like equations which are similar to the conventional ones (except that \(p_{\mu}\) is replaced by \(p_{\mu}-b_{\mu}\) and \(p_{\mu}-c_{\mu}\) for left- and right-handed parts of \(\psi\) respectively). It is the last term in (40) that makes the problem nontrivial. To see the connection between the two parts of \(\psi=(\chi,\xi)\), let us re-write the main equation in the spinor representation \[\begin{cases}(p-c)\bar{\sigma}\,\xi+d\bar{\sigma}\,\chi^{C}&=0\,,\\ (p-b)\sigma\,\chi+d\sigma\,\xi^{C}&=0\,,\end{cases} \tag{42}\] where \(d\sigma=d_{\mu}\sigma^{\mu}\) and \(d\bar{\sigma}=d_{\mu}\bar{\sigma}^{\mu}\), with \(\sigma^{\mu}=(1,\boldsymbol{\sigma})\) and \(\bar{\sigma}^{\mu}=(1,-\boldsymbol{\sigma})\) the four Pauli matrices, and the conjugate spinors are defined as \[\chi^{C} =i\sigma_{2}\chi^{*}\,, \qquad\xi^{C} =i\sigma_{2}\xi^{*}\,. \tag{43}\] In the regular Dirac equation, it is the mass term that couples the two spinors. In the extended equation (40), it is the term with the charged boson field \(W_{-}\). Rest frame:The rest frame is defined by setting \(\vec{\mathbf{p}}=0\) and choosing one of the two spin-polarized forms (34) for \(\psi\). By selecting \(s_{z}=+1/2\) for now, we set \(\chi_{2}=\xi_{1}=0\). The system of linear equations for the two remaining components \(\chi_{1}\) and \(\xi_{2}\) splits into two subsystems. 
The first one depends only on the components \(c_{1,2}\), \(b_{1,2}\), and \(d_{1,2}\) of the vector coefficients; it does not include \(p_{0}\) at all. In the rest frame with the spin projection aligned along the \(z\)-axis, the self-induced potentials (thus the coefficients \(c_{\mu}\), \(b_{\mu}\), and \(d_{\mu}\)) are expected to have the very specific form (37). Therefore, the first subsystem turns to zero in such a rest frame. Instead, the second subsystem is nontrivial and has the following determinant \[\sqrt{\det M_{0}}=(b_{0}+b_{3}-p_{0})(c_{0}+c_{3}-p_{0})+\left|d_{0}+d_{3}\right|^{2}, \tag{44}\] where \(M_{0}\) is the matrix \(M\) evaluated in the rest frame. This expression is valid for both spin projections or both spinor forms (34). The eigenvalues depend only on the time- and \(z\)-components of the self-induced potentials, see the discussion in Section 3. Two values of \(p_{0}\) are given as \[2p_{0}=b_{0}+b_{3}+c_{0}+c_{3}\pm\sqrt{(b_{0}+b_{3}-c_{0}-c_{3})^{2}-4\left|d_{0}+d_{3}\right|^{2}}\,. \tag{45}\] One eigenvalue vanishes (\(m_{1}=0\)) if \[(b_{0}+b_{3})(c_{0}+c_{3})=-\left|d_{0}+d_{3}\right|^{2}, \tag{46}\] in which case the second eigenvalue becomes \[m_{2}=b_{0}+b_{3}+c_{0}+c_{3}\,. \tag{47}\] We will deal with small neutrino masses shortly; the constraint (46) is satisfied only approximately in such a case. One can immediately see the seesaw-like relation in (46), which makes one eigenvalue large if we force the second one to be small. First however, we have to show how these eigenvalues can be associated with effective masses of freely-propagating lepton states. In any case, the relation (46) naturally appears within the proposed formalism. One should not be overly concerned with the explicitly non-covariant form of expressions (44)-(47). They are derived from the equation \(\det M=0\) which is Lorentz-invariant; ditto for its roots. Having evaluated them in one inertial frame (the rest one, for example), the invariant eigenvalues are the same for all other inertial frames. The solutions for both eigenvalues are given as \[\psi_{1}=N_{1}\!\left(\begin{smallmatrix}\chi_{1}\\ 0\\ 0\\ -\frac{d_{0}+d_{3}}{c_{0}+c_{3}}\chi_{1}^{*}\end{smallmatrix}\right), \qquad\psi_{2}=N_{2}\!\left(\begin{smallmatrix}\chi_{1}\\ 0\\ 0\\ \frac{d_{0}+d_{3}}{b_{0}+b_{3}}\chi_{1}^{*}\end{smallmatrix}\right), \tag{48}\] where the normalization constants \(N_{1/2}\) are not specified yet. Following our previous discussion, we expect that the magnitude of the upper spinor must be much smaller (larger) than that of the lower spinor for the neutrino-like (charged lepton-like) mode. If we request that \[\left|c_{0}+c_{3}\right|\ll\left|d_{0}+d_{3}\right|\ll\left|b_{0}+b_{3}\right|, \tag{49}\] then the first and second solutions can be associated with the neutrino-like and charged lepton-like states respectively. Since the second eigenvalue, which is associated with the electron mass, is positive, it immediately follows that \[b_{0}+b_{3}>0\,,\qquad c_{0}+c_{3}<0\,. \tag{50}\] Remembering the definitions (41) and speaking in relative terms, one can say the following about the self-potentials \(A^{self}\) and \(Z^{self}\). The combination of components \((A_{0}+A_{3})\) is large and positive, while \((Z_{0}+Z_{3})\) is small and negative. 
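A short sympy cross-check of the rest-frame algebra: with the shorthand \(B=b_{0}+b_{3}\), \(C=c_{0}+c_{3}\), \(D=|d_{0}+d_{3}|^{2}\) (our symbol names), it reproduces the roots (45), the seesaw condition (46) via the product of eigenvalues, and the small-eigenvalue expansion used in the next step.

```python
import sympy as sp

B = sp.symbols('B', positive=True)             # B = b0 + b3, dominant by (49)-(50)
C, Dd, p0, t = sp.symbols('C Dd p0 t', real=True)

# eq. (44): (B - p0)(C - p0) + Dd = 0
roots = sp.solve((B - p0) * (C - p0) + Dd, p0)
print(roots)                                   # (B + C +/- sqrt((B - C)**2 - 4*Dd))/2

# product of the eigenvalues: one vanishes iff B*C = -Dd, the seesaw form (46)
print(sp.expand(roots[0] * roots[1]))          # B*C + Dd

# scale C, Dd by t << 1 and expand the small root: leading order of (51), m1 ~ C + Dd/B
rts = sp.solve((B - p0) * (t * C - p0) + t * Dd, p0)
small = [r for r in rts if sp.limit(r, t, 0) == 0][0]
print(sp.series(small, t, 0, 2))               # C*t + Dd*t/B + O(t**2)
```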
To introduce a nonzero neutrino mass, we have to expand the square root in (45) over the two small parameters \((c_{0}+c_{3})/(b_{0}+b_{3})\) and \(\left|d_{0}+d_{3}\right|/(b_{0}+b_{3})\), which leads to \[\begin{split} m_{1}&=c_{0}+c_{3}+\frac{\left|d_{0}+ d_{3}\right|^{2}}{b_{0}+b_{3}}+\mathcal{O}(\ldots)\,,\\ m_{2}&=b_{0}+b_{3}-\frac{\left|d_{0}+d_{3}\right|^{2 }}{b_{0}+b_{3}}+\mathcal{O}(\ldots)\,.\end{split} \tag{51}\] Therefore, the neutrino-like mass \((m_{1})\) is determined by the self-induced potentials \(Z^{self}\) and \(W_{-}^{self}\), as expected. The electromagnetic self-potential \(A^{self}\) is the largest one among all four EW fields. Using the definitions (41) and assuming that the time components are much larger than the \(z\)-components in the rest frame under consideration, the above expressions are rewritten as \[\begin{split} m_{\nu}&=\frac{g}{2\cos\theta}Z_{0}+ \frac{g}{4\sin\theta}\frac{\left|W_{0}\right|^{2}}{A_{0}}\,,\\ m_{e}&=eA_{0}+\frac{g\cos 2\theta}{2\cos\theta}Z_{0}- \frac{g}{4\sin\theta}\frac{\left|W_{0}\right|^{2}}{A_{0}}\,,\end{split} \tag{52}\] where we also dropped the superscript from the potentials to reduce clutter. The above expressions give the mass eigenvalues for a single lepton family in terms of the EW self-potentials. The two terms in the expression for \(m_{\nu}\) are probably close in magnitude, though the second term must be larger than the first one, since \((c_{0}+c_{3})<0\). The relative smallness of the first mass eigenvalue might also come from the fact that the two terms in the expression for \(m_{\nu}\) have opposite signs. By contrast, the first term in the expression for \(m_{e}\) is much larger than the other two. Even if both the neutral \(Z\) and the charged \(W\) self-interactions contribute, the electromagnetic self-interaction is still the dominant contribution to \(m_{e}\). ## 4 Summary We derived the equation (29) which extends the Dirac- and Weyl-like equations to the EW case. It is given for the quantity \(\psi\), which represents one weak lepton iso-doublet (two Weyl spinors). We have also found its solutions in the form of plane waves under certain assumptions. The two types of solutions are distinguished by different values of the mass eigenvalues; the connection to the conventional case, which corresponds to zero eigenvalues, is also given. We named these two new types of solutions neutrino-like and charged lepton-like, since it remains open whether they can represent real massive neutrinos and charged leptons. Though it will not be addressed in this work (our focus has been on developing an alternative and strictly equivalent description of the conventional EW approach to the left-handed lepton sector), some additional considerations are given in the Appendix. The equations (29) and (40) do not have a mass term in the conventional sense; however, the scale of the phase momentum (energy-momentum vector \(p_{\mu}\)) is clearly set by the other vector coefficients \[[p_{\mu}]\sim[A_{\mu}^{self}]\sim[Z_{\mu}^{self}]\sim[W_{\mu}^{self}]\,, \tag{53}\] which are present in the main equation. Physically, it means that the mass value (its bare value) is determined by the interaction of the free-moving lepton-like state with its EW self-potentials. The structure of the self-interaction terms is given by the regular EW terms; however, the magnitudes of these self-potentials are free parameters. 
The model we developed here is phenomenological, since it does not predict the values of these self-potentials, or explain why there are three lepton generations, for which extensions of the relevant gauge groups might be required [9]. However, the model does show how to define neutrino-like masses as an alternative to the Dirac and Majorana models. If one insists on a scalar mass term, then the theory has to deal with either an inert right-handed sector or the nonconservation of lepton numbers [4, 10]. Instead, the mass eigenvalues in our model are determined by the well-known lepton-boson interaction terms, which, however, are not scalar. The proposed framework can be straightforwardly meshed with the Higgs mechanism by simply adding the right-handed Lagrangian part, since the left-handed Lagrangian has not been changed. Both mechanisms are truly complementary to each other. For example, the proposed mechanism can explain the bare masses in the lowest part of the mass spectrum, while heavier fermion masses (which are closer to the Higgs mass) can be determined by the coupling to the Higgs. It suggests a possible explanation of why the coupling to the scalar Higgs field does not lead to unrealistically high energy densities (which would happen if everything coupled to the Higgs). It is also possible to extend the proposed model to higher-order SMEFT terms by adding new interaction terms to the equations of motion (7). Similarly, the model is straightforwardly extendable to the quark sector, which will be tackled in future work. Remarkably, we also managed to derive the analog of the seesaw relation, see the expression (46). Since the combinations \(b_{0}+b_{3}\sim m_{e}\) and \(c_{0}+c_{3}\sim-m_{\nu}\) were shown to be measures of the electron and neutrino masses, respectively, expression (46) can be rewritten as \[m_{e}\,m_{\nu}\sim\left|W_{0}^{self}\right|^{2}, \tag{54}\] where again the \(z\)-component of the self-induced field \(W_{-}\) is neglected. Therefore, the scale of the product \(m_{e}m_{\nu}\) is set by the squared magnitude of the self-induced charged field. (For a rough sense of scale, taking \(m_{e}\simeq 0.5\;\mathrm{MeV}\) and \(m_{\nu}\simeq 0.1\;\mathrm{eV}\) would place \(|W_{0}^{self}|\) at the level of a few hundred eV.) We managed to derive the seesaw-like relation between the lepton-like masses and the self-potentials in the rest frame. However, no mass values (bare values) could be found without some additional input. Obtaining a second relation, which would connect the strength of these potentials with the fermion field amplitudes, would allow finding the lepton-like masses in the proposed framework. Following in the footsteps of the conventional theory, the next tasks would be to apply the framework to quarks and bosons. At this point, we do not see any fundamental obstacles to extending the proposed model. However, any of these extensions is a big task in itself that clearly takes us outside the boundaries of this manuscript. The solutions were, however, derived under certain additional assumptions, which lead to nonzero mass eigenvalues for both types of solutions. In the proposed new representation, there also exists a direct link between the chirality polarization (the ratio between the upper and lower spinors) and the electric charge of these lepton-like states. The question of whether these new types of plane waves can represent real neutrinos and charged leptons is open 4 and will not be addressed in this work. Here, however, we can share some additional considerations pertinent to this topic. Footnote 4: For this reason, we call these new solutions neutrino-like and charged lepton-like states. In the electroweak theory, the fermion masses are generated by coupling to scalar fields. 
The question regarding lepton masses is effectively replaced by the question of why a given lepton type has a specific Yukawa coupling strength to the Higgs field [11]. Since the Higgs couplings of leptons vary by twelve orders of magnitude, the question is whether some other mechanisms of mass generation exist. The idea of radiative fermion masses was recently reviewed in [12]. In such models, the heaviest fermions still receive their mass by coupling to the Higgs at tree level. Lighter fermions, however, acquire their masses in higher-order loops with virtual heavy particles. Taking into account that the amplitudes of high-order loops are severely suppressed, this mechanism might potentially explain the vastness of the mass spectrum. In any case, nearly all of the numerous mechanisms of lepton mass generation proposed in the literature rely on coupling to scalar fields, even in multidimensional extensions of the SM [13]. Is it possible to introduce an alternative (and complementary) way of generating masses which would not contradict the well-established results? Even though the Higgs is a key and proven part of the mainstream theory, this does not mean that there is no additional mechanism that can contribute to the masses of the lightest leptons, which are many orders of magnitude lighter than the Higgs. It is still a very active research area in which different modifications of the original Higgs mechanism are proposed and investigated, see examples of recent works [14, 15, 16, 17, 18, 19]. Analogously, this new representation can be leveraged into a consistent extension of the conventional theory, without contradicting the existing, well-proven results. The biggest challenge for an alternative mechanism is how to introduce mass scales into the field equations in a covariant and non-controversial way, similar to the Higgs mechanism. In the latter, the mass terms are scalar; thus they must be of Dirac or Majorana type, with their own sets of challenges [4, 10, 20, 21]. The framework that is proposed in this study allows generating effective masses without scalar terms. Hence it avoids the necessity of choosing between the Dirac and Majorana models of massive neutrinos. It might potentially open new possibilities in the search for new physics beyond the SM, in addition to what is already being actively discussed, see selected examples in [22, 23, 24, 25, 26, 27]. Additionally, the amount of literature on neutrino-related topics is immense and will not be reviewed here; however, we would like to highlight several recent reviews [28, p. 285], [29], [30] and references therein. **Relation to Higgs and symmetry breaking:** It is important to clarify the relation of the proposed mechanism to the Higgs and to the symmetry breaking that leads to mass generation in the conventional approach. The relevant terms of the electroweak Lagrangian are given as \[\mathcal{L}_{ew}=i\bar{\psi}_{L}\gamma^{\mu}\partial_{\mu}\psi_{L}-f(\lambda+ \chi)\bar{\psi}_{R}\psi_{L}-eA_{\mu}\,\sum_{l}\bar{\psi}_{L}\gamma^{\mu}\psi_{ L}+\ldots\,,\] (A.1) where \(\psi_{L}\) and \(\psi_{R}\) are the left- and right-handed leptons, \(f\) is the coupling strength to the Higgs field \(\chi\), whose vacuum value is \(\lambda\), and \(A_{\mu}\) is the electromagnetic (EM) field. The terms shown are the kinetic energy of the left-handed lepton field \(\psi_{L}\), the lepton-Higgs interaction term, and the lepton-EM interaction term, respectively. The dots represent the corresponding terms for the right-handed states, neutrinos, charged and neutral bosons, and the Higgs. 
They are not shown explicitly in (A.1), since their roles remain unchanged in the proposed approach. The second term in (A.1), which includes the right-handed states of leptons, breaks the SU(2) symmetry of the electroweak Lagrangian. As a result of this symmetry breaking and the chosen Higgs vacuum state, the particles (leptons, electroweak bosons \(W_{\mu}\), \(Z_{\mu}\), and quarks) all acquire masses while photons remain massless. Next, taking into account the radiative corrections to the self-energy, the effective mass of a lepton \(m_{l}\) can be given as \[m_{l}=m_{H}+m_{A}^{rad}+\ldots\,,\] (A.2) where \(m_{H}=f\lambda\) is the mass due to the Higgs coupling and \(m_{A}^{rad}\) is the contribution to the lepton self-energy from virtual photons, which are represented by the field \(A_{\mu}\). The dots represent radiative corrections generated by interactions with other types of virtual particles. We are still squarely within the conventional framework, in which the techniques for evaluating these radiative corrections are well established, however complicated they might be, especially in the case of hadronic vacuum polarization. How can an additional effective mass appear in the equations of motion without violating translation invariance and Lorentz covariance? Briefly, it can be outlined as follows. Let us assume that the potential \(A_{\mu}\) has three parts \[A_{\mu}=A_{\mu}^{ext}+\partial_{\mu}\chi+A_{\mu}^{self}\,,\] (A.3) where \(\chi\) is an arbitrary scalar function (gauge), \(A_{\mu}^{ext}\) is an external field, while \(A_{\mu}^{self}\) is the field of the free-moving charged particle. In the rest frame, it would be given by the Coulomb or Uehling potential \(\phi(r)\), the latter being the former modified by the vacuum polarization around bare charges. We will consider only free-moving charges in this work, in which case \(A_{\mu}^{ext}\) is zero. Even after renormalization, the part \(A_{\mu}^{self}\sim\phi(r)\) remains infinite at the location of the bare charge (\(r\to 0\)), while the contributions of virtual photons to the self-energy (which depend on the regularization scale) are captured by the radiative corrections. This infinite value of the static potential \(A_{\mu}^{self}\sim\phi(r\to 0)\) at the location of the bare charge is largely ignored in the conventional field theory, at least in regard to particle masses, even though it has frequently been speculated in the classical case that it should be connected to the masses of charged particles (the electromagnetic origin of mass). In this work, we develop a phenomenological approach within which a more fundamental theory of electroweak interactions can be applied to predict experimental results. We then assume that such a theory would lead to finite values of the electroweak potentials (\(A_{0}\), \(Z_{0}\), and \(W_{0}\)) at the location of the bare charge. In the approximation of free-propagating plane waves that we use here, they are just a set of numbers, with no space-time dependence. We will not speculate here on what form such a theory might take; all we need for our phenomenological approach to proceed is the assumption that finite values of the four electroweak potentials at the bare-charge origin exist. Next, if we are allowed to assume that the electroweak potentials of free-moving leptons, including, for example, the EM one \(A_{\mu}^{self}\sim\phi(r\to 0)\), are finite at the charge origin, we can derive several nontrivial relations between the masses of charged leptons and their neutrinos by using the conventional electroweak Lagrangian. 
Since the terms that describe the lepton-Higgs coupling and the lepton-EM interaction enter the Lagrangian independently of each other, their contributions to the effective mass are also additive, in which case the mass equation (A.2) becomes \[m_{l}^{\prime}=m_{H}+m_{A}^{rad}+m_{A}^{fin}+\ldots\,,\] (A.4) where the first and second terms in (A.4) keep their original meanings as in (A.2). The third term gives the additional mass due to the finite self-potential \(A_{\mu}^{self}\), which is not describable by Maxwell physics. This is the key detail, since otherwise one would be able to immediately object to (A.4) by pointing out that the term \(m_{A}^{rad}\) already describes the contributions due to the electromagnetic field. However, the conventional theory calculates radiative corrections by using propagators that are solutions of the Maxwell or Klein-Gordon equations, which do not lead to finite values of the potentials at the charge origin. Therefore, assuming that such potentials are finite at \(r\to 0\) necessarily implies a deviation from the Maxwell and Klein-Gordon physics (quantized or not). Since the contributions in (A.4) due to the Higgs and the finite self-potentials are additive, the proposed mechanism is truly complementary to the Higgs one. It is possible that the proposed mechanism determines the masses of the lightest leptons (which are much smaller than the Higgs mass), which would constrain the Higgs couplings of heavier leptons to a much narrower range. Assuming, then, that the masses of the lightest leptons are described by the proposed mechanism, we are able to show that the charged lepton mass \(m_{e}\sim|A_{0}|\) is mostly given by the interaction with its EM field (but it is still not a purely electromagnetic one!) while the mass \(m_{\nu}\sim|Z_{0}|\) of its corresponding neutrino is determined by the interaction with its neutral potential. Remarkably, the seesaw-like relation \(m_{e}m_{\nu}\sim|W_{0}|^{2}\) between the masses of particles from the same EW iso-doublet also follows naturally from the applied framework. Hence, while the coupling strengths for a charged lepton and its neutrino are not connected in any way in the Higgs-based model, they are naturally connected by the seesaw relation in our approach. Seesaw-like relations with neutrinos have been discussed before, see, for example, [31], in the framework of gauged family unification. An application of the proposed phenomenological model, which would allow inferring additional information about the electroweak potentials of free-moving particles, could also shed more light on possible deviations from the conventional theory and help address the known challenges with existing infinities. **Lepton-like states:** The purpose of this manuscript is not to offer a complete or even a substantial partial solution of the lepton mass problem, such as a reduction in the number of free parameters in the SM (couplings to the Higgs field, for example). That would be an unrealistic expectation at this stage. Instead, our goals are much more modest. We attempt to expose a potential gateway through which a consistent extension of the conventional EW theory, one that does not contradict any of the well-proven experimental support and theory constructs, can be found. A massive particle must have both left- and right-handed components [4], as can be seen by considering its behavior under general boosts and rotations. Specifically, the Dirac plane wave has left- and right-handed components of equal magnitude in the rest frame, see, for example, (3.49) in [3]. 
Such a state does not have any left-right asymmetry per se, in contrast to what we see in the EW interactions (nearly 100% chirally polarized). The mass of such a particle is a scalar \(m\) that is introduced into the Lagrangian by hand (or by using the Higgs field in a certain vacuum state). In the new representation, a massive lepton-like state also has both left- and right-handed components. However, the left-right asymmetry is incorporated into the state description: a massive charged lepton-like state has a large left-handed component and a very small right-handed one. In turn, an antilepton-like state has a large right-handed component and a very small left-handed one. To avoid any issues with the behavior under boosts and rotations, such states must be fully spin-polarized in their respective rest frames, see (41) for a general boost and its specific version (42) for the boost in the \(z\)-direction. As we showed, we can then obtain several scalars, which are interpreted as contributions to the masses, for free-moving leptons from the EW currents for such states and the finite self-potentials, without invoking the Higgs mechanism. **Effective mass:** Conventionally, mass terms are introduced into field equations as products of a constant \(m\) with scalars made out of field amplitudes. For this reason, right-handed leptons are required in conventional Lagrangians to introduce scalar Dirac-like mass terms (and for anomaly cancellation, which also requires contributions from the corresponding quarks [9]). It is truly remarkable that the right-handed states of particles and the left-handed ones of antiparticles do not interact with the charged EW bosons. Since the proposed framework allows generating non-zero values of the four-momentum at rest without such scalar terms, it would be helpful to elaborate more on how this can be achieved. How can effective masses appear in the equations of motion without violating translation invariance and Lorentz covariance? Consider, for example, free motion without external fields, which is described as \[i\not{\partial}\psi-\underbrace{\left(e\not{A}^{self}+\dots\right)\psi}_{S^{self}\psi}=0\,,\] (A.5) where \(A^{self}_{\mu}\) is the not-yet-known self-potential, i.e., the potential of the free-moving charge. The goal is to determine whether the inclusion of the terms \(S^{self}\) can be done in a consistent way. It would be impossible to generate masses in a consistent way with only one gauge potential \(A_{\mu}\). However, the EW theory with four gauge potentials provides a sufficiently advanced framework for obtaining non-trivial results, as we will see below. We also do not expect that it would be possible to reduce the self-interaction \(S^{self}\) to some kind of Dirac or Majorana mass term, since that would create well-known challenges. For example, a Dirac mass term requires right-handed states that do not interact with the charged EW fields (enabling the scalar mass term seems to be their only role in the theory), while a Majorana term does not conserve either lepton number or charge. Specifically, we show in this work that for free-moving leptons \(\psi(x)=\psi e^{-ipx}\), the evolution equation reduces to a system of algebraic equations for the components of \(\psi\) \[i\not{\partial}\psi-S^{self}\psi=0\qquad\rightarrow\qquad\not{p}\psi-S^{self} \psi=0\] (A.6) where the operator \(S^{self}\) is not a scalar one. The mass eigenvalue is then obtained by squaring the momentum vector: \(p_{\mu}p^{\mu}=m^{2}\). 
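For concreteness, the reduction claimed in (A.6) is just the usual plane-wave substitution; the following one-line check is ours:

```latex
% A check (ours) of the plane-wave reduction behind (A.6): with
% \psi(x) = \psi\, e^{-ip\cdot x} and a constant spinor \psi,
%   i\gamma^\mu \partial_\mu \psi(x) = i\gamma^\mu (-i p_\mu)\,\psi\, e^{-ip\cdot x}
%                                    = \not{p}\,\psi\, e^{-ip\cdot x} ,
% so, since S^{self} is built from constant self-potentials in this
% approximation, the common phase factor cancels and the field equation
% becomes purely algebraic:
\[
  \not{p}\,\psi - S^{self}\psi = 0 ,
\]
% squaring the resulting momentum vector, p_\mu p^\mu = m^2, then defines
% the effective mass eigenvalue.
```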
The proposed mechanism could be used as a complement to the Higgs one to address the question of why the Higgs coupling constants span such a wide range. The outlined task is different from the known approaches, whether the Dirac-Maxwell system or self-energy evaluations in QFT. The latter is the standard step, which requires multi-loop calculations followed by renormalization. It is well known that fermion masses run for this reason; hence different values are given at different energy scales. For example, the standard procedure takes into account multiple acts of emission and re-absorption of virtual photons, which, however, are assumed to propagate as solutions of the Maxwell equation. Instead, we do not assume that \(A^{self}_{\mu}\) obeys the Maxwell equation; effectively, we attempt to evaluate the self-induced mass in the _zeroth_ order by relaxing requirements that are not strictly proven. Then, the mass value can be refined in higher orders by standard QFT procedures to take into account the residual self-energy as a radiative correction. Clearly, this is a hypothesis that must be proven by obtaining consistent results.
2301.08409
Adaptive Resource Allocation for Workflow Containerization on Kubernetes
In a cloud-native era, the Kubernetes-based workflow engine enables workflow containerized execution through the inherent abilities of Kubernetes. However, when encountering continuous workflow requests and unexpected resource request spikes, the engine is limited to the current workflow load information for resource allocation, which lacks the agility and predictability of resource allocation, resulting in over and under-provisioning resources. This mechanism seriously hinders workflow execution efficiency and leads to high resource waste. To overcome these drawbacks, we propose an adaptive resource allocation scheme named ARAS for the Kubernetes-based workflow engines. Considering potential future workflow task requests within the current task pod's lifecycle, the ARAS uses a resource scaling strategy to allocate resources in response to high-concurrency workflow scenarios. The ARAS offers resource discovery, resource evaluation, and allocation functionalities and serves as a key component for our tailored workflow engine (KubeAdaptor). By integrating the ARAS into KubeAdaptor for workflow containerized execution, we demonstrate the practical abilities of KubeAdaptor and the advantages of our ARAS. Compared with the baseline algorithm, experimental evaluation under three distinct workflow arrival patterns shows that ARAS gains time-saving of 9.8% to 40.92% in the average total duration of all workflows, time-saving of 26.4% to 79.86% in the average duration of individual workflow, and an increase of 1% to 16% in CPU and memory resource usage rate.
Chenggang Shan, Chuge Wu, Yuanqing Xia, Zehua Guo, Danyang Liu, Jinhui Zhang
2023-01-20T03:21:25Z
http://arxiv.org/abs/2301.08409v1
# Adaptive Resource Allocation for Workflow Containerization on Kubernetes ###### Abstract In a cloud-native era, the Kubernetes-based workflow engine enables workflow containerized execution through the inherent abilities of Kubernetes. However, when encountering continuous workflow requests and unexpected resource request spikes, the engine is limited to the current workflow load information for resource allocation, which lacks the agility and predictability of resource allocation, resulting in over and under-provisioning resources. This mechanism seriously hinders workflow execution efficiency and leads to high resource waste. To overcome these drawbacks, we propose an adaptive resource allocation scheme named ARAS for the Kubernetes-based workflow engines. Considering potential future workflow task requests within the current task pod's lifecycle, the ARAS uses a resource scaling strategy to allocate resources in response to high-concurrency workflow scenarios. The ARAS offers resource discovery, resource evaluation, and allocation functionalities and serves as a key component for our tailored workflow engine (KubeAdaptor). By integrating the ARAS into KubeAdaptor for workflow containerized execution, we demonstrate the practical abilities of KubeAdaptor and the advantages of our ARAS. Compared with the baseline algorithm, experimental evaluation under three distinct workflow arrival patterns shows that ARAS gains time-saving of \(9.8\%\) to \(40.92\%\) in the average total duration of all workflows, time-saving of \(26.4\%\) to \(79.86\%\) in the average duration of individual workflow, and an increase of \(1\%\) to \(16\%\) in CPU and memory resource usage rate. Resource Allocation, Workflow Containerization, Kubernetes, Workflow Management Engine. ## 1 Introduction With the advent of a cloud-native era, the most popular virtualization solution is using Docker 1 for container encapsulation with Kubernetes (K8s) [1] for multi-host container orchestration. Docker and K8s 2 have become mainstream tools for cloud resource management and dominate the whole cloud-native technology ecosystem [2]. Workflows have been widely applied in scientific computing communities such as astronomy, bioinformatics, material science, and earth science [3]. A scientific workflow is commonly formulated as a directed acyclic graph (DAG), which consists of dozens of workflow tasks (represented by nodes) and dependencies among tasks (indicated by directed edges). A DAG abstracts a particular scientific computing process through shared data files between tasks and predefined task dependencies [4, 5]. Powered by Docker and K8s, cloud infrastructure features the scalability and high availability of computational resources [6] and is especially suitable as a running platform for scientific workflows. Footnote 1: [http://docs.docker.com](http://docs.docker.com) Footnote 2: [https://kubernetes.io/](https://kubernetes.io/) Scientific workflows usually serve large-scale applications and require a considerable amount of resources to execute. Efficient resource allocation is a key issue in workflow execution. Existing workflow management engines like Nextflow [7], Pegasus [8, 9], Galaxy [10], and the Argo workflow engine 3 can execute hundreds of workflows on cloud infrastructure and are responsible for assigning computational resources to workflow tasks [11]. When encountering continuous workflow requests and unexpected resource request spikes, the computational resource requirements of workflows can be highly dynamic. 
The ever-changing resource requirements of workflows bring a great administrative burden to the workflow engines for resource allocation and seriously decrease the execution efficiency of workflows. On the one hand, the permanent provision of fixed computational resources will cope with peak loads in a resource-intensive scenario but incur high costs and resource over-provisioning, as resources are not fully utilized during off-peak times. On the other hand, some workflows may not be executed at all and suffer from a poor Quality of Service (QoS) due to insufficient resource provisions. Footnote 3: [http://github.com/argoproj/argo](http://github.com/argoproj/argo) In order to avoid over- and under-provisioning of resources, some existing works propose reasoning [12, 13], feedback [14], heuristics [15], and learning and prediction models [16, 17, 18, 19] to cope with resource allocation in the cloud environment. Although these solutions can partially address the cloud resource allocation problem, they commonly use prior knowledge of cloud systems to cope with resource allocation. As a result, these solutions might play to their strengths in a specific application scenario, but they are not fully adaptable to the K8s-based cloud environment with dynamic resource requirements. Besides, numerous training iterations may result in high computational complexity and resource overheads in learning and prediction models. Therefore, with the application platform and technology stack in mind, they do not fit with the K8s-based workflow management engines. The bottleneck here is the absence of a high-efficiency adaptive resource allocation scheme that can help the K8s-based workflow management engines to make appropriate resource provisions in response to continuous workflow requests and unexpected resource request spikes. In our former work [20, 21], we presented the customized K8s-based workflow management engine (KubeAdaptor), capable of integrating workflow systems with K8s and implementing workflow containerization on a K8s cluster. In this paper, we present an adaptive resource allocation scheme (ARAS) that follows the Monitor-Analyse-Plan-Execute over shared Knowledge (MAPE-K) model [22, 23]. The ARAS periodically responds to the task pod's resource request and uses the resource discovery algorithm, resource evaluation algorithm, and resource allocation algorithm to complete the resource allocation for the current round of tasks via the resource scaling strategy. We reconstruct and extend KubeAdaptor, and implement the ARAS as the _Resource Manager_ component of KubeAdaptor, which consists of a _Resource Discovery_ module, a _Resource Evaluator_ module, and an _Allocator_ module. The three modules complement each other to achieve adaptive resource allocation (see Fig. 2). First, the _Resource Discovery_ module invokes the resource discovery algorithm to obtain the remaining resources (such as CPU and memory) of the K8s cluster nodes and the resource usage of running task pods. Then the _Resource Evaluator_ module integrates the remaining resources of the K8s cluster and the workflow workloads from the Redis database and evaluates resource adequacy for the K8s cluster nodes. Finally, the _Allocator_ module uses a resource scaling strategy (i.e., vertical autoscaling) [24] to make resource provisions for currently active task pods in response to continuous workflow requests and sudden request spikes. We have open-sourced the proposed ARAS. The source code is publicly available on GitHub 4. 
Footnote 4: [https://github.com/CloudControlSystems/ResourceAllocation](https://github.com/CloudControlSystems/ResourceAllocation) This paper focuses on adaptation, that is, the adaptive adjustment of resource allocation in the context of changing workflow resource requirements. Compared with the baseline algorithm, experimental evaluation of running four scientific workflows under three different workflow arrival modes shows that ARAS gains time-saving of \(9.8\%\) to \(40.92\%\) in the average total duration of all workflows, time-saving of \(26.4\%\) to \(79.86\%\) in the average duration of an individual workflow, and an increase of \(1\%\) to \(16\%\) in CPU and memory resource usage rate. The main contributions of this paper are summarized as follows. * _MAPE-K architecture._ With the MAPE-K mechanism as a core, we decouple and reconstruct the KubeAdaptor and integrate our ARAS into the four phases of the MAPE-K model to equip the KubeAdaptor with self-healing and self-configuration abilities. * _A novel monitoring mechanism._ We devise and develop a resource discovery algorithm through the K8s resource characteristics and the _Informer_ component. The _Resource Discovery_ module uses this algorithm to build a novel monitoring mechanism to collect all related data in K8s clusters. * _Automated deployment._ We modularize and implement the four steps of the proposed ARAS with loose coupling in mind so that users can easily mount a newly designed algorithm module to replace an existing one with minimal intrusion into the workflow management engine. * _Better performance._ With the help of K8s and the MAPE-K mechanism, we use the ARAS to conduct a wealth of experiments on four scientific workflows on K8s clusters. Our ARAS shows better performance compared to the baseline algorithm. The rest of the paper is organized as follows. Section 2 introduces related work. Section 3 elaborates on the system model and problem formulation, while Section 4 further describes our system architecture and components. Section 5 illustrates the implementation of our adaptive resource allocation scheme. Section 6 describes the experimental setup and discusses the evaluation results. Finally, Section 7 concludes this paper. ## 2 Related Work The resource allocation scheme in the workflow management engine is influenced by the virtualization technology of the cloud infrastructure, which is directly related to whether workflow tasks are hosted by VM instances or containers. In this section, we review the development of resource allocation strategies and discuss three categories of resource allocation strategy from the perspective of the evolution of virtualization technology, namely VM-based, Container-based, and Cloud-native-based. Note that the analysis of each aspect is not completely limited to the scope of the workflow management engine. ### _VM-based resource allocation_ In the VM-based era, Lee et al. [25] propose an adaptive scheduling approach to adjust resource allocation and scheduling in the Pegasus workflow management system. This approach utilizes batch queues to assign jobs to the cluster's VMs and optimizes job scheduling across the cluster's VMs through the average queue time of each available VM. Islam et al. [26] develop prediction-based resource measurement and provisioning strategies using neural networks and linear regression to satisfy upcoming resource demands. 
The sliding window approach for predicting resource usage in the cloud fits with dynamic and proactive resource management for interactive e-commerce applications. As for the Business Process Management System (BPMS) field, Hoenisch et al. [12] present a self-adaptive resource allocation approach to automatically lease and release cloud resources for workflow executions based on knowledge (resource usage in VMs) about the current and future process landscape. This approach has been implemented as part of ViePEP, a BPMS able to manage and schedule workflows in the cloud. By monitoring the resource usage of VMs and the QoS of individual service invocations in VMs, ViePEP uses a prediction model to provide resource provisioning for the elastic process execution of workflows. Subsequently, Hoenisch et al. [13] extend ViePEP with dynamic workflow scheduling and resource allocation algorithms. The proposed algorithm not only provides a complete schedule plan based on their former prediction model but also moves service invocations (workflow tasks) from one timeslot to another to fully utilize the acquired resources. Although these solutions present appropriate cloud resource allocation schemes to some extent, the predictive models commonly require collecting and modeling historical data. These upfront preparations consume unnecessary resources and block the automatic operation flow of the workflow management engines. Besides, VM-based resource allocation schemes are commonly limited by VM features such as slow startup, clumsy deployment, and high resource consumption. Therefore, these schemes are not suitable for performing workflows with dynamic and ever-changing resource requirements in cloud infrastructures. ### _Container-based resource allocation_ In the Container era, container-based resource allocation schemes have gradually become the mainstream of cloud resource management. Considering the absolute resource isolation and security features of VMs, most container-based resource allocation scenarios adopt the deployment model of VMs hosting containers. For instance, Mao et al. [27] propose a differentiated quality-of-experience scheduler to adjust resource provisioning for deep learning applications. This scheduler is implemented on Docker Swarm 5 and can accept targeted quality-of-experience specifications from clients and dynamically adjust the resource limits of containers to approach performance targets. Abdullah et al. [16] introduce a new deep learning-based approach to estimate the execution time of jobs through collected performance traces. This approach also predicts the execution time for different CPU pins and uses the law of diminishing marginal returns to provide optimal CPU allocation to Docker containers. In the fog computing community, Yin et al. [28] propose a container-based task-scheduling algorithm with task delay constraints in mind. Herein, a resource reallocation mechanism works to achieve resource-utilization maximization on fog nodes by modifying the resource quota of task containers. Hu et al. [29] propose _CEC_, a containerized edge computing framework for dynamic resource provisioning in response to multiple intelligent applications. The _CEC_ first makes resource provisioning for containers in advance based on the workload prediction for the edge cluster formed by Docker Swarm and then uses the idea of control theory to achieve dynamic resource adjustments (meaning a sufficient number of containers) for hosted service applications. 
Footnote 5: [https://docs.docker.com/engine/swarm/](https://docs.docker.com/engine/swarm/) Containers, an efficient and lightweight virtualization technology, bring significant technological change to VM-based resource allocation strategies. But in fact, a container orchestration tool (e.g., Docker Swarm) is needed to manage a wealth of containers across cluster nodes in several scenarios. In practice, adjusting resource limits and reallocating resource quotas in running containers have brought significant administrative burdens to Docker Swarm. Also, adjustments to the number of containers will cause a delay in the startup of new containers. In addition, the preparation required for predictive models is also not conducive to the automation of workflow management engines. Due to the shortcomings of the above solutions and the task dependencies and high concurrency of workflows, these resource allocation strategies cannot provide efficient ideas for container-based workflow management engines. ### _Cloud-native-based resource allocation_ As the first project hosted by the Cloud Native Computing Foundation (CNCF) 6, K8s has become the de-facto standard container orchestration system. Docker and K8s are reshaping resource management strategies for cloud infrastructures in the cloud-native era. For example, Chang et al. [30] propose a generic platform to facilitate dynamic resource provisioning based on K8s. The platform employs open-source tools to retrieve all the resource utilization metrics (such as CPU and memory) while integrating the application QoS metrics into monitoring. The resource scheduler module in the platform makes dynamic resource provisioning by horizontal scaling of task pods according to the K8s cluster's workload. Mao et al. [31] investigate the performance of using cloud-native frameworks (Docker and K8s) for big data and deep learning applications from the perspective of resource management. Together with Prometheus 7 and Grafana 8, the authors build a container monitoring system to keep track of the resource usage of each job on worker nodes. To address massive aggregate resource wastage, Google uses _Autopilot_ to configure resources automatically, adjusting both the number of concurrent tasks in a job (horizontal scaling) and the CPU/memory limits for individual tasks (vertical scaling) [24]. Subsequently, Bader et al. [11] propose _Tarema_, a system for allocating task instances to heterogeneous K8s cluster resources during the execution of scalable scientific workflows. Using a scoring algorithm to determine the best match between a task and the available resources, _Tarema_ provides near-optimal task-resource allocation. Footnote 6: [https://www.cncf.io/](https://www.cncf.io/) Footnote 7: [https://github.com/prometheus/prometheus](https://github.com/prometheus/prometheus) However, most of these resource allocation solutions in the cloud-native era use open-source tools (from the CNCF community) to build resource monitoring systems, obtain the required resource utilization of the cluster, and provide corresponding resource provisioning strategies. This brings high deployment costs to the workflow management engine, which is inconsistent with its characteristics of simple deployment and automatic operation. In addition, these tools put too much pressure on the K8s cluster because of frequent access to kube-apiserver 9 for acquiring cluster resources [32]. 
Footnote 8: [https://github.com/grafana/grafana](https://github.com/grafana/grafana) To summarize the related work, we can conclude that resource allocation policies change with the evolution of virtualization technologies to adapt to different application scenarios and technology platforms. The most typical example is ViePEP-C [33], which evolved from former work [12], [13, 34, 35] in the VM era to a container-based resilient BPMS platform in the container era, using containers instead of VMs for the execution of business process activities. Considering automation and flexible deployment of the integrated platform, resource allocation technology in the cloud-native era is more focused on the Docker and K8s platforms. The design of our workflow management engine follows this idea. K8s, with its unique technical advantages and ecosystem in scheduling, automatic recovery, horizontal scalability, resource monitoring, and other aspects, pushes the integration with workflow management engines far beyond the capabilities of container-based ones. Inspired by the work in [24, 27, 31], our ARAS takes into account workflow loads in K8s clusters and uses container vertical scaling technology to cope with continuous workflow requests and sudden resource spikes. ## 3 System Model This section describes how to use our ARAS to cope with continuous workflow requests and unexpected resource request spikes and maximize resource utilization while meeting workflow Service Level Objectives (SLOs). ### _System description_ For clarity of presentation, we consider the scenario of a single K8s cluster with a set of nodes (VMs), denoted by \(V=\{v_{1},v_{2},...,v_{m}\}\), where \(m\) represents the number of K8s cluster nodes. For the \(m\) nodes, we have a set of available CPU cores \(C=\{c_{1},c_{2},...,c_{m}\}\) and a set of available memory capacities \(M=\{mem_{1},mem_{2},...,mem_{m}\}\), respectively. The workflow set injected into the KubeAdaptor is represented as \(W=\{w_{1},w_{2},...,w_{k}\}\), where \(k\) indicates the number of workflows. Herein, a workflow is abstractly defined as \(w_{i}=\{sla_{w_{i}},s_{i,1},s_{i,2},...,s_{i,n}\}\), wherein \(i\) indicates the \(ID\) of a workflow, \(sla_{w_{i}}\) represents a Service Level Agreement (SLA) of a workflow, and \(s_{i,1},s_{i,2},...,s_{i,n}\) indicate the steps (i.e., tasks) of workflow \(w_{i}\). Each workflow task is defined as \[\begin{split}& s_{i,j}=\{sla_{s_{i,j}},id,image,cpu,mem,duration,\\ & min_{cpu},min_{mem}\},\ 1\leq i\leq k\ and\ 1\leq j\leq n. \end{split} \tag{1}\] Herein, \(id\) is the unique identifier of this workflow task in workflow \(w_{i}\), and \(image\) represents the Docker image address of this workflow task. The \(cpu\) is the amount of CPU millicores required by the users, and \(mem\) is the amount of memory capacity required by the users. \(duration\) indicates the running duration of the task pod, and \(min_{cpu}\) and \(min_{mem}\) represent the minimum CPU and memory resources required to run the task container of \(s_{i,j}\) in workflow \(w_{i}\), respectively. Generally, a workflow can have an optional SLA (\(sla_{i}\)) composed of several SLOs expressed by \(slo_{1},slo_{2},...,slo_{n}\) on the workflow (\(sla_{w_{i}}\)) or the workflow task (\(sla_{s_{i,j}}\)) as follows: \[sla_{i}=\{slo_{1},slo_{2},...,slo_{n}\},i\in\{w_{i},s_{i,j}\}. \tag{2}\] Herein, we only consider the deadline as the single SLO, meaning that each task in the workflow must be completed before its respective deadline. 
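To make the model concrete, the following is a minimal sketch of the task tuple in Eq. (1) and the deadline SLO as Go types; the type and field names are ours for illustration and are not KubeAdaptor's actual definitions.

```go
// Package model sketches the workflow/task model of Section 3.1
// (hypothetical names; KubeAdaptor's real types may differ).
package model

import "time"

// Task mirrors the tuple s_{i,j} of Eq. (1).
type Task struct {
	ID       string        // unique identifier of the task within workflow w_i
	Image    string        // Docker image address of the task
	CPUMilli int64         // user-requested CPU in millicores
	MemBytes int64         // user-requested memory in bytes
	Duration time.Duration // running duration of the task pod
	MinCPU   int64         // minimum CPU for the task container to run
	MinMem   int64         // minimum memory for the task container to run
	Deadline time.Time     // sla_{s_{i,j}}: the single deadline SLO considered here
}

// Workflow mirrors w_i = {sla_{w_i}, s_{i,1}, ..., s_{i,n}}.
type Workflow struct {
	ID       string
	Deadline time.Time // sla_{w_i}; per Eq. (4), it equals the last task's deadline
	Tasks    []Task
}
```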
Likewise, the workflow itself is no exception. \[\begin{split}& sla_{w_{i}}=deadline_{w_{i}},\\ & sla_{s_{i,j}}=deadline_{s_{i,j}}.\end{split} \tag{3}\] Note that the deadline for the last task \(s_{i,last}\) in a workflow is identical to the workflow's execution deadline: \[deadline_{s_{i,last}}=deadline_{w_{i}}. \tag{4}\] ### _Problem formulation_ We assume that the SLAs and deadlines defined by users are valid and achievable, i.e., a properly completed workflow means that all of its tasks must be completed by their deadlines. With maximizing resource utilization in the K8s cluster as the goal, whenever a task request arrives, our ARAS uses the resource scaling method to provision computational resources for the task container. Herein, the resource provision of the task container must not be less than the minimum running resources needed to ensure the smooth operation of the task container. Fig. 1 depicts the execution process of a small-scale Montage workflow. At \(t_{1}\) seconds, the task request of \(T_{1}\) arrives, and our ARAS is able to determine the concurrent tasks within its lifecycle (from \(t_{1}\) to \(t_{2}\)), considering the predefined deadlines. As can be seen from Fig. 1, \(T_{2}\), \(T_{3}\), and \(T_{4}\) will be launched within \(T_{1}\)'s lifecycle, and the four workflow tasks will compete with each other for computing resources. To ensure that the four concurrent tasks have enough resources to run smoothly, our ARAS employs the resource scaling method to reasonably allocate resources, i.e., scaling down the resource requirements of the current task \(T_{1}\) according to the ratio of the total resource requirements of the four tasks to the remaining resources in the K8s cluster (see Eq. (9)). Similarly, \(T_{10}\) executes between \(t_{2}\) and \(t_{3}\), and \(T_{16}\) executes between \(t_{4}\) and \(t_{5}\); each of their lifecycles contains several concurrent tasks. The arrival of each task request likewise requires the resource scaling method to allocate resources in line with Eq. (9). In the following, we elaborate on the optimization problem in our ARAS. The allocated CPU and memory resources for each requested task in workflow \(w_{i}\) are respectively defined as follows: \[\begin{split}& U=\{u_{i,1},u_{i,2},...,u_{i,n}\},\\ & R=\{r_{i,1},r_{i,2},...,r_{i,n}\}.\end{split} \tag{5}\] \(x_{y,z}^{i}\in\{0,1\}\) with workflow identifier \(i\) is adopted as a decision variable for task placement, where \(1\leq y\leq n\) and \(1\leq z\leq m\), and is defined as \[x_{y,z}^{i}=\begin{cases}1&\text{if $y^{th}$ task in $w_{i}$ is scheduled on Node $v_{z}$},\\ 0&\text{if $y^{th}$ task in $w_{i}$ is not scheduled on Node $v_{z}$}.\end{cases}\] We assume that each node in the K8s cluster is always active and that workflows are continuously injected into our workflow management engine. \(Mem_{total}\) indicates the total amount of remaining memory resources of the K8s cluster. Since CPU is a compressible resource and memory is an incompressible resource, we only consider memory to maximize resource allocation in the optimization model. So our objective function is as follows: \[Maximize:\quad\sum_{i=1}^{k}\sum_{j=1}^{n}r_{i,j}/Mem_{total} \tag{6}\] Subject to: \[\sum_{z=1}^{m}x_{y,z}^{i}=1 \tag{7}\] \[\sum_{i=1}^{k}\sum_{y=1}^{n}x_{y,j}^{i}\cdot u_{i,y}\leq c_{j}\] \[\sum_{i=1}^{k}\sum_{y=1}^{n}x_{y,j}^{i}\cdot r_{i,y}\leq mem_{j}.\] Eq. (6) shows the objective function in our model, which maximizes the resource utilization of the remaining memory of the K8s cluster at each moment of a task request. 
Eq. (7) shows the three constraints of our model. The first constraint indicates that a task can be scheduled on only one cluster node. The last two constraints imply that the total CPU and memory resources consumed by all task pods on the hosting node \(v_{j}\) must be less than or equal to the amount of the respective available resources on that node. ## 4 Architecture This section presents the system architecture of KubeAdaptor in detail, including its framework, design logic, and key modules. Subsequently, the MAPE-K model is elaborated around the resource allocation mechanism of KubeAdaptor. ### _KubeAdaptor framework_ The KubeAdaptor for the ARAS is illustrated in Fig. 2. As a workflow management engine, it works to administer, schedule, and execute containerized workflow tasks. Its core functionalities are as follows: * Provide an interface to the public or private cloud, allowing workflows to be customized on demand * Implement the containerized execution of workflows following the precedence and dependency relationships * Adaptively allocate resource quotas for requested workflow tasks and maximize resource utilization while ensuring the SLAs of workflows * Provide flexible deployment and an automatic operation flow, and integrate with the K8s platform With the assistance of the ARAS in this paper, KubeAdaptor is equipped with the functionalities of a cloud resource management system to elegantly manage a potentially highly volatile cloud workflow application scenario. ### _KubeAdaptor modules_ As depicted in Fig. 2, KubeAdaptor consists of three main top-level entities: a _Command Line Interface_ (CLI), a _Workflow Injection Module_, and a _Containerized Workflow Builder_. The _Containerized Workflow Builder_ comprises seven sub-components responsible for workflow reception, containerization, resource allocation, resource monitoring, and task container cleanup. We focus on the _Resource Manager_ module related to the ARAS. **CLI:** It aims to define SLA-based workflows and offer configuration files for one-key deployment. In addition, the users may request many workflows consecutively or even simultaneously through the _CLI_ module. **Workflow Injection Module:** Its _Parser_ and _Packaging_ modules serve as an independent function pod and work to read the variable configuration information of workflow definitions from the mounted directory, and to parse and encapsulate workflows in response to the generation request for the subsequent workflow from the _Interface Unit_. **Interface Unit:** This module works on receiving the workflow generation request, decomposing the workflow tasks, watching the state changes of task pods or workflows from the _Task Container Cleaner_, invoking the _Containerized Executor_ to generate workflow namespaces and task pods, and writing workflow status into the Redis database. Once the creation of a task pod fails, this module turns to fault tolerance management [21], also known as _self-healing_, the ability of a system to detect and recover from potential problems and continue to operate smoothly. **Containerized Executor:** Its two subcomponents work on generating workflow namespaces and task pods. This module creates task pods using the resources allocated by the _Resource Manager_. In addition, the states of workflows and task pods are written into the Redis database in a timely manner. **Resource Manager:** It contains three subcomponents: _Resource Discovery_, _Resource Evaluator_, and _Allocator_. 
The _Resource Discovery_ is responsible for acquiring the remaining resources of the overall K8s cluster from the _Informer_. The _Resource Evaluator_ obtains workflow resource requirements and workflow execution states from the Redis database, assesses the adequacy of the current remaining resources of the K8s cluster, and launches corresponding countermeasures if necessary. The _Allocator_ module uses the resource scaling strategy to allocate resources for currently active task pods in response to continuous workflow requests and sudden request spikes. It is also known as _self-configuration_, the ability of a system to reconfigure itself under changing and unpredictable circumstances. 

Fig. 1: Resource allocation example. A small-scale Montage workflow with 21 tasks is used to illustrate the resource scaling method in our ARAS. The test environment uses our experimental setup in Section 6.1.1. 

**Informer:** As a core toolkit in client-go 10, the _Informer_ is in charge of synchronizing resource objects and events between K8s core components and the _Informer_ local cache. It provides the _Resource Discovery_ with the remaining resources of the K8s cluster and responds to the _State Tracker_ regarding the state changes of resource objects. **State Tracker:** It hosts the monitoring program based on the List-Watch mechanism and responds to state queries of various resource objects from each module at any time. **Task Container Cleaner:** It works on deleting pods in the _Succeeded_, _Failed_, or _OOMKilled_ state, as well as workflow namespaces without uncompleted task pods. Once receiving successful feedback on the just-deleted workflow or task pods, this module proceeds to the _Interface Unit_ and triggers the following workflow or subsequent task. **Redis:** The Redis database is to be deployed within or outside the cluster in advance and is responsible for storing workflow execution status and the predefined resource requirements of workflow tasks. KubeAdaptor is implemented in the Go language and provides a _CLI_ interface to K8s clusters. With just a few tweaks to the configuration file, users can work out of the box and smoothly deploy KubeAdaptor on K8s clusters. The deployment and uninstallation of KubeAdaptor are non-intrusive and leave the cluster clean, and its workflow containerized execution works in an automated way. For further details about KubeAdaptor, we refer to [21]. ### _MAPE-K model_ The MAPE-K model [36], which originated in the field of Autonomic Computing, is an instrumental framework for the systematic development of adaptive systems, including resource allocation and workflow adaptation. The adaptive strategy within KubeAdaptor works to realize self-optimization in the form of resource utilization maximization. Herein, the self-optimization ability, along with self-healing and self-configuration (elaborated in Section 4.2), enables our KubeAdaptor to become a self-managing system. To deal with persistent workflow requests and ever-changing resource requirements, we use the MAPE-K model to retrofit KubeAdaptor with minimal intervention, which forms an adaptive execution cycle as depicted in Fig. 3. In the following, we briefly discuss the four steps of this cycle and how they influence the self-management capabilities of KubeAdaptor: **Monitoring:** The monitoring functionalities stem from the _Informer_ and the Redis database and work to provide workflow status and the remaining resources in K8s clusters to the next step. 
Workflow status data includes which workflows have been executed, which tasks have been completed, the resource requirements of workflow tasks, as well as the SLOs of workflows and tasks. The remaining resources refer to the residual CPU and memory resources of the K8s cluster and of each node in the K8s cluster. **Analysis:** The functionality of this step comes into play in the _Resource Evaluator_ and _Allocator_ within the _Resource Manager_. For adaptive resource allocation, it is necessary to analyze the monitored data and reason over the general knowledge about the system. This is done so that we can adopt countermeasures to deal with dynamic workflow requests and SLA violations. **Planning:** The planning step takes full account of the resource requirements of future workflow tasks to be launched within the current task lifecycle and the SLAs of workflows to carry out reasoning and generate a resource allocation plan. The planning results also provide sufficient prior knowledge for subsequent workflow input to the CLI module. **Execute:** The execution step is put into practice through the _Containerized Executor_, aiming to finish the creation of a new round of task pods based on the analysis results of the MAPE-K model. 

Fig. 2: KubeAdaptor architecture. 

Fig. 3: Resource allocation scheme based on MAPE-K model. 

**Knowledge Base:** While not really a part of the cycle, the Knowledge Base stores the configuration information about the system and provides workflow execution status and the remaining resources in K8s clusters to the running MAPE-K model. In short, KubeAdaptor needs to provide further analysis of the resource allocation scheme to self-optimize application scenarios in response to persistent workflow requests and sudden resource spikes. ## 5 Adaptive Resource Allocation Scheme Once the _Resource Manager_ receives a resource request of a workflow task from the _Containerized Executor_, its three subcomponents immediately launch resource discovery, resource evaluation, and resource allocation in turn. The entire execution process responds to the workflow task's resource request iteratively. Next, we introduce the adaptive resource allocation algorithm, the resource discovery algorithm, and the resource evaluation algorithm. All notations used in our algorithms are listed in Table I. In addition, workflow execution states are represented as a set of state data for all tasks of workflow \(w_{i}\) (\(1\leq i\leq k\)), and a record of task-state data is defined as \[\begin{split} task_{i,j}^{redis}&=\{t_{start},duration,t_{end},cpu,mem,flag\},\\ &\quad 1\leq i\leq k,\;1\leq j\leq n\;and\;flag\in\{false,true\}.\end{split} \tag{8}\] Note that as soon as KubeAdaptor starts, \(task_{i,j}^{redis}\) is stored in the Redis database through the _Interface Unit_ and is then continuously updated by the _Containerized Executor_. Herein, \(t_{start}\) is the start time of the current task pod in the K8s cluster, \(duration\) is similar to the definition of \(s_{i,j}.duration\) and represents the running duration of the current task pod, \(t_{end}\) is the completion time of the current task in the K8s cluster, \(cpu\) and \(mem\) are equivalent to the definitions of \(s_{i,j}.cpu\) and \(s_{i,j}.mem\), and \(flag\) is a boolean variable that indicates the execution status of the current task pod; the value _false_ indicates that the current task is not complete. 
We use a _Dictionary_ data structure \(Map<task_{i,j}.id,\;task_{i,j}^{redis}>\) to hold the state data of the current task, where \(task_{i,j}.id\) is the unique identifier passed by task \(s_{i,j}\) of workflow \(w_{i}\) (refer to Eq. (1)). Because the pod is the minimum execution unit of the K8s container orchestrator, and given KubeAdaptor's non-invasive automated execution process, the _Resource Manager_ allocates resources only once throughout the requested task pod's lifecycle in response to the task pod's resource request. The users can initially set \(min_{cpu}\) and \(min_{mem}\) of the task pod in the _CLI_ module, by which the task pod ensures that its hosted container runs smoothly.

```
Input: s_{i,j}
Output: allocated_cpu, allocated_mem
1:  Initialization: request.cpu, request.mem, Re_max^cpu, Re_max^mem, totalResidual.cpu, totalResidual.mem <- 0
2:  for each task pod's resource request do
3:    /* Access Redis and get the total requested resources of all pods to be launched within s_{i,j}'s lifecycle */
4:    Get task^req with respect to s_{i,j} of w_i from Redis
5:    request.cpu <- task^req.cpu
6:    request.mem <- task^req.mem
7:    Get all task^redis_{i,j} for all workflows from Redis
8:    for each task in {task^redis_{i,j}} do
9:      if task.t_start in [task^req.t_start, task^req.t_end) then
10:       request.cpu += task.cpu
11:       request.mem += task.mem
12:     end if
13:   end for
14:   /* Call the ResourceDiscoveryAlgorithm */
15:   ResidualMap <- ResourceDiscoveryAlgorithm
16:   for each item in ResidualMap do
17:     totalResidual.cpu += item.residual.cpu
18:     totalResidual.mem += item.residual.mem
19:     if item.residual.cpu > Re_max^cpu then
20:       Re_max^cpu <- item.residual.cpu
21:       Re_max^mem <- item.residual.mem
22:     end if
23:   end for
24:   /* Call the ResourceEvaluationAlgorithm */
25:   allocated_cpu, allocated_mem <- ResourceEvaluationAlgorithm
26:   if (allocated_cpu >= s_{i,j}.min_cpu) and (allocated_mem >= s_{i,j}.min_mem + beta) then
27:     break
28:   end if
29: end for
30: return allocated_cpu, allocated_mem
```
**Algorithm 1** AdaptiveResourceAllocationAlgorithm

### _Adaptive resource allocation algorithm_

Algorithm 1 presents our adaptive resource allocation algorithm. It takes workflow task \(s_{i,j}\) as input and initializes the relevant parameters to 0 in line 1. Once the _Containerized Executor_ sends a task pod's resource request, the algorithm performs the following process. Lines 4-13 access the Redis database and obtain the total requested resources of all task pods to be launched within \(s_{i,j}\)'s lifecycle; these task pods compete for resources with the currently requested task pod. Subsequently, Algorithm 1 uses the _ResourceDiscoveryAlgorithm_ (5.2) to obtain the remaining resources of the K8s cluster and of each node in it. Lines 16-23 traverse the remaining resource structure \(ResidualMap\) and accumulate the total remaining resources of all nodes across the K8s cluster; meanwhile, the algorithm records the maximal remaining CPU and memory resources. Herein, we assume that the node with the maximal remaining CPU resources also has the maximal remaining memory, to facilitate the conditional comparisons in the _ResourceEvaluationAlgorithm_ (which prioritizes CPU resources for allocation).
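The retry loop of Algorithm 1 can be condensed into a short Go sketch. It is illustrative only: `Task`, `getTaskRequest`, `getAllTaskStates`, `resourceDiscovery`, and `resourceEvaluation` are hypothetical placeholders for the Redis access, discovery, and evaluation steps described above, not functions from KubeAdaptor's code base.

```go
// Task carries the user-defined minimum resources of a task pod.
type Task struct {
	ID             string
	MinCPU, MinMem int64
}

// adaptiveAllocate retries discovery and evaluation until the minimum
// resource demands (lines 26-28 of Algorithm 1) are satisfied.
func adaptiveAllocate(s Task, beta int64) (cpu, mem int64) {
	for {
		// Lines 4-13: accumulate the requests of all task pods competing
		// within the current task's lifecycle (read from Redis).
		req := getTaskRequest(s)
		reqCPU, reqMem := req.CPU, req.Mem
		for _, t := range getAllTaskStates() {
			if t.TStart >= req.TStart && t.TStart < req.TEnd {
				reqCPU += t.CPU
				reqMem += t.Mem
			}
		}
		// Lines 15-23: sum the cluster residuals and track the node maxima.
		var totalCPU, totalMem, maxCPU, maxMem int64
		for _, r := range resourceDiscovery() {
			totalCPU += r.CPU
			totalMem += r.Mem
			if r.CPU > maxCPU {
				maxCPU, maxMem = r.CPU, r.Mem
			}
		}
		// Lines 25-28: evaluate, and stop once the minima are met.
		cpu, mem = resourceEvaluation(req, maxCPU, maxMem, totalCPU, totalMem, reqCPU, reqMem)
		if cpu >= s.MinCPU && mem >= s.MinMem+beta {
			return cpu, mem
		}
	}
}
```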
Algorithm 1 then calls the _ResourceEvaluationAlgorithm_ to produce the allocated resources (line 25). To ensure the task pod runs properly in our experimental testbed, where the program inside the task pod is driven by the _Stress_ tool, we add a constant \(\beta\) to the task pod's minimum running resources. This is because the _Stress_ tool in the task pod's program uses \(min_{mem}\) to allocate and release memory for resource loads, so the amount \(min_{mem}+\beta\) is just enough memory for the task pod to run. Finally, the resources returned by the algorithm satisfy the minimum resource demands (line 26).

```
Input: PodLister, NodeLister
Output: ResidualMap
1:  Initialization: nodeReq.cpu, nodeReq.mem, allocatable.cpu, allocatable.mem, residual.cpu, residual.mem <- 0
2:  Get the PodList from PodLister through the Informer
3:  Get the NodeList from NodeLister through the Informer
4:  for node v_i in V do
5:    /* obtain the total resource requests of all pods on v_i */
6:    for each pod p_i in PodList do
7:      if p_i is hosted on v_i then
8:        if p_i.phase in {Running, Pending} then
9:          nodeReq.cpu += p_i.request.cpu
10:         nodeReq.mem += p_i.request.mem
11:       end if
12:     end if
13:   end for
14:   /* obtain the allocatable resources on node v_i */
15:   Obtain node_i from NodeList corresponding to v_i
16:   allocatable.cpu <- node_i.allocatable.cpu
17:   allocatable.mem <- node_i.allocatable.mem
18:   /* acquire the remaining resources on node v_i */
19:   residual.cpu <- allocatable.cpu - nodeReq.cpu
20:   residual.mem <- allocatable.mem - nodeReq.mem
21:   /* encapsulate the Dictionary ResidualMap */
22:   ResidualMap[v_i.ip] <- {residual.cpu, residual.mem}
23: end for
24: return ResidualMap
```
**Algorithm 2** ResourceDiscoveryAlgorithm

### _Resource discovery algorithm_

Algorithm 2 shows how our resource discovery algorithm acquires the remaining resources of the K8s cluster and returns the remaining resource dictionary \(ResidualMap\). As a first step, the algorithm initializes the related parameters to 0 in line 1 and gets the \(PodList\) and \(NodeList\) of the K8s cluster from \(PodLister\) and \(NodeLister\), respectively, through the _Informer_ component. The algorithm then traverses all nodes in the K8s cluster and uses the inner for-loop (lines 6-13) to accumulate the resources occupied by all pods in the Running or Pending state on the current node \(v_{i}\). Lines 14-17 obtain the allocatable CPU and memory resources of \(v_{i}\), and lines 18-20 compute its residual CPU and memory resources. Next, the algorithm stores the result in \(ResidualMap\) under \(v_{i}\)'s key. Once the iteration over all K8s cluster nodes is complete, the algorithm returns the residual resource dictionary \(ResidualMap\).
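Because the _Informer_ exposes standard client-go listers, the discovery step can be realized roughly as follows. This is a sketch under the assumption that per-node residuals are computed as allocatable minus the requests of Running/Pending pods; the `Residual` type and `Discover` function are our names, and KubeAdaptor's actual wiring may differ.

```go
package discovery

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	listers "k8s.io/client-go/listers/core/v1"
)

// Residual holds a node's remaining CPU (millicores) and memory (bytes).
type Residual struct{ CPU, Mem int64 }

// Discover computes the per-node residual resources from the informer caches.
func Discover(podLister listers.PodLister, nodeLister listers.NodeLister) (map[string]Residual, error) {
	pods, err := podLister.List(labels.Everything())
	if err != nil {
		return nil, err
	}
	nodes, err := nodeLister.List(labels.Everything())
	if err != nil {
		return nil, err
	}
	// Sum the requests of Running/Pending pods per node (lines 4-13).
	reqCPU := map[string]int64{}
	reqMem := map[string]int64{}
	for _, p := range pods {
		if p.Status.Phase != v1.PodRunning && p.Status.Phase != v1.PodPending {
			continue
		}
		for _, c := range p.Spec.Containers {
			cpu := c.Resources.Requests[v1.ResourceCPU]
			mem := c.Resources.Requests[v1.ResourceMemory]
			reqCPU[p.Spec.NodeName] += cpu.MilliValue()
			reqMem[p.Spec.NodeName] += mem.Value()
		}
	}
	// Residual = allocatable - requested, per node (lines 14-22).
	res := make(map[string]Residual, len(nodes))
	for _, n := range nodes {
		allocCPU := n.Status.Allocatable[v1.ResourceCPU]
		allocMem := n.Status.Allocatable[v1.ResourceMemory]
		res[n.Name] = Residual{
			CPU: allocCPU.MilliValue() - reqCPU[n.Name],
			Mem: allocMem.Value() - reqMem[n.Name],
		}
	}
	return res, nil
}
```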
### _Resource evaluation algorithm_

Algorithm 3 elaborates the resource evaluation process in detail. It takes \(task^{req}\), \(Re_{max}^{cpu}\), \(Re_{max}^{mem}\), \(totalResidual.cpu\), \(totalResidual.mem\), \(request.cpu\), and \(request.mem\) as input and returns the allocated CPU and memory resources.

TABLE I: Major notations used in the adaptive resource allocation scheme

| Notation | Meaning |
| --- | --- |
| \(v_{i}\) | a K8s node (VM), \(v_{i}\in V\) |
| \(s_{i,j}\) | the \(j\)th task in workflow \(w_{i}\), \(1\leq i\leq k\) and \(1\leq j\leq n\) |
| \(allocated_{cpu}\) | the CPU resource amount allocated by the Adaptive Resource Allocation Algorithm |
| \(allocated_{mem}\) | the memory resource amount allocated by the Adaptive Resource Allocation Algorithm |
| \(request.cpu\) | the accumulated CPU resource amount over competing task requests |
| \(request.mem\) | the accumulated memory resource amount over competing task requests |
| \(Re_{max}^{cpu}\) | the maximum remaining CPU resource amount among K8s cluster nodes |
| \(Re_{max}^{mem}\) | the maximum remaining memory resource amount among K8s cluster nodes |
| \(totalResidual.cpu\) | the total residual CPU resource across K8s cluster nodes |
| \(totalResidual.mem\) | the total residual memory resource across K8s cluster nodes |
| \(task^{req}\) | the current task request with respect to \(s_{i,j}\) |
| \(task_{i,j}^{redis}\) | a record of task-state data from the Redis database |
| \(PodLister\) | an interface for acquiring the pod list in the _Informer_ component |
| \(NodeLister\) | an interface for acquiring the node list in the _Informer_ component |
| \(ResidualMap\) | a data dictionary storing the remaining resources (CPU and memory) of each node |
| \(nodeReq.cpu\) | the accumulated CPU resource requests of all pods on a node |
| \(nodeReq.mem\) | the accumulated memory resource requests of all pods on a node |
| \(allocatable.cpu\) | the allocatable CPU resource on a node |
| \(allocatable.mem\) | the allocatable memory resource on a node |
| \(residual.cpu\) | the residual CPU resource on a node |
| \(residual.mem\) | the residual memory resource on a node |
| \(pod_{i}\) | a data struct from PodList, which contains many key fields about a container's features |
| \(cpu_{cut}\) | the CPU resource amount allocated to a task request based on Eq. (9) |
| \(mem_{cut}\) | the memory resource amount allocated to a task request based on Eq. (9) |
| \(\alpha\) | a proportional value derived from experience, \(\alpha\in(0,1)\) |
| \(\beta\) | a constant value derived from experience, \(\beta\geq 20\) |

As mentioned in (5.1), some task pods to be launched during the requested task pod's lifecycle will compete for computational resources with the current task request \(task^{req}\); we therefore use a resource scaling method that allocates resources based on the proportion of the total remaining resources to the total amount of resource requests, defined as follows:

\[cpu_{cut}=(task^{req}.cpu)\cdot\frac{totalResidual.cpu}{request.cpu},\qquad mem_{cut}=(task^{req}.mem)\cdot\frac{totalResidual.mem}{request.mem}. \tag{9}\]

In addition, we define a resource allocation factor \(\alpha\) for the node with the maximum residual resources (CPU or memory). Based on extensive experimental evaluation, we use \(\alpha=0.8\), which means that the algorithm allocates only \(80\%\) of a node's remaining resources in an insufficient-residual-resource scenario, reserving \(20\%\) of the residual resources for the node's other loads.
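Eq. (9) and the \(\alpha\) factor translate directly into Go. The following sketch uses names mirroring Table I rather than actual source identifiers.

```go
// alpha reserves 20% of a node's residual resources for its other loads.
const alpha = 0.8

// scaleCut implements Eq. (9): scale the task's request by the ratio of
// total residual resources to the total amount of competing requests.
func scaleCut(taskReqCPU, taskReqMem, totalResidualCPU, totalResidualMem,
	requestCPU, requestMem float64) (cpuCut, memCut float64) {
	cpuCut = taskReqCPU * totalResidualCPU / requestCPU
	memCut = taskReqMem * totalResidualMem / requestMem
	return cpuCut, memCut
}
```

Note that when the cluster residual falls short of the aggregated requests, the ratio is below one, so each competing task pod is scaled down proportionally instead of being starved.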
```
Input: task^req, Re_max^cpu, Re_max^mem, totalResidual.cpu, totalResidual.mem, request.cpu, request.mem
Output: allocated.cpu, allocated.mem
1:  Get cpu_cut and mem_cut through Eq. (9)
2:  Define conditions: A1: request.cpu < totalResidual.cpu; A2: request.mem < totalResidual.mem;
    B1: task^req.cpu < Re_max^cpu; B2: task^req.mem < Re_max^mem;
    C1: cpu_cut < Re_max^cpu; C2: mem_cut < Re_max^mem
3:  Define the symbol ! as the negation of a condition and the symbol /\ as the logical and
4:  /* (1) The remaining resources are sufficient */
5:  if A1 /\ A2 then
6:    if B1 /\ B2 then allocated.cpu <- task^req.cpu; allocated.mem <- task^req.mem
7:    else if !B1 /\ B2 then allocated.cpu <- Re_max^cpu * alpha; allocated.mem <- task^req.mem
8:    else if B1 /\ !B2 then allocated.cpu <- task^req.cpu; allocated.mem <- Re_max^mem * alpha
9:    else allocated.cpu <- Re_max^cpu * alpha; allocated.mem <- Re_max^mem * alpha
10: end if
11: /* (2) The remaining CPU resource is insufficient */
12: if !A1 /\ A2 then
13:   if C1 /\ B2 then allocated.cpu <- cpu_cut; allocated.mem <- task^req.mem
14:   else if !C1 /\ B2 then allocated.cpu <- Re_max^cpu * alpha; allocated.mem <- task^req.mem
15:   else if C1 /\ !B2 then allocated.cpu <- cpu_cut; allocated.mem <- Re_max^mem * alpha
16:   else allocated.cpu <- Re_max^cpu * alpha; allocated.mem <- Re_max^mem * alpha
17: end if
18: /* (3) The remaining memory resource is insufficient */
19: if A1 /\ !A2 then
20:   if B1 /\ C2 then allocated.cpu <- task^req.cpu; allocated.mem <- mem_cut
21:   else if !B1 /\ C2 then allocated.cpu <- Re_max^cpu * alpha; allocated.mem <- mem_cut
22:   else if B1 /\ !C2 then allocated.cpu <- task^req.cpu; allocated.mem <- Re_max^mem * alpha
23:   else allocated.cpu <- Re_max^cpu * alpha; allocated.mem <- Re_max^mem * alpha
24: end if
25: /* (4) Both the remaining CPU and memory resources are insufficient */
26: if !A1 /\ !A2 then
27:   allocated.cpu <- cpu_cut; allocated.mem <- mem_cut
28: end if
29: return allocated.cpu, allocated.mem
```
**Algorithm 3** ResourceEvaluationAlgorithm

**Sufficient remaining resources.** When conditions \(A_{1}\wedge A_{2}\) hold, the algorithm grants the requested resources directly and replaces either resource dimension by \(Re_{max}\cdot\alpha\) whenever the request exceeds the maximum node residual (lines 5-10).

**Insufficient residual CPU resource.** When the total residual CPU resource across the K8s cluster cannot satisfy the total resource demand of concurrent tasks, conditions \(C_{1}\) and \(B_{2}\) are considered. The algorithm acquires \(cpu_{cut}\) through the resource scaling method (Eq. (9)). In the case of conditions \(C_{1}\wedge B_{2}\), we allocate resources according to \(cpu_{cut}\) and \(task^{req}.mem\) (line 13). When the maximum remaining CPU resource of a node cannot accommodate \(cpu_{cut}\), the algorithm adopts \(Re_{max}^{cpu}\cdot\alpha\) as the allocated CPU resource; since the memory capacities are sufficient, it grants the current task's memory request \(task^{req}.mem\) (line 14). In the case of \(C_{1}\wedge\neg B_{2}\), the algorithm allocates \(cpu_{cut}\) CPU and \(Re_{max}^{mem}\cdot\alpha\) memory, because the current task's memory request exceeds the maximum residual memory on any cluster node (line 15). Otherwise, the algorithm allocates both resources according to the \(\alpha\) scale factor of the largest node's remaining resources (line 16).
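Because the CPU and memory decisions are independent within each case, the four branches of Algorithm 3 (including the memory-insufficient cases discussed next) collapse into a compact rule. The following Go sketch reuses `scaleCut` and `alpha` from the sketch above; `TaskRequest` and the function name are illustrative, not actual source identifiers.

```go
// TaskRequest holds the current task's requested CPU and memory.
type TaskRequest struct{ CPU, Mem float64 }

// resourceEvaluation condenses Algorithm 3: grant the preferred amount when
// it fits on the largest node, otherwise fall back to alpha times that
// node's residual.
func resourceEvaluation(req TaskRequest, maxCPU, maxMem, totalCPU, totalMem,
	reqCPU, reqMem float64) (cpu, mem float64) {
	cpuCut, memCut := scaleCut(req.CPU, req.Mem, totalCPU, totalMem, reqCPU, reqMem)
	a1, a2 := reqCPU < totalCPU, reqMem < totalMem // cluster-wide sufficiency
	b1, b2 := req.CPU < maxCPU, req.Mem < maxMem   // request fits on one node
	c1, c2 := cpuCut < maxCPU, memCut < maxMem     // scaled cut fits on one node

	fit := func(ok bool, want, capped float64) float64 {
		if ok {
			return want
		}
		return capped
	}
	switch {
	case a1 && a2: // (1) sufficient: grant the request, capped per node
		cpu, mem = fit(b1, req.CPU, maxCPU*alpha), fit(b2, req.Mem, maxMem*alpha)
	case !a1 && a2: // (2) CPU insufficient: scale CPU down
		cpu, mem = fit(c1, cpuCut, maxCPU*alpha), fit(b2, req.Mem, maxMem*alpha)
	case a1 && !a2: // (3) memory insufficient: symmetric to case (2)
		cpu, mem = fit(b1, req.CPU, maxCPU*alpha), fit(c2, memCut, maxMem*alpha)
	default: // (4) both insufficient: scale both down
		cpu, mem = cpuCut, memCut
	}
	return cpu, mem
}
```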
**Insufficient residual memory resource.** When the total residual memory resource across the K8s cluster cannot satisfy the total resource demand of concurrent tasks, conditions \(B_{1}\) and \(C_{2}\) are considered. The algorithm acquires \(mem_{cut}\) through the resource scaling method (Eq. (9)). The operations under conditions \(B_{1}\wedge C_{2}\), \(\neg B_{1}\wedge C_{2}\), \(B_{1}\wedge\neg C_{2}\), and \(\neg B_{1}\wedge\neg C_{2}\) mirror those above, except that here they concern the memory resource (lines 19-24).

**Insufficient residual CPU and memory resources.** In the case of \(\neg A_{1}\wedge\neg A_{2}\), meaning that the total remaining CPU and memory resources across the K8s cluster cannot satisfy the CPU and memory requests of concurrent tasks, the algorithm allocates CPU and memory according to \(cpu_{cut}\) and \(mem_{cut}\) obtained by the resource scaling method (lines 26-28). Finally, the _ResourceEvaluationAlgorithm_ returns the allocated resources \(allocated.cpu\) and \(allocated.mem\).

## 6 Experimental Evaluation

In the following, we evaluate the proposed ARAS on different evaluation metrics and discuss its benefits under three distinct arrival patterns compared with the baseline.

### _Experimental setup and design_

For the evaluation, we apply the setting employed in our former work [21] and adapt it to the proposed ARAS within the KubeAdaptor discussed here. In this subsection, we briefly introduce the experimental scenarios, workflow examples, workflow instantiation, workflow arrival patterns, evaluation metrics, and the baseline algorithm.

#### 6.1.1 Experimental scenarios

The K8s cluster used in our experiments consists of one Master node and six worker nodes. Each node is equipped with an 8-core AMD EPYC 7742 2.2GHz CPU and 16GB of RAM, running Ubuntu 20.04, K8s v1.19.6, and Docker v18.09.6. The Redis database v5.0.7 is installed on the Master node. The _Workflow Injector Module_ and _Containerized Workflow Builder_ are containerized and deployed into the K8s cluster through _Service_ 11 and _Deployment_ 12. We explore the performance of the proposed ARAS and the baseline by running four scientific workflows on the K8s cluster.

Footnote 11: [https://kubernetes.io/docs/concepts/](https://kubernetes.io/docs/concepts/)

Footnote 12: [https://kubernetes.io/docs/concepts/workloads/](https://kubernetes.io/docs/concepts/workloads/)

#### 6.1.2 Workflow examples

To verify the adaptability of our proposed ARAS within the KubeAdaptor, four scientific workflows, namely Montage (astronomy), Epigenomics (genome sequencing), CyberShake (earthquake science), and LIGO Inspiral (gravitational physics), are run on the K8s cluster in a containerized manner [3]. We make a few tweaks to the workflow structures and add virtual entrance and exit nodes to form workflow structures described by DAG diagrams. For each type of scientific workflow, we uniformly adopt a small-scale workflow (about 20 tasks) in our experiments, as shown in Fig. 4, derived from the Pegasus Workflow repository [37]. Structurally, the four workflow types cover all the common structural features regarding composition and components (in-tree, out-tree, fork-join, and pipeline), which serves to illustrate the universality and complexity of workflows. Herein, we only consider the topologies of the four scientific workflows and do not focus on real-world data processing within the tasks, which does not affect verifying the adaptability of our ARAS.
For ease of performance comparison among resource allocation solutions, we assume that the four classes of scientific workflows consist of the same tasks. Each node of the workflow DAGs uses a resource load (CPU and memory utilization) and a service runtime to simulate workflow tasks in the experiments. Note that the KubeAdaptor schedules workflow tasks topologically in a top-down fashion according to task dependencies.

#### 6.1.3 Workflow instantiation

As for the resource load in workflow tasks, we employ several parameters together with the _Stress_ tool to simulate scientific workflow tasks. In each task program, we use the _Stress_ 13 tool to set several CPU forks, a memory of \(1000Mi\) (equal to \(mem_{min}\) of Eq. (1)), and a random duration (defined by the user ahead of time in Eq. (1)). The CPU forking and memory allocation operations in the task pod last twice as long as \(duration\). The total duration of each task pod is random and falls between \(10s\) and \(20s\). We then pack the Python application with the _Stress_ program into a task image file through the Docker Engine 14, and store the task image file in a local Harbor 15 or the remote Docker Hub repository 16. Container parameters (refer to Eq. (1)) defined in the ConfigMap file of the _Workflow Injection Module_ are imported into the task container hosted in the task pod. For the resource settings within a task pod, we uniformly set the resource _requests_ and _limits_ to \(2000\) millicores (i.e., \(2000m\)) of CPU and \(4000Mi\) of memory. Note that the _requests_ field has the same parameters as the _limits_ field, which ensures that this task pod has the highest priority class, namely _Guaranteed_ [38].

Footnote 13: [https://linux.die.net/man/1/stress](https://linux.die.net/man/1/stress)

Footnote 14: [https://github.com/IsaacKlop/task-emulator](https://github.com/IsaacKlop/task-emulator)

#### 6.1.4 Workflow arrival patterns

We make use of three distinct workflow request arrival patterns, generated as sketched after this list:

**Constant Arrival Scenario:** In this scenario, workflow requests arrive at a constant rate. The _Workflow Injector Module_ together with the _CLI_ sends \(5\) workflow requests simultaneously to the _Containerized Workflow Builder_ every \(300\) seconds, i.e., \(y=5\), six times for a total of \(30\) workflows. This arrival curve is depicted in Fig. 5(a).

**Linear Arrival Scenario:** In this scenario, the workflow requests are injected into the _Containerized Workflow Builder_ following a linearly rising function, i.e., \(y=k*x+d\), where \(y\) is the number of concurrent workflow requests and \(d\) is the initial value \(2\). The number of concurrent workflow requests increases by \(k=2\) every \(300\) seconds. Requests are sent five times for a total of \(30\) workflows. This arrival curve is depicted in Fig. 5(b).

**Pyramid Arrival Scenario:** In this scenario, workflow requests are sent to the _Containerized Workflow Builder_ following a pyramid-like function. We start with a small number of concurrent workflow requests (equal to \(2\)) and let it grow to a randomly selected larger number (equal to \(6\) for each type of workflow), as can be seen in Fig. 5(c). The number of concurrent workflow requests grows by \(2\) every \(300\) seconds until the peak is reached. Once the peak is reached, we immediately reduce this number to the small initial value in the same manner and repeat this process until the total number of workflow requests is reached (herein, \(34\)).
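For reproducibility, the burst sizes of the three arrival patterns can be generated in a few lines of Go; the sketch below merely encodes the rules stated above (the function names are ours).

```go
// constantBursts: 6 bursts of 5 workflows, 30 in total.
func constantBursts() []int { return []int{5, 5, 5, 5, 5, 5} }

// linearBursts follows y = k*x + d with k = 2 and d = 2: [2 4 6 8 10], 30 total.
func linearBursts() []int {
	out := []int{}
	for x, sum := 0, 0; sum < 30; x++ {
		y := 2*x + 2
		out = append(out, y)
		sum += y
	}
	return out
}

// pyramidBursts rises 2 -> 6 by steps of 2, falls back, and repeats
// until 34 workflows are injected: [2 4 6 4 2 4 6 4 2].
func pyramidBursts() []int {
	cycle := []int{2, 4, 6, 4}
	out, sum := []int{}, 0
	for i := 0; sum < 34; i++ {
		y := cycle[i%len(cycle)]
		out = append(out, y)
		sum += y
	}
	return out
}
```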
The deployment of these three scenarios aims to maximize coverage of the ever-changing resource needs and sudden peaks of workflow requests in a production environment. Even though there is some predictability in the Constant and Linear Arrival scenarios, the Pyramid function follows an unpredictable arrival pattern.

#### 6.1.5 Evaluation metrics

To evaluate our ARAS, we conduct each arrival pattern three times and analyze the results against the following quantitative metrics:

**Total Duration of All Workflows (in Minutes):** This metric is the average total duration of all injected workflows, i.e., the elapsed time from the arrival of the first workflow request to the moment when the last workflow request is complete.

**Average Workflow Duration (in Minutes):** This metric reflects the average execution time of an individual workflow, i.e., the time each workflow takes from the start of its first task to the end of its last.

**Resource Usage:** Resource usage comprises CPU and memory utilization, reflecting the average resource utilization throughout the total duration of all injected workflows across the K8s cluster. The greater the resource utilization, the closer we are to our optimization goal. The resource usage comparison covering four types of scientific workflows against the baseline algorithm further verifies the better performance of our ARAS solution.

#### 6.1.6 Baseline

In the experiments, we use our recent resource allocation strategy [21] as the baseline method; it does not take into account the potential future task requests within the current task's lifecycle. The baseline's resource allocation strategy follows First Come First Serve (FCFS) and relies on the adequacy of the residual resources on cluster nodes: if they are sufficient, the resource allocation completes; otherwise, it waits for other task pods to complete and release resources before reattempting allocation for the current task request.

### _Results and analysis_

To fully evaluate the KubeAdaptor together with our ARAS, we present a general evaluation and an evaluation of resource allocation failure, and discuss the evaluation results. To minimize external influences, our K8s cluster carries no other application load, and we execute each evaluation three times at different times of day.

#### 6.2.1 General evaluation

In the following, we use the KubeAdaptor with our ARAS and with the baseline to run the four scientific workflows under three distinct workflow arrival patterns, three times each, and compare our ARAS with the baseline on the experimental results. We calculate the mean value and the standard deviation \(\delta\) for all metrics. Table II presents the resulting mean values and standard deviations from the conducted evaluation runs. In general, the observed standard deviation is low and therefore indicates a low dispersion in the results of the different evaluations. In Table II, "Adaptive" denotes our ARAS, while "Baseline" marks the application of the baseline algorithm (Section 6.1.6). The interval between two workflow request bursts is set to \(300\) seconds for all three arrival patterns, and the number of injected workflows for the three arrival patterns is set to \(30\), \(30\), and \(34\), respectively. Overall, our ARAS is superior to the baseline algorithm on every observed metric for all four workflow types under the three distinct workflow arrival patterns.

Fig. 4: The topology diagram of the four scientific workflow applications.
In addition, the CPU and memory resources set in the task pod are constant, the allocatable cluster resources are fixed, and the resource scaling method scales resources down according to Eq. (9). Hence, under every arrival pattern and either resource allocation algorithm, the CPU and memory utilization rates are the same, and the two resource usage curves in each workflow arrival pattern are similar. In the following, we elaborate on the evaluation metrics of each workflow arrival pattern in the light of the workflow types. Figs. 5 to 8 present the average evaluation results by depicting the arrival patterns (workflow requests) over time and the amount of used computational resources (CPU and memory) until all workflow requests have been served. Note that the used-resource curve in each workflow arrival pattern usually ends later than the workflow request curve. This can be traced to the fact that each workflow has a deadline in the future, and some workflows are still waiting in the queue for execution.

**Montage**: In our experimental setup, a small-scale Montage workflow consists of \(21\) tasks (refer to Fig. 4(a)). Compared with the baseline, for the total duration of all workflows in Table II, our ARAS leads to time savings of \(9.8\%\) for the constant arrival pattern and \(26.06\%\) for the linear arrival pattern, while in the pyramid arrival pattern the time savings amount to \(9.8\%\). Similarly, for the average workflow duration, our ARAS, in comparison to the baseline, gains time savings of \(26.4\%\), \(52.3\%\), and \(38.5\%\) for the three workflow arrival patterns from left to right, respectively. Fig. 5 broadly reflects the consistency of the above evaluation data with the total duration of all injected workflows.

TABLE II: Mean values and standard deviations (\(\delta\)) of the evaluation metrics under the three arrival patterns ("Adaptive" denotes our ARAS; "—": not available). Number of workflow requests: 30 (constant), 30 (linear), 34 (pyramid); interval between two request bursts: 300 s.

| Workflow | Metric | Constant (Adaptive) | Constant (Baseline) | Linear (Adaptive) | Linear (Baseline) | Pyramid (Adaptive) | Pyramid (Baseline) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Montage | Total Duration of All Workflows (min) | — (\(\delta=0.21\)) | — (\(\delta=0.26\)) | 26.95 (\(\delta=0.38\)) | — (\(\delta=6.31\)) | — (\(\delta=2.46\)) | — (\(\delta=1.74\)) |
| | Average Workflow Duration (min) | 5.74 (\(\delta=0.49\)) | 7.80 (\(\delta=0.36\)) | 5.41 (\(\delta=0.26\)) | 11.33 (\(\delta=4.28\)) | 7.22 (\(\delta=1.36\)) | 11.73 (\(\delta=0.88\)) |
| | CPU Resource Usage | 0.28 (\(\delta=0.00\)) | 0.27 (\(\delta=0.02\)) | 0.35 (\(\delta=0.01\)) | 0.31 (\(\delta=0.07\)) | 0.26 (\(\delta=0.03\)) | 0.20 (\(\delta=0.01\)) |
| | Memory Resource Usage | 0.28 (\(\delta=0.00\)) | 0.27 (\(\delta=0.13\)) | 0.35 (\(\delta=0.01\)) | 0.31 (\(\delta=0.07\)) | 0.26 (\(\delta=0.03\)) | 0.20 (\(\delta=0.01\)) |
| Epigenomics | Total Duration of All Workflows (min) | 30.55 | 39.06 | — | — | — | — |
| | Average Workflow Duration (min) | 4.24 (\(\delta=0.05\)) | 9.35 (\(\delta=1.56\)) | 9.81 (\(\delta=5.11\)) | 16.53 (\(\delta=4.41\)) | 9.65 (\(\delta=3.33\)) | 19.41 (\(\delta=6.04\)) |
| | CPU Resource Usage | 0.34 (\(\delta=0.02\)) | 0.27 (\(\delta=0.01\)) | 0.32 (\(\delta=0.06\)) | 0.25 (\(\delta=0.00\)) | 0.21 (\(\delta=0.01\)) | 0.20 (\(\delta=0.01\)) |
| CyberShake | Total Duration of All Workflows (min) | 38.30 (\(\delta=3.72\)) | 50.29 (\(\delta=5.29\)) | 34.06 (\(\delta=6.16\)) | 49.46 (\(\delta=1.18\)) | 46.76 (\(\delta=4.02\)) | 66.41 (\(\delta=6.56\)) |
| | Average Workflow Duration (min) | 9.19 (\(\delta=3.72\)) | 17.29 (\(\delta=2.89\)) | 9.41 (\(\delta=4.27\)) | 20.61 (\(\delta=0.86\)) | 4.94 (\(\delta=0.207\)) | 19.47 (\(\delta=6.50\)) |
| | CPU Resource Usage | 0.26 (\(\delta=0.03\)) | 0.24 (\(\delta=0.02\)) | 0.27 (\(\delta=0.04\)) | 0.24 (\(\delta=0.01\)) | 0.22 (\(\delta=0.03\)) | — (\(\delta=0.01\)) |
| LIGO | Total Duration of All Workflows (min) | 30.82 (\(\delta=0.09\)) | 52.17 (\(\delta=0.44\)) | 44.02 (\(\delta=7.68\)) | 53.87 (\(\delta=7.88\)) | 45.26 (\(\delta=0.15\)) | 63.56 (\(\delta=1.33\)) |
| | CPU Resource Usage | 0.40 (\(\delta=0.00\)) | 0.24 (\(\delta=0.02\)) | 0.28 (\(\delta=0.08\)) | 0.23 (\(\delta=0.02\)) | 0.31 (\(\delta=0.01\)) | 0.23 (\(\delta=0.00\)) |
| | Memory Resource Usage | 0.40 | 0.24 | — | — | — | — |
Our ARAS outperforms the baseline algorithm by \(1\%\) and \(6\%\), respectively, in the other two patterns. It can be traced back to the fact that over time the linear arrival pattern requests more task pods to be performed in parallel in response to more and more workflow requests and gains a maximum resource usage rate. Looking at the resource usage curves (CPU and memory) of three workflow arrival patterns in Fig. 5, the resource usage peak of our ARAS is higher than that of the baseline algorithm for most of the time. It can be further observed that the peak of the resource usage curve is consistent with the centralized arrival of workflow requests. It is because our ARAS can use a resource scaling strategy to adjust the resource limits of potential future task requests within the current task's lifecycle. This scheme launches task pods as many as possible on the premise of the smooth operation of task pods, thus speeding up the execution efficiency of workflows. However, the baseline algorithm depends on the adequacy of residual resources on cluster nodes. In high concurrency scenarios, the insufficient remaining resources of nodes will make the baseline algorithm lead to endless waiting and much time-wasting and prolong the total duration of workflows and the average duration of a single workflow. **Epigenomics**: We adopt a small-scale Epigenomics workflow with \(20\) tasks in experimental evaluations. As can be seen from Fig. 4(b), the topology of Epigenomics workflows is mostly pipeline structure. As for the total duration of all workflows, our ARAS obtains time savings of \(21.8\%\) for the constant arrival pattern, time savings of \(21.4\%\) for the linear arrival pattern, and time savings of \(17.2\%\) for the pyramid arrival pattern compared with the baseline. As for average workflow duration, our ARAS, in comparison to the baseline, gains time savings of \(54.65\%\), time savings of \(40.65\%\), and time savings of \(50.28\%\) for three arrival patterns from left to right, respectively. Note that the Epigenomics workflow is substantially more significant in performance improvement than the Montage workflow in terms of average total workflow duration and average duration of individual workflow for three arrival patterns. Because the pipeline topology in the Epigenomics workflow is better suited for high concurrency scenarios, our ARAS scheme saves more time than the baseline in response to continuous workflow requests. Regarding the resource usage in Fig. 6, the constant arrival pattern features a maximum value of \(34\%\) for our ARAS, \(7\%\) higher than the baseline. The linear arrival pattern features a value of \(32\%\) for our ARAS, which is also \(7\%\) higher than the baseline. The higher resource utilization of these two patterns can be attributed to the fact that \(30\) workflows were injected in the first \(25\) minutes, totaling more than \(600\) tasks. The higher density of workflow requests results in higher CPU and memory resource utilization. In addition, the resource scaling method enables our ARAS to adjust the resource limits of the task pods in time and cope with the continuous workflow requests on the premise Fig. 5: The CPU and memory resource usage rate under three distinct arrival patterns for Montage workflows. of the normal execution of workflow tasks. It also shows that the peak time of resource usage in our ARAS is longer than that of the baseline. 
The baseline algorithm waits for resources to be released whenever the cluster nodes run short, so it consumes too much time, resulting in a longer total workflow duration and a longer average duration per workflow.

**CyberShake**: A small-scale CyberShake workflow in our experiments comprises \(22\) tasks (refer to Fig. 4(c)). Our ARAS, in comparison to the baseline, leads to time savings of \(23.8\%\) (constant arrival pattern), \(31.1\%\) (linear arrival pattern), and \(29.6\%\) (pyramid arrival pattern) for the total duration of all workflows. Similarly, for the average workflow duration, our ARAS gains time savings of \(46.85\%\), \(54.34\%\), and \(74.63\%\) for the three arrival patterns from left to right, respectively. Due to its topology with smaller depth and greater width, the CyberShake workflow features a higher degree of inherent parallelism, which makes it easier for our ARAS to exploit its advantages in response to continuous workflow request arrivals. Compared with the baseline algorithm, the ARAS shows prominent performance advantages on the metrics of total workflow duration and duration of a single workflow. For the CPU and memory resource usage, our ARAS obtains \(26\%\), \(27\%\), and \(22\%\) for the three distinct arrival patterns, respectively, slightly higher than the baseline. Combined with the resource utilization curves in Fig. 7, it can be observed that our ARAS, benefiting from the resource scaling method, outperforms the baseline on all performance metrics under the three different workflow arrival patterns.

**LIGO**: A small-scale LIGO workflow in our experiments consists of \(23\) tasks (refer to Fig. 4(d)). Compared with the baseline, our ARAS gains time savings of \(40.92\%\) (constant arrival pattern), \(18.28\%\) (linear arrival pattern), and \(28.79\%\) (pyramid arrival pattern) for the total duration of all workflows. Similarly, for the average workflow duration, our ARAS, in comparison to the baseline, gains time savings of \(79.86\%\), \(42.21\%\), and \(70.15\%\) for the three arrival patterns from left to right, respectively. With their inherently concurrent topology, LIGO workflows, like the Epigenomics and CyberShake workflows, enable our ARAS to perform better than the baseline algorithm on the total workflow duration and individual workflow duration metrics under the three different arrival patterns. For the resource usage in Fig. 8, our ARAS obtains \(40\%\) for the constant arrival pattern, \(28\%\) for the linear arrival pattern, and \(31\%\) for the pyramid arrival pattern, much higher than the baseline algorithm. In combination with the resource utilization curve trends, the resource scaling strategy and the workflow's inherently concurrent topology once again help our ARAS outperform the baseline under the three different workflow arrival patterns.

#### 6.2.2 Evaluation of resource allocation failure

In this evaluation, we analyze the behavior of the KubeAdaptor in a resource allocation failure situation. Such a situation arises when our ARAS allocates a resource quota of less than \(min_{mem}+\beta\) through the resource scaling method in a high-concurrency scenario. The task pods then cannot execute smoothly and turn to the OOMKilled status due to insufficient memory resources. Consequently, the OOMKilled task pods stall the execution of their workflows.
The source code for the evaluation of resource allocation failure is available at 17.

Footnote 17: [https://github.com/CloudControlSystems/OOM-Test](https://github.com/CloudControlSystems/OOM-Test)

Fig. 6: The CPU and memory resource usage rate under three distinct arrival patterns for Epigenomics workflows.

In the following, we investigate how the KubeAdaptor responds to OOMKilled task pods, reallocates resources to execute task pods, and resumes workflow execution under our ARAS. For this evaluation, we inject \(10\) Montage workflows into our K8s cluster (Section 6.1.1) at a time under the constant arrival pattern. We fine-tune \(min_{cpu}\) and \(min_{mem}\) to be less than the amount of memory required by the Stress tool in the task pod. Subsequently, our ARAS tries to reduce the allocated resource quota through the resource scaling method in response to continuous workflow requests. When the allocated resource falls below \(min_{mem}+\beta\), OOMKilled task pods appear due to the shortage of running resources. Fig. 9 depicts the results of this evaluation. In Fig. 9, the first annotation marker, labeled OOMKilled, signals when the current task pod encounters the OOM (Out of Memory) event, and the other annotation marker, labeled Reallocation, signals when the current task pod is regenerated using the reallocated computational resources. As can be seen in Fig. 9, at the beginning (second 0), our ARAS uses the resource scaling method to allocate a CPU of \(1048\) millicores and a memory of \(2009Mi\). In this evaluation, the minimum memory for a task pod to run, i.e., the amount of memory operated on by the _Stress_ tool in the task pod, is set to \(2000Mi\). Herein, we only focus on memory resources: memory is an incompressible resource, and insufficient memory triggers the task pod's OOMKilled status, whereas CPU, as a compressible resource, does not. Once the allocated memory resource fails to reach \(min_{mem}+\beta\) (i.e., \(2000Mi+20Mi\)), the current task pod turns to OOMKilled at \(66s\). Meanwhile, the workflow containing the current OOMKilled task pod also terminates execution. The KubeAdaptor captures the OOMKilled task pod and deletes it at \(66s\). In our experimental evaluation, up to \(210\) task pods (\(10\) Montage workflows) undergo frequent creation and deletion operations, which delays the deletion of the OOMKilled task pod. At \(97s\), the KubeAdaptor triggers the regeneration of the current task pod, reallocates computational resources, and launches the task pod. Since the second allocation of resources is sufficient for the smooth execution of the task pod (\(1849m\) CPU and \(3560Mi\) memory), the task pod completes at \(181s\). At \(258s\), the KubeAdaptor deletes the completed task pod. The KubeAdaptor equipped with our ARAS can thus watch OOMKilled events, delete the OOMKilled task pods, reallocate computational resources, and regenerate these task pods, ensuring the continuous execution of workflows. In production practice, users inevitably misestimate the resource quota of the main program inside workflow tasks, resulting in a large number of OOMKilled task pods and the termination of workflow execution. These countermeasures ensure the continuous execution of workflows and keep the KubeAdaptor stable and robust. They also reflect the self-healing and self-configuration abilities of the KubeAdaptor (mentioned in 4.3).
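Detecting OOMKilled task pods via the List-Watch mechanism maps naturally onto a client-go informer event handler. The sketch below shows one standard way to do this; the function name and handler wiring are illustrative assumptions rather than KubeAdaptor's actual implementation, but the "OOMKilled" termination reason is the real signal exposed by K8s.

```go
package watcher

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
)

// WatchOOMKilled invokes onOOM for every pod that has a container
// terminated with reason "OOMKilled".
func WatchOOMKilled(factory informers.SharedInformerFactory, onOOM func(*v1.Pod)) {
	informer := factory.Core().V1().Pods().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, newObj interface{}) {
			pod, ok := newObj.(*v1.Pod)
			if !ok {
				return
			}
			for _, cs := range pod.Status.ContainerStatuses {
				if cs.State.Terminated != nil && cs.State.Terminated.Reason == "OOMKilled" {
					// Delete the pod, reallocate resources, and recreate it,
					// as described in the text.
					onOOM(pod)
					return
				}
			}
		},
	})
}
```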
#### 6.2.3 Concluding discussion

Finally, it can be observed that the KubeAdaptor with our ARAS consistently achieves better results on each metric (Section 6.1.5). From Montage to LIGO workflows, our ARAS outperforms the baseline on the different metrics under the three distinct workflow arrival patterns. Most of the time savings in the total workflow duration and the average duration of an individual workflow result from the fact that the resource scaling method enables our ARAS to maximize resource utilization on cluster nodes according to our optimization functions while ensuring the smooth running of workflow pods. In addition, workflow topologies with concurrent characteristics also play a positive role. Concerning resource allocation failure and workflow recovery after termination, we have shown in Section 6.2.2 that the KubeAdaptor is able to watch the state changes of task pods in real time, delete the OOMKilled task pods, and reallocate computational resources for them, followed by the re-creation of each OOMKilled task pod and the recovery of workflow execution.

Fig. 7: The CPU and memory resource usage rate under three distinct arrival patterns for CyberShake workflows.

## 7 Conclusion

In this paper, we propose an ARAS for our tailored workflow management engine. With the novel architecture of the KubeAdaptor and its integration with K8s, our ARAS enables the KubeAdaptor to maximize resource utilization through the resource scaling method in response to complex and changing workflow requests. Experimental evaluations show that our ARAS, from Montage to LIGO workflows, obtains better performance than the baseline algorithm on various metrics under different workflow arrival patterns (Table II). Furthermore, we have shown in Section 6.2.2 that the KubeAdaptor detects and handles resource allocation failure situations. The self-healing and self-configuration abilities of the KubeAdaptor (mentioned in 4.3) are also fully verified. In future work, we intend to use the KubeAdaptor to analyze different resource allocation algorithms and to investigate deep reinforcement learning methods for cloud resource allocation for cloud workflows. In addition, we will study resource allocation strategies suitable for a cloud-edge cooperation environment and provide a practical solution for cloud-edge task scheduling.

## Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant No. 61873030 and No. 62002019), and the Beijing Institute of Technology Research Fund Program for Young Scholars.
2302.12101
Lessons learned from the NEAR experiment and prospects for the upcoming mid-IR HCI instruments
The mid-infrared (IR) regime is well suited to directly detect the thermal signatures of exoplanets in our solar neighborhood. The NEAR experiment, a demonstration of high-contrast imaging (HCI) capability at ten microns, can reach sub-mJy detection sensitivity in a few hours of observation time, which is sufficient to detect a few Jupiter-mass planets in nearby systems. One of the big limitations for HCI in the mid-IR is the thermal sky background. In this work, we show that precipitable water vapor (PWV) is the principal contributor to the thermal sky background and to the science PSF quality. In the presence of high PWV, the HCI performance is significantly degraded in the background-limited regime.
Prashant Pathak, Markus Kasper, Olivier Absil, Gilles Orban de Xivry, Ulli Käufl, Gerd Jakob, Ralf Siebenmorgen, Serban Leveratto, Eric Pantin
2023-02-23T15:37:21Z
http://arxiv.org/abs/2302.12101v1
# Lessons learned from the NEAR experiment and prospects for the upcoming mid-IR HCI instruments

###### Abstract

The mid-infrared (IR) regime is well suited to directly detect the thermal signatures of exoplanets in our solar neighborhood. The NEAR experiment, a demonstration of high-contrast imaging (HCI) capability at ten microns, can reach sub-mJy detection sensitivity in a few hours of observation time, which is sufficient to detect a few Jupiter-mass planets in nearby systems. One of the big limitations for HCI in the mid-IR is the thermal sky background. In this work, we show that precipitable water vapor (PWV) is the principal contributor to the thermal sky background and to the science PSF quality. In the presence of high PWV, the HCI performance is significantly degraded in the background-limited regime.

exoplanets, instrumentation, adaptive optics, coronagraphy, data analysis

Further author information: (Send correspondence to.) E-mail: [email protected], Telephone: +32 497039597

## 1 Introduction

The direct imaging of habitable exoplanets is challenging due to the angular resolution and high-contrast requirements. The field of high-contrast imaging (HCI) is rapidly advancing toward such a goal. Most of the current HCI instruments operate in the near-infrared (IR) regime. The near-IR regime is more sensitive to self-luminous planets, while the mid-IR (8-13\(\mu\)m) is more sensitive to colder and less massive planets[1]. The big limitations for HCI in the mid-IR are the reduced angular resolution and the large thermal sky background for ground-based observations. Therefore, the mid-IR is best suited to look for exoplanets around nearby stars. The NEAR (New Earths in the Alpha Cen Region) experiment was the outcome of a collaboration between Breakthrough and ESO (European Southern Observatory). The goal of the NEAR experiment is to enable HCI capability at 10 \(\mu m\) and to look for low-mass exoplanets in the \(\alpha\) Cen A/B binary system[2, 3]. The project involved upgrading the existing VISIR (Very Large Telescope Imager and Spectrometer for the mid-InfraRed) instrument[4] with a Shack-Hartmann based AO system (\(>95\%\) Strehl ratio in the science band) and a high-performance coronagraph, an annular groove phase mask (AGPM), at 10 \(\mu m\). NEAR was able to reach a final contrast of \(\approx 3\times 10^{-6}\) at 1\({}^{\prime\prime}\) (3.5 \(\lambda\)/D), sufficient for the detection of Neptune-mass planets in the habitable zone of \(\alpha\) Cen A. A candidate with an SNR of 3 was found, whose nature (e.g., planet, part of a zodiacal disk, image artifact) remains to be confirmed by follow-up observations[5]. Recently, using the Keplerian-stacker algorithm, the same signal was confirmed with a higher SNR of 5[6]. NEAR was also offered for observations under ESO's science demonstration program. One such program involved searching for exoplanets around \(\epsilon\) Indi A, \(\epsilon\) Eri, \(\tau\) Ceti, Sirius A, and Sirius B. No new planets were found, but new upper limits were established for all the targets in direct imaging[7]. In this work, we use \(96.2\) hrs of data collected under the \(\alpha\) Cen campaign to understand how HCI performance is affected by atmospheric parameters and instrumental limitations. In Section 2 we describe the observations and data reduction techniques, in Section 3 we discuss the instrumental limitations, and in Section 4 we describe the results of the various analyses.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Observation & Time & Median & Median & Median & Median \\ night & (hr) & PWV (mm) & seeing (”) & Temp (\({}^{\circ}\)C) & RH (\%) \\ \hline 23/05/2019 & 7.64 & 0.32 & 0.9 & 11.9 & 4 \\ 25/05/2019 & 7.19 & 3.83 & 1.1 & 8.1 & 9 \\ 27/05/2019 & 3.00 & 5.45 & 1.3 & 6.8 & 9 \\ 29/05/2019 & 7.26 & 9.10 & 0.7 & 8.9 & 24 \\ 30/05/2019 & 6.78 & 3.83 & 0.9 & 7.3 & 36 \\ 31/05/2019 & 6.50 & 3.51 & 0.8 & 7.2 & 16 \\ 01/06/2019 & 5.58 & 0.51 & 0.8 & 9.1 & 14.5 \\ 02/06/2019 & 7.98 & 2.57 & 0.7 & 14.1 & 4.5 \\ 03/06/2019 & 7.81 & 3.83 & 0.7 & 15.8 & 5 \\ 04/06/2019 & 6.67 & 2.23 & 1.0 & 16.4 & 8 \\ 05/06/2019 & 4.00 & 1.71 & 1.1 & 15.7 & 3.5 \\ 07/06/2019 & 3.20 & 1.33 & 0.5 & 14.0 & 3 \\ 08/06/2019 & 7.21 & 1.81 & 0.7 & 14.0 & 3.5 \\ 09/06/2019 & 3.84 & 2.24 & 0.5 & 14.9 & 4 \\ 10/06/2019 & 2.69 & 2.91 & 1.4 & 12.8 & 8.5 \\ 26/06/2019 & 6.10 & 5.45 & 0.9 & 10.2 & 17 \\ \hline \end{tabular} \end{table}

Table 1: Observing parameters for all the nights under various atmospheric conditions. All the observations were carried out using the NEAR N-band filter with a bandpass of \(10-12.5\)\(\mu m\).

Figure 1: (a) A single exposure raw science frame. (b) A derotated median filtered averaged science frame for one night of observation. (c) Same as (b) with PCA analysis.

## 2 Observations and Data Reduction

The planned NEAR campaign ran from 23rd May until 11th June 2019, with a total of 20 nights accounting for 90.2 hours of observations. Several nights were lost due to weather; to compensate for the loss, an additional night of observation was carried out on the 26th of June 2019. The final observation time was 96.2 hrs. For the analysis presented in this work, we removed 5 nights with less than 1.7 hrs of observation time each, which resulted in 16 nights with 93.35 hrs to work with. The final selected nights and the corresponding atmospheric conditions are summarized in Table 1.

### Observation

The \(\alpha\) Cen campaign observations followed a common strategy for all the observing nights, including the use of AO, a high-performance annular groove phase mask (AGPM) coronagraph, and chopping. The chopping was performed by the deformable secondary mirror (DSM) of the VLT, and the on-axis target was centered on the coronagraph with the help of the QACITS algorithm[8]. The chop throw of the DSM was \(\sim 4.9^{\prime\prime}\), which was sufficient to chop between \(\alpha\) Cen A and B. The chopping was performed (at a typical speed of 10 \(Hz\)) to reduce the sky background and the excess low-frequency noise common to mid-IR arrays (Si:As array)[9]. A uniform exposure of 5.992 msec was used for all the nights, except for part of the first night (23rd May), when two hours of observations were performed with an exposure of 5.493 msec. To keep the data uniform for all the analyses, these 2 hrs of observation with the smaller exposure time are excluded.

### Thermal sky background calculation

For calculating the thermal sky background, individual raw exposure frames were used. An example of a raw frame is shown in Figure 1 (a). The background flux was measured by calculating the total flux per pixel on regions of the frames free from any stellar residuals (at the top two corners), using small square boxes as shown in Figure 1 (a). Next, an average was calculated by dividing by the box area to estimate the background value per pixel. The detector bias was estimated by calculating the flux inside the red box and dividing by its area, as shown in Figure 1 (a).
The sky background was calculated by subtracting the detector bias from the total flux. The values of airmass, seeing, temperature, PWV, and relative humidity were extracted from the header of each FITS file. Apart from the airmass, the values of the atmospheric parameters are measured using a common weather monitoring facility at the VLT site called the Astronomical Site Monitor (ASM)[10]. The measurements of seeing and PWV are made for zenith observations, so for the comparison they are corrected for the telescope pointing.

### Science frames analysis

The data reduction for all nights followed a common strategy, which includes chop subtraction: since chopping was performed between \(\alpha\) Cen A and B, either of them was on-axis in a single exposure. The chop subtraction was done in such a way that the chopped frames show the \(\alpha\) Cen A-B coronagraphic PSF together with the off-axis PSFs, \(\alpha\) Cen A as negative and B as positive, as shown in Figure 1 (b). For frame selection, telemetry was calculated to remove bad frames based on the criteria of AO correction, coronagraphic leakage, and background variance. An additional parameter, the positions of the off-axis PSFs, was used to co-align the frames. The background variance was calculated using 10 pix square boxes at the four corners of the chopped frames, free from stellar residuals; the variance was measured for each box and then averaged. The coronagraphic residuals were measured using a 20 pix radius circular aperture on the chopped frames, with the position of the coronagraph estimated from the positions of the off-axis \(\alpha\) Cen A/B. Once good frames were identified, they were binned by averaging 500 frames. A rolling average approach was used to avoid co-adding frames that were far apart in parallactic angle. The averaged frames showed low-spatial-frequency structure (residual sky background left over after chopping), which was removed by applying a median filter with a kernel size of 15. An example of such filtering is shown in Figure 1 (b).

Figure 2: Thermal sky background compared with various atmospheric parameters such as temperature, PWV (mm), RH (%), and seeing (”), and the effect of airmass. Vertical dashed lines separate different nights of observations and horizontal dashed lines represent median values.

Figure 3: Science PSF quality for \(\alpha\) Cen A/B and coronagraphic performance compared with visible seeing and PWV.
### ADI analysis

To study the impact of the thermal sky background on the HCI performance, nights with similar parallactic angles and airmass were selected. This resulted in 9 nights out of 16; from these 9 nights, 4 nights were selected: one with low PWV (2.57), one with high PWV (9.1), and two with the same PWV (3.83). For the ADI analysis, a full-frame PCA routine based on the Vortex Image Processing library[11] with 10 principal components was used to process the data.

## 3 Instrumental Limitations

One of the limitations affecting the high-performance coronagraph at smaller separations was the AGPM glow, as shown in Figure 1 (a). In future mid-IR instruments, this could be removed by incorporating a cold pupil stop in front of the AGPM mask. Other limitations come from the science detector. The detector has vertical read-out channels with different biases, as shown in Figure 1 (a). With chopping, the effect of the different channels is removed, but when a target passes from one channel to another, charge leakage affects the photometry, which prevents high-precision photometric measurements. Persistence is another big limitation of such a detector, as shown in Figure 1 (c). The figure shows the PCA-processed image for one night of observation; the persistence stripe due to chopping between \(\alpha\) Cen A/B is clearly visible. This limits the search area for exoplanets. The faint arcs present in the image are part of the off-axis PSFs. The development of mercury cadmium telluride (HgCdTe) based mid-IR detector arrays with lower-noise performance shows a promising future for HCI instruments working in the mid-IR[12].

## 4 Results

### Effect of atmospheric parameters

A comparison between the thermal sky background and various atmospheric parameters such as temperature, seeing, PWV, and RH, as well as the effect of airmass, is shown in Figure 2. The thermal sky background values presented in the figure are filtered for clouds using a sigma clipping of 3.5 with the astropy library. A strong correlation between the sky background and PWV can be seen, and a weak correlation with RH exists. As expected, the sky background follows the airmass. We find no correlation of the sky background with visible seeing or temperature. In previous work by Turchi et al.[13], PWV was shown to have a direct impact on the sky background IR emission in the \([10-12.5]\)\(\mu m\) wavelength window. A comparison of the science PSF quality and the coronagraphic performance with visible seeing and PWV is shown in Figure 3. The quality of the AO correction in the science band is represented by the FWHM of \(\alpha\) Cen A/B. A strong correlation between the science PSF (\(\alpha\) Cen A/B) quality and PWV is observed. We see that visible seeing has no effect on the PSF quality, which shows that the thermal background is the dominating factor in the N-band. For the coronagraphic performance, no correlation with PWV, seeing, or PSF quality is seen. An incremental degradation of the coronagraphic performance can be seen for the first 5 nights of observation in Figure 3 (middle plot). After investigation, it was found that ice had formed on the AGPM coronagraphic mask. By incorporating small warm-up cycles, the effect of the ice formation was reduced. This is evident in the reduction of the residuals for the rest of the campaign. The effect of PWV on the sky background is shown by the variance in Figure 4a: the background variance doubles as the PWV value increases from 2.57 to 9.1. The effect of the increased sky background variance on the high-contrast performance is shown in Figure 4b, which presents 5 \(\sigma\) contrast curves for different values of PWV. The contrast degrades by \(\approx 50\%\) from a PWV value of 2.57 to 9.1. In the presence of high PWV, the HCI performance is thus significantly degraded.

Figure 4: HCI performance under varying conditions of PWV. Four nights were selected with similar parallactic angles, airmass, and duration, having different PWV values.

## 5 Summary and Conclusions

In this work, we explore the effect of various atmospheric parameters and instrumental limitations on HCI performance in the N-band. We show that the thermal sky background is one of the biggest limiting factors for HCI observations in the mid-IR regime. The amount of thermal sky background is directly correlated with PWV. A high PWV can double the background noise variance, which results in a degradation of the contrast by 50%.

## 6 Acknowledgment

The authors would like to thank ESO and the Breakthrough Foundation and all the people involved for making the NEAR project possible.
Part of this work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819155), and by the Wallonia-Brussels Federation (grant for Concerted Research Actions).
2306.09968
ClinicalGPT: Large Language Models Finetuned with Diverse Medical Data and Comprehensive Evaluation
Large language models have exhibited exceptional performance on various Natural Language Processing (NLP) tasks, leveraging techniques such as pre-training and instruction fine-tuning. Despite these advances, their effectiveness in medical applications is limited, due to challenges such as factual inaccuracies, limited reasoning abilities, and a lack of grounding in real-world experience. In this study, we present ClinicalGPT, a language model explicitly designed and optimized for clinical scenarios. By incorporating extensive and diverse real-world data, such as medical records, domain-specific knowledge, and multi-round dialogue consultations in the training process, ClinicalGPT is better prepared to handle multiple clinical tasks. Furthermore, we introduce a comprehensive evaluation framework that includes medical knowledge question-answering, medical exams, patient consultations, and diagnostic analysis of medical records. Our results demonstrate that ClinicalGPT significantly outperforms other models in these tasks, highlighting the effectiveness of our approach in adapting large language models to the critical domain of healthcare.
Guangyu Wang, Guoxing Yang, Zongxin Du, Longjun Fan, Xiaohu Li
2023-06-16T16:56:32Z
http://arxiv.org/abs/2306.09968v1
# ClinicalGPT: Large Language Models Finetuned with Diverse Medical Data and Comprehensive Evaluation

###### Abstract

Large language models have exhibited exceptional performance on various Natural Language Processing (NLP) tasks, leveraging techniques such as pre-training and instruction fine-tuning. Despite these advances, their effectiveness in medical applications is limited, due to challenges such as factual inaccuracies, limited reasoning abilities, and a lack of grounding in real-world experience. In this study, we present ClinicalGPT, a language model explicitly designed and optimized for clinical scenarios. By incorporating extensive and diverse real-world data, such as medical records, domain-specific knowledge, and multi-round dialogue consultations in the training process, ClinicalGPT is better prepared to handle multiple clinical tasks. Furthermore, we introduce a comprehensive evaluation framework that includes medical knowledge question-answering, medical exams, patient consultations, and diagnostic analysis of medical records. Our results demonstrate that ClinicalGPT significantly outperforms other models in these tasks, highlighting the effectiveness of our approach in adapting large language models to the critical domain of healthcare.

Keywords: deep learning, large language model, medical knowledge, electronic medical record, text generation

## 1 Introduction

In recent years, the paradigm of pre-training and fine-tuning large language models has brought about significant advancements in the Natural Language Processing (NLP) domain. The earliest approaches, like BERT [1], utilized objectives such as the Masked Language Model (MLM) to pre-train on large text corpora such as BookCorpus [2] in an unsupervised manner to learn good representations. These representations can be fine-tuned and adapted to one or more specific downstream tasks to improve their performance. Further research aims to develop competent generalists, i.e. generalized systems that can perform multiple NLP tasks without the need for a manually labeled training dataset for each task. For instance, T5 [3] treats multiple NLP tasks as text-to-text transformation tasks and leverages an encoder-decoder architecture, achieving promising results on tasks such as text classification, question answering, and summarization, though with a larger number of parameters. In contrast, GPT-3 [4] uses a large auto-regressive model for few-shot predictions, improving performance without parameter fine-tuning by incorporating few-shot demonstrations through text interaction with the model. PaLM [5] is a Transformer-based, Pathways-enabled large-scale language model. Compared to other models, PaLM is more resource-efficient in terms of computation and achieves state-of-the-art few-shot results across hundreds of natural language, code, and mathematical reasoning tasks. With their substantial generalization capabilities on NLP tasks, large pre-trained models are increasingly utilized for various tasks and facilitate human interaction through dialogue models. LaMDA [6], a transformer-based model designed for dialogues, leverages annotated data and external knowledge to augment its helpfulness and role consistency. InstructGPT [7] aligns with user intent across various tasks through fine-tuning and reinforcement learning from human feedback, resulting in improved truthfulness and reduced toxicity in output generation. ChatGPT can simulate human interaction, write abstracts or create movie scripts in response to prompts, driving the AI revolution.
Large language models are also effective for writing assistance and generating efficient code for programmers. As we know, medicine and health care still face many challenges, including an aging population, lack of equitable access, rising costs, doctor and nurse burnout, and global pandemics. Information technology has the potential to transform modern medicine by offering new tools and insights for healthcare, with ChatGPT and GPT-4 promising to revolutionize clinical decision support, clinical trial recruitment, clinical data management, research support, and patient education [8, 9]. Google researchers developed Flan-PaLM, an instruction-tuned variant of PaLM, showing improved task performance via natural language instructions. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on the MultiMedQA multiple-choice datasets, but remains outperformed by clinicians. A recent perspective suggests that generalist medical AI (GMAI) using foundation models may disrupt task-specific paradigms, enabling versatile applications like interactive note-taking, bedside decision support, and patient chatbots [10]. However, there are considerable challenges to overcome in applying generative language models to the medical field. The output of generative language models may contain factual errors, logical inconsistencies, and problems with coherence, such as citing article references that do not exist [11]. The models have limited reasoning abilities and lack grounding in real-world experience, leading to general and vague responses. ChatGPT has been found lacking in depth and insight [4], likely due to its alignment model used for reward-based training, which produces overly generalized answers that lack medical expertise. This evidence implies that employing these technologies in the medical field brings unique hurdles, such as the necessity for high accuracy, interpretability, and secure handling of sensitive health data.

In this study, we present ClinicalGPT, a large language model that is specifically designed for tasks across medical applications. To train the model, we leverage extensive and diverse datasets consisting of real-world medical records, allowing us to transfer domain-specific knowledge to the model. In addition, we establish a comprehensive evaluation framework that includes medical knowledge question-answering, medical examinations, patient consultations, and medical record analysis. By utilizing parameter-efficient fine-tuning methods, we were able to further improve the performance of ClinicalGPT. The results demonstrate that ClinicalGPT outperforms existing models in terms of performance, thus confirming the effectiveness of our approach.

## 2 Methods

### Dataset

In this study, we incorporated large and diverse medical datasets, including cMedQA2, cMedQA-KG, MD-EHR, MEDQA-MCMLE, and MedDialog, for the training and evaluation of our model. The cMedQA2 dataset [12] is a Chinese medical question-and-answer dataset that consists of 120k questions and 226k answers. The data is aggregated from a Chinese medical question-and-answer online forum1. For training purposes, we followed the original dataset partition as proposed by the authors, and then randomly selected one answer per question. We annotated 10k questions from the training set for training reward models and used 4k questions from the validation set for reinforcement learning. We sampled questions from the testing set for evaluation.

Figure 1: The overview of ClinicalGPT.
The cMedQA-KG is a medical question-answer dataset curated from knowledge graphs. It is built on three knowledge graphs: cMeKG2, xywy-KG3, and 39Health-KG4. These knowledge graphs cover comprehensive medical entities such as diseases, medications, and symptoms, and their relationships. Detailed descriptions of the knowledge graphs can be found in Appendix A. We have designed templates (see Appendix B) to transform each knowledge triplet into fine-tuning instruction data, i.e. text-to-text pairs for text generation, yielding 100k question-answer pairs. cMedQA-KG is used exclusively for training purposes.

Footnote 2: [http://cmekg.pcl.ac.cn](http://cmekg.pcl.ac.cn)

Footnote 3: [https://github.com/baiyang2464/chatbot-base-on-Knowledge-Graph](https://github.com/baiyang2464/chatbot-base-on-Knowledge-Graph)

Footnote 4: [https://github.com/zhihao-chen/QASystemOnMedicalGraph](https://github.com/zhihao-chen/QASystemOnMedicalGraph)

The MEDQA-MCMLE dataset is a subset of the original MEDQA dataset [13], consisting of Chinese medical examination questions in a multiple-choice format. It includes 34k questions, each offering multiple choices, typically 4 or 5. We have followed the original authors' division of the dataset into training, validation, and testing sets. As this dataset is derived from professional medical board examinations, it effectively evaluates applied knowledge, clinical reasoning, and patient-centric skills.

The MedDialog dataset [14] is a collection of multi-turn medical conversations obtained from an online platform5. MedDialog comprises 1.1 million dialogues and 4 million utterances. Due to the large volume of data, we have randomly sampled 100k, 1k, and 1k dialogues for the training, validation, and testing sets, respectively. These multi-turn dialogues closely resemble real interactions between doctors and patients, aiding the model in understanding the process of clinical inquiry and decision-making.

Footnote 5: [https://www.haodf.com](https://www.haodf.com)

The MD-EHR dataset comprises electronic health records from multicenter, large-scale hospitals in China. This dataset contains 100k records covering a range of disease groups, including Respiratory, Digestive, Urinary, Psychiatry, Neurology, Gynecology, and Hematology. Each record within the MD-EHR dataset provides a comprehensive overview of the patient's complaints, medical history, findings from physical examinations, ancillary test results, and the final diagnosis. We have divided the dataset into three sets: 2,000 records for the validation set, 2,000 records for the testing set, and the remaining entries for the training set. Following T5 [3], we transformed the medical records into a text generation task by concatenating the notes from the records as input and using the diagnosis as the output.

### Finetuning

We adopt the T5 model's [3] strategy of utilizing text generation grounded in language models to complete all tasks in our study. Language models, pre-trained on extensive corpora, have demonstrated a remarkable ability to understand and generate human-like text [4]. These models calculate the probability of a sequence of words in a text, \(T=(w_{1},w_{2},...,w_{L})\). Specifically, the causal language model factorizes the probability of the text \(T\) as \(p(T)=p(w_{1})p(w_{2}|w_{1})\cdots p(w_{L}|w_{1},w_{2},...,w_{L-1})\), where \(L\) represents the length of the text. Several large language models, such as BLOOM, GLM, and others, are available for public use.
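To make the data construction concrete, a minimal sketch of how a knowledge triple and an EHR record could be turned into text-to-text pairs is shown below. The template wording and the record field names are illustrative assumptions; the paper's actual templates are listed in its Appendix B.

```python
def triple_to_qa(subj: str, rel: str, obj: str) -> tuple[str, str]:
    """Turn a KG triple (s, r, o) into a question-answer pair via a
    per-relation template; the wording here is a made-up example."""
    templates = {
        "SymptomOf": ("What disease could {s} be a symptom of?",
                      "{s} can be a symptom of {o}."),
    }
    question, answer = templates[rel]
    return question.format(s=subj, o=obj), answer.format(s=subj, o=obj)

def ehr_to_pair(record: dict) -> tuple[str, str]:
    """T5-style formulation: concatenated notes as input, the diagnosis
    as the target output; the field names are illustrative."""
    notes = " ".join([record["complaint"], record["history"],
                      record["examination"]])
    return notes, record["diagnosis"]

print(triple_to_qa("Cough", "SymptomOf", "Pneumonia"))
```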
To enhance the utility of large models for downstream tasks, we apply an instruction-tuning approach with supervised fine-tuning (SFT). The language model \(p_{\theta}\) is trained to generate a response \(R=v_{1:n}\) for a given input prompt \(I=w_{1:m}\), optimizing the likelihood \(p_{\theta}(R|I)=p_{\theta}(v_{1:n}|w_{1:m})\), where \(n\) and \(m\) represent the lengths of the response and the input prompt, respectively. Thus, the loss function is \(\frac{1}{n}\sum_{i=m+1}^{m+n}-\log p_{\theta}(w_{i}|w_{1},...,w_{i-1})\). To incorporate domain-specific knowledge into LLMs, we turn to domain-specific knowledge graphs (KGs) for constructing prompt-response pairs. KGs capture knowledge in the form of structured triples \((s,r,o)\), where \(s\) denotes the subject, \(r\) the relationship, and \(o\) the object. An example of such a triple is (Cough, SymptomOf, Pneumonia). We leverage a set of manually designed templates to transform these triples into question-answer pairs, rendering them suitable for instruction tuning. The manually designed templates can be found in Appendix B.

### Reward model

Existing works have demonstrated that reinforcement learning can incorporate human feedback to enhance large language models. For instance, WebGPT [15] is a browser-assisted question-answering system that utilizes human feedback for performance improvement. InstructGPT [7] also aligns with human feedback via reinforcement learning for helpful and safe response generation. We follow the work of [7], constructing a reward model (RM) \(r_{\mu}\) to furnish the reward signal crucial for the reinforcement learning process. We employ rank-based training for the RM. Human labelers rank responses for a given input prompt \(I\), generating a comparison pair for each prompt. For a comparison pair with a human-preferred response \(R_{w}\) and a less preferred response \(R_{l}\), the loss is given by \(-\log(\sigma(r_{\mu}(I,R_{w})-r_{\mu}(I,R_{l})))\).

### Reinforcement learning

We adopt the method proposed by Stiennon et al. [16], leveraging reinforcement learning to enhance the fine-tuned models with the objective of generating high-quality and helpful outputs, as well as improving the generation of medical texts, thereby aiding in the accurate description and treatment of patient conditions. We utilize the trained reward model as the reward function. To prevent the model from deviating too far from its initial state, we employ Proximal Policy Optimization (PPO) as our optimization strategy. Specifically, we incorporate a penalty term in the reward function that penalizes the KL divergence between the learned reinforcement learning policy, denoted as \(\pi_{\phi}^{RL}\), and the original supervised model, \(\pi^{SFT}\). This ensures that the final model does not deviate excessively from the original supervised model. The complete reward function is defined as \(R(x,y)=r_{\mu}(x,y)-\beta\log(\pi_{\phi}^{RL}(y|x)/\pi^{SFT}(y|x))\), where \(r_{\mu}(x,y)\) represents the output of the reward model and \(\beta\) is the coefficient of the KL term in the reward function. The loss function used in the PPO optimization is \(L=r_{\mu}\hat{A}_{t}-\beta KL[\pi_{\phi_{old}},\pi_{\phi}]\), where \(r_{\mu}\) is the reward function, \(\hat{A}_{t}\) is an estimator of the advantage function, \(\phi_{old}\) represents the parameters of the policy at the previous step, and \(\pi_{\phi}\) is the current policy.
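The two training signals above translate directly into code. A minimal PyTorch sketch, under the assumption that per-sequence reward scores and summed response log-probabilities have already been computed, is:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_w: torch.Tensor, r_l: torch.Tensor) -> torch.Tensor:
    """Pairwise ranking loss -log(sigmoid(r(I, R_w) - r(I, R_l))),
    averaged over a batch of comparison pairs."""
    return -F.logsigmoid(r_w - r_l).mean()

def kl_penalized_reward(r_mu: torch.Tensor,
                        logp_rl: torch.Tensor,
                        logp_sft: torch.Tensor,
                        beta: float = 0.1) -> torch.Tensor:
    """R(x, y) = r_mu(x, y) - beta * log(pi_RL(y|x) / pi_SFT(y|x)).
    `logp_rl` and `logp_sft` are response log-probabilities summed over
    tokens; beta = 0.1 is a placeholder, as the paper does not report it."""
    return r_mu - beta * (logp_rl - logp_sft)
```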
## 3 Experiments and results

### Implementation details

We chose BLOOM-7B [17] as our base large language model, due to its open-source nature and multilingual support. For the supervised fine-tuning process, we set the learning rate to 5e-5, with a batch size of 128 and a maximum length of 1,024, training across 3 epochs. During the training of the reward model, we utilized the last feature vector of the final output sequence features as the text representation. On top of the fine-tuned model, we added a binary classification head to output the reward. We set the learning rate to 2e-5, with a batch size of 128, a maximum length of 1,024, and trained over 3 epochs. For the reinforcement learning process, we applied a learning rate of 1e-5 and a maximum length of 1,024, training for 4,000 steps. To train the large language model efficiently, we adopted LoRA (Low-Rank Adaptation) [18], a parameter-efficient fine-tuning method, with r of 8, alpha of 32, and dropout of 0.1. To decrease memory usage and improve training speed, we employed ZeRO-2 [19], and made use of both TF32 (TensorFloat-32) and BF16 (Bfloat16). We selected several instruction fine-tuned models for comparison, including ChatGLM-6B [20], LLAMA-7B [21] (fine-tuned on English and Chinese data), and BLOOM-7B [22] (fine-tuned on crosslingual tasks).

### Medical conversation

We conducted a performance evaluation of medical conversation on the test set of MedDialog. To address the challenge of multiple rounds of conversation within each medical dialogue, we randomly truncated the dialogue at a certain round, discarding the subsequent dialogue and using the historical dialogue prior to this round as input. A sample response is shown in Table 1. We used three evaluation metrics, BLEU [23], ROUGE [24], and GLEU, to assess the quality of the conversations. BLEU is a commonly used metric that compares a candidate translation with one or more reference translations based on n-gram precision. GLEU calculates the average score over different n-grams, providing a more comprehensive evaluation of the generated text. ROUGE, on the other hand, is particularly useful for evaluating automatic summarization and machine translation, as it focuses on the recall aspect of generated summaries by comparing them with references. The experimental results are presented in Table 2. They demonstrate that ClinicalGPT achieves outstanding performance on BLEU-1 and all ROUGE scores. ClinicalGPT comes second only to BLOOM-7B in terms of BLEU-2, BLEU-3, and BLEU-4. The superior ROUGE scores achieved by ClinicalGPT indicate that the responses generated by the model cover the information provided by the reference text more effectively.

### Medical examination

In this study, the medical examination assessment on the MEDQA-MCMLE dataset was evaluated with the categories that have the highest frequencies in the dataset. The selected categories included Medical ethics, Respiratory system, Digestive system, Urinary system, Hematologic diseases, Rheumatic immune diseases, Pediatric diseases, and Pharmacology. The models were fed the questions and options as input, and the generated text was subsequently used to extract answers and compute accuracy. A sample response is shown in Table 3. The experimental results, as shown in Table 4, reveal that ClinicalGPT outperformed other LLMs such as LLAMA-7B, ChatGLM-6B, and BLOOM-7B in all evaluated categories, boasting an average accuracy of 38.4.
Specifically, ClinicalGPT achieved strong performance, exceeding the average scores of ChatGLM-6B (19.9), BLOOM-7B (25.7), and LLAMA-7B (27.2). Among all categories, ClinicalGPT achieved its best score in Rheumatic immune diseases, with an accuracy of 47.4. Conversely, it underperformed in Respiratory and Digestive diseases, with accuracies of 26.1 and 36.9, respectively. These findings suggest that while ClinicalGPT excels in understanding and generating responses related to rheumatic immune diseases, further refinement is required to improve its performance on Respiratory and Digestive diseases.

[Table 1: A sample response in a medical conversation. The patient describes a suspected Paget's disease of the skin, present for more than six months and previously treated as eczema; the model's reply follows.]

### Diagnosis

The diagnostic capabilities of LLMs (large language models) were evaluated on the testing set of MD-EHR. Disease groups were selected for evaluation, including Respiratory, Digestive, Urinary, Psychiatry, Neurology, Gynecology, and Hematology. The models were provided with the concatenated notes from each medical record as input and generated text as output. The accuracy of the models was calculated by comparing the generated text with the diagnosis labels in the medical records. A sample response is shown in Table 5. The experimental results for each disease group are presented in Table 6. ClinicalGPT outperformed other language models, such as ChatGLM-6B, LLAMA-7B, and BLOOM-7B, across all disease groups. The average accuracy of ClinicalGPT across all disease groups was 80.9%, clearly higher than the 40.9% of ChatGLM-6B, 36.6% of LLAMA-7B, and 60.3% of BLOOM-7B. ClinicalGPT demonstrated particularly strong performance in the Digestive and Urinary departments, achieving accuracies of 90.1% and 89.9%, respectively. This indicates a robust capability for understanding and interpreting medical records across different disease groups. However, ClinicalGPT exhibited slightly lower, yet still impressive, performance in the Gynecology and Hematology departments, with accuracies of 78.6% and 80.7%, respectively. This suggests that there may be room for improvement, specifically in the fields of Gynecology and Hematology, although ClinicalGPT still performed well overall across a range of medical specialties.
\begin{table} \begin{tabular}{l c c c c c} \hline \hline & **Respiratory** & **Urinary** & **Digestive** & **Rheumatic immune** & **Average** \\ \hline **ChatGLM-6B** & 24.6 & 24.4 & 20.0 & 10.5 & 19.9 \\ **LLAMA-7B** & 20.3 & 35.6 & 21.2 & 31.6 & 27.2 \\ **BLOOM-7B** & 15.9 & 31.1 & 29.4 & 26.3 & 25.7 \\ **ClinicalGPT** & **26.1** & **40.0** & **36.9** & **47.4** & **37.6** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparisons on medical examination.

\begin{table} \begin{tabular}{l} \hline \hline **Prompt:** \\ (Complaint: Abdominal pain for 1 day.) \\ (Current medical history and physical examination: Abdominal pain with no obvious pattern, vomiting, previous or current fever, no discomfort such as abdominal distension or abnormal crying. Urinary volume is acceptable, mental state is acceptable, and appetite is average. R 20 times/minute, P 110 times/minute, clear mind, good mental response, stable breathing, no congestion in the throat, coarse respiratory sounds in both lungs, no rales heard, strong heart sounds, regular rhythm, no murmur heard in any valve area, soft abdomen, full abdominal tenderness, liver and spleen not palpable, and no abnormalities were found in the nervous system examination.) \\ (Auxiliary examination: high urine amylase at the external hospital, influenza +, abdominal plain film: bowel obstruction. The image shows that the shape and size of the liver section are normal, the capsule is smooth, and the parenchymal echo is uniform. No obvious abnormal echo is detected.) \\ \hline **Response:** \\ (Hello, according to your symptoms and examination results, the diagnosis is appendicitis. Appendicitis is a common acute abdominal disease, which usually requires surgical treatment. We suggest that you undergo surgical treatment as soon as possible to avoid worsening of the condition.) \\ (Before the operation, we will make some necessary preparations for you, including fasting from food and water, as well as blood tests. After the surgery, we will closely observe and care for you to ensure a good recovery.) \\ (At the same time, we also recommend that you rest, avoid vigorous exercise and excessive fatigue, and maintain good dietary habits and lifestyle to promote physical recovery.) \\ (If you have any questions or need further assistance, please feel free to contact us at any time. We will do our best to provide you with the best medical services.) \\ \hline \hline \end{tabular} \end{table} Table 5: A sample response in diagnosis for an electronic medical record.

### Medical question answering

For the medical question-answering (QA) assessment, our model was benchmarked against several other models using a dataset of 388 questions sampled from cMedQA2. Automated evaluation was used, with GPT-4 serving as the judge. Given a question, each model generated an answer independently. GPT-4 was then used to assess these responses based on their accuracy, helpfulness, and safety, assigning a judgment of Win, Tie, or Lose for each comparison. A "Win" indicates that ClinicalGPT provided the superior response, a "Lose" indicates that the competing model offered a better response, and a "Tie" means that no obvious difference between the responses was observed.
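A minimal sketch of this pairwise judging protocol is given below; the `judge` callable stands in for a GPT-4 call whose exact prompt wording is not specified in the paper.

```python
from collections import Counter

def pairwise_eval(questions, model_a, model_b, judge):
    """Tally Win/Tie/Lose percentages for model_a against model_b.
    `judge` returns "Win", "Tie" or "Lose" from model_a's perspective,
    e.g. via a GPT-4 call judging accuracy, helpfulness and safety."""
    verdicts = Counter(judge(q, model_a(q), model_b(q)) for q in questions)
    total = sum(verdicts.values())
    return {k: 100.0 * v / total for k, v in verdicts.items()}
```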
The results of the medical question-answering evaluation are presented in Table 7. According to the results, ClinicalGPT outperformed BLOOM-7B, LLAMA-7B, and ChatGLM-6B. In comparisons against BLOOM-7B and LLAMA-7B, our model won in 89.7% and 85.0% of the cases, respectively; the percentages of ties were relatively small, at 1.8% against BLOOM-7B and 2.3% against LLAMA-7B. Meanwhile, ClinicalGPT won against ChatGLM-6B in 67.2% of the cases, with the tie rate increasing to 10.9% and the loss rate to 22.0%. This performance suggests that while ChatGLM-6B has a commendable repository of medical knowledge and displays fluent textual expression, training as in ClinicalGPT is beneficial for augmenting medical question-answering capabilities, despite the extensive knowledge reserves of larger models.

## 4 Conclusion

In this study, we introduced ClinicalGPT, a large language model tailored for medical and clinical applications. Recognizing the limitations that generic large language models present in these specialized fields, we took steps to refine the model, assembling comprehensive datasets for its fine-tuning. These datasets incorporate real medical records, patient consultations, diverse medical knowledge, and exam data, all aimed at shaping the model's knowledge base and responsiveness. Our extensive experiments cover a range of critical tasks in the medical field, such as medical conversation, medical examination, diagnosis, and medical question answering. The empirical results highlight the superior capabilities of ClinicalGPT in understanding and generating medical and clinical-related responses.

## Acknowledgments

Parts of the experiments were conducted on the InforSuperBahn testbed. The authors appreciate the Nanjing Institute of InforSuperBahn for providing the test and evaluation platform.
2303.14881
Flavor dependence of jet quenching in heavy-ion collisions from a Bayesian analysis
We investigate the flavor dependence of jet quenching, by performing a systematic analysis of medium modifications on the inclusive jet, $\gamma$+jet, and $b$-jet in Pb+Pb collisions at the LHC. Our results from MadGraph+PYTHIA exhibit excellent agreement with experimental measurements of the inclusive jet, $\gamma$+jet and $b$-jet simultaneously in p+p collisions. We then utilize a Bayesian data-driven method to extract systematically the flavor-dependent jet energy loss distributions from experimental data, where the gluon, light quark and $b$-quark initiated energy loss distributions are well constrained and satisfy the predicted flavor hierarchy of jet quenching, i.e. $\langle \Delta E_g \rangle > \langle\Delta E_q\rangle > \langle\Delta E_b\rangle$. It is shown that the quark-initiated jet energy loss distribution shows weaker centrality and $p_\text{T}$ dependence than the gluon-initiated one. We demonstrate the impacts of the slope of initial spectra, color-charge as well as parton mass dependent jet energy attenuation on the $\gamma/b$-jet suppression observed in heavy-ion collisions.
Shan-Liang Zhang, Enke Wang, Hongxi Xing, Ben-Wei Zhang
2023-03-27T02:32:19Z
http://arxiv.org/abs/2303.14881v2
# Flavor dependence of jet quenching in heavy-ion collisions

###### Abstract

We investigate the flavor dependence of jet quenching by performing a systematic analysis of medium modifications on the inclusive jet, \(\gamma\)+jet, and \(b\)-jet in Pb+Pb collisions at the LHC. Our results from MadGraph+PYTHIA and LBT exhibit excellent agreement with experimental measurements of the inclusive jet, \(\gamma\)+jet and \(b\)-jet simultaneously, both in p+p and Pb+Pb collisions. We then utilize a Bayesian data-driven method to systematically extract the flavor-dependent jet energy loss distributions from experimental data, where the gluon, light quark and \(b\)-quark initiated energy loss distributions are well constrained. It is shown that the quark-initiated jet energy loss distribution shows a weaker centrality and \(p_{T}\) dependence than the gluon-initiated one. We demonstrate the impacts of the slope of the initial spectra, as well as the color-charge and parton-mass dependent jet energy attenuation, on the \(\gamma/b\)-jet suppression observed in heavy-ion collisions.

## I Introduction

The understanding of strongly interacting nuclear matter at extremely high temperature and energy density is one of the primary subjects in the study of high-energy nuclear collisions at the Relativistic Heavy Ion Collider (RHIC) [1; 2; 3] and the Large Hadron Collider (LHC) [4; 5; 6; 7]. Jet quenching has long been identified as a very powerful tool to investigate the phase transition from hadron gas to the quark-gluon plasma (QGP) with deconfined quarks and gluons [8; 9], and numerous studies have shown that parton energy loss in the QGP may lead to the suppression of the single inclusive hadron/jet spectra [1; 2; 3; 10; 11; 12; 13; 14; 15; 16], the shift of \(\gamma\)/Z+hadron/jet correlations [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32] and dihadron transverse momentum asymmetry [33; 34; 35; 36], the modification of jet internal structures [37; 38; 39; 40; 41; 42; 43; 44; 45], as well as the azimuthal anisotropy (\(v_{2}\)) of hadrons and jets [46; 47; 48; 49] with large transverse momentum (\(p_{T}\)) in nucleus-nucleus (A+A) collisions, by comparison with those in proton-proton (p+p) collisions [50; 51; 52].

The interaction between an energetic parton and the QGP is sensitive to the colour charge and the mass of the parton: medium-induced gluon radiation is expected to be enhanced for gluons due to their larger color factor, and to be suppressed for heavy quarks by the dead-cone effect relative to that for light quarks [53; 54; 55; 56]. A separate determination of quark and gluon jet energy loss could play a significant role in revealing the fundamental color structures of the QGP and testing the color representation dependence of the jet-medium interaction [57; 58]. This however proves difficult, as the final state hadronic observables are a mixture of quark and gluon contributions. A clean method for identifying quark or gluon energy loss remains a challenge, despite many past attempts such as the multivariate analysis of jet substructure observables [59], the proposal of using the averaged jet charge [60; 61; 62] and electroweak gauge boson tagged jets [24; 25; 26; 27; 28; 29; 31; 63; 64; 65]. One recent important measurement by the ATLAS Collaboration, i.e. the nuclear modification factor for \(\gamma\)-tagged and \(b\)-tagged jets [66; 67], shows a quite different modification pattern from that of single inclusive jets [68].
It is reported that the \(\gamma\)-tagged jet \(R_{AA}\) [66] is much higher and shows a weaker centrality dependence than the inclusive jet \(R_{AA}\) [68], indicating a sensitive observation of the color-factor dependence of the jet-medium interaction. In addition, the ratio of \(R_{AA}\) between \(\gamma\)-tagged and inclusive jets is above most theoretical model calculations [66], which challenges the color-charge dependence of energy loss implemented in these models. Likewise, systematic differences between the \(b\)-jet and inclusive jet \(R_{AA}\) are also observed [67], suggesting a role for mass and colour-charge effects in partonic energy loss in heavy-ion collisions. Those differences may arise not only from the inclusive jet being a mixture of quarks and gluons, where gluons lose more energy, but also from the slope of their initial spectra [69]. Therefore, it is necessary to have explicit knowledge of model-independent but flavor-dependent jet energy loss distributions, which can help to constrain jet quenching model uncertainties and to identify the transport properties of the QGP [70].

The purpose of this work is to extract the flavor-dependent jet energy loss distributions by performing a systematic study of the medium suppression of the inclusive jet, \(\gamma\)+jet, and \(b\)-jet in Pb+Pb collisions relative to that in p+p, in a unified framework and simultaneously. In the numerical calculation, the Monte Carlo event generator MadGraph5+PYTHIA8 [71], which can perform next-to-leading order (NLO) matrix element (ME) calculations matched to the resummation of parton showers (PS), is employed to simulate the initial hard partons with shower partons and the jet cross sections, and the Linear Boltzmann Transport (LBT) Monte Carlo model [72; 73; 74; 75; 76] is applied to study the interaction and propagation of these hard partons in the hot/dense QGP medium. Specifically, a Bayesian data-driven analysis [77] of the nuclear modification factors of the inclusive jet [68], \(\gamma\)+jet [66], and \(b\)-jet [67] is performed to quantitatively extract the flavor-dependent jet energy loss distributions. At the same time, we study the relative contributions of the slope of the initial spectra and the color-charge and parton-mass dependent jet energy attenuation to the \(\gamma/b\)-jet suppression in heavy-ion collisions.

The remainder of the paper is organized as follows. In Sec. II we first introduce the framework. With a systematic study of the inclusive jet, \(\gamma\)+jet, and \(b\)-jet productions in p+p collisions using MadGraph+Pythia, and their medium alterations in Pb+Pb collisions within LBT, a Bayesian data-driven analysis of the nuclear modification factors of these processes is performed to quantitatively extract flavor-dependent jet energy loss distributions in Sec. III. Finally, a summary is presented in Sec. IV.

## II Framework

In order to study the flavor dependence of jet energy loss, we express the final observable of the nuclear modification factor \(R_{AA}\) in a given centrality in terms of the flavor-dependent \(R_{AA}^{i,C}\),

\[R_{AA}^{C}=\frac{\sum_{i}R_{AA}^{i,C}d\sigma_{pp}^{i}}{\sum_{i}d\sigma_{pp}^{i}}=R_{AA}^{g,C}+\sum_{i\neq g}(R_{AA}^{i,C}-R_{AA}^{g,C})f_{i}, \tag{1}\]

where the superscripts \(i\) and \(C\) stand for the parton flavor and centrality, respectively.
\(d\sigma_{pp}^{i}\) is the differential cross section for the parton \(i\) initiated jet in p+p collisions, and \(f_{i}=d\sigma_{pp}^{i}/\sum_{i}d\sigma_{pp}^{i}\) is the fraction of the total jet cross section from the parton \(i\) initiated one. In our analysis, the flavor and centrality dependent nuclear modification factor \(R_{AA}^{i,C}\) is assumed to factorize as the convolution of the cross section in p+p collisions and the corresponding parton energy loss distribution [69; 77],

\[R_{AA}^{i,C}(p_{T})=\frac{\int d\Delta p_{T}\,d\sigma_{pp}^{i}(p_{T}+\Delta p_{T})\otimes W_{AA}^{i,C}(x)}{d\sigma_{pp}^{i}(p_{T})}, \tag{2}\]

where \(x=\Delta p_{T}/\langle\Delta p_{T}\rangle\) is the scaled variable, with \(\Delta p_{T}\) the amount of energy loss and \(\langle\Delta p_{T}\rangle\) the averaged jet energy loss, which can be parametrized as \(\langle\Delta p_{T}\rangle=\beta_{i}(p_{T})^{\gamma_{i}}\log(p_{T})\) following Refs. [37; 69]. In Eq. (2), \(W_{AA}^{i,C}\) is the scaled energy loss distribution of parton \(i\) in a given centrality class \(C\) of A+A collisions and is assumed to take the form

\[W_{AA}^{i}(x)=\frac{\alpha_{i}^{\alpha_{i}}x^{\alpha_{i}-1}e^{-\alpha_{i}x}}{\Gamma(\alpha_{i})}, \tag{3}\]

where \(\Gamma\) is the standard Gamma function; the above functional form can be empirically interpreted as the energy loss distribution resulting from \(\alpha_{i}\) jet-medium scatterings in the medium. In this setup, for each parton flavor \(i\), the scaled jet energy loss distribution \(W_{AA}^{i}(x)\) is determined by three parameters, \(\alpha_{i},\beta_{i},\gamma_{i}\). According to this flavor decomposition, one can extract \(\alpha_{i},\beta_{i},\gamma_{i}\) for each parton flavor \(i\) to determine the flavor and centrality dependent jet energy loss distributions \(W_{AA}^{i}(x)\) through a global analysis, by combining the simulations of the p+p cross sections and the measurements of the nuclear modification factor \(R_{AA}\) for jet-related observables. We apply an advanced statistical tool, i.e. Bayesian analysis, for this purpose. Such a method has been successfully employed to extract the bulk and heavy quark transport coefficients [78], as well as the inclusive jet [77] and gluon [70] energy loss distributions in heavy-ion collisions. The process can be summarized as

\[P(\theta|data)=\frac{P(\theta)P(data|\theta)}{P(data)}, \tag{4}\]

where \(P(\theta|data)\) is the posterior distribution of the parameters \(\theta\) given the experimental data, \(P(\theta)\) is the prior distribution of \(\theta\), \(P(data|\theta)\) is the Gaussian likelihood between the experimental data and the output for any given set of parameters, and \(P(data)\) is the evidence. Uncorrelated uncertainties in the experimental data are used in the evaluation of the Gaussian likelihood. To estimate the posterior distribution given by Eq. (4), the Markov chain Monte Carlo (MCMC) process is carried out using the Metropolis-Hastings algorithm [79]. A uniform prior distribution \(P(\theta)\) in the region \([\alpha_{i},\beta_{i},\gamma_{i}]\in[(0,10),(0,8),(0,0.8)]\) is used for the Bayesian analysis. We first run \(1\times 10^{6}\) burn-in MCMC steps to allow the chain to reach equilibrium, and then generate \(1\times 10^{6}\) MCMC steps in parameter space.
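As a numerical illustration of Eqs. (2)-(4), the sketch below evaluates \(R_{AA}^{i}\) for a toy power-law p+p spectrum by convolving it with the Gamma-form energy loss distribution of Eq. (3), and implements one Metropolis-Hastings update. The power-law index and the proposal step size are placeholders; the \((\alpha_i,\beta_i,\gamma_i)\) values used are the central gluon results for the 0-10% centrality from Table 1, purely for illustration.

```python
import numpy as np
from scipy.stats import gamma

def raa_flavor(pt, dsigma_pp, alpha, beta, gam, n_x=400):
    """R_AA^i(pt) from Eq. (2): convolve the p+p spectrum with the
    scaled Gamma energy-loss distribution W_AA^i(x) of Eq. (3)."""
    mean_loss = beta * pt**gam * np.log(pt)        # <Delta pT> parametrization
    x = np.linspace(1e-4, 10.0, n_x)               # scaled energy loss x
    w = gamma.pdf(x, a=alpha, scale=1.0 / alpha)   # W_AA^i(x), unit mean
    return np.trapz(dsigma_pp(pt + x * mean_loss) * w, x) / dsigma_pp(pt)

# Toy power-law p+p spectrum; the index 5 is a placeholder.
dsigma = lambda pt: pt ** -5.0

# Central gluon values for 0-10% centrality from Table 1, for illustration.
print(raa_flavor(100.0, dsigma, alpha=5.44, beta=1.46, gam=0.25))

def mh_step(theta, log_post, rng, step=0.05):
    """One Metropolis-Hastings update with a Gaussian random-walk proposal."""
    proposal = theta + step * rng.standard_normal(theta.shape)
    accept = np.log(rng.uniform()) < log_post(proposal) - log_post(theta)
    return proposal if accept else theta
```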
## III Results and Discussions

### Cross sections in p+p and in Pb+Pb

In our analysis, we consider three different observables, i.e. the inclusive jet, \(\gamma\)+jet and \(b\)-jet, to study the flavor dependence of the jet energy loss distribution. We simulate \(d\sigma_{pp}^{i}\) using the Monte Carlo event generator MadGraph5+PYTHIA8 [71], which combines the NLO matrix element (ME) with matched parton showers (PS). The shower partons are then reconstructed into jets using the anti-\(k_{t}\) algorithm [80] implemented in FastJet [81]. In order to compare with the \(b\)-jet measurements, we define a \(b\)-jet as one that contains at least one \(b\)-quark (or \(\bar{b}\)-quark) with momentum \(p_{T}>5\) GeV/c and a radial separation from the reconstructed jet axis \(\Delta R<0.3\). In the ATLAS measurements [66; 67; 68], jets are accepted in the rapidity range \(|y|<2.8\) for the inclusive jet and \(\gamma\)+jet, and \(|y|<2.1\) for the \(b\)-jet. Besides, for \(\gamma\)+jet events, the \(\gamma\) is required to have \(p_{T}^{\gamma}>50\) GeV/\(c\), and a cut \(\Delta\phi_{\gamma j}>\pi/2\) is imposed to select back-to-back \(\gamma\)+jet pairs. In our simulations, we implement the same kinematic cuts as adopted by the experiments.

In the top panel of Fig. 1, we plot the differential cross sections of: (a) inclusive jet, (b) \(\gamma\)+jet, and (c) \(b\)-jet as a function of jet transverse momentum \(p_{T}\) obtained from the MadGraph+Pythia8 simulation at 5.02 TeV in p+p collisions. Through the comparison with experimental data [66; 67; 68], one can see clearly that the simulations describe all the experimental data very well. Notice that the inset of Fig. 1(a) is the scaled ratio of the \(\gamma\)+jet (blue solid) and \(b\)-jet (red dashed) cross sections to that of the inclusive jet. In Fig. 1(a-c), one can see that the inclusive jet spectrum is much steeper than that of \(\gamma\)+jet, while the \(b\)-jet has a similar slope to the inclusive jet, consistent with the results of Refs. [66; 67].

In order to study the flavor dependence of jet energy attenuation in heavy-ion collisions, we present the relevant contributions in terms of the jet flavor, which is defined as the flavor of the hard parton that fragments into the final observed jet. In the middle panel of Fig. 1, we show the fractions of quark- and gluon-initiated jets in: (d) inclusive jet, (e) \(\gamma\)+jet, and (f) \(b\)-jet as a function of jet \(p_{T}\). One can see that for the inclusive jet, the contribution from gluon (quark) initiated jets dominates in the low (large) \(p_{T}\) region, and gradually decreases (increases) with increasing \(p_{T}\). For \(\gamma\)+jet, the quark-initiated jet dominates (\(\sim 80\%\)) in the whole \(p_{T}\) region. A \(b\)-jet can be generated either from the initial hard scattering or from the parton shower via gluon and quark splitting. In the first case, it is the \(b\)-quark that initiates the \(b\)-jet; the relevant contribution is labeled \(b\)-quark in Fig. 1(f). In heavy-ion collisions, the medium modification of such \(b\)-jets has a direct connection to heavy quark energy loss [54; 55; 56; 82]. On the other hand, the medium modification in the latter two cases (gluon and quark splitting) resembles that of a massive quark or gluon jet. As can be seen, gluon-initiated \(b\)-jets contribute about \(40\%\) of the cross section in the whole \(p_{T}\) region, while the light-quark initiated contribution goes up with increasing \(p_{T}\). To benchmark the medium effect, which bridges the initial flavor origin and the final observed jet attenuation in A+A collisions, we present in the bottom panel of Fig.
1 the nuclear modification factor \(R_{AA}\), evaluated as a function of jet \(p_{T}\), for: (g) inclusive jet, (h) \(\gamma\)+jet, and (i) \(b\)-jet, compared with ATLAS data [66; 67; 68]. The theory curves are obtained from the LBT model [72; 73; 74; 75; 76], which includes both elastic [72; 73; 74] and inelastic scatterings [54; 75; 76; 83; 84; 85] for jet shower and recoil medium partons. Our results from LBT with \(\alpha_{s}=0.18\), which is the only parameter in LBT that controls the strength of the parton interaction, show excellent agreement with the experimental data for the inclusive jet [68], \(\gamma\)+jet [66] and \(b\)-jet [67]. As can be seen from the figure, the nuclear modification factor \(R_{AA}\) for \(\gamma\)+jet is larger than that of the inclusive jet. This is attributed to their different quark and gluon origins and the slopes of the reference spectra in p+p collisions. The nuclear modification factor \(R_{AA}\) for the \(b\)-jet is also larger than that of the inclusive jet in the low \(p_{T}\) region, while the difference disappears at large \(p_{T}\), which should be a mixed effect of the color-charge and parton-mass dependence of jet quenching in the medium.

Figure 1: (Color online) Up: Transverse momentum distributions of: (a) inclusive jet, (b) \(\gamma\)-tagged jet, and (c) \(b\)-jet simulated by MadGraph+Pythia8 (lines) and the comparison with experimental data (samples) [66; 67; 68] in p+p collisions. The inset in (a) is the ratio of the \(\gamma\)-tagged jet (blue solid) and \(b\)-jet (red dashed) to the inclusive jet cross section. Middle: fraction of quark (dashed blue line) and gluon (solid red line) initiated jets for: (d) inclusive jet, (e) \(\gamma\)-tagged jet, and (f) \(b\)-jet as a function of jet \(p_{T}\) in p+p collisions. Bottom: nuclear modification factor of: (g) inclusive jet, (h) \(\gamma\)-tagged jet, and (i) \(b\)-jet calculated by LBT (lines) and the comparison with experimental data (samples) [66; 67; 68] in Pb+Pb collisions.

Figure 2: (Color online) Distributions of and the correlations between the Bayesian-extracted parameters for gluon (left) and quark (right) initiated jet energy loss via fitting to \(R_{AA}\) of the inclusive jet and \(\gamma\)-tagged jet in central 0-10% Pb+Pb collisions at 5.02 TeV [66; 68].

### Colour-charge dependence of \(R_{AA}\)

In Fig. 2, we present the distributions of the final extracted parameters for gluon (left) and quark (right) initiated jet energy loss, as well as their correlations, via Bayesian fitting to the ATLAS data [66; 68] on \(R_{AA}\) of inclusive jets and \(\gamma\)-tagged jets in 0-10% Pb+Pb collisions at 5.02 TeV simultaneously. As can be seen, \(\beta_{i}\) and \(\gamma_{i}\), which reflect the average energy loss, are strongly correlated and well constrained for quark and gluon initiated jets. The mean values and standard deviations of the final extracted parameters for the gluon and light-quark energy loss distributions are summarized in Table 1. The final fitted nuclear modification factors \(R_{AA}\) of the inclusive jet and \(\gamma\)-tagged jet, together with the comparison to experimental data [66; 68] in the 0-10% centrality at 5.02 TeV, are shown in Fig. 3(a), and the data-driven extracted nuclear modification factors of quark- and gluon-initiated inclusive jets are shown in Fig. 3(b). The corresponding bands are results with one-sigma deviation from the average fits of \(R_{AA}\).
The data-driven extracted average energy loss fractions \(\langle\Delta p_{T}\rangle/p_{T}\) and scaled energy loss distributions \(W_{AA}(x)\) of quark and gluon initiated jets are also presented in Fig. 3(c) and Fig. 3(d). As can be seen, the average energy loss of gluon and quark jets is well constrained at \(p_{T}<150\) GeV/\(c\), but is less constrained at high \(p_{T}\) due to the large experimental errors and the scarcity of \(\gamma\)-tagged jet data at such high \(p_{T}\). Quark-initiated jets lose a smaller fraction of their energy and show a weaker dependence on the jet \(p_{T}\) compared to gluon-initiated jets, as expected from the color factors. Since jet showers also contain gluons even if they are initiated by a hard quark, the net energy loss of a gluon-tagged jet is always larger than that of a quark-tagged jet, but the ratio is smaller than the 9/4 from the naive leading-order estimation [86; 87; 88]. Fig. 3(a) shows that the \(\gamma\)-tagged jet \(R_{AA}\) is less suppressed than that of the inclusive jet, which is a mixed effect of the slope of the initial spectra and the parton color charge in p+p collisions. To clarify the relative contributions of the color-charge effect and the initial parton spectra between the \(\gamma\)-tagged jet and the inclusive jet, we calculate an artificial reference \(R_{AA}^{\rm ref}\) following Eq. (1), by assuming that the inclusive jet production has the same fraction of quark jets as \(\gamma\)+jet. This reference \(R_{AA}^{\rm ref}\) is shown by the magenta lines in Fig. 4(a). The difference between \(R_{AA}^{\rm ref}\) and the inclusive jet \(R_{AA}\) (denoted as "\(R_{AA}^{\rm jet}\)") should be attributed largely to the different color-charge effects of quark-medium and gluon-medium interactions, while the distinction between \(R_{AA}^{\rm ref}\) and the \(\gamma\)+jet \(R_{AA}\) (denoted as "\(R_{AA}^{\gamma+{\rm jet}}\)") should be attributed mostly to the slope of the reference spectra in p+p. Fig. 4(b) shows the relative contribution fraction from the larger quark fraction, evaluated as \(f^{\rm flavor}=(R_{AA}^{\rm ref}-R_{AA}^{\rm jet})/(R_{AA}^{\gamma+{\rm jet}}-R_{AA}^{\rm jet})\), to the weaker suppression of the \(\gamma\)+jet \(R_{AA}\) compared to the inclusive jet \(R_{AA}\). The increased quark jet fraction gives the dominant contribution to the difference of \(R_{AA}\) between \(\gamma\)+jet and the inclusive jet at \(p_{T}>60\) GeV/\(c\). Then \(1-f^{\rm flavor}\) characterizes approximately the relative contribution from the slope of the reference spectra, which plays a dominant role in the suppression at low \(p_{T}\). Besides, the distinction between the \(\gamma\)+jet \(R_{AA}\) and the inclusive jet \(R_{AA}\) will diminish with increasing \(p_{T}\), because quark-initiated jets contribute a lion's share to the yields of both \(\gamma\)+jet and the inclusive jet at very large \(p_{T}\), which can be verified with the upcoming high-precision measurements at the LHC.

Figure 3: (Color online) (a) Data-driven Bayesian fitted nuclear modification factor \(R_{AA}\) of the inclusive jet (orange) and \(\gamma\)-tagged jet (gray) and the comparison to experimental data [66; 68]. (b) Data-driven extracted nuclear modification factor of quark (blue) and gluon (red) initiated jets. (c) Fraction of average jet energy loss of light quark (blue) and gluon (red) initiated jets. (d) Scaled energy loss distributions \(W_{AA}^{i}(x)\) of quark (blue) and gluon (red) initiated jets.

Figure 4: (Color online) (a) The reference \(R_{AA}^{\rm ref}\) (magenta) and the comparison with \(R_{AA}\) of \(\gamma\)+jet (grey) and inclusive jet (orange) in the 0-10% centrality at 5.02 TeV, together with the comparison with experimental data [66]. (b) The relative contribution fraction from the larger quark fraction to the weaker suppression of the \(\gamma\)+jet \(R_{AA}\) compared to the inclusive jet \(R_{AA}\).

### Centrality dependence of \(R_{AA}\)

Moreover, we extract the centrality-dependent quark and gluon jet energy loss distributions before exploring the parton-mass effect on jet quenching, motivated by two reasons. First, the \(\gamma\)-tagged jet \(R_{AA}\) [66] shows a weaker dependence on centrality compared to the inclusive jet [68], indicating that gluon-initiated jets may show a distinct centrality dependence from quark-initiated jets. Second, the experimental data for the \(\gamma\)+jet \(R_{AA}\) [66], inclusive jet \(R_{AA}\) [68] and \(b\)-jet \(R_{AA}\) [67] are given in different centrality bins; we need centrality-dependent quark and gluon jet energy loss distributions to fit the \(\gamma\)+jet \(R_{AA}\), inclusive jet \(R_{AA}\) and \(b\)-jet \(R_{AA}\) simultaneously. As a matter of fact, there are no experimental data for the inclusive jet \(R_{AA}\) and \(\gamma\)+jet \(R_{AA}\) in the same centrality class except in the central 0-10% centrality. For the inclusive jet, the existing measurements are provided for the centrality bins 0-10%, 10-20%, 20-30%, 30-40%, 40-50%, 50-60%, 60-70%, 70-80% [68], while for the \(\gamma\)+jet \(R_{AA}\), they are limited to 0-10%, 10-30%, 30-80% [66]. In order to take full advantage of the existing measurements for the inclusive jet \(R_{AA}\) in different centrality bins, we generate the inclusive jet \(R_{AA}\) as well as the corresponding errors in the 10-30% and 30-80% centrality bins according to \(R_{AA}^{c^{\prime}}=\sum_{c\in C}P^{c}R_{AA}^{c}\), where \(P^{c}=N_{bin}^{c}/\sum_{c}N_{bin}^{c}\) is the probability of finding jet events in a given centrality bin, following Ref. [89]. With such an extension, we can perform a simultaneous fit of both the inclusive jet \(R_{AA}\) and \(\gamma\)+jet \(R_{AA}\) in the 10-30% and 30-80% centralities. In Fig. 5, we present the data-driven fitted nuclear modification factor \(R_{AA}\) of the inclusive jet [68] and \(\gamma\)+jet [66] in the 10-30% and 30-80% centralities and the comparison with experimental data at 5.02 TeV. All final spectra based on Eq. (1) and Eq. (2) are in nice agreement with the experimental data. The corresponding mean values and standard deviations of the final extracted parameters for the gluon and light-quark energy loss distributions are summarized in Table 1. Meanwhile, we obtain \(R_{AA}\) for quark-initiated and gluon-initiated jets in the 10-30% and 30-80% centralities. Combined with the flavor-dependent \(R_{AA}\) in 0-10% extracted in the previous section (Fig. 3(b)), we obtain the centrality dependence of the final fitted gluon-initiated jet, quark-initiated jet and inclusive jet \(R_{AA}\). In Fig. 6, we show the centrality dependence of the final fitted gluon jet (red), quark jet (blue) and inclusive jet (green) \(R_{AA}\) in Pb+Pb collisions in the region \(100<p_{T}<112\) GeV/\(c\) by step lines. One finds that the quark-initiated jet has a weaker dependence on centrality than the gluon-initiated jet.
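The rebinning prescription \(R_{AA}^{c^{\prime}}=\sum_{c\in C}P^{c}R_{AA}^{c}\) is simply an \(N_{bin}\)-weighted average over the fine centrality bins; a minimal sketch (with made-up weights and \(R_{AA}\) values, purely for illustration) is:

```python
import numpy as np

def rebin_raa(raa_fine, nbin_fine):
    """Combine fine centrality bins into one wider bin:
    R_AA^{c'} = sum_c P^c R_AA^c, with P^c = N_bin^c / sum_c N_bin^c."""
    weights = np.asarray(nbin_fine, dtype=float)
    p = weights / weights.sum()
    return float(np.dot(p, raa_fine))

# Illustrative only: combine 10-20% and 20-30% into 10-30%.
# Neither the R_AA values nor the N_bin weights are from the paper.
print(rebin_raa([0.55, 0.62], [940.0, 600.0]))
```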
Figure 5: (Color online) Data-driven fitted nuclear modification factor \(R_{AA}\) of the inclusive jet [68] and \(\gamma\)+jet [66] in the 10-30% and 30-80% centrality bins, and predictions of the inclusive jet \(R_{AA}\) in the 10-20%, 20-30%, 30-40% and 0-20% centrality bins, together with the comparison with experimental data [68].

\begin{table} \begin{tabular}{c|c|c c c} \hline & & \(\alpha_{i}\) & \(\beta_{i}\) & \(\gamma_{i}\) \\ \hline \multirow{2}{*}{0-10\%} & gluon & 5.44\(\pm\)2.15 & 1.46\(\pm\)0.22 & 0.25\(\pm\)0.03 \\ & quark & 0.47\(\pm\)0.06 & 1.09\(\pm\)0.21 & 0.24\(\pm\)0.04 \\ \hline \multirow{2}{*}{10-30\%} & gluon & 1.48\(\pm\)0.45 & 1.65\(\pm\)0.32 & 0.21\(\pm\)0.03 \\ & quark & 3.96\(\pm\)1.05 & 1.47\(\pm\)0.13 & 0.11\(\pm\)0.02 \\ \hline \multirow{2}{*}{30-80\%} & gluon & 4.84\(\pm\)2.72 & 0.89\(\pm\)0.14 & 0.14\(\pm\)0.03 \\ & quark & 2.28\(\pm\)0.88 & 1.07\(\pm\)0.07 & 0.07\(\pm\)0.01 \\ \hline \end{tabular} \end{table} Table 1: Parameters [\(\alpha_{i}\), \(\beta_{i}\), \(\gamma_{i}\)] of the quark and gluon jet energy loss distributions from Bayesian fits to experimental data [66; 68] on inclusive jet and \(\gamma\)+jet suppression at 5.02 TeV.

Figure 6: (Color online) The centrality dependence of the final fitted gluon jet (red), quark jet (blue) and inclusive jet (green) \(R_{AA}\) in Pb+Pb collisions at 5.02 TeV.

Next, we fit the centrality-dependent \(R_{AA}\) of quark- and gluon-initiated jets via a simple parametrization \(h^{i}(C)=a_{i}C^{2}+b_{i}C+c_{i}\), with \(C\) standing for the centrality. The best-fit curves of \(h^{i}(C)\) are shown in Fig. 6 by the dashed and dotted lines, and the corresponding best-fit parameter values are presented in Table 2. Notice that the extrapolation to peripheral collisions (\(>80\%\)) is greater than one and cannot be trusted; a reliable identification of the jet energy loss distribution for peripheral collisions would require a corresponding extension of the experimental measurements. If we ignore the \(p_{T}\) dependence of \(h^{i}(C)\), \(R^{i,C}_{AA}\) for any centrality \(C\) can be simply obtained by \(R^{i,C}_{AA}=h^{i}(C)R^{i,rc}_{AA}/h^{i}(rc)\), where \(rc\) stands for the reference centrality. Based on Eq. (1) and the above extracted centrality-dependent quark and gluon jet \(h^{i}(C)\), the predictions of the inclusive jet \(R_{AA}\) in 0-20%, 10-20%, 20-30%, 30-40% are presented in Fig. 5. One can see that our extracted centrality dependence of the quark and gluon jet energy loss distributions describes the experimental data \(R_{AA}\) [68] very well.

### Parton-mass dependence of \(R_{AA}\)

Finally, with the extracted centrality-dependent quark and gluon energy loss distributions, we also extract the \(b\)-jet energy loss in the same framework based on Eqs. (1) and (2), through fitting to the experimental data of the \(b\)-jet \(R_{AA}\) [67], the inclusive jet \(R_{AA}\) in the 0-20% centrality [68] and the \(\gamma\)-tagged jet \(R_{AA}\) in 0-10% [66] simultaneously. The final fitted nuclear modification factors \(R_{AA}\) of the \(b\)-jet (green lines), inclusive jet (magenta lines) and \(\gamma\)-tagged jet (yellow lines), together with the comparison with experimental data [66; 67; 68], are shown in Fig. 7(a). The corresponding bands are results with one-sigma deviation from the average fits of \(R_{AA}\). Meanwhile, Fig.
### Parton-mass dependence of \(R_{AA}\) Finally, with the extracted centrality-dependent quark and gluon energy loss distributions, we also extract the \(b\)-jet energy loss in the same framework based on Eqs. (1) and (2), by fitting to the experimental data of \(b\)-jet \(R_{AA}\) [67], inclusive jet \(R_{AA}\) in 0-20% centrality [68] and \(\gamma\)-tagged jet \(R_{AA}\) in 0-10% [66] simultaneously. The final fitted nuclear modification factors \(R_{AA}\) of the \(b\)-jet (lime green lines), inclusive jet (magenta lines) and \(\gamma\)-tagged jet (yellow lines), as well as the comparison with experimental data [66; 67; 68], are shown in Fig. 7(a). The corresponding bands are results within a one-sigma deviation from the average fits of \(R_{AA}\). Meanwhile, Fig. 7(b) shows the extracted nuclear modification factors \(R_{AA}\) for \(b\)-quark initiated (green, denoted as "\(R^{b}_{AA}\)"), light-quark initiated (denoted as "\(R^{\rm quark}_{AA}\)") and gluon initiated (denoted as "\(R^{\rm gluon}_{AA}\)") \(b\)-jets in 0-20% centrality, with the corresponding parameters of the gluon, quark and \(b\)-quark energy loss distributions summarized in Table 3. The final extracted light-quark and gluon initiated jet energy loss distributions are consistent with our previous results in the same centrality, while \(b\)-quark initiated jets are less suppressed than light-quark initiated jets in the low \(p_{T}\) region due to the large \(b\)-quark mass, as expected. To explore the underlying \(b\)-jet suppression mechanism in heavy-ion collisions, we also present in Fig. 7(c) the ratio of the \(b\)-quark initiated jet \(R_{AA}\) to the light-quark initiated jet \(R_{AA}\), \(R^{b}_{AA}/R^{\rm quark}_{AA}\), and in Fig. 7(d) the ratio of the \(b\)-jet \(R_{AA}\) (denoted as "\(R^{b\text{-jet}}_{AA}\)") to the inclusive jet \(R_{AA}\), \(R^{b\text{-jet}}_{AA}/R^{\rm jet}_{AA}\), with both extracted from the global analysis, together with the comparison to the experimental measurements [67]. Our numerical results can describe the experimental data within the large uncertainties [67]. These ratios are greater than unity and go down with increasing \(p_{T}\), indicating that the parton-mass effect is reduced with increasing \(p_{T}\) [90]. However, the mass effect for \(b\)-jets could persist to large \(p_{T}\), even at \(p_{T}\sim 300\) GeV/\(c\); this is consistent with the current data and with a model based on strong coupling (via the AdS/CFT correspondence) [91], in contrast to Refs. [89; 90], in which mass effects are expected to be small at \(p_{T}>70\) GeV/\(c\). These disagreements may be explained by two reasons. First, owing to the subdominant contribution of \(b\)-quark initiated jets and the limited \(b\)-jet \(R_{AA}\) data points with large uncertainties, especially at large \(p_{T}\), the \(b\)-quark initiated jet energy loss distribution is only weakly constrained at present. Second, the disagreements may be attributed to the mixture of the mass effect and the color-charge effect, as we show below. To further demonstrate the \(b\)-quark mass effect on the suppression of \(b\)-jets, we show in Fig. 7(d) (green lines) the ratio of the \(b\)-jet \(R_{AA}\) to the inclusive jet \(R_{AA}\), assuming the \(b\)-jet has the same fraction of gluon initiated jets as the inclusive jet (denoted as "\(f^{b\text{-jet}}=f^{\rm jet}\)"). The difference between this ratio and \(R^{b\text{-jet}}_{AA}/R^{\rm jet}_{AA}\) should be attributed to the \(b\)-quark mass effect. One can see that the deviation between the \(b\)-jet and the inclusive jet is significantly reduced with increasing \(p_{T}\). The mass effect gives a considerable contribution to the ratio \(R^{b\text{-jet}}_{AA}/R^{\rm jet}_{AA}\) and is expected to be small at \(p_{T}\sim 300\) GeV/\(c\). To further illustrate the color-charge effect on the suppression of \(b\)-jets, we also calculate the ratio of the \(b\)-jet \(R_{AA}\) to the inclusive jet \(R_{AA}\), assuming \(b\)-quark initiated jets lose the same fraction of energy as light-quark initiated jets (denoted as "\(R^{b}_{AA}=R^{\rm quark}_{AA}\)"), as shown by the yellow lines in Fig. 7(d). The difference between this ratio and \(R^{b\text{-jet}}_{AA}/R^{\rm jet}_{AA}\) should be attributed to the different gluon and quark fractions.
As can be seen, this ratio is significantly enhanced and also shows a downward tendency with increasing \(p_{T}\), indicating that the smaller gluon-initiated jet contribution also leads to the weaker suppression of \(b\)-jets compared to inclusive jets in heavy-ion collisions, especially in the low \(p_{T}\) region. Furthermore, the contribution from gluon-initiated jets to inclusive jet production is greater than that to \(b\)-jets in the \(p_{T}<300\) GeV/\(c\) region, as shown in Fig. 1; thus the \(b\)-jet \(R_{AA}\) is moderately larger than the inclusive jet \(R_{AA}\). \begin{table} \begin{tabular}{c|c|c|c} \hline & \(a_{i}\) (\(\times 10^{-5}\)) & \(b_{i}\) (\(\times 10^{-3}\)) & \(c_{i}\) \\ \hline Quark & 5.58\(\pm\)1.09 & 1.65\(\pm\)0.57 & 0.58\(\pm\)0.0048 \\ \hline Gluon & 2.20\(\pm\)1.57 & 7.37\(\pm\)0.78 & 0.34\(\pm\)0.0056 \\ \hline \end{tabular} \end{table} Table 2: The best-fit parameters [\(a_{i}\), \(b_{i}\), \(c_{i}\)] of the centrality-dependent quark and gluon jet energy loss distributions. Therefore, we can see that the color-charge effect and the mass effect give comparable contributions to the \(b\)-jet suppression in heavy-ion collisions. ## IV Summary We have carried out a systematic investigation of the parton color-charge and parton-mass dependence of the nuclear modification factor through a study of the medium modifications of three full-jet observables: the inclusive jet, \(\gamma\)+jet, and \(b\)-jet, in Pb+Pb collisions relative to p+p at the LHC. Our results from MadGraph+PYTHIA and LBT give very good descriptions of the experimental data for these three jet observables both in p+p and Pb+Pb. A Bayesian data-driven method is then applied to extract the model-independent but flavor-dependent jet energy loss distributions. By fitting those experimental data simultaneously, the gluon, light-quark and \(b\)-quark initiated jet energy losses can be well constrained. It is seen that the energy loss of quark-initiated jets shows a weaker centrality dependence and a weaker \(p_{T}\) dependence than that of gluon-initiated jets. We find that the large quark-initiated jet fraction underlies the \(\gamma\)+jet suppression at large \(p_{T}\), while the flat spectra give the dominant contribution to the \(\gamma\)+jet suppression at low \(p_{T}\). However, the \(b\)-quark initiated jet is less suppressed than the light-quark initiated jet. We demonstrate that the quark-mass effect and the color-charge effect have comparable impacts on the ratio \(R_{AA}^{b\text{-jet}}/R_{AA}^{\text{jet}}\), though their influence may decrease significantly at \(p_{T}\sim 300\) GeV/\(c\). Such a systematic extraction of jet energy loss distributions can help constrain model uncertainties and pave the way to precise predictions of the properties of the hot QCD medium created in relativistic heavy-ion collisions1. Footnote 1: When finalizing this paper, the authors noticed a very recent parallel study of extracting the flavor dependence of parton energy loss [92], but from the nuclear modifications of various hadron species instead of the jet observables presented in our work. **Acknowledgments:** This research is supported by the Natural Science Foundation of China with Project Nos. 12035007, 12022512, 12147131, and the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008. S.Z. is further supported by the MOE Key Laboratory of Quark and Lepton Physics (CCNU) under Project No. QLPL2021P01.
2305.10969
Strategic Proxy Voting on the Line
This paper offers a framework for the study of strategic behavior in proxy voting, where non-active voters delegate their votes to active voters. We further study how proxy voting affects the strategic behavior of non-active voters and proxies (active voters) under complete and partial information. We focus on the median voting rule for single-peaked preferences. Our results show strategyproofness with respect to non-active voters. Furthermore, while strategyproofness does not extend to proxies, we show that the outcome is bounded and, under mild restrictions, strategic behavior leads to socially optimal outcomes. We further show that our results extend to partial information settings, and in particular for regret-averse agents.
Gili Bielous, Reshef Meir
2023-05-18T13:32:16Z
http://arxiv.org/abs/2305.10969v1
# Strategic Proxy Voting on the Line ###### Abstract This paper offers a framework for the study of strategic behavior in proxy voting, where non-active voters delegate their votes to active voters. We further study how proxy voting affects the strategic behavior of non-active voters and proxies (active voters) under complete and partial information. We focus on the median voting rule for single-peaked preferences. Our results show strategyproofness with respect to non-active voters. Furthermore, while strategyproofness does not extend to proxies, we show that the outcome is bounded and, under mild restrictions, strategic behavior leads to socially optimal outcomes. We further show that our results extend to partial information settings, and in particular for regret-averse agents. Keywords: Computational Social Choice · Proxy Voting · Strategic Voting · Strategyproofness. ## 1 Introduction In the age of the internet, we see an increase of platforms and mechanisms for collective decision-making. However, many of these platforms suffer from low participation rates (Schaupp and Carter, 2005; Jonsson and Ornebring, 2011). Thus, while there is an increase in the ability of individuals to influence collective decision-making in many areas, most decisions are made by small, non-elected and non-representative groups of active voters. Partial participation may increase vote distortion (Ghodsi et al., 2019) (the worst-case ratio between the social cost of the elected candidate and that of the optimal candidate, first defined in (Procaccia and Rosenschein, 2006)); lead to counter-intuitive equilibria (Desmedt and Elkind, 2010); and significantly decrease the likelihood of selecting the Condorcet winner, when it exists (Gehrlein and Lepelley, 2010). Above all, when the outcome of an election only considers a fraction of all opinions, it is unreasonable to assume that it accurately reflects the aggregated opinions of the collective. Proxy voting, a long-standing practice in politics and corporations (Riddick and Butcher, 1991) and an up-and-coming practice in e-voting and participatory democracies (Petrik, 2009), aims at mitigating the adverse effects of partial participation. Non-active voters (followers) delegate their vote to another active voter (proxy), thereby at least having some influence on the outcome. Cohensius et al. (2017) proposed a model where the voters are sampled from a given distribution of non-atomic voters. Among them is a subset of proxies, each with voting power proportional to the population mass that delegates to them. The outcomes of various voting rules, in particular the median voting rule, as determined by all voters, are compared against the outcomes under proxy voting. They show that in most settings, proxy voting improves the accuracy of the outcome with respect to the aggregated social preference of the entire population. However, such delegation changes the power dynamic of voters by shifting some of the voting power to proxies. While much consideration is granted in the social choice literature to the strategic behavior of voters (Gibbard, 1973; Satterthwaite, 1975) and candidates (Dutta et al., 2001; Sabato et al., 2017), there is little consideration of the _strategic behavior of proxies or followers_ in proxy-mediated settings. Cohensius et al. (2017) consider strategic participation (i.e., selecting to participate or abstain) with mostly positive results. Notably, they show convergence to an equilibrium with the same accuracy as without strategic behavior using proxy voting.
Yet they pose the strategic behavior of followers as an open question, which was part of the inspiration for the current study. Moreover, it is common to study strategic behavior in adversarial settings assuming complete information. This makes sense as a worst-case assumption for strategyproofness, but treating uncertainty is unavoidable when we are trying to model actual strategic voting and predict its implications (for an overview of uncertainty in voting and equilibrium models, see Meir (2018), Chapters 6 and 8). In the context of proxy voting, assuming full information is even less reasonable: by delegating their vote, followers may wish to avoid the cognitive strain, time loss and other costs associated with determining and communicating their position. Thus, a setting that requires followers to explicitly define their positions negates these benefits of proxy voting for followers. Therefore, it makes more sense that active voters set their strategies based on partial information about the positions of potential followers. While there are many ways to model such uncertainty, we adopt the framework of Reijngoud and Endriss (2012), which allows for a simple and flexible definition of information sets. ### Related Work The effects of delegation on the accuracy of results have recently been studied in the context of _liquid democracy_, a delegation model where voters may continue to transitively delegate their votes. Kahng et al. (2021) and Caragiannis and Micha (2019) show that the concentration of power in liquid democracy can be so severe that it leads to low accuracy with respect to an assumed ground truth. Subsequent work attempted to limit power concentration, either by an impartial planner (Gölz et al., 2021) or by altering the delegation mechanism (Halpern et al., 2021). In contrast, as previously mentioned, Cohensius et al. (2017) achieved positive results for one-step (proxy) delegation. There are two spatial models closely related to our setting. The first is the model of Cohensius et al. [2017] mentioned above. The second is _Strategic Candidacy Games_, proposed by Sabato et al. [2017], where candidates are assumed to have self-supporting preferences over possible outcomes. Thus, candidates have incentives to strategize even when they cannot guarantee their own win. Their results show the existence of a Nash equilibrium for Condorcet-consistent voting rules for voters with symmetric single-peaked preferences and every set of preferences for candidates. Our model differs and generalizes in the following sense. First, in our model, candidates' preferences are based on the outcome, not the identities of candidates. In particular, the preferences are determined _ex-post_ for a given state. Second, candidates in Sabato et al. [2017] are weightless, while voters (equivalent to followers in our model) are atomic. Our results demonstrate a stronger claim than the mere existence of a Nash equilibrium, even for this generalized model. In particular, we show convergence to a NE similar to the one described in their work (subject to certain restrictions). As a result, our work significantly expands upon theirs. Moreover, some of our results become trivial when proxies are non-atomic. ### Contribution and Paper Structure The use of spatial models for the study of the behavior and results of voting mechanisms was first introduced by Hotelling [1929] and Downs [1957].
Our model follows this approach: it, too, is a spatial model, in which the political spectrum is represented as positions on the real line. We focus on the median voting rule, which has been shown to be (group) strategyproof for single-peaked preferences [Black, 1948; Moulin, 1980]. It is important to stress that the objectives of candidates in Hotelling-Downs differ from those in our setting. Proxies want to optimize the outcome with respect to their preferences, whereas in Hotelling-Downs candidates wish to maximize their vote share. While vote maximization may seem like a winning strategy in our context as well, the winning strategy is in fact to restructure the partition of votes, as we show below. Our initial study considers strategyproofness and manipulability with respect to both followers' and proxies' positions. Then, we consider sequences where proxies react to other proxies' actions. Finally, we turn to study strategic behavior in partial information settings. Our contribution is as follows: * Followers never have an incentive to misreport their position (or, equivalently, to follow a proxy other than the nearest one). [Section 3] * Proxy voting with the median voting rule is _manipulable_ with respect to proxy positions, and we provide a complete characterization of manipulable scenarios. [Section 3] * In sequences of manipulations, the outcome of each step is bounded. [Section 4] * Under mild restrictions, sequences of manipulations converge to an optimal equilibrium. [Section 4] * Manipulations under partial information may converge to a worse equilibrium than without delegation. [Section 5] * If agents are regret-averse, then manipulations converge to a socially optimal equilibrium even with partial information. [Section 5] A preliminary version of this paper was presented at EUMAS 2022 [Bielous and Meir, 2022]. In this version, we offer two significant additions to our results. First, we generalize our positive results for unrestricted manipulations by providing a bound on the outcome in strategic proxy settings. Second, we show that our results extend beyond complete information settings to partial information scenarios. We show that even with limited information, the proxy voting framework retains its desirable properties. In particular, we examine the implications for regret-averse agents and find that our results hold in these cases as well. By investigating the effects of strategic behavior in proxy voting under different information conditions, our study contributes to a deeper understanding of the dynamics and potential benefits of this voting mechanism. ## 2 Model and Preliminaries We define the model of _Strategic Proxy Games (SPG)_ as follows. Model. Our basic model follows the one by Cohensius et al. [2017]. There is a set of proxies (active agents) \(M=\{1,...,m\}\), and a set of followers \(N=\{1,...,n\}\). We refer to the set of all agents \(N\cup M\) as 'voters'. Each voter \(1\leq i\leq n+m\) has a position \(p_{i}\in\mathbb{R}\) along the political spectrum. True positions are \(p\in\mathbb{R}^{m+n}\), where \(p|_{M}:=(p_{j})_{j\in M}\) and \(p|_{N}:=(p_{i})_{i\in N}\). A _state_ is a vector \(s\in\mathbb{R}^{m}\), such that \(s_{j}\) is the position Proxy \(j\) declares. We denote by \(\left(s_{-j},s^{\prime}_{j}\right)\) the state that is equal to \(s\) except for the strategy of Proxy \(j\), which is \(s^{\prime}_{j}\). Delegation. We assume that each follower delegates their vote to the nearest proxy (this is known as the Tullock delegation model [Tullock, 1967]).
Formally, given a vector of positions \(p\) and a state \(s\), each Follower \(i\in N\) delegates their vote to Proxy \(j\in M\), where \[\varphi_{i}\left(s\right):=\operatorname*{argmin}_{j\in M}\lvert s_{j}-p_{i}\rvert.\] We assume the existence of a deterministic tie-breaking scheme that only depends on the state of voters. All proxies delegate their vote to themselves. Preferences. Voters are assumed to have single-peaked preferences with peak at \(p_{i}\). That is, for every \(x,y\in\mathbb{R}\), if \(x<y\leq p_{i}\), then Voter \(i\) prefers \(y\) to \(x\), and if \(p_{i}\leq x<y\), then Voter \(i\) prefers \(x\) to \(y\). For followers we further assume that preferences are symmetric, that is, for every \(x,y\in\mathbb{R}\), if \(\lvert x-p_{i}\rvert<\lvert y-p_{i}\rvert\), then Follower \(i\) prefers \(x\) to \(y\). Thus, the preferences of voters are consistent with the delegation model. Example 1: Consider the SPG appearing in Figure 1. There are two proxies \(\{1,2\}\) with positions \(p_{1}=-1\) and \(p_{2}=1.5\). There is a single follower \(\{3\}\) with position \(p_{3}=0\). In the truthful state \(s=p|_{M}=(-1,1.5)\), the follower delegates their vote to \(\varphi_{3}\left(s\right)=1\). Thus, there are two votes for \(-1\) and a single vote for \(1.5\). Weighted median. Given a finite vector \(\vec{s}\in\mathbb{R}^{m}\) such that each \(s_{i}\in\vec{s}\) has weight \(w_{s_{i}}\in\mathbb{R}^{+}\), let \(W=\sum_{s_{i}\in\vec{s}}w_{s_{i}}\). The weighted median of \(\vec{s}\), denoted \(\mathrm{med}(\vec{s};w)\), is \(s_{i}\in\vec{s}\) such that \[\sum_{\{s_{j}\in\vec{s}\setminus\{s_{i}\}:s_{j}\leq s_{i}\}}w_{s_{j}}\leq\frac{W}{2}\quad\text{and}\quad\sum_{\{s_{j}\in\vec{s}\setminus\{s_{i}\}:s_{j}\geq s_{i}\}}w_{s_{j}}\leq\frac{W}{2}.\] That is, the sum of weights of elements that are smaller than \(s_{i}\) is at most half the total sum of weights, and the same holds for the sum of weights of elements that are larger than \(s_{i}\). Weighted median voting rule. Next, we define the Weighted Median voting rule. The weight of each proxy is defined as the number of delegations to them. Then, the _weighted median voting rule_ (WM) selects the position that is the weighted median of proxy positions. Formally: \[\mathrm{med}(s,p|_{N}):=\mathrm{med}((s,p|_{N});1)\] is the unweighted median of all voters (proxies and followers) at state \(s\), and the weighted median is \[\mathrm{wm}(s,p|_{N}):=\mathrm{med}(s;w)\;\;\text{where}\;w_{j}:=|\{i\in N:\varphi_{i}(s)=j\}|+1,\] and we often omit \(p|_{N}\) when clear from context. Ties break lexicographically. At state \(s\), the WM voting rule selects \(\mathrm{wm}(s,p|_{N})\in\mathbb{R}\) as the winner. Note that \(\mathrm{wm}(s)=s_{j}\) for some \(j\in M\), and we denote this selected proxy by \(j^{*}(s,p|_{N})\in M\). We denote the median and weighted median in the true state \(p\) by \(\mathrm{med}:=\mathrm{med}(p)\) and \(\mathrm{wm}:=\mathrm{wm}(p)\), respectively. The WM winner in Example 1 is the position \(-1\): the proxy at \(-1\) receives a total weight of \(2\) (their own vote plus that of the single follower who delegates to them), whereas the proxy at \(1.5\) receives \(1\) vote. Strategyproofness and Manipulations. We say that a voter is _truthful_ if they declare their true position \(p_{i}\). Voters may lie about their positions, and we assume that voters are rational, that is, they lie only if by lying the outcome changes in their favor. Figure 1: An example SPG. Large dots indicate the positions of proxies; small dots indicate the positions of followers.
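To make the definitions concrete, the following minimal Python sketch (not the authors' code; all names are ours) implements Tullock delegation and the WM rule with lexicographic tie-breaking, and reproduces Example 1.

```python
import numpy as np

def delegate(proxy_pos, follower_pos):
    """Tullock delegation phi_i(s): each follower supports the nearest proxy.
    np.argmin breaks ties toward the lower proxy index (a deterministic scheme)."""
    proxy_pos = np.asarray(proxy_pos, dtype=float)
    return [int(np.argmin(np.abs(proxy_pos - f))) for f in follower_pos]

def weighted_median_winner(proxy_pos, follower_pos):
    """WM rule: proxy j's weight is 1 + (number of delegations to j); the
    winner is the weighted median of proxy positions (lexicographic ties)."""
    proxy_pos = np.asarray(proxy_pos, dtype=float)
    weights = np.ones(len(proxy_pos))
    for j in delegate(proxy_pos, follower_pos):
        weights[j] += 1.0
    order = np.argsort(proxy_pos, kind="stable")  # positions left to right
    cum = np.cumsum(weights[order])
    half = weights.sum() / 2.0
    winner = int(order[int(np.searchsorted(cum, half))])  # first cum >= W/2
    return winner, float(proxy_pos[winner])

# Example 1: proxies at -1 and 1.5, a single follower at 0.
print(weighted_median_winner([-1.0, 1.5], [0.0]))  # (0, -1.0): Proxy 1 wins
```

The weighted median is found by scanning positions from left to right until the cumulative weight reaches \(W/2\), which matches the definition above up to the tie-breaking convention.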
We say that \(p_{i}^{\prime}\neq p_{i}\) is a _manipulation_ for voter \(i\in N\cup M\) if voter \(i\) strictly prefers \(\operatorname{wm}(p^{\prime})\) to \(\operatorname{wm}(p)\), where \(p^{\prime}=(p_{-i},p_{i}^{\prime})\). A voting rule is _strategyproof_ if for every vector of true positions \(p\), no voter has a manipulation; otherwise, it is _manipulable_. The Median voting rule, i.e., the voting rule that selects the unweighted median, is known to be (group) strategyproof for single-peaked preferences [Black, 1948; Moulin, 1980], and thus in particular for voters who try to minimize their distance, as in our model. ## 3 Strategyproofness of Weighted Median ### Manipulation by Followers We begin our analysis by showing that strategyproofness extends to Weighted Median with respect to followers' positions. In their work, Cohensius et al. [2017] demonstrate that for any distribution of followers and proxies, the WM winner is the proxy closest to the true median. That is, the proxy \(j^{*}\) selected by the weighted median rule is the one closest to the (unweighted) median of the entire population. Equivalently, it is the proxy selected by the median voter in the population. Formally, let \(i^{*}\) be the median voter in profile \((s,p|_{N})\). Then: Lemma 1 (Cohensius et al. [2017]): _\(j^{*}(s,p|_{N})=\varphi_{i^{*}}(s)\); that is, the WM winner is the proxy that the median voter delegates to, and hence the proxy whose reported position is closest to \(\mathrm{med}(s,p|_{N})\)._ Theorem 3.1: _The weighted median voting rule is strategyproof with respect to the positions of followers._ Proof (sketch): Suppose towards contradiction that some Follower \(i\), w.l.o.g. with \(p_{i}\geq\mathrm{med}(p)\), has a manipulation \(p_{i}^{\prime}\) that changes the winner from \(j\) to \(j^{\prime}\). By Lemma 1, \(j\) is the proxy closest to the median, so \(|\mathrm{med}(p)-p_{j}|\leq|\mathrm{med}(p)-p_{j^{\prime}}|\). A case analysis over the relative order of \(p_{j^{\prime}}\), \(p_{i}\) and \(p_{j}\) then yields a contradiction in each case; for instance: * \(p_{j^{\prime}}<p_{i}<p_{j}\): Since \(p_{i}\geq\mathrm{med}(p)\), and since \(|\mathrm{med}(p)-p_{j}|\leq|\mathrm{med}(p)-p_{j^{\prime}}|\), we get \(|p_{i}-p_{j}|<|p_{i}-p_{j^{\prime}}|\). Thus, by symmetric single-peakedness Follower \(i\) prefers \(p_{j}\) to \(p_{j^{\prime}}\), in contradiction to \(p_{i}^{\prime}\) being a manipulation. Remark 1: Another interpretation of Theorem 3.1 is that under the weighted median voting rule, it is a dominant strategy for a follower to support her nearest proxy.
In other words, the theorem _justifies_ the Tullock delegation model. As Theorem 3.1 shows that WM is strategyproof with respect to the positions of followers, we can henceforth consider them as non-strategic agents. In what follows, followers are considered to always be truthful. In particular, the position vector \(p|_{N}\) is fixed. ### Manipulation by Proxies We continue by analyzing the strategic behavior of proxies. While we obtain a positive strategyproofness result when only followers are strategic, the same does not hold for proxies, as demonstrated by the following example. Example 2: Recall the SPG appearing in Example 1. The truthful WM winner is \(\mathrm{wm}(s)=-1\), and the winning proxy is \(1\). Consider the state \(s=(p_{1},1-\varepsilon)\) for some \(0<\varepsilon<2\). The single follower delegates their vote to Proxy 2. There are two votes for \(s_{2}=1-\varepsilon\) and only one vote for \(s_{1}=-1\); thus, \(\mathrm{wm}(s)=1-\varepsilon\). As preferences are single-peaked and Proxy 2's peak is at \(p_{2}=1.5\), Proxy 2 strictly prefers \(1-\varepsilon\) to \(-1\). Hence, \(1-\varepsilon\) is a manipulation for Proxy 2. The counter-example presented in Example 2 can easily be extended to any number of followers and proxies. Rather than formally constructing such an example, however, the following theorem provides a complete characterization of manipulable scenarios. As a consequence, it shows that manipulations exist under very simple and reasonable conditions. Theorem 3.2: _There is a proxy that has a manipulation in the truthful state \(p|_{M}\) iff it holds that \(p_{j}\neq\mathrm{med}\) for all \(1\leq j\leq m\), and there are proxies \(j,j^{\prime}\in M\) such that \(p_{j}<\mathrm{med}<p_{j^{\prime}}\)._ Figure 2: The SPG with a manipulation by Proxy 2. The large empty dot is Proxy 2's true position; the manipulation is reporting \(1-\varepsilon\). The single follower delegates their vote to Proxy 2. Proof: "\(\Leftarrow\)" Suppose \(p_{j}<\mathrm{med}<p_{j^{\prime}}\), and w.l.o.g. let \(j\) be the closest proxy to the median, so \(j^{*}(p)=j\) and \(\mathrm{wm}=p_{j}\). As preferences are single-peaked, Proxy \(j^{\prime}\) prefers med to \(\mathrm{wm}(p)\). We proceed by showing that moving to \(p^{\prime}_{j^{\prime}}=\mathrm{med}\) is a manipulation for Proxy \(j^{\prime}\), as in Fig. 3. Indeed, denote \(p^{\prime}=(p_{-j^{\prime}},p^{\prime}_{j^{\prime}})\). Since \(p_{j},p^{\prime}_{j^{\prime}}\) are on the same side of med, the position of the median does not change, i.e., \(\mathrm{med}(p^{\prime})=\mathrm{med}\). By Lemma 1 we get that \(\mathrm{wm}(p^{\prime})\) is the position of the proxy closest to \(\mathrm{med}(p^{\prime})\), which is \(p^{\prime}_{j^{\prime}}=\mathrm{med}\). Since \(\mathrm{wm}(p^{\prime})=p^{\prime}_{j^{\prime}}\) is strictly between \(p_{j^{\prime}}\) and \(\mathrm{wm}\), this is a manipulation for \(j^{\prime}\). "\(\Rightarrow\)" If there is some proxy \(k\) such that \(p_{k}=\mathrm{med}\), then by Lemma 1, med is the WM winner. Therefore, every proxy with position at med has their peak outcome, so there is no outcome they prefer more; consequently, they do not have a manipulation. Further, no position is closer to med, thus no other proxy can change the outcome by reporting a position that is closer to the median. Therefore, they can only manipulate by reporting a position that changes the position of the median.
Assume towards contradiction that there is such a proxy \(k\) with a manipulation \(p^{\prime}_{k}\), and let \(p^{\prime}=(p_{-k},p^{\prime}_{k})\). W.l.o.g. assume that \(p_{k}>\mathrm{med}\). Then, the position of the median changes in \(p^{\prime}\) only if Proxy \(k\) reports a position on the other side of med, i.e., \(p^{\prime}_{k}<\mathrm{med}\). We get that \(\mathrm{med}(p^{\prime})<\mathrm{med}\). Since \(p^{\prime}_{k}\) is a manipulation, the outcome of \(p^{\prime}\) satisfies \[\mathrm{wm}(p^{\prime})\leq\mathrm{med}(p^{\prime})<\mathrm{med}=\mathrm{wm}<p_{k}.\] The first inequality holds since \(\mathrm{med}(p^{\prime})\) is the maximal position \(p^{\prime}_{i}<\mathrm{med}\) for any \(i\in M\cup N\). By single-peakedness, \(k\) prefers med to \(\mathrm{wm}(p^{\prime})\), in contradiction to \(p^{\prime}_{k}\) being a manipulation. Finally, assume that for all proxies \(p_{k}\leq\mathrm{wm}<\mathrm{med}\). Clearly the proxy \(j\) who is closest to med has no manipulation, since their position \(p_{j}\) wins. Also note that in any manipulation \(p^{\prime}_{k}\) we have \(\mathrm{med}(p^{\prime})\geq\mathrm{med}\). The only way for \(k\) to change the outcome is by becoming the winner themselves, i.e., reporting a position closer to \(\mathrm{med}(p^{\prime})\) than \(p_{j}\). However, since \(p_{j}<\mathrm{med}\leq\mathrm{med}(p^{\prime})\), we have \(p_{k}<\mathrm{wm}=p_{j}<\mathrm{wm}(p^{\prime})\), and thus Proxy \(k\) strictly loses from such a move. ## 4 Manipulations for Better Outcomes Consider the manipulation described in the proof of Theorem 3.2. The outcome of the manipulated state is the true median, which is the outcome of the median voting rule with complete participation. That is, in this case the manipulation has a positive effect on the accuracy of the outcome! Figure 3: A proxy with a manipulation. Manipulations are often beneficial. The example above may seem counter-intuitive, but it is in fact common that strategic behavior improves the outcome, even when applied repeatedly by all voters. This is especially true in simple voting rules like Plurality, as manipulations act as a form of compromise that lets voters avoid socially-inferior outcomes. This was shown both in theoretical analysis (Grandi et al., 2013) and in simulations (Meir et al., 2014; Grandi et al., 2013). Evaluation. This brings about the natural question: _does strategic voting of proxies always improve the outcome?_ We note that, in general, questions about 'good outcomes' in voting are tricky, since there are numerous ways to evaluate the outcome of a voting rule; e.g., if we define the optimum to be the outcome of the used rule itself on the truthful votes (as in Branzei et al. (2013)), then strategic behavior is bad by definition. However, in the context of delegation on the real line, a natural evaluation metric is the distance to the 'ideal point' that would be selected if everyone had voted, i.e., to \(\mathrm{med}(p)\). This is exactly the approach followed by Cohensius et al. (2017) when showing that (non-strategic) delegation improves the outcome. Another natural measure is the _social cost_, i.e., the sum of distances of all voters from the selected position. When using the median voting rule without delegation, the two goals coincide, as the median is known to minimize the social cost. However, a good approximation of the true median may have poor social cost, and vice versa.
Footnote 3: Suppose there are \(k\) followers and one proxy \(j\) on \(0\), \(k+2\) followers on \(1\), and a second proxy \(j^{\prime}\) on \(2-\varepsilon\). Then \(j^{\prime}\) is the closest proxy to \(\mathrm{med}=1\) (and thus selected by WM), but has a social cost of \(3k+\Theta(1)\), whereas \(j\) has a social cost of \(k+\Theta(1)\). Conversely, if \(j^{\prime}\) is on \(1+2/k\), then \(j\) still minimizes the social cost, but its distance from \(\mathrm{med}\) is \(\Theta(k)\) larger than that of \(j^{\prime}\). Following Cohensius et al. (2017), we adopt the distance to the true median as our goal, but also discuss the implications for social cost where relevant. Convergence. When discussing strategic behavior, an even more fundamental question than welfare is _stability_. A hierarchy of notions of stability in iterative voting is explained in Meir (2018). For example, under Plurality, with a mild assumption on voters' behavior, it is known that iterative voting always converges to a pure Nash equilibrium (Meir, 2017). In candidacy games, which are equivalent to our setting with weightless proxies, it was shown that a pure Nash equilibrium exists (Sabato et al., 2017), but there are no results regarding convergence, or regarding the more general model where proxies have weights. In this section and the next one, we therefore study both the conditions under which iterative voting by strategic proxies is guaranteed to converge, and bounds on the distance of the final outcome from the true median of the population. Figure 4: If the positions of all proxies are on the same side of \(\mathrm{med}\), then none of them has a manipulation. ### Dynamics and Convergence _Policies_. A _policy_ for proxy \(j\in M\) is a function that maps a state to a strategy. Formally, let \(\mathcal{S}=\mathbb{R}^{m}\) be the set of all possible states for the proxies; then, a policy for \(j\) is a function \(\pi_{j}:\mathcal{S}\rightarrow\mathbb{R}\). _Better-responses_. For every \(j\in M\) and every state \(s\), we say that the position \(s^{\prime}_{j}\) is a _better-response_ to \(s\) if \(j\) strictly prefers the outcome of \(\left(s_{-j},s^{\prime}_{j}\right)\) to the outcome of \(s\). We denote the set of better-responses of \(j\) to \(s\) by \(\mathcal{B}^{j}_{s}\). We say that a policy is a _better-response policy_ for \(j\) if for every \(s\), the strategy selected by the policy is a better response to \(s\), that is, \(\pi_{j}\left(s\right)\in\mathcal{B}^{j}_{s}\). A better-response policy is said to be a _best-response policy_ if the outcome of the selected strategy is most preferred by the proxy within their better-response set. Note that a best-response policy may not exist. Truth-oriented. A proxy \(j\) is _truth-oriented_ if their policy selects their true position whenever it is in their better-response set and is weakly better than any other strategy. Formally, for every \(s\), if \(p_{j}\in\mathcal{B}^{j}_{s}\) and \(j\) weakly prefers \(\mathrm{wm}(s_{-j},p_{j})\) to \(\mathrm{wm}\big{(}s_{-j},s^{\prime}_{j}\big{)}\) for every \(s^{\prime}_{j}\in\mathcal{B}^{j}_{s}\), then \(\pi_{j}\left(s\right)=p_{j}\). Truth-orientation is closely related to _truth-bias_, proposed by Meir et al. (2010). Truth-biased agents would resort to truth if it is weakly better than _any_ other strategy, in particular when the better-response set is empty. Truth-orientation is a weaker requirement, as truth is only compared to better-responses.
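Continuing the sketch above, a membership test for the better-response set \(\mathcal{B}^{j}_{s}\) follows directly from the definition; as a simplification, the sketch scores outcomes for proxies by distance to their peak, i.e., it additionally assumes symmetric single-peaked proxy preferences.

```python
def is_better_response(j, x, state, peaks, follower_pos):
    """Return True iff position x is a better-response for proxy j at `state`,
    i.e. j strictly prefers wm(state_{-j}, x) to wm(state). Preferences are
    approximated by distance to j's peak (symmetric single-peakedness)."""
    _, current = weighted_median_winner(state, follower_pos)
    deviated = list(state)
    deviated[j] = x
    _, outcome = weighted_median_winner(deviated, follower_pos)
    return abs(outcome - peaks[j]) < abs(current - peaks[j])

# Example 2 revisited: Proxy 2 (index 1, peak 1.5) deviating to 1 - eps.
eps = 0.5
print(is_better_response(1, 1 - eps, state=[-1.0, 1.5],
                         peaks=[-1.0, 1.5], follower_pos=[0.0]))  # True
```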
Dynamics. A _dynamics_ \(\tilde{s}=\left(s^{t}\right)_{t=0}^{\infty}\) is a (possibly infinite) series of states, where \(s^{t}\) is the state after step \(t\). We assume that the initial state is truthful, i.e., \(s^{0}=p|_{M}\). Then, for every \(t>0\) there is a single proxy \(j=j^{t}\in M\) changing position from \(s^{t-1}_{j}\) to \(s^{t}_{j}\) according to their policy \(\pi_{j}\). Thus \[s^{t}=\left(s^{t-1}_{-j},s^{t}_{j}\right)=\left(s^{t-1}_{-j},\pi_{j}\left(s^{t-1}\right)\right).\] We do not assume any particular order over proxies' turns, except that there is no starvation. That is, every proxy eventually gets to play again, an infinite number of times. Recall that the winner in state \(s\) is denoted by \(j^{*}(s)\). We denote by \(j^{*}(t)\) and \(\mathrm{wm}^{t}\) the winner at time \(t\) and their position, respectively. Thus, \(j^{*}(t)=j^{*}(s^{t})\) and \(\mathrm{wm}^{t}=s^{t}_{j^{*}(t)}=\mathrm{wm}(s^{t})\). We further denote by \(\mathrm{med}^{t}:=\mathrm{med}(s^{t})\) the median at step \(t\). We further denote by \(j^{*}:=j^{*}(s^{0})\) the winner of the initial state; thus \(p_{j^{*}}=\operatorname{wm}(p)=\operatorname{wm}\). At a given state \(s\), we denote by \(\Delta(s):=|\operatorname{med}(s)-\operatorname{wm}(s)\,|\) the distance between the unweighted median and the weighted median (the winner). We also denote \(\Delta^{t}:=\Delta(s^{t})\) and \(\Delta:=\Delta(s^{0})\). The standard setting for the study of ongoing dynamics in voting is Iterative Voting (Meir, 2017). However, since our model involves an infinite action set, the terminology and results cannot be applied in a straightforward way. We address this when relevant. Instead, we say that a dynamics \(\tilde{s}\) _converges_ if it has a limit. A state \(s\) is a _pure Nash equilibrium_ (PNE) if for every \(j\in M\) it holds that \(\mathcal{B}_{s}^{j}=\varnothing\), that is, no proxy has a better-response to \(s\). We start our analysis by bounding the distance from the true median to which the outcome can converge. For the rest of this section, we show that at every step in a better-response dynamics from truth, the current median and outcome are bounded in a neighborhood of \(\operatorname{med}\) with radius \(\Delta\). Theorem 3.1: _Assume all proxies are truth-oriented. For every step \(t\geq 0\) in a better-response dynamics \(\tilde{s}\) by proxy \(j\), if \(j\)'s peak is left of \(\operatorname{med}\) then \(s_{j}^{t}\leq\operatorname{med}+\Delta\); and if their peak is right of \(\operatorname{med}\), then \(\operatorname{med}-\Delta\leq s_{j}^{t}\)._ Remark 2: We point out that if proxies are weightless, as in the model by Sabato et al. (2017), then the theorem trivially holds. This is since \(\operatorname{med}(s)=\operatorname{med}(p)\) at every state, and thus \(\Delta^{t}\) can only become smaller at every step, as the current proxy moves closer to \(\operatorname{med}(p)\) to become the new winner. Consider a truthful state where the peak of the initial winner \(p_{j^{*}}\) is left of the median. For a better-response dynamics from this truthful state, we prove the following lemma: Lemma 2: _Assume all proxies are truth-oriented. If for every \(0\leq t^{\prime}\leq t\) and for all \(j\in M\) s.t.
\(p_{j}\geq\operatorname{med}(p)\) it holds that \(s_{j}^{t^{\prime}}\geq\operatorname{med}-\Delta=p_{j^{*}}\), then for the truthful winner \(j^{*}\) it holds that \(p_{j^{*}}\leq s_{j^{*}}^{t}\)._ Proof: Assume false, and let \(0<t_{0}\leq t\) be the first step at which the truthful winner \(j^{*}\) moves to the left of their peak, that is, \(s_{j^{*}}^{t_{0}}<p_{j^{*}}\leq s_{j^{*}}^{t_{0}-1}\). In particular, this means that \(j^{*}\) is the proxy moving at step \(t_{0}\). Since no proxy with peak right of \(\operatorname{med}\) reported a position left of \(\operatorname{med}-\Delta\) prior to \(t\), it follows that the median at every step prior to \(t\) is right of \(\operatorname{med}-\Delta\). This is in particular true at \(t_{0}\leq t\), that is, \(\operatorname{med}-\Delta\leq\operatorname{med}^{t_{0}}\). It follows that \(s_{j^{*}}^{t_{0}}<p_{j^{*}}=\operatorname{med}-\Delta\leq\operatorname{med}^{t_{0}}\). We argue that this contradicts the assumption that proxies are truth-oriented. One of the following must hold. Either \(s_{j^{*}}^{t_{0}}\) is the winning position at \(t_{0}\), in which case \(p_{j^{*}}\) is closer to the current median and can therefore win. Since the peak is the optimal outcome for \(j^{*}\), it is in their better-response set, and its outcome is weakly better than the outcome of every other strategy in \(\mathcal{B}_{s^{t_{0}}}^{j^{*}}\). Otherwise, since \(s_{j^{*}}^{t_{0}}\) is a better-response to \(s^{t_{0}-1}\), it must be that by reporting \(s_{j^{*}}^{t_{0}}\) the position of the median changed, such that \(j^{*}(t_{0})\neq j^{*}\) is closer to \(\operatorname{med}^{t_{0}}\) than to \(\operatorname{med}^{t_{0}-1}\). Again, since \(t_{0},t_{0}-1\leq t\), we get that \(p_{j^{*}}\leq\operatorname{med}^{t_{0}},\operatorname{med}^{t_{0}-1}\). Thus, reporting \(p_{j^{*}}\) would have the same effect on the position of the median as \(s_{j^{*}}^{t_{0}}\). Therefore, \(p_{j^{*}}\) is weakly better than any other better-response. Figure 5 demonstrates the possible scenarios. Note that by symmetry the same holds for the case where the truthful winner's peak is right of the median. We turn to prove Theorem 3.1. Proof: Assume that there was no violation up to step \(t\geq 0\). In particular, this implies that \(\mathrm{med}-\Delta\leq\mathrm{med}^{t}\leq\mathrm{med}+\Delta\). Consider a proxy \(j\), and let \(s_{j}^{\prime}\) be a possible strategy for \(j\) such that \(s_{j}^{\prime}\notin[\mathrm{med}-\Delta,\mathrm{med}+\Delta]\). If \(s_{j}^{\prime}\) is on the same side of the median as \(p_{j}\), then this is not a violation. Otherwise, \(s_{j}^{\prime}\) is a better-response iff it is between \(p_{j}\) and \(\mathrm{wm}(s^{t})\), the winning position at \(t\); otherwise, \(s_{j}^{\prime}\) would either have no effect on the outcome, or the outcome would be farther from their peak than \(\mathrm{wm}(s^{t})\). Note that this also implies that \(\mathrm{wm}(s^{t})\notin[\mathrm{med}-\Delta,\mathrm{med}+\Delta]\). By Lemma 2, we have that \(s_{j^{*}}^{t}\in[\mathrm{med}-\Delta,\mathrm{med}+\Delta]\). Thus, \(\mathrm{wm}(s^{t})\notin[\mathrm{med}-\Delta,\mathrm{med}+\Delta]\) only if \(\mathrm{wm}(s^{t})\) and \(s_{j^{*}}^{t}\) are not on the same side of the median. Moreover, it must be that \(\mathrm{med}^{t}\) is between \(\mathrm{med}\) and the position of the current winner \(\mathrm{wm}(s^{t})\).
However, if \(\mathrm{med}^{t}\neq\mathrm{med}\), then there is a proxy with a reported position in \(s^{t}\) between \(\mathrm{med}^{t}\) and \(\mathrm{wm}(s^{t})\) whose peak is on the other side of \(\mathrm{med}\) than \(\mathrm{med}^{t}\). This contradicts Lemma 1; thus, all violations are not in the better-response sets of the proxies, and there is no violation at step \(t\). Corollary 1: _For every state \(s^{t}\) in a better-response dynamics \(\tilde{s}\) with truth-oriented proxies, both the median and the outcome of \(s^{t}\) are in the interval \([\mathrm{med}-\Delta,\mathrm{med}+\Delta]\)._ The bound on the outcome shows that strategic behavior in proxy voting can reduce the distance between the outcome and the true median. However, it does not guarantee convergence to a stable state (equilibrium), or even a reduced social cost. In what follows, we discuss conditions for both. ### Monotone Policies Monotonicity justification. Consider the better-response set of some proxy \(j\) with peak \(p_{j}<\mathrm{med}(s)\) at state \(s\). While \(s_{j}\) itself may lie on either side of \(\mathrm{med}(s)\), and there may be better responses on both sides, the following must hold: * There is at least one better response \(s^{\prime}_{j}\leq\mathrm{med}(s)\); * \(j\) weakly prefers any better-response \(s^{\prime}_{j}\leq\mathrm{med}(s)\) to any \(s^{\prime\prime}_{j}>\mathrm{med}(s)\), due to single-peakedness. Therefore, it is reasonable to assume that proxies restrict their policies so as to select a position that is on the same side of the median as their true position.4 In the following discussion, we restrict policies to ones that preserve the integrity of proxies' positions with respect to the median. Figure 5: Possible states at \(t_{0}\) after \(t^{*}\) moves. Footnote 4: This assumption is somewhat similar to a 'no overbidding' assumption in auctions. Monotonicity. Formally, we say that a better-response dynamics \(\tilde{s}\) is _monotone_ if for every \(j\in M\) s.t. \(p_{j}\leq\mathrm{med}\) and every step \(t\), it holds that \(\pi_{j}\left(s^{t}\right)\leq\mathrm{med}\) (and likewise for \(p_{j}\geq\mathrm{med}\)). Note that for every state \(s^{t}\) of a monotone better-response dynamics \(\tilde{s}\), the median of \(s^{t}\) is \(\mathrm{med}\). **Observation 1**: _Under monotone dynamics, \(\mathrm{med}^{t}=\mathrm{med}\) for all \(t\)._ This is since the same sets of voters (followers and proxies) remain on each side of \(\mathrm{med}\). ### Narrowing in on the Median Our goal in this section is to prove that any monotone better-response dynamics converges to the true median. The problem is that this may not hold at every step, which requires some extra work. The following lemma shows that any better-response in a monotone better-response dynamics where the winning proxy is not the moving proxy strictly decreases the distance to the median. Lemma 3: _Let \(\tilde{s}\) be a monotone better-response dynamics. Then, for every \(t\geq 0\), if \(j^{t+1}\neq j^{*}(t)\), then \(\Delta^{t+1}<\Delta^{t}\)._ Figure 6: Proof of Theorem 3. For a violation to be a better-response, this must be the state at \(t\), in contradiction to Lemma 1. Proof: By Lemma 1, for every \(k\in M\) it holds that \[|\mathrm{wm}\big{(}s^{t+1}\big{)}-\mathrm{med}|\leq|s_{k}^{t+1}-\mathrm{med}|.\] In particular, this holds for \(j^{*}(t)\in M\).
We get: \[|\mathrm{wm}\big{(}s^{t+1}\big{)}-\mathrm{med}|\leq|s_{j^{*}(t)}^{t+1}-\mathrm{med}|.\] Since \(j^{t+1}\neq j^{*}(t)\), the move \(s_{j^{t+1}}^{t+1}\) is a better-response to \(s^{t}\), so \(|s_{j^{*}(t+1)}^{t+1}-\mathrm{med}|\neq|s_{j^{*}(t)}^{t+1}-\mathrm{med}|\). Hence: \[\Delta^{t+1}=|\mathrm{wm}\big{(}s^{t+1}\big{)}-\mathrm{med}|<|\mathrm{wm}\big{(}s^{t}\big{)}-\mathrm{med}|=\Delta^{t}.\qquad\square\] While Lemma 3 shows that moves made by proxies whose reported position is not the current outcome must reduce the distance to the true median, it is possible for winning proxies to move in a way that increases the distance to the median. Figure 7 describes a proxy that makes two consecutive steps: the first makes them the winning proxy, and the next is a better-response. As they remain the winning proxy, the outcome after the second step is farther from the median. Meta-moves. We call a sequence of consecutive better-responses by the same winning proxy a _meta-move_. Formally, a meta-move of length \(\ell\) from \(s^{t}\) is a subsequence of steps in a better-response dynamics \(\tilde{s}\) such that: * \(j^{t+1}\neq j^{*}(t)\) and \(j^{*}(t+1)=j^{t+1}\). That is, in state \(s^{t}\), the proxy \(j^{t+1}\) moves in a way that makes them the winner. * Let \(\ell>0\) such that for every \(1\leq i\leq\ell\) it holds that \(j^{t+i}=j^{t+1}=j^{*}(t+1)\). In other words, after \(j^{t+1}\) becomes the winning proxy at step \(t+1\), they continue to make consecutive better-responses for \(\ell\) steps. The following shows that while local manipulations within a meta-move can increase the current distance to the true median (as Figure 7 demonstrates), meta-moves globally decrease the distance to the true median. Lemma 4: _For every meta-move of length \(\ell\) from \(s^{t}\) of a monotone better-response dynamics \(\tilde{s}\), it holds that \(\Delta^{t+\ell}<\Delta^{t}\)._ Figure 7: Consecutive steps that increase the distance to the median. Gray dots indicate the truthful positions of proxies; empty dots indicate positions of manipulation. Arrows indicate moves. The small full dot is the position of the (single) follower. Proof: By Lemma 1, monotonicity, and since for every \(1\leq i\leq\ell\) it holds that \(j^{t+i}=j^{*}(t+1)\neq j^{*}(t)\), we get that \(\Delta^{t+i}\leq\Delta^{t}\). Furthermore, since \(s_{j^{t+1}}^{t+1}\) is a better-response for \(j^{t+1}\), it must be that the outcome of \(s^{t+1}\) is not equal to the outcome of \(s^{t}\). We get that for every \(i\), \(s_{j^{t+i}}^{t+i}\) is a better-response, and therefore its outcome differs from that of \(s^{t}\). Thus \(\Delta^{t+i}\neq\Delta^{t}\). In particular, this holds for \(i=\ell\). Lemma 3 and Lemma 4 together provide a complete analysis of the better-response sets of proxies, and show that the better-response set strictly shrinks after each (meta-)move. However, this alone is not sufficient for convergence. Example 3: Recall the setting appearing in Example 1. Define \(\alpha_{1}=\frac{1}{4}\), and for every \(t\in\mathbb{N}\), define \(\alpha_{t+1}=\frac{1}{2}\alpha_{t}\).
We define the following policy for \(j\in M\): \[\pi_{j}\left(s^{t}\right)=\mathrm{med}-\mathrm{sign}\left(\mathrm{med}-p_{j}\right)\left(\Delta^{t}-\alpha_{t}\right).\] For every \(t\in\mathbb{N}\) we get that \[\Delta^{t+1}=|\mathrm{wm}\!\left(s^{t+1}\right)-\mathrm{med}|=|\mathrm{med}-\mathrm{sign}\left(\mathrm{med}-p_{j^{*}(t+1)}\right)\left(\Delta^{t}-\alpha_{t}\right)-\mathrm{med}|=|-\mathrm{sign}\left(\mathrm{med}-p_{j^{*}(t+1)}\right)\left(\Delta^{t}-\alpha_{t}\right)|=\Delta^{t}-\alpha_{t}.\] As \(\alpha_{t}=\frac{1}{2}\alpha_{t-1}\), we get \(\Delta^{t+1}=\Delta^{1}-\sum_{i=0}^{t-1}\frac{1}{2^{i}}\alpha_{1}=\Delta^{1}-\alpha_{1}\sum_{i=0}^{t-1}\frac{1}{2^{i}}\). As \(t\to\infty\), the distance to the median converges to \(\Delta^{1}-2\alpha_{1}=\Delta^{1}-2\cdot\frac{1}{4}\Delta^{1}=\frac{1}{2}\Delta^{1}\), and the outcome oscillates between \(-\frac{1}{2}\) and \(\frac{1}{2}\). Thus the best-response dynamics diverges. Figure 8 shows a schematic of this dynamics. Iterative Voting comparison. Note that Example 3 not only shows that monotone better-response dynamics need not converge; it also shows a key difference between our setting and Iterative Voting. We say that a dynamics is _acyclic_ if there are no recurring states. For finite action sets, i.e., when the space of available better-responses for each agent is finite, acyclicity implies convergence. Example 3 demonstrates that for infinite action spaces this may not hold. Interpretation of \(\alpha\). In effect, \(\alpha_{t}\) is the amount by which the outcome gets closer to the true median between steps. As \(\Delta^{t}\) decreases, so does the leeway that proxies have to improve the outcome for themselves. While it is reasonable that \(\alpha_{t}\) decreases as \(\Delta^{t}\) decreases, Example 3 captures the behavior in which \(\alpha_{t}\) decreases at a higher rate than \(\Delta^{t}\). Figure 8: A dynamics that diverges. The two large black dots indicate the oscillation positions. The arrow indicates the first manipulation. By restricting policies such that \(\alpha_{t}\) and \(\Delta^{t}\) decrease at the same rate, we can obtain convergence. Moreover, this guarantees that \(\Delta^{t}\) itself converges to \(0\), meaning that the outcome converges to the true median. While the example above shows that even monotone policies may diverge, this relies on the gaps between consecutive \(\Delta^{t}\) becoming smaller and smaller. Fix some constant \(\alpha<1\). We say that a meta-move is _big_ if \(\Delta^{t+\ell}<\alpha\Delta^{t}\), and otherwise it is _small_. Corollary 2: _Consider a monotone better-response dynamics, and suppose there is only a finite number of small steps between any two big steps. Then the dynamics converges, and the limit PNE is the true median._ Proof: After \(T\) big meta-moves, we have that \(|\mathrm{med}-\mathrm{wm}(s)\,|<\alpha^{T}\Delta\to 0\). Moreover, the corollary still holds if \(\alpha\) is not a constant but increases towards \(1\) over time, as long as this does not occur too fast.5 Footnote 5: For example, we can allow \(\alpha_{k}=\max(\alpha,1-\frac{1}{k})\) at the \(k\)'th big meta-move. Why are there big steps? We argue that it is not reasonable that proxies will insist on small steps forever, as in Example 3. To see why, note that while smaller steps are preferable to the moving proxy, this benefit gets smaller and smaller. On the other hand, the fraction of small meta-moves among all better-responses becomes smaller over time, so almost every better-response is big.
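The oscillation of Example 3 is easy to reproduce numerically. The sketch below tracks only \(\Delta^{t}\) and \(\alpha_{t}\) rather than running the full WM machinery, with \(\mathrm{med}=0\) and \(\Delta^{1}=1\) as in the setting of Example 1.

```python
# Simulate the diverging dynamics of Example 3 (a minimal sketch).
med, delta, alpha = 0.0, 1.0, 0.25  # Delta^1 = 1, alpha_1 = 1/4
side = -1.0                          # proxies alternate sides of the median
for t in range(1, 31):
    pos = med - side * (delta - alpha)  # pi_j(s^t): undercut the current winner
    delta = abs(pos - med)              # Delta^{t+1} = Delta^t - alpha_t
    alpha *= 0.5                        # alpha_{t+1} = alpha_t / 2
    side = -side
    print(f"t={t:2d}  outcome={pos:+.6f}  Delta={delta:.6f}")
# Delta converges to 1/2 while the outcome keeps alternating sign, approaching
# the oscillation between -1/2 and +1/2: the dynamics has no limit.
```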
Why this result is good. The true median is the outcome of the median voting rule with full participation. It is both Condorcet-consistent and minimizes the sum of distances from voters' true positions. Thus, the median of all voters reflects the social optimum. As such, Corollary 2 implies that the strategic behavior of proxies (under the above restrictions) can in fact produce a socially optimal and stable outcome. ### Discretization Discretization justification. In many real-world applications, the assumption that voters can express any position on the political spectrum \(\mathbb{R}\) is unreasonable. Voters are unlikely to distinguish between positions that are too similar, both when selecting their truthful position and when distinguishing between different proxy positions for delegation. In computerized settings, there is some limited resolution to the expression of preferences (e.g., a temperature or a monetary amount). As it turns out, any such limit eliminates the possibility of the oscillation we encountered in the previous section. In this section, we assume w.l.o.g. that the political spectrum is restricted to the set of all integers \(\mathbb{Z}\). Convergence for discrete spaces. For discrete spaces, every monotone policy meets the conditions of Corollary 2. This is due to the fact that every better-response made by a proxy whose position is not the current weighted median must decrease the distance to the true median by at least \(1\) (the minimal distance between distinct positions). Thus, the conditions are met for \(\alpha=1-\frac{1}{\Delta^{1}}\). Therefore, for discrete spaces, every monotone better-response dynamics converges, and the outcome is the true median, which is the socially optimal outcome. Best response. Furthermore, for discrete spaces (in contrast to continuous ones) there is a well-defined best-response: to reposition one step closer to the true median than the current winner on the opposite side of the median. In particular, the best-response is monotone. Connection to Iterative Voting. Following the terminology of [10], a game has the _Finite Best Response Property (FBRP) from truth_ if from any truthful state, when restricted to best-responses, the dynamics converges. Thus, SPGs with WM are FBRP from truth. However, for non-monotone policies, convergence to a socially worse outcome is possible; we show this in Appendix 0.A. Our conjecture is that convergence holds for the non-discrete case as well, and that ultimately proxies would have an incentive to deviate back to their original side of the median. Yet, this is a matter of future research.
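The discrete case can also be simulated directly. The sketch below reuses `weighted_median_winner` and `is_better_response` from the earlier sketches, runs round-robin best-responses over integer positions, and uses a hypothetical instance; it stops at a PNE whose winner is the true median.

```python
import statistics

def discrete_dynamics(peaks, follower_pos, max_rounds=100):
    """Round-robin discrete best-response dynamics: each proxy repositions one
    step closer to the true median than the current winner, on its own side of
    the median, whenever that is a better-response."""
    med = statistics.median(peaks + follower_pos)  # true median of all voters
    state = list(peaks)                            # truthful initial state
    for _ in range(max_rounds):
        moved = False
        for j, peak in enumerate(peaks):
            _, win = weighted_median_winner(state, follower_pos)
            side = 1 if peak >= med else -1
            target = med + side * max(0, abs(win - med) - 1)
            if target != state[j] and is_better_response(
                    j, target, state, peaks, follower_pos):
                state[j] = target
                moved = True
        if not moved:  # no proxy has a better-response: a PNE
            break
    return state, weighted_median_winner(state, follower_pos)[1]

# Hypothetical instance: two proxies, three followers; the true median is 0.
print(discrete_dynamics(peaks=[-6, 5], follower_pos=[-2, 0, 3]))
# -> ([0, 1], 0.0): the dynamics converges with the winner at the true median.
```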
## 5 Partial Information

In previous sections we assumed that the proxies have complete information about the positions of proxies and followers alike. This assumption is common when analyzing adversarial behavior. However, is it reasonable in a proxy voting setting? Recall that one of the applications of proxy voting is to mitigate the adverse effects of partial participation, where voters want to avoid explicitly reporting their positions. Moreover, followers may not even know their exact position; instead, they only know how to rank proxies based on proximity. Thus, followers can still delegate their vote without the additional cognitive strain of determining their exact position. In this section we relax the assumption of complete information. First, it is worth noting that when proxies have no information about the positions of followers, proxy voting becomes strategyproof. To see this, consider the states appearing in Figure 9. In the bottom state, the proxy at \(-20\) can manipulate the outcome by deviating to \(-5\). However, in the top state, the proxy has no manipulation. When proxies have no information except the proxies' positions, they cannot distinguish between the two states. Thus, proxies do not even know _if_ they have a valid manipulation, let alone find one. However, assuming no information at all is too restrictive, and we would like to consider intermediate cases that are more reasonable.

Figure 9: An example of two states that are indistinguishable if followers' positions are unknown to proxies.

Section outline.For the rest of this section, we first formally describe a less restrictive setting for the study of partial information. Then, we show that when only partial information is made available to voters, the strategic behavior of proxies may converge to a worse social outcome than the truthful state.

### Model

Information sets.We employ the framework described in [Reijngoud and Endriss, 2012]. In this setting, a _poll information function (PIF)_ \(\sigma\) maps each state \(s\) to an information set \(\sigma\left(s\right)\). For example, in a Plurality voting scenario, we can think of a PIF that returns the score of each candidate, or just the candidate ranking, or even just the name of the winning candidate. The set \(\sigma\left(s\right)\) is then communicated to all voters. Intuitively, we can think of the PIF as the results of a poll that is broadcast publicly after all private information is collected.

Poll information with delegation.What information is likely to become public in our setting? Clearly the proxies' positions, as otherwise followers would not be able to delegate. Other than that, we only assume that the identity (and position) of the winner is announced. In this section only, we use \(p\) instead of \(p|_{N}\) for the followers' positions, to simplify notation. To avoid confusion, we use \(s^{0}\) rather than \(p|_{M}\) for the proxies' true positions. We thus denote by \(\sigma_{winner}\) the PIF that takes as input the state \(s\) (the proxies' current positions) and the followers' true positions \(p\), and returns \(\left(s,\mathrm{wm}(s,p)\right)\). That is, it reveals the proxies' positions and the winner. Since this is the only PIF we use in this work, we just write \(\sigma\). In this setting, proxies are unable to distinguish between states that yield the same information under \(\sigma\). Recall the two states from Figure 9. When only proxy positions are communicated by \(\sigma\), the states are indistinguishable to the proxies. However, proxies can deduce the set of states that are consistent with the information available to them; in particular, both states in Figure 9 would be in the same set. Formally, at any state \(s\), the proxies know only the identity of the winner \(j^{*}\) and their own positions. We define the set of possible profiles as \[P^{\sigma}(s,j^{*}):=\{p\in\mathbb{R}^{n}:\sigma(s,p)=(s,j^{*})\}.\] These are all possible positions of followers that are compatible with the revealed information. While in the previous sections \(j^{*}=j^{*}(s)\) could be implicitly inferred from the proxies' positions \(s\) (since \(p\) was known), in this section the state is defined as \((s,j^{*})\), i.e. it explicitly contains all known information.
Dominating manipulations.Following the terminology of [10], we define a _dominating manipulation_ for Proxy \(j\) as a position \(s^{\prime}_{j}\) that satisfies the following conditions. First, by reporting \(s^{\prime}_{j}\), there exists a profile \(p^{\prime}\in P^{\sigma}(s,j^{*})\) that, when combined with \(s^{\prime}_{j}\), results in a more preferable outcome. Second, for all other profiles in \(P^{\sigma}(s,j^{*})\), \(j\) weakly prefers the resulting outcome over the current one. More formally, let \(\succ_{j}\) be a full order over all possible outcomes that defines \(j\)'s true preferences. Then, \(s^{\prime}_{j}\) _dominates_ \(s_{j}\) in state \((s,j^{*})\) if for any profile \(p\in P^{\sigma}(s,j^{*})\) it holds that \(\mathrm{wm}\left(s_{-j},s^{\prime}_{j},p\right)\succeq_{j}\mathrm{wm}(s,p)=s_{j^{*}}\), and the preference is strict for at least some \(p^{\prime}\in P^{\sigma}(s,j^{*})\). In the special case where \(\sigma\) returns the true followers' positions (no uncertainty), dominating manipulations are just better-responses.
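As a sketch, the dominance condition can be checked directly against a (finite sample of the) set \(P^{\sigma}(s,j^{*})\). The weighted-median helper below is a simplified stand-in for the delegation scheme, in which each follower delegates to the nearest proxy; it is an illustrative assumption, not the paper's exact model:

```python
def weighted_median(s, p):
    """Winning proxy position when followers at positions p each delegate
    to the nearest proxy in s (a simplified stand-in for the model's
    delegation scheme; ties are broken arbitrarily)."""
    proxies = sorted(s)
    weight = {x: 0 for x in proxies}
    for voter in p:
        weight[min(proxies, key=lambda x: abs(x - voter))] += 1
    acc = 0
    for x in proxies:
        acc += weight[x]
        if 2 * acc >= len(p):   # cumulative delegated weight reaches half
            return x

def dominates(s, j, s_prime, peak, profiles):
    """True if deviating to s_prime is weakly better for Proxy j (costs are
    distances to peak) for every profile consistent with the poll, and
    strictly better for at least one."""
    strict = False
    for p in profiles:
        cur = abs(weighted_median(s, p) - peak)
        new = abs(weighted_median(s[:j] + [s_prime] + s[j + 1:], p) - peak)
        if new > cur:
            return False        # worse in some possible world: not dominating
        strict = strict or new < cur
    return strict
```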
Strong-monotone dominating manipulations for non-winning proxies.Then, the set of dominating manipulations at state \((s^{t},j^{*})\) that are also strong-monotone for Proxy \(j\) with \(p_{j}\leq\mathrm{wm}(s^{t})\) is the open interval: \[\left(\min\{s_{\ell}^{t},\mathrm{wm}\big{(}s^{t}\big{)}-2|\mathrm{wm}\big{(}s^ {t}\big{)}-\ell^{t}|\},\mathrm{wm}\big{(}s^{t}\big{)}\right),\] if \(\ell^{t}<\mathrm{wm}(s^{t})\), and empty otherwise. The set for proxies on the other side of the median is similar with respect to \(r^{t}\). Strong-monotone dominating manipulations for winning proxies.For winning proxies, they have a dominating manipulation only if their current position is between their peak and \(I^{t}\), and the closest proxy on the other side of \(I^{t}\) is farther than their position. That is, w.l.o.g assume that \(p_{j^{*}(t)}<\mathrm{wm}(s^{t})<\ell^{t}\), and \(|r^{t}-s_{r}^{t}|>|r^{t}-\mathrm{wm}(s^{t})|\). In this case, their set is \((r^{t}-|r^{t}-\mathrm{wm}(s^{t})|,\mathrm{wm}(s^{t}))\). This is the equivalent of a meta-move in this setting. Their set is empty in any other case since either any beneficial deviation may cross the median, or a deviating may have a negative outcome. We get that a policy is strong monotone if it holds that if the position of a proxy is left (right) of \(\ell^{t}\) (\(r^{t}\)), then their resulting strategy would have the same orientation with respect to \(I^{t}\). Furthermore, as \(I^{t}\) decreases with every step of a moving proxy, and that in turn decreases the set of strong-monotone dominating manipulations for each proxy, these sets are monotonically decreasing. If we impose a similar restriction as in the proof of Corollary 2, we get convergence with the true median as an outcome with a similar argument. ### Rationalizing Monotonicity In decision theory, the concept of regret is often used to model types of agents. Given a strategic decision made under uncertainty, the regret is the difference in utilities between the outcome, and the optimal strategy that the agent could use ex-post. A regret-averse or risk-neutral agent would select the strategy that minimizes the maximal regret. In what follows, we show that the minimax-regret policy guarantees monotonicity, and therefore if all proxies are regret-averse then even with partial information convergence to the true median is guaranteed. The following theorem shows that the minimax-regret strategy is strong-monotone. Theorem 4.1: _If the policy of every proxy is strong-monotone in a dynamics up to step \(t\geq 0\), then the minimax-regret strategy of every proxy is strong-monotone._ Proof: First, consider a proxy \(j\) with peak \(p_{j}\) left of the current winner. For a possible strategy \(s^{\prime}_{j}\), we calculate the maximal regret by distinguishing between the possible values of \(s^{\prime}_{j}\): * \(s^{\prime}_{j}\leq\ell^{t}-|\ell^{t}-\operatorname{wm}(s^{t})|\)- if the median is right of the current winner, then ex-post there is nothing Proxy \(j\) can do to change the outcome in their favor, thus the regret is \(0\). Otherwise, the optimal position for them is the position symmetric to the current winner with respect to the median, whereas by reporting \(s^{\prime}_{j}\) the outcome does not change. Thus, the difference in utility is the distance between the current winner and the optimal position. The maximal regret is reached when the median is at \(\ell^{t}\) (up \(\varepsilon>0\)) and is equal to \(2\cdot|\ell^{t}-\operatorname{wm}(s^{t})|\). 
### Rationalizing Monotonicity

In decision theory, the concept of regret is often used to model types of agents. Given a strategic decision made under uncertainty, the regret is the difference in utilities between the outcome and the optimal strategy that the agent could have used ex-post. A regret-averse or risk-neutral agent selects the strategy that minimizes the maximal regret. In what follows, we show that the minimax-regret policy guarantees monotonicity, and therefore, if all proxies are regret-averse, then even with partial information convergence to the true median is guaranteed. The following theorem shows that the minimax-regret strategy is strong-monotone.

Theorem 4.1: _If the policy of every proxy is strong-monotone in a dynamics up to step \(t\geq 0\), then the minimax-regret strategy of every proxy is strong-monotone._

Proof: First, consider a proxy \(j\) with peak \(p_{j}\) left of the current winner. For a possible strategy \(s^{\prime}_{j}\), we calculate the maximal regret by distinguishing between the possible values of \(s^{\prime}_{j}\):

* \(s^{\prime}_{j}\leq\ell^{t}-|\ell^{t}-\mathrm{wm}(s^{t})|\): if the median is right of the current winner, then ex-post there is nothing Proxy \(j\) can do to change the outcome in their favor, so the regret is \(0\). Otherwise, the optimal position for them is the position symmetric to the current winner with respect to the median, whereas by reporting \(s^{\prime}_{j}\) the outcome does not change. Thus, the difference in utility is the distance between the current winner and the optimal position. The maximal regret is reached when the median is at \(\ell^{t}\) (up to \(\varepsilon>0\)) and is equal to \(2\cdot|\ell^{t}-\mathrm{wm}(s^{t})|\).
* \(s^{\prime}_{j}\geq\mathrm{wm}(s^{t})\): it is still possible that the median is (weakly) left of the current winner. By reporting a position right of the current winner, the median would be bounded by the current winner, and therefore the outcome will not change. Thus, the maximal regret is at least \(2\cdot|\ell^{t}-\mathrm{wm}(s^{t})|\).
* \(\ell^{t}-|\ell^{t}-\mathrm{wm}(s^{t})|<s^{\prime}_{j}<\mathrm{wm}(s^{t})\): as in the first case, if the median is right of the current winner then the regret is \(0\). Otherwise, for every possible position of \(\mathrm{med}^{t}\), the optimal strategy for \(j\) ex-post is \(opt^{t}=\mathrm{med}^{t}-|\mathrm{med}^{t}-\mathrm{wm}(s^{t})|\). Therefore, if \(opt^{t}<s^{\prime}_{j}\), the regret is \(s^{\prime}_{j}-opt^{t}\). Otherwise, \(s^{\prime}_{j}\) is farther than the current winner from the current median, and thus the outcome will not change; hence, the regret is \(\mathrm{wm}(s^{t})-opt^{t}\). We get that the maximal regret is \(\max_{\mathrm{med}^{t}}\{\mathrm{wm}(s^{t})-opt^{t},\left(s^{\prime}_{j}-opt^{t}\right)\cdot\mathbb{1}_{opt^{t}<s^{\prime}_{j}}\}\).

Figure 10: Example of maximal regret values for a proxy with position left of the current winner.

If \(|\ell^{t}-\mathrm{wm}(s^{t})|=0\), then the regret in the first case is \(0\), in the second case it is positive, and no strategies fit the last case. Thus, the minimax-regret strategy in this case is the set \((-\infty,\ell^{t})\), and every position in this set is strong-monotone. Otherwise, for the last case, \(2\cdot|\ell^{t}-\mathrm{wm}(s^{t})|\) bounds the regret from above, and for the other two cases it bounds the regret from below; therefore the minimax regret must be attained for \(\ell^{t}-|\ell^{t}-\mathrm{wm}(s^{t})|<s^{\prime}_{j}<\mathrm{wm}(s^{t})\). We argue that the minimax regret is attained at \(\ell^{t}\). First, for \(s^{\prime}_{j}=\ell^{t}\), the maximal regret is \(|\ell^{t}-\mathrm{wm}(s^{t})|\). To see this, if \(opt^{t}>\ell^{t}\) then the regret is given by \(\mathrm{wm}(s^{t})-opt^{t}\leq|\ell^{t}-\mathrm{wm}(s^{t})|\). Otherwise, the regret is given by \[\ell^{t}-opt^{t}\leq\ell^{t}-\left(\ell^{t}-|\ell^{t}-\mathrm{wm}\left(s^{t}\right)|\right)=|\ell^{t}-\mathrm{wm}\left(s^{t}\right)|.\] Next, for \(\ell^{t}<s^{\prime}_{j}<\mathrm{wm}(s^{t})\), if \(\mathrm{med}^{t}=\ell^{t}+\varepsilon\) for \(\varepsilon>0\), then \[opt^{t}=\ell^{t}-|\ell^{t}-\mathrm{wm}\left(s^{t}\right)|+2\varepsilon,\] so the regret is \(s^{\prime}_{j}-opt^{t}>|\ell^{t}-\mathrm{wm}(s^{t})|\). Finally, for \(s^{\prime}_{j}<\ell^{t}\), if \(\mathrm{med}^{t}=\frac{\mathrm{wm}(s^{t})+s^{\prime}_{j}}{2}+\varepsilon\), then the regret is \(\mathrm{wm}(s^{t})-opt^{t}>|\ell^{t}-\mathrm{wm}(s^{t})|\). We get that the minimax-regret strategy for Proxy \(j\) is \(s^{\prime}_{j}=\ell^{t}\); since \(\ell^{t}\leq\mathrm{med}^{t}\), it is strong-monotone. For proxies with positions right of the current winner, the analysis is symmetric with respect to \(r^{t}\). For the winning proxy, if their reported position is at their peak, then it is the optimal position for them regardless of the underlying state. Thus, it is also their minimax-regret strategy (the maximal regret of their peak is \(0\)). Otherwise, w.l.o.g. assume that the winning proxy's truthful position (peak) is left of their current position. By definition of \(I^{t}\), and since all steps until \(t\) are strong-monotone, it follows that \(\ell^{t}\geq\mathrm{wm}(s^{t})\).
Consider the current position of the winning proxy. Their maximal regret is attained in the case where the median is at their current position; in this case, their optimal position is at distance \(\min\{|s^{t}_{r}-\mathrm{wm}(s^{t})|,|s^{t}_{\ell}-\mathrm{wm}(s^{t})|\}\) left of their position (up to \(\varepsilon\)). Thus, their maximal regret at this position is bounded from below by \(\min\{|s^{t}_{r}-\mathrm{wm}(s^{t})|,|s^{t}_{\ell}-\mathrm{wm}(s^{t})|\}\). For every position right of their current position, in the same case the regret would be at least \(\min\{|s^{t}_{r}-\mathrm{wm}(s^{t})|,|s^{t}_{\ell}-\mathrm{wm}(s^{t})|\}\), so it bounds the maximal regret from below. Finally, for every position left of their current position, if the median is at \(r^{t}-\varepsilon\) then the winning position would be \(s^{t}_{r}\), so the regret is bounded from below by \(|s^{t}_{r}-\mathrm{wm}(s^{t})|\geq\min\{|s^{t}_{r}-\mathrm{wm}(s^{t})|,|s^{t}_{\ell}-\mathrm{wm}(s^{t})|\}\). Thus, their minimax-regret strategy is their current position, which is strong-monotone.

Remark 4: Note that the minimax-regret policy is also in the set of dominating manipulations. Consequently, it follows that if the policy of all proxies is minimax-regret, then the resulting dynamics is strong-monotone and therefore converges to the true median.
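The theorem can also be checked numerically. The sketch below assumes a simplified outcome model (after Proxy \(j\) deviates to \(s^{\prime}\), the winner is whichever of \(\{s^{\prime},\mathrm{wm}\}\) is closer to the realized median); it scans candidate deviations and finds the maximal regret minimized at \(s^{\prime}=\ell^{t}\), with value \(|\ell^{t}-\mathrm{wm}(s^{t})|\), as in the proof:

```python
import numpy as np

wm, ell, r, peak = 0.0, -2.0, 1.0, -10.0   # winner, I^t = [ell, r], j's peak

def regret(s_prime, m):
    d = abs(m - wm)
    won = s_prime if abs(m - s_prime) < d else wm   # outcome after deviating
    # ex-post, j can pull the outcome anywhere in [m - d, m + d], or leave wm
    best = min(max(peak, m - d), m + d)             # reachable point nearest peak
    best_cost = min(abs(best - peak), abs(wm - peak))
    return abs(won - peak) - best_cost              # costs are distances to peak

medians = np.linspace(ell + 1e-6, r, 801)
candidates = np.linspace(-5.0, wm - 1e-6, 801)
worst = [max(regret(s, m) for m in medians) for s in candidates]
i = int(np.argmin(worst))
print(round(candidates[i], 2), round(worst[i], 2))  # ~ -2.0 2.0 = ell, |ell - wm|
```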
## 6 Conclusions and Future Work

We introduced _Strategic Proxy Games_, a framework to study the strategic behavior of proxies in voting mechanisms. First, we demonstrated that in this model, the extension of the median voting rule to the weighted median voting rule via proxy voting maintains strategyproofness with respect to followers' positions. In particular, this suggests that with respect to follower positions, the delegation scheme is optimal for followers' preferences. Our study uses the Tullock delegation scheme; however, other delegation models have been studied in the literature. In the one-step delegation domain, Green-Armytage (2015) considers delegation that accounts for small errors in the assessment of positions, and Alon et al. (2015) consider social connections that influence the weight of proxies. Exploring the impact of different delegation models on the outcome of proxy voting and the strategic behavior of followers and proxies would be an interesting direction for future research. We point out that many of our results depend on the fact that the proxy who attracts the median voter wins. Our conjecture is that for delegation models that correlate well with distance, the same property holds and would yield similar results. In this research we focused on the median voting rule. We plan to study the implications of strategic proxy behavior in higher dimensions, as well as with other voting rules.

We continued by studying the strategic behavior of proxies, and showed that while strategyproofness does not extend to proxy voting, the distance of the outcome in a setting of repeated manipulations is bounded by the distance of the truthful outcome. Thus, in terms of distance, manipulations can only have a positive impact on the outcome. Moreover, when proxies maintain the integrity of their positions with respect to the median, the outcome converges to the social optimum. We further showed that for discrete spaces, non-monotonicity can result in a worse social outcome. The combination of the above results shows that in the context of proxy voting, both complete truthfulness and unbounded manipulation are sub-optimal. The assumption of monotonicity is a compromise between these extremities. In future work, we plan to further study non-monotone settings. In particular, we showed that truth-orientation bounds the distance of the outcome from the social optimum. We conjecture that under the stronger assumption of truth-bias, proxies would have an incentive to revert to a monotone state.

Finally, we studied the implications of partial information on the strategic behavior of proxies. While our results show that the outcome may increase the social cost, we also show that policies that are guaranteed to be monotone, in particular minimax-regret, converge to the social optimum. While the public information for voters that we consider is very minimal, i.e. it only includes the winning position and the positions of proxies, it may be possible to achieve the same positive results with even less information available. A particular case of interest is when the winning position is not made public, but rather an estimate of it. This case may be more realistic for several reasons. First, if voters' positions are estimated, then the outcome can be an estimate as well. This may be appropriate for settings where followers, for reasons of, e.g., cognitive strain or privacy, wish to communicate an interval of approved positions that expresses an estimate of their peak. This setting is somewhat related to that of Green-Armytage (2015) mentioned above, and to the setting proposed by Feldman et al. (2016), where candidates admit attraction intervals, and voters may select any candidate that attracts them. Moreover, approximate positions of voters are more realistic in the context of polls. Another possibility is to allow for more strategic depth for proxies, where they communicate approximate positions in an effort to maximize support.

###### Acknowledgements.

This research was supported by the Israel Science Foundation (ISF; Grant No. 2539/20).
2308.08102
ChatLogo: A Large Language Model-Driven Hybrid Natural-Programming Language Interface for Agent-based Modeling and Programming
Building on Papert (1980)'s idea of children talking to computers, we propose ChatLogo, a hybrid natural-programming language interface for agent-based modeling and programming. We build upon previous efforts to scaffold ABM & P learning and recent development in leveraging large language models (LLMs) to support the learning of computational programming. ChatLogo aims to support conversations with computers in a mix of natural and programming languages, provide a more user-friendly interface for novice learners, and keep the technical system from over-reliance on any single LLM. We introduced the main elements of our design: an intelligent command center, and a conversational interface to support creative expression. We discussed the presentation format and future work. Responding to the challenges of supporting open-ended constructionist learning of ABM & P and leveraging LLMs for educational purposes, we contribute to the field by proposing the first constructionist LLM-driven interface to support computational and complex systems thinking.
John Chen, Uri Wilensky
2023-08-16T02:21:52Z
http://arxiv.org/abs/2308.08102v1
ChatLogo: A Large Language Model-Driven Hybrid Natural-Programming Language Interface for Agent-based Modeling and Programming ###### Abstract Building on Papert (1980)'s idea of children talking to computers, we propose ChatLogo, a hybrid natural-programming language interface for agent-based modeling and programming. We build upon previous efforts to scaffold ABM & P learning and recent development in leveraging large language models (LLMs) to support the learning of computational programming. ChatLogo aims to support conversations with computers in a mix of natural and programming languages, provide a more user-friendly interface for novice learners, and keep the technical system from over-reliance on any single LLM. We introduced the main elements of our design: an intelligent command center, and a conversational interface to support creative expression. We discussed the presentation format and future work. Responding to the challenges of supporting open-ended constructionist learning of ABM & P and leveraging LLMs for educational purposes, we contribute to the field by proposing the first constructionist LLM-driven interface to support computational and complex systems thinking. ## 1 Introduction In Mindstorms, Seymour Papert's pioneering book on Constructionism, a central motif was to support children talking to computers. Instead of using computers to "program" children, children gain control of computers by programming them. Consequently, the Logo programming language family opens vast possibilities for learning in mathematics (e.g. through Logo [11]), in physics (e.g. through DynaTurtle [5]), as well as in complex systems (e.g. through NetLogo [17]). Like the original Logo language, to empower children in learning to "talk to computers", designers of Logo descendants strive to make their syntax close to natural languages. However, programming languages, no matter how close to natural forms of talking, still require a formal system of syntax and vocabulary. In this proposal, we focus on NetLogo [17], the most widely used programming language for agent-based modeling and programming (ABM & P) in the Logo family. Agent-based modeling (ABM) is a powerful methodology that leverages simple computational rules for individual agents to produce complex emergent phenomena [18]. Agent-based programming (ABP) is a decentralized and often probabilistic programming paradigm that serves as the technical foundation of ABM [3]. While ABM has been widely employed in educational settings, facilitating deep engagement with ABM still poses challenges for teachers and learners, partly due to NetLogo's formal structures and vocabulary, and partly due to ABP being a different paradigm than what is usually taught at school [3]. While many efforts have been made to scaffold the learning of ABM & P, only a few are dedicated to open-ended learning contexts (e.g. [12][4]). Meanwhile, recent advances in large language models (LLMs) have opened up new opportunities for supporting open-ended constructionist learning of NetLogo. While not directly evaluated on NetLogo, Codex, GPT-3.5, and GPT-4 have all demonstrated considerable performance in general programming tasks. With their recent usage in education [9], it seems that "talking to computers" in a natural language context finally comes within reach. Building on those recent efforts, we present the design of ChatLogo, an LLM-driven hybrid natural-programming language interface for agent-based modeling and programming.
## 2 Background ChatLogo is inspired by two lines of previous literature: efforts to support constructionist learning of ABM & P, and advances in LLMs and conversational programming interfaces. While a constructionist learning approach to ABM would naturally entail ABP to support learners' exploration, modification, and creation of agent-based models, many previous implementations stop short of coding in NetLogo (e.g. [5]). As ABMs are often integrated into science or social science curricula, programming often incurs a higher overhead for teaching and learning, since teachers and students are less prepared for the CS-related content [12]. Responding to this challenge, several studies tried to create block-based programming interfaces for NetLogo (e.g. [6]). While such interfaces could get children to start coding in 1-2 minutes [7], a trade-off always exists between the "floor" and "ceiling": the threshold for initial engagement, and the potential for expression [3]. As the power of block-based interfaces increases, they start to require scaffolding as well. For example, our recent study [3] found that interactive scaffolds significantly increased online young learners' short-term and long-term engagement with a block-based ABP environment. Pluralism was identified as a key element that contributed to the improvement: with several scripted pathways, the conversational experience of building their own projects encouraged learners to come back again. However, there is always a limitation to pre-scripted scaffolds, as they became less efficient when young learners came up with their own project ideas [3]. The advent of advanced LLMs brought new hope. Compared to earlier attempts at conversational programming interfaces that are still syntactically constrained (e.g. [15]), state-of-the-art LLMs such as GPT, PaLM, or LLaMA are capable of handling much more flexible or even malformed human inputs and translating them into programming languages (e.g. [13]). A few pioneering studies have been conducted to evaluate the effectiveness of LLMs in supporting the learning of programming languages. For example, [9] designed a Codex-powered interface and found short-term learning benefits for novice programmers. While promising, LLMs also come with limitations: they are prone to mistakes, hallucinations, potential biases, or harmful language. [14] found that professional programmers' task completion rates and times were not improved by GitHub Copilot, partly because participants had difficulty understanding and debugging generated code. [8] found that participants felt they must learn the LLMs' "syntaxes" and struggled to form an accurate mental model to interact with LLMs. They also performed worse in domain-specific tasks, e.g. in NetLogo. ## 3 Design Goals ChatLogo is designed as a web-based system with three goals in mind: 1. **Support novice programmers to "talk to computers" in a mix of programming and natural languages.** Both Logo and NetLogo are implicitly conversational. By placing a "command center" in parallel to the main view, the user would communicate with the computer through text messages or changes in the view. However, there are always correct ways to talk to computers, which take time for learners to grasp. Our design needs to bridge the gap between natural and programming languages by accepting both of them and talking back to learners in a more natural way. 2.
**Provide a friendlier interface for learners with little or no computer science background to creatively express themselves by programming computers.** Even with the latest LLM-based interfaces, learners still struggled to find the "correct" way to interact with computers [8]. LLMs also frequently provide incorrect responses that require expertise in computer science to identify and resolve. Consequently, LLM-based interfaces are currently more beneficial for learners with more prior programming experience [9]. While eliminating the underlying issues of LLMs is beyond our means, our design should tailor the system for novice learners - rather than tailor novice learners for LLMs. 3. **Keep the technical system from over-reliance on any single LLM.** We recognize the inherent risk in relying on a privately owned LLM. For example, many studies cited in this paper leveraged OpenAI's Codex model released in 2022. Within a year, OpenAI would shut down public access to the model, making replications of those latest studies all but impossible except for a select few. There are also fresh and valid concerns about data privacy, especially when children and schools could be potential users of our design. To mitigate this risk, we intentionally build our system on a less powerful general-purpose LLM (gpt-3.5-turbo instead of gpt-4) and ensure that the design would work with other (fine-tuned) LLMs that could eventually be deployed in a local and secure environment. ## 4 Design Overview We briefly describe the prototype design of ChatLogo, a hybrid natural-programming language interface for agent-based modeling and programming. A web-based browser-server system, ChatLogo is built with both LLMs and conventional programming. It is highly modularized: the underlying LLM could be replaced at no cost, and its features could be selectively enabled or disabled depending on the learning needs. The system could be adapted for other languages as well. ### An Intelligent Command Center ChatLogo is an intelligent command center for NetLogo. In this example, we showcase a classic mistake of novice NetLogo programmers: trying to 'set color' of patches. In NetLogo desktop's command center (Appendix 1), the input box would deny the entrance of such an ill-formatted input and show an error message instead. It is as if the computer tells the user: the way you talked was wrong, and I will not respond until you figure out the correct way. In Turtle Universe, the mobile incarnation of NetLogo [1], we made a slight improvement by introducing the help feature: in Appendix 2, the computer briefly explains the primitive and suggests some alternatives. However, it still requires the user to initiate the action, and we found relatively few users would touch the "Help" button [3]. At a surface level, ChatLogo inherits this interactive design. However, its behavior diverges when the user gives a malformed NetLogo input (Appendix 3): besides an error message, it further provides two AI-driven options that can explain the error message or fix the code. Appendix 4 demonstrates the explanation pathway. Once the AI finishes the answer, the learner could ask a follow-up question in natural language, or ask the AI to fix the code for them. At this point, the AI would stress that it might make more mistakes: instead of taking away the learners' initiative, learners are still in charge of the loop. Alternatively, if they decide to send in a new NetLogo command instead, ChatLogo would attempt to execute it directly.
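As a sketch, the flow above can be expressed as a thin routing layer. All names here (`netlogo.command`, `netlogo.CompileError`, `llm.ask`) are hypothetical placeholders, not ChatLogo's actual API:

```python
def handle_input(text, netlogo, llm):
    """Run learner input as NetLogo; on failure, offer opt-in LLM help."""
    try:
        netlogo.command(text)            # well-formed input: just execute it
        return {"status": "ok"}
    except netlogo.CompileError as err:
        return {
            "status": "error",
            "message": str(err),         # the ordinary compiler message
            "actions": {                 # learner-initiated, never automatic
                "explain": lambda: llm.ask(
                    f"Explain this NetLogo error to a novice.\n"
                    f"Input: {text}\nError: {err}"),
                "fix": lambda: llm.ask(
                    f"Suggest a corrected NetLogo command.\n"
                    f"Input: {text}\nError: {err}"),
            },
        }
```

The key design point is that the LLM is invoked only behind learner-chosen actions, keeping the learner in charge of the loop.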
### A Conversational Interface for Creative Expression An intelligent command center might serve novice learners of NetLogo better. However, it assumes that the learner already knows something about the language; otherwise, the input would be unrecognizable in the eyes of the NetLogo compiler. A novice learner might talk in a more "conversational" way: I want to change the background color to red; or, I want to make turtles move around; or more broadly, I want to create a game of ants. A younger learner might also make spelling mistakes along the way, negatively affecting LLMs' performance. We further notice that LLMs trained to be chatbots (e.g. gpt-3.5-turbo or gpt-4) tend to give a long answer to most questions and make decisions for the learner before asking for clarification. For example, Appendix 5 demonstrates GPT-4's answer to a simple question: "In NetLogo, how can I create some moving turtles?" Its answer not only assumed much on the learner's behalf, e.g., that turtles would turn back 180 degrees when hitting the edge of the world; it also gave the learner step-by-step instructions to follow. In a way, GPT-4 attempts to program the learner. Our approach differs from the pre-trained GPT-4 behavior (Appendix 5). Instead of writing code and giving instructions right away, ChatLogo attempts to clarify the learner's needs and intentions (Appendix 6). Instead of sending large chunks of code directly to the learner, it attempts to co-develop the NetLogo code. As shown in Appendix 7, the learner is free to edit the code: either in NetLogo, or in natural language through the "Ask" feature. Instead of overclaiming the correctness of the code, it admits the possibility of making mistakes and works with the learner to address the potential issues (Appendix 8). Finally, upcoming features of ChatLogo will allow learners to add the human-AI co-created code back to the NetLogo model and help learners plan out entire projects in their minds. ## 5 Future work Despite its potential, there is still a long way to go before ChatLogo can be safely and effectively deployed in K-12 educational settings. More work needs to be done to reduce its mistakes, hallucinations, and potentially harmful language. As we do not expect LLMs to solve these fundamental problems overnight, we are also interested in understanding how human-computer interaction and learning design could be leveraged to mitigate the potential harm and develop learners' AI literacy along the way. To achieve this, we are currently running a study with adult NetLogo programmers and evaluating whether it would be appropriate to work with children. There has been much debate around LLMs and the future of humanity as of late. Our ultimate hope is that LLMs could become a liberating force, instead of an oppressive one, for both children and adults. This requires children to be able to program computers for their own purposes, not vice versa. This asks for a more constructionist future for education, where children could be better equipped and supported to construct their own meaningful artifacts, not vice versa.
2301.04233
Adapting to Skew: Imputing Spatiotemporal Urban Data with 3D Partial Convolutions and Biased Masking
We adapt image inpainting techniques to impute large, irregular missing regions in urban settings characterized by sparsity, variance in both space and time, and anomalous events. Missing regions in urban data can be caused by sensor or software failures, data quality issues, interference from weather events, incomplete data collection, or varying data use regulations; any missing data can render the entire dataset unusable for downstream applications. To ensure coverage and utility, we adapt computer vision techniques for image inpainting to operate on 3D histograms (2D space + 1D time) commonly used for data exchange in urban settings. Adapting these techniques to the spatiotemporal setting requires handling skew: urban data tend to follow population density patterns (small dense regions surrounded by large sparse areas); these patterns can dominate the learning process and fool the model into ignoring local or transient effects. To combat skew, we 1) train simultaneously in space and time, and 2) focus attention on dense regions by biasing the masks used for training to the skew in the data. We evaluate the core model and these two extensions using the NYC taxi data and the NYC bikeshare data, simulating different conditions for missing data. We show that the core model is effective qualitatively and quantitatively, and that biased masking during training reduces error in a variety of scenarios. We also articulate a tradeoff in varying the number of timesteps per training sample: too few timesteps and the model ignores transient events; too many timesteps and the model is slow to train with limited performance gain.
Bin Han, Bill Howe
2023-01-10T22:44:22Z
http://arxiv.org/abs/2301.04233v1
# Adapting to Skew: Imputing Spatiotemporal Urban Data with 3D Partial Convolutions and Biased Masking ###### Abstract. We adapt image inpainting techniques to impute large, irregular missing regions in urban settings characterized by sparsity, variance in both space and time, and anomalous events. Missing regions in urban data can be caused by sensor or software failures, data quality issues, interference from weather events, incomplete data collection, or varying data use regulations; any missing data can render the entire dataset unusable for downstream applications. To ensure coverage and utility, we adapt computer vision techniques for image inpainting to operate on 3D histograms (2D space + 1D time) commonly used for data exchange in urban settings. Adapting these techniques to the spatiotemporal setting requires handling skew: urban data tend to follow population density patterns (small dense regions surrounded by large sparse areas); these patterns can dominate the learning process and fool the model into ignoring local or transient effects. To combat skew, we 1) train simultaneously in space and time, and 2) focus attention on dense regions by biasing the masks used for training to the skew in the data. We evaluate the core model and these two extensions using the NYC taxi data and the NYC bikeshare data, simulating different conditions for missing data. We show that the core model is effective qualitatively and quantitatively, and that biased masking during training reduces error in a variety of scenarios. We also articulate a tradeoff in varying the number of timesteps per training sample: too few timesteps and the model ignores transient events; too many timesteps and the model is slow to train with limited performance gain. image inpainting, urban computing, spatial-temporal, missing data
## 1. Introduction

This inconsistency persists despite significant investments in open data. Over the last two decades, cities have increasingly released datasets publicly on the web, proactively, in response to transparency regulation. For example, in the US, all 50 states and the District of Columbia have passed some version of the federal Freedom of Information (FOI) Act. While this first wave of open data was driven by FOI laws and made national government data available primarily to journalists, lawyers, and activists, a second wave of open data, enabled by the advent of open source and web 2.0 technologies, was characterized by an attempt to make data "open by default" to civic technologists, government agencies, and corporations [49]. While open data has indeed made significant data assets available online, their uptake and use has been weaker than anticipated [49], an effect attributable to convenience sampling [24]: we release what we can, even if portions are missing, corrupt, or anomalous. In this paper, we consider a neural data cleaning strategy based on masking out corrupted regions and using a trained model to reconstruct the masked region. These masks are necessarily large, irregular, and extend in both time and space; they may represent political boundaries (municipal zoning, zip codes, city blocks), sensor or software failures [26; 62; 65], varying legal restrictions [1; 39], or unusual events (adverse weather). These missing patches can destroy the utility of the entire dataset for applications that assume coverage. By modeling missing or corrupted data with an arbitrary mask, we afford user control: any areas can be masked and reconstructed, regardless of the reason. We envision tools to improve the coverage and quality of data for use in downstream urban learning tasks [23; 32; 34; 44; 53; 56]. Following the literature, we represent spatiotemporal event data in a 2D or 3D raster form (e.g., a histogram). Our basic model uses the partial convolution approach from Liu et al. [29] to handle the irregular boundaries of missing data (e.g., districts), which focuses model attention on the valid regions while shrinking the masked region, layer by layer, to obtain a complete prediction. More recent approaches to image inpainting on the web emphasize eliminating perceptual artifacts rather than numerical accuracy and are therefore less relevant to our setting. Our contribution is to extend the basic model to the 3D spatiotemporal setting and propose a training regime that adapts to the skewed distributions found in practice. Spatiotemporal interpolation of missing data has been widely studied in the earth sciences [38; 45], especially in remote sensing, where weather effects can obscure measurement [46; 65]. Conventional statistical approaches to impute missing values, such as global/local mean imputation, interpolation, and kriging, are essentially linear, and therefore limited in their ability to capture the non-linear dynamics needed to impute large irregular missing regions.
Neural image inpainting techniques [29; 57] can recover missing patches via training on large datasets of independent images, such that the reconstructed images appear realistic. These approaches have shown promising results with global climate data [48], but have not been adapted to the urban setting, in which data are not smooth functions of space and time but rather histograms of events constrained by the built environment. The goal of inpainting for natural images is to produce a subjectively recognizable image free from perceptible artifacts. But the goal in our setting is quantitative accuracy: we intend for our reconstructed results to be used numerically in downstream applications. The distribution is relatively stable, but exhibits skew and sparsity that can obscure local, dynamic features (Figure 2).

Figure 2. Urban data (bottom row) exhibits skewed, sparse, yet stable distributions that can dominate learning, in contrast with the diversity of natural images (top row).

The challenge for imputation in the urban setting is _skew_: urban data tend to follow population density patterns -- small dense regions surrounded by large sparse areas. These population patterns can dominate the learning process and fool the model into ignoring numerical accuracy in dense regions, even while aggregate error may remain low. To combat skew, we 1) bias the training process to focus on populated regions by seeding the mask in non-zero areas; and 2) use 3D convolutions and vary the number of timesteps in each 3D training sample to capture transient events. Together, these two techniques complement each other: biased masking focuses attention on dense regions, and 3D convolutions with a large chunk size focus attention on sparse regions. We evaluate these techniques on the NYC taxi data (a popular dataset for its coverage and quality) and a NYC bikeshare dataset (less dominated by the built environment). We find that the basic model is effective for urban data imputation, while biased masking reliably reduces error over random masking, both globally and locally. Additionally, we find that the number of timesteps per training sample exhibits a tradeoff: too few timesteps and the model ignores transient patterns, while too many timesteps significantly increases training time without enhancing the inpainting results. We evaluate specific local scenarios (high-traffic locations, low-traffic locations, high-variability locations, anomalous events) to reflect use cases distinct from image inpainting on the web (where subjective quality is all that matters). In summary, we make the following contributions: * We evaluate a basic model adapting image inpainting techniques to urban histograms characterized by skew and sparsity effects due to constraints of the built environment, demonstrating qualitative and quantitative accuracy relative to classical methods. * We improve on this basic model by extending to the 3D spatiotemporal setting to better recognize transient events; we analyze the training time and performance tradeoffs of varying the number of timesteps per training sample. * We propose a self-supervised training process called biased masking to encourage the model to attend to dense population regions and thereby improve accuracy on the highly dynamic regions typical in urban environments; we show that biased masking reliably improves convergence.
* We evaluate these techniques on two real mobility datasets (NYC taxi trips and NYC bikeshare trips), both globally and locally, under varying traffic conditions, weather events, and disruptions. Finally, we show that the model can be used to remove or synthesize anomalous events through targeted masking. ## 2. Related Work Our work is informed by techniques in image inpainting and geospatial interpolation. **Image Inpainting** Image inpainting, or image completion, is the task of synthesizing missing pixels in images such that the reconstructed images are visually credible and semantically realistic. In computer vision, there are two broad categories of inpainting techniques. The first category contains diffusion-based or patch-based methods, which utilize low-level image features to recover the missing pixels. The second category contains learning-based methods that generally involve the training of deep neural networks. Diffusion-based methods (Dosov et al., 2015; He et al., 2017; He et al., 2018) propagate information from neighboring valid pixels to missing pixels, typically from the border to the center of the missing regions. Those techniques are convenient to apply, but are limited to small missing regions. Recently, Saharia et al. (Saharia et al., 2018) developed an image-to-image translation framework based on conditional diffusion models; its evaluation on the inpainting task outperformed several learning-based methods. Patch-based inpainting techniques (Dosov et al., 2015; He et al., 2017; He et al., 2018; He et al., 2018) work by searching for similar patches in the valid regions of the same image or in other images, and then pasting those patches onto the target missing region. However, this process can incur high computational costs. A milestone of the patch-based approach, PatchMatch (Dosov et al., 2015), speeds up the search process with a new nearest-neighbor algorithm. Learning-based methods are trained to learn image patterns from large volumes of image data, and are thus capable of recovering missing regions as well as preserving the semantics of the imagery. Pathak et al. (Pathak et al., 2019) proposed the context encoder, which was the first work to combine a CNN with a generative adversarial network. It applied the encoder-decoder architecture and used both an \(\ell_{2}\) reconstruction loss and a generative adversarial loss in the objective function. Iizuka et al. (Iizuka et al., 2019) improved on their work by incorporating global and local discriminators, which improved content consistency between the valid and missing regions. Additionally, they replaced general convolutional layers with dilated convolutional layers to better capture information from distant pixels. Yu et al. (Yu et al., 2020) proposed a two-stage coarse-to-fine model architecture and incorporated a contextual attention layer to attend to related features from spatially distant regions. They also replaced the general generative adversarial loss with the WGAN loss. Liu et al. (Liu et al., 2020) proposed partial convolution, allowing inpainting models to be used on irregular holes rather than just rectangular missing regions. On top of the work on partial convolution, Yu et al. (Yu et al., 2020) proposed gated convolutional layers to automatically learn and update the masks, as opposed to rule-based updates. To further address the problems of blurry textures and distorted structures in inpainted images, Liu et al.
(Liu et al., 2020) proposed a coherent semantic attention layer, which can both preserve contextual structure and capture semantic relevance between hole features. Zhou et al. (Zhou et al., 2020) incorporated dual spatial attention modules into the U-Net architecture, which can capture the correlations between facial textures at different scales. Seven different discriminators are utilized to ensure realistic local details as well as global consistency. Yu et al. (Yu et al., 2020) designed spatial region-wise normalization (RN) to overcome the problem of mean and variance shifts. RN computes the means and variances separately for the missing and valid regions. Xu et al. (Xu et al., 2020) combined the paradigms of both patch-based and learning-based methods, and inpainted missing regions using textures of patch samples from unmasked regions. Additionally, they proposed a patch distribution loss to ensure the quality of synthesized missing regions. Zeng et al. (Zeng et al., 2020) introduced the aggregated contextual transformation GAN, aiming to improve content reasoning from distant pixels and enhance details of synthesized textures. For more image inpainting works, we refer the reader to the following surveys (Liu et al., 2020; Liu et al., 2020; Yu et al., 2020). The recent trajectory in image inpainting involves reducing or eliminating perceptual artifacts such as discontinuous edges and blurred patches using new loss terms, image preprocessing, or training regimes that favor subjective quality over numerical accuracy. For example, the works of Liu et al. (Liu et al., 2020), Yu et al. (Yu et al., 2020), and Xu et al. (Xu et al., 2020) all propose extensions to partial convolutions to repair blurred boundaries between missing and valid regions. Since our focus is on numerical accuracy and the downstream utility of the synthesized data, we base our approach on partial convolutions from Liu et al. (Liu et al., 2020). Additionally, we aim to design and study architecture-agnostic training regimes that can be used with newer models when applicable. **Geospatial Missing Data Imputation** Classical spatio-temporal interpolation methods, generally variants of inverse-distance or nearest-neighbor weighting (Liu et al., 2020; Liu et al., 2020), kriging (Liu et al., 2020; Liu et al., 2020), or matrix factorization (Liu et al., 2020), are variations of linear methods that do not attempt to (and cannot) interpolate within large, arbitrary, irregular regions, and typically do not seamlessly consider both space and time. Physics-based models based on computational fluid dynamics (Dosov et al., 2015) or agent-based models that directly encode human behavior (Liu et al., 2020; Yu et al., 2020) have been used to infer mobility dynamics, but must be designed separately for each application rather than learned automatically from data. Gong et al. (Gong et al., 2020) solve multi-variable non-negative matrix factorization to impute urban data, but assume the availability of multiple variables and do not consider arbitrary irregularities. Zhang et al. (Zhang et al., 2020) were concerned with the malfunction of satellites and poor atmospheric conditions (e.g. thick clouds), which can produce missing regions in remote sensing data. They proposed a unified spatial-temporal-spectral deep CNN architecture to recover the missing information in satellite images. Kang et al. (Kang et al., 2020) modified the architecture from (Yang et al., 2020) to restore the missing patterns of sea surface temperature (SST) from satellite images.
Tasnim and Mondal (Tasnim and Mondal, 2019) also adopted the coarse-to-fine inpainting architecture from (Yang et al., 2020) to restore satellite images. The innovation of their work is the abandonment of the coarse-inpainting pipeline; instead, they used another highly correlated temporal image as an auxiliary input to the refinement pipeline. Additionally, Kadow, Hall, and Ulbrich (Kadow et al., 2020) borrowed the architecture from (Liu et al., 2020) to reconstruct missing climate information. In the geo-spatial domain, most of the literature that we found applied image inpainting techniques to remote sensing data. To the best of our knowledge, there is no prior work that has taken advantage of image inpainting methods to reconstruct missing values in urban data. ## 3. Representative Datasets We worked with two mobility datasets: NYC taxi data and NYC bikeshare data. Although potential _applications_ of the proposed model are widely available, datasets on which to _evaluate_ the model are rare: we need longitudinal coverage to provide ground truth, sufficient complexity to study both global and local fidelity, and accessibility to a general audience for expository purposes. Mobility data achieves all three goals.
We rasterize the event data into a 3D histogram (2D space + 1D time) and slice this block into training samples. In this paper, we consider only the temporal extent in 3D; varying spatial resolution, bounds, or overlap during rasterization of the source data is left for future work. If we slice the input into individual timesteps, the model cannot exploit temporal consistency. We therefore extend all convolutional layers, inputs, and masks to 3D, and consider the effect of varying the number of timesteps per training sample. The inputs are 3D image blocks of dimension \(T\times W\times H\), where \(T\) represents the temporal dimension. The masks are also 3D blocks with the same shape as the image block. The model architecture is illustrated in Figure 4. The parameters of each convolutional layer appear in Table 1.

Table 1. Parameters of 3D convolutional layers. T represents the temporal dimension of the image block.

| Layers | Channel | Kernel Size | Stride | Padding |
| --- | --- | --- | --- | --- |
| encoder 1 | 64 | (1,3,3) | (1,2,2) | (0,1,1) |
| encoder 2 | 128 | (1,3,3) | (1,2,2) | (0,1,1) |
| encoder 3 | 256 | (1,3,3) | (1,2,2) | (0,1,1) |
| encoder 4 | 512 | (1,3,3) | (1,2,2) | (0,1,1) |
| encoder 5 | 512 | (7,3,3) | (2,2,2) | (2*((T-1)//4),1,1) |
| encoder 6 | 512 | (7,3,3) | (2,2,2) | (2*((T-1)//4),1,1) |
| decoder 1 | 512 | (1,3,3) | (1,1,1) | (0,1,1) |
| decoder 2 | 512 | (1,3,3) | (1,1,1) | (0,1,1) |
| decoder 3 | 256 | (1,3,3) | (1,1,1) | (0,1,1) |
| decoder 4 | 128 | (1,3,3) | (1,1,1) | (0,1,1) |
| decoder 5 | 64 | (1,3,3) | (1,1,1) | (0,1,1) |
| decoder 6 | 1 | (1,3,3) | (1,1,1) | (0,1,1) |
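Concretely, a 3D partial convolution layer can be sketched as follows. This is a minimal PyTorch sketch following the partial convolution formulation of Liu et al. [29] extended to 3D; the single-channel mask convention and layer interface are our assumptions, not the paper's released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv3d(nn.Module):
    """3D partial convolution: convolve only valid voxels, renormalize by
    the fraction of valid mask entries under each window, and shrink the
    hole in the mask for the next layer (mask: 1 = valid, 0 = hole)."""

    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size,
                              stride=stride, padding=padding, bias=True)
        # fixed all-ones kernel that counts valid entries under each window
        self.register_buffer("ones",
                             torch.ones(1, 1, *self.conv.kernel_size))
        self.window = float(self.ones.numel())

    def forward(self, x, mask):
        # x: (N, C, T, H, W); mask: (N, 1, T, H, W)
        with torch.no_grad():
            valid = F.conv3d(mask, self.ones,
                             stride=self.conv.stride,
                             padding=self.conv.padding)
        out = self.conv(x * mask)                    # sums over valid voxels only
        bias = self.conv.bias.view(1, -1, 1, 1, 1)
        scale = self.window / valid.clamp(min=1.0)   # renormalize partial sums
        out = (out - bias) * scale + bias
        new_mask = (valid > 0).float()               # hole shrinks layer by layer
        return out * new_mask, new_mask

# e.g., the first encoder layer from Table 1:
# layer = PartialConv3d(1, 64, (1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1))
```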
### Biased Masking

By default, masks can be generated by randomly selecting a starting point in the image and then conducting a random walk for a fixed number of steps. We call this process **random masking**. However, since urban data is constrained by the built environment and is therefore highly skewed toward populated areas, random masks tend to include a large number of zero-valued cells, squandering opportunities to learn from the steep gradients in dense, high-traffic regions; Figure 5(a) illustrates an example. To focus attention on populated areas, we use a **biased masking** approach (sketched in code below):

1. Given an input image, apply Gaussian blur to blend the pixel values and increase the region of potential starting points.
2. Select a threshold (e.g., the 90th percentile of the image values) to identify populous regions.
3. Randomly select a starting location from one of the detected areas and generate masks via random walk. The probability of selecting one of the detected areas is proportional to the size of the area.

These steps are illustrated in Figure 5(b). The biased masking approach makes the learning problem more challenging by increasing "contrast": ensuring that masks tend to include dense, dynamic regions, but also include sparse, stable regions. To compare the performance of the two masking approaches, we generated two masks (one random and one biased) for each training sample.
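The following sketch captures our reading of this procedure with numpy and scipy; the function name and the blur, percentile, and walk-length defaults are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def biased_mask(image: np.ndarray, sigma: float = 2.0,
                q: float = 90.0, n_steps: int = 500) -> np.ndarray:
    """Generate a 2D binary mask (1 = valid, 0 = hole) biased toward populous regions."""
    # 1) Blur to blend values and enlarge the candidate starting regions.
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    # 2) Threshold at a percentile to identify populous cells; sampling the
    #    cells uniformly makes larger regions proportionally more likely.
    candidates = np.argwhere(blurred >= np.percentile(blurred, q))
    # 3) Random walk starting from a randomly chosen populous cell.
    r, c = candidates[rng.integers(len(candidates))]
    mask = np.ones(image.shape, dtype=np.uint8)
    for _ in range(n_steps):
        mask[r, c] = 0  # carve out a hole
        dr, dc = rng.integers(-1, 2, size=2)
        r = int(np.clip(r + dr, 0, image.shape[0] - 1))
        c = int(np.clip(c + dc, 0, image.shape[1] - 1))
    return mask
```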
## 5. Experimental Evaluation

We consider the following questions:

1. Is the core 3D model qualitatively and quantitatively effective at inpainting missing data? (Section 5.1, Figure 6, Table 2)
2. Does increasing the number of timesteps per training sample generally improve performance? (Section 5.2, Figure 7)
3. Does biased masking improve performance overall, and in specific regions? (Section 5.3, Figure 8)
4. Does varying the number of timesteps per training sample influence the spatial distribution of error between sparse and dense regions? (Section 5.4, Figure 9)
5. Does the model faithfully reconstruct local, dynamic conditions in specific areas of interest? (Section 5.5, Figure 11)

Figure 5. Comparison of the random and biased masking regimes.

With NYC taxi data, we trained the models on both mask types -- random and biased -- and with different temporal dimensions T = {1,2,3,5,7,10,15}. Based on initial experiments on both mask types and at lower temporal chunk sizes, we found that \(\lambda=12\) offered effective performance; we fix \(\lambda\) to be \(12\) for all experiments on the taxi data. The batch size and initial learning rate are set to \(16\) and \(0.01\), respectively. The learning rate decays every \(500\) training iterations at a rate of \(0.9\). Unless otherwise stated, we evaluate the model on the test set using \(\ell_{1,hole}\), which is the sum of the absolute value of the difference between the ground truth and predictions at the masked positions only. We compare our models with baseline statistical methods (a sketch of the interpolation baselines follows below):

* **Temporal Global Mean**: On the training data, we calculate the average taxi demand at each pixel, for each hour of the day. On the test data, we assign each masked pixel the corresponding global mean computed from the training data.
* **Nearest Neighbor (NN) Interpolation**: We assign each masked pixel the value of the nearest unmasked pixel. We experimented with both 2D and 3D implementations using scipy.3 Footnote 3: [https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html#scipy.interpolate.griddata](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html#scipy.interpolate.griddata)
* **RBF Interpolation**: We interpolate using radial basis functions (RBF) on observations at points sampled outside the masked region. We experimented with both 2D and 3D RBF interpolation with the RBF Python implementation.4 Footnote 4: [https://github.com/treverhines/RBF](https://github.com/treverhines/RBF)

We considered 3D kriging, but found the poor scalability to be prohibitive: the estimated time to complete the computation for an experiment with T=2 was about two weeks on a typical platform. Moreover, kriging is a linear method, and we have no reason to believe that it can reconstruct data across large, irregular regions. Another approach, which we did not study, is to use physics-based models based on computational fluid dynamics (Beng et al., 2017) or agent-based models that directly encode human behavior (Shi et al., 2018; Wang et al., 2018) to capture macro traffic dynamics. These approaches can potentially "fill" large missing regions, but must be designed separately for each application rather than learned automatically from data.
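As an illustrative sketch of the NN baseline (not our exact evaluation code), the 3D variant uses scipy's griddata over voxel coordinates; the 2D variant simply drops the temporal axis.

```python
import numpy as np
from scipy.interpolate import griddata

def nn_inpaint_3d(block: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill holes (mask == 0) with the value of the nearest valid voxel.

    block: (T, W, H) demand images; mask: same shape, 1 = valid, 0 = hole.
    """
    known = np.argwhere(mask == 1)          # (t, w, h) coordinates of valid voxels
    values = block[mask == 1]
    holes = np.argwhere(mask == 0)
    filled = block.astype(float).copy()
    filled[mask == 0] = griddata(known, values, holes, method="nearest")
    return filled
```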
### Model Effectiveness (Q1)

We find that for both taxi and bikeshare datasets the proposed model faithfully captures qualitative visual patterns and also significantly outperforms baseline methods on multiple metrics.

#### 5.1.1. Qualitative Analysis

We first present some visual examples of inpainting results on NYC taxi data in Figure 6. The left figure shows taxi demand at four different hours of the day (8AM, 2PM, 8PM, and 2AM). From left to right, we show the ground truth, the (biased) mask, the mask applied to the ground truth, and the reconstructed image. The inpainting model was trained with 5 timesteps per training sample and with biased masking. For all hours and all masks, the model is effective at reconstructing missing data, even when the majority of the signal is obscured. The reason is clear: the patterns are sufficiently stable from timestep to timestep as to allow the model to infer missing values from temporal patterns as well as spatial patterns. The model is also responsive to the time of day: we see fewer rides at 2AM than at 2PM, as expected, suggesting that the model has learned temporally local patterns as opposed to relying on global spatial patterns. The transition across the mask boundary is also smooth, suggesting the model was able to consider local spatial patterns appropriately. Overall, we find that the model is perceptually effective at reconstructing missing values, even in challenging cases. The right plot in Figure 6 shows corresponding results for bikeshare data. The model was trained with bikeshare data using T=3, biased masking, and \(\lambda=4\). We make similar observations as for the taxi data -- at all times of day and for all masks, the reconstructed images are visually similar to the ground truth images, indicating the consistent effectiveness of our model.

#### 5.1.2. Quantitative Analysis

Table 2 contains quantitative results of the baseline models and our neural models across several evaluation metrics. We observe that: 1) Our neural models, trained with either masking type and with any temporal dimension, always outperform the baseline models. The 2D baseline models that ignore the temporal dimension are especially ineffective. The global mean ignores spatial effects and just models a function \((pixel,hour)\to value\). The 2D- and 3D-nearest neighbor methods perform poorly when the nearest neighbors may be far away; the 2D- and 3D-RBF methods assume relatively uniform sampling across the region, which is not possible in our setting of wide-area missing data. 2) At T=5 and 7, our method performs similarly and achieves the best performance -- almost 50% lower \(\ell_{1}\) error and 66% lower \(\ell_{2}\) error than the best baseline. 3) SSIM does not significantly distinguish different models; while popular in image inpainting, this metric is designed to capture perceptual similarity of natural images, which is not relevant for the spatiotemporal aggregations we study. 4) The model training time increases by about 9 minutes for every additional hour included in a chunk. At T=5, the model takes 55 minutes to train. The baseline heuristic-based methods -- global mean and 2D- and 3D-NN -- are very fast (completing in a few minutes) but very inaccurate given that they do not attempt to model global dynamics. The 3D-RBF method is inefficient: T=2 required over 24 hours to train.

### Temporal Dimension Tradeoff (Q2)

Figure 7 shows the prediction errors for NYC taxi data, evaluated on random masks (top plot) and biased masks (bottom plot). The y-axis is the \(\ell_{1}\) loss considered for the masked region only ("Hole"). The x-axis varies the number of timesteps included per training sample (temporal dimension), ranging from 1 to 15. (a) When tested with random masks, the average mask covers the entire region, concentrated at the center. Models trained with biased masking reduce error at all sizes. The \(\ell_{1}\) error decreases as the number of timesteps increases up until T=7, then starts to increase again (T=5 and T=7 have similar performance when trained with biased masking).
At T=2, the model begins to make use of the temporal dependency in the data by applying 3D convolutions. With both biased and random masking, the \(\ell_{1}\) loss decreases sharply when T changes from 1 to 5. (b) When tested with biased masks, the average masked cells are concentrated at the upper left due to the bias toward populated regions. The plot has a similar U-shape to that of random masking.

### Biased Masking is Effective (Q3)

Figure 7, as discussed, compares the effects of biased masking to random masking at various values of T; we see that at all tested temporal dimensions, models trained with biased masking outperform those trained with random masking, as indicated by smaller \(\ell_{1}\) errors. In addition to the measurement of overall error, we also inspected the convergence rates under both training regimes, as measured on the validation set with our selected scenarios (Figure 8). The scenario masks are chosen to evaluate local accuracy in high-traffic, low-traffic, high-variability, and semantically important locations. See Section 5.5 for the masks of the scenarios and detailed evaluations. Overall, when we tested with random and biased masks, the model trained with biased masks converged faster and had smaller errors, indicating that biased masking is beneficial to the imputation task under skewed distributions (upper left). Evaluating the 5th Avenue and Penn Station scenarios, the model trained with biased masking displayed similar patterns -- it converged faster and achieved better results than the model trained with random masks.

\begin{table} \begin{tabular}{l c c c c c c} \hline **Model** & **Mask Type** & \(\ell_{1,hole}\) & \(\ell_{2,hole}\) & **SSIM** & **PSNR** & **Train (m)** \\ \hline Global Mean & - & 1.2644 & 55.3298 & 0.9973 & 61.4880 & \(\sim\)5 \\ 2D-RBF & - & 3.1442 & 284.8807 & 0.9880 & 54.8346 & 70 \\ 2D-NN & - & 3.1179 & 318.6575 & 0.9884 & 54.6717 & \(\sim\)5 \\ 3D-RBF & - & 1.6653 & 47.7088 & 0.9956 & 57.9921 & \(>\)24h \\ 3D-NN & - & 1.3632 & 84.0529 & 0.9964 & 59.1652 & \(\sim\)5 \\ \hline Ours, \(T=1\) & biased & 0.9817 & 37.8468 & 0.9984 & 62.4628 & 18 \\ & random & 0.9406 & 0.3730 & 0.9983 & 62.4679 & 18 \\ \hline Ours, \(T=2\) & biased & 0.8551 & 32.6429 & 0.9986 & 63.3815 & 27 \\ & random & 0.8979 & 35.2923 & 0.9958 & 63.1056 & 27 \\ \hline Ours, \(T=3\) & biased & 0.7847 & 25.8374 & 0.9987 & 63.6445 & 35 \\ & random & 0.7950 & 26.4765 & 0.9989 & 63.7221 & 35 \\ \hline Ours, \(T=5\) & biased & 0.7196 & 17.8080 & **0.9991** & 64.4028 & 55 \\ & random & 0.7606 & 20.6116 & 0.9990 & 64.1000 & 55 \\ \hline Ours, \(T=7\) & biased & **0.7185** & **18.6746** & 0.9990 & 64.3407 & 75 \\ & random & 0.7489 & 20.0100 & 0.9990 & 64.2656 & 75 \\ \hline Ours, \(T=10\) & biased & 0.7537 & 28.4833 & 0.9886 & 63.3329 & 75 \\ & random & 0.7820 & 26.1138 & 0.9985 & 63.1288 & 75 \\ \hline Ours, \(T=15\) & biased & 0.7729 & 25.3586 & 0.9958 & 63.1885 & 140 \\ & random & 0.7849 & 21.9446 & 0.9899 & 63.8721 & 140 \\ \hline \end{tabular} \end{table} Table 2. Model training time and performance.

Figure 6. Reconstructed results of taxi demand images (left) and bike demand images (right) at different hours of the day, from models trained with biased masking and 3D partial convolutions (T=5 for taxi data and T=3 for bikeshare). From left to right, each column displays the ground truth image, mask, masked ground truth, and reconstructed data. From top to bottom, each row presents the demand at 8AM, 2PM, 8PM, and 2AM, respectively.

Figure 7. Evaluation of models trained with biased masking against those trained with random masking, at seven temporal dimensions, with two different masking scenarios -- random and biased masking.
Those two scenarios are representative of dense and busy areas. We conjecture that biased masking avoids rewarding the model for trivially predicting zero in sparse regions and ignoring the dynamics in dense regions. We consider this result an initial foray: encoding domain knowledge and data patterns into the masking strategy appears to be a powerful, easy, and architecture-agnostic means of improving model performance, aligned with emerging principles of data-centric AI. The other three scenarios -- the airport, the Lower East Side, and Astoria -- represent sparse regions with relatively light traffic. The convergence lines for them are less stable, and no benefit of biased masking is realized. We conjecture that variants of biased masking that weight both dense and sparse (yet non-zero) areas may further improve the model, as would specialized training on regions of interest (though that approach could be considered data leakage from training to test).

### Spatial distribution of errors (Q4)

We hypothesized that the original 2D partial convolution architecture (corresponding to T=1, Figure 7(a)) would be insufficient to capture transient events. For example, taxi rides occur in the suburbs, but they are infrequent and less predictable; we expected the model to be less capable of accurately predicting these events. Increasing the temporal dimension is also expected to help with the dense region as well. We can inspect the spatial distribution of the error for T=1 in Figure 9 to check this hypothesis: each map is the average of 3000 timesteps, and is colored by the difference between the predicted value and the ground truth: a blue cell indicates an underestimate and a red cell represents an overestimate. We see that the suburban regions are consistently underestimated, while the dense region is overestimated. At T=5, we observe a similar pattern, but with both underestimation and overestimation errors significantly reduced. The suburbs are still underestimated, but the dense regions are effectively improved when more temporal context is incorporated. At T=15, the spatial error distribution is almost identical to T=5, with slightly higher underestimation and lower overestimation. However, T=15 requires prohibitive training time due to very large training samples, so this approach is undesirable given only slightly different performance. This tradeoff in temporal scope reflects a subtle characteristic of the source data; we hypothesize that T=5 corresponds to the window size needed to capture dynamic traffic periods, e.g., morning and evening commutes.

### Scenario Based Evaluation (Q5)

Spatiotemporal patterns of missing data in practice are unlikely to resemble random walks. Instead, outages will correlate with environmental features: sensors may fail in certain weather conditions, transient events may prevent data acquisition, or legal restrictions on data availability may follow political boundaries. To demonstrate the applicability of our inpainting models in real-world situations, we evaluate the inpainting methods at specific locations representing varying conditions.
We tested five different scenarios to cover various spatial locations, temporal variances, and social events. The five scenarios include the masking of 5th Avenue, Penn Station, the airport, the Lower East Side, and Astoria. The masks are visualized in Figure 10.

Figure 8. Convergence plots of the models trained with either biased or random masking, and tested with random masks, biased masks, and five additional scenario maskings.

Figure 10. Scenario masks overlaid on the NYC map. Annotation: the ratio of masked-to-unmasked area.

Figure 9. Aggregated spatial errors between predicted and ground truth values, from models trained with different temporal dimensions. Red areas indicate overestimation, while blue areas represent underestimation.

As mentioned in Section 5.3, 5th Avenue and Penn Station are representative of busy and dense areas with heavy traffic. 5th Avenue can also show the impacts of certain social events on traffic patterns: the Pride Parade presented an anomalous intervention where traffic was zero on the parade route. The Lower East Side is away from central Manhattan, with relatively lighter traffic than the first two cases. The airport and Astoria scenarios represent sparse regions where traffic is light. We chose two periods for those scenarios to cover temporal variance: Feb. 1st to Feb. 15th, 2016, and June 18th to June 29th, 2016. A snowstorm from Feb. 5th to 8th in New York City is evident in the data (Figure 11). On June 26th, 2016, the Pride Parade in New York City started at 5th Avenue and moved downtown to 8th Street. The event blocked all traffic along the route and affected the surrounding traffic as well. Therefore, testing in the selected June period can help evaluate the model's response to anomalies. We test three inpainting models -- our model trained with biased masking at T=5, the same model but trained with random masking at T=5, and the global mean approach. We plot the ground truth and predicted values at the average pixel level in the missing region, for each hour during the selected periods. The visualizations are provided in Figure 11. The average absolute errors between the ground truth and predicted values, over the missing region and during the evaluation periods, are reported in Table 3. We have the following observations:

* For three scenarios -- 5th Avenue, Penn Station, and Lower East Side -- our models, whether trained with biased or random masking, have much smaller gaps between the predicted values and the ground truth, compared with the temporal mean approach. This benefit holds for both evaluated periods, as shown in both Table 3 and Figure 11. For the airport and Astoria scenarios, the temporal mean is slightly better, with much smaller magnitudes in comparison with the other three cases.
* From Table 3, we see that for both evaluation periods, the model trained with biased masking has smaller average errors than the model trained with random masking, except for the airport scenario during June.
* During the snow days (02/05-02/08/2016), it is expected that the traffic in the dense regions would be significantly impacted, which is supported by the trough seen in the ground truth line for the Penn Station scenario (the other scenarios are not heavily impacted by the snow). The model trained with biased masking is responsive to the irregular traffic caused by extreme weather, unlike the temporal mean baseline.
* During the Pride Parade, the traffic on 5th Avenue was all diverted to other routes, creating an anomaly in the traffic patterns; we therefore see a dip in the traffic counts. As on the snow days, the temporal mean baseline does not recover the missing values. Our model's inpainting results are close to the ground truth values, though they slightly overestimate them.

Overall, the reconstruction accuracy is compelling at specific locations, but not perfect. For the 5th Avenue scenario, the parade can be seen as an anomaly, which is rare in the training stage and hard to detect. But this scenario represents another usage of our model: rather than assuming that ground truth data is "correct", we can use the masking to intentionally repair known bad data and reconstruct global patterns in a semantically reasonable way. This "airbrushing" of flaws in the data can be used to improve the quality of training sets for downstream applications, mitigating, for example, biofouled or errant sensors and faulty telemetry. For example, in the top visualization in Figure 12, we show the 5th Avenue scenario: the first column shows the taxi counts along 5th Avenue during the parade day, zoomed in on the Manhattan region. Several locations of missing data (white dots) can be seen on the avenue. We masked out 5th Avenue altogether and used our inpainting model to reconstruct the missing values. The use case is to enable policymakers and researchers to conduct counterfactual studies: what would taxi demand have been like were it not for the parade? The results, as shown in the fourth column, recover the missing regions in a realistic way. Alternatively, the model might be used to synthesize parade-day traffic rather than removing its effects. By masking the surrounding area and retaining the parade disruption, the model can attempt to represent the influence of the disruption elsewhere in the city. As shown in the bottom visualization in Figure 12, the generated results are smaller in magnitude, but overall the pattern is matched faithfully, suggesting this use case is viable for synthesizing scenarios that may not be present in the data record (natural disasters, proposed construction, accidents, etc.).

Figure 11. Temporal line plots of evaluations for five scenarios. In each plot, we visualize the ground truth, predictions from models trained with biased masking and random masking, and predictions from the temporal mean method. Two evaluation periods, Feb. and June, are selected. The irregular events, extreme snow days and the Pride Parade, are annotated with grey regions.

\begin{table} \begin{tabular}{c c c c} **Scenarios** & **G.T. - Biased** & **G.T. - Random** & **G.T. - Mean** \\ \hline \hline \multicolumn{4}{c}{**02/01/2016 -- 02/15/2016**} \\ \hline 5th Avenue & 4.2 & 6.2 & 17.0 \\ \hline Penn Station & 19.3 & 33.5 & 30.0 \\ \hline Lower East Side & 2.5 & 2.8 & 8.2 \\ \hline Airport & 2.3 & 1.6 & 1.8 \\ \hline Astoria & 0.8 & 0.7 & 0.4 \\ \hline \multicolumn{4}{c}{**06/18/2016 -- 06/30/2016**} \\ \hline 5th Avenue & 3.6 & 4.8 & 22.53 \\ \hline Penn Station & 21.6 & 37.5 & 30.0 \\ \hline Lower East Side & 1.7 & 2.1 & 7.4 \\ \hline Airport & 2.4 & 1.9 & 2.0 \\ \hline Astoria & 0.8 & 0.7 & 0.4 \\ \hline \end{tabular} \end{table} Table 3. Average absolute error between the predicted values and ground truth, over the missing regions, and during the selected evaluation periods.

Penn Station is a train station and represents a high-demand area for taxis.
Our model tends to underestimate the high demand at this location, though biased masking improves the prediction. For the Lower East Side, there are a few anomalous spikes, to which the proposed models are responsive. For the airport and Astoria, our models are no better than the temporal mean approach. We conjecture that for the airport, the highly variable rides in and out of the airport confound the model. For Astoria, the much lower demand is harder to predict; note the lower scale of the y-axis.

## 6. Discussion

Our study is motivated by the inconsistent availability of urban data caused by missing, corrupt, or inaccurate data, which hinders its use in downstream tasks, especially learning tasks that require coverage and accuracy. We designed and implemented a model based on partial convolutions that can tolerate irregular missing regions -- zip codes, geographical boundaries, congressional districts, or other regions that may be correlated with data absence or quality. To capture the temporal dependency in urban data, we replaced the 2D convolutional layers in the model with 3D convolutional layers and experimented with varying the number of timesteps per training sample, finding non-trivial tradeoffs and a local optimum around T=5 for taxis and T=3 for bikeshare, potentially interpretable as the autocorrelation period of traffic (i.e., about 5 hours of rush hour). To address the spatial skew in human activity, we proposed a masking approach that reflects the skew in the distribution. By encouraging the model to attend to dense, dynamic regions (via a percentile threshold), the model learns faster and is not rewarded for accurate predictions in trivially inactive areas. Biased masking showed improved performance across all values of \(T\), multiple global evaluation strategies, and most local evaluation scenarios. This approach suggests a broader family of related masking strategies to help users encode domain knowledge about the data and setting. For example, encoding correlations between high-traffic areas (e.g., subway stops and train stations during lunch time) as masks may help the model learn these correlations with less data. Qualitatively, we confirmed from the visual examples that image inpainting techniques can be used to reconstruct data in large, irregular regions in space and time. Quantitatively, we confirmed that extending the model architecture to 3D improves performance, as supported by the sharp decrease in \(\ell_{1}\) when T changes from 1 to 2. Second, we observe that increasing the temporal dimension up to a certain threshold improves performance in general, regardless of masking strategy; ignoring the temporal dimension in this setting is untenable. Additionally, we evaluated performance in local settings, demonstrating that the model is not just learning an average value, but is responsive to subtle spatial variation. The model captures irregular traffic patterns caused by transient events, such as extreme weather and the Pride Parade, and the scenario evaluations showed that biased masking can improve performance in these local settings as well.

## 7. Limitations & Future Work

There are several limitations of our study that represent directions for future work. First, our results on mobility data may extend to other urban activity (e.g., 311 calls, crowd movement, business permits, public safety events, housing events, and more).
We do not consider the generalizability of these methods to multiple variables, or to variables that do not follow the same spatial patterns; there are opportunities to exploit correlations between variables to improve performance. Additionally, the taxi dataset is exceptionally large and complete; understanding how these techniques behave in low-data regimes is important for practical applications. Integration of masked multi-variate data may be an opportunity: given the shared built environment, models trained on one variable may transfer to predictions of other variables. Second, rasterizing event data to a form amenable to computer vision techniques involves a number of design choices we did not study: resolution, overlap, and irregular boundaries may present opportunities or challenges. In particular, data associated with census blocks, tracts, or individual trajectories lose information when regridded as histograms. In these cases, graph neural networks may be more appropriate to represent the spatial adjacency relationships. Third, even with the best model configuration, we consistently overestimate in the city region and underestimate in the sparse suburban region. Some model architectures (attention mechanisms, multi-view learning) or loss functions may improve performance, as may more specialized masking and training regimes.

Figure 12. Top: "Airbrushing" the parade event (white pixels) to remove its effect on the data. Bottom: Inferring traffic effects of the parade by reconstructing data everywhere except 5th Avenue to produce qualitatively realistic results.

## 8. Code Availability

Our code is available at [anonymized for review].
2309.00677
Ho'oleilana: An Individual Baryon Acoustic Oscillation?
Theory of the physics of the early hot universe leads to a prediction of baryon acoustic oscillations that has received confirmation from the pair-wise separations of galaxies in samples of hundreds of thousands of objects. Evidence is presented here for the discovery of a remarkably strong individual contribution to the baryon acoustic oscillation (BAO) signal at z=0.068, an entity that is given the name Ho'oleilana. The radius of the 3D structure is 155/h_{75} Mpc. At its core is the Bootes supercluster. The Sloan Great Wall, CfA Great Wall, and Hercules complex all lie within the BAO shell. The interpretation of Ho'oleilana as a BAO structure with our preferred analysis implies a value of the Hubble constant of 76.9+8.2-4.8 km/s/Mpc.
R. Brent Tully, Cullan Howlett, Daniel Pomarede
2023-09-01T18:00:06Z
http://arxiv.org/abs/2309.00677v1
# Ho'oleilana: An Individual Baryon Acoustic Oscillation? ###### Abstract Theory of the physics of the early hot universe leads to a prediction of baryon acoustic oscillations that has received confirmation from the pair-wise separations of galaxies in samples of hundreds of thousands of objects. Evidence is presented here for the discovery of a remarkably strong _individual_ contribution to the baryon acoustic oscillation (BAO) signal at \(z=0.068\), an entity that is given the name Ho'oleilana. The radius of the 3D structure is \(155\,h_{75}^{-1}\) Mpc. At its core is the Bootes supercluster. The Sloan Great Wall, CfA Great Wall, and Hercules complex all lie within the BAO shell. The interpretation of Ho'oleilana as a BAO structure with our preferred analysis implies a value of the Hubble constant of \(76.9^{+8.2}_{-4.8}\) km s\({}^{-1}\) Mpc\({}^{-1}\). ## 1 Introduction Pressure waves generated in the hot plasma of the early universe become imprinted in baryon fluctuations approximately 390,000 years after the hot Big Bang (Peebles & Yu, 1970; Sunyaev & Zeldovich, 1970). The remnants of these waves create a ruler that, observed across time in the evolving universe, provides constraints on the physics governing cosmic evolution (Weinberg et al., 2013; Aubourg et al., 2015). Eisenstein et al. (1998) investigated the possibility that early universe fluctuations caused by the baryon component of matter might explain structure on scales of \(\sim 13,000\) km s\({}^{-1}\)(Tully, 1986; Tully et al., 1992; Broadhurst et al., 1990) and hints of baryon induced features in the power spectrum of galaxy correlations were first announced by Percival et al. (2001). Subsequently, compelling evidence for what have come to be called baryon acoustic oscillations (BAO) has been seen as a peak in the pair-wise separations of galaxies throughout cosmic history (Cole et al., 2005; Eisenstein et al., 2005; Beutler et al., 2011; Blake et al., 2011; Ross et al., 2015; Alam et al., 2017, 2021). In all published cases, the BAO feature is a _statistical_ imprint compounded by contributions from many locations. Studies such as Scrimgeour et al. (2012) and Goncalves et al. (2018) have identified the scale at which the Universe reaches one percent homogeneity as \(\sim 70-120\,h_{75}\) Mpc. By logical arguments, the density fluctuations anticipated in individual BAO shells (which exist on scales larger than the homogeneity scale) can then be only a few percent of the mean matter density. So it has not been expected that individual BAO can be discerned. It was demonstrated by Arnalte-Mur et al. (2012), though, that assuming BAO developed out of pre-recombination central dark matter concentrations identifiable today as rich clusters, the scales of associated BAO could be identified by wavelet analysis and the stacked density maps from \(\sim 800\) centers. These centers can be further studied to identify the structures that contribute most substantially to the total BAO signal. We were not looking for BAO. However visual examination of maps from the Cosmicflows-4 compilation of galaxy distances (Tully et al., 2023) revealed a structure that invited further inspection. By way of introduction, the two orthogonal views in supergalactic coordinates in Figure 1 show the distribution of galaxy groups north of the Milky Way equator in this data set.1 The SGY axis roughly tracks redshifts. An evident overdensity is seen at SGY\(\sim 20,000\) km s\({}^{-1}\), part of which is the Sloan Great Wall (Gott et al., 2005). 
The Center for Astrophysics Great Wall (de Lapparent et al., 1986) is seen at SGY\(\sim 7000\) km s\({}^{-1}\). Footnote 1: Distances are given in units of CMB frame velocities, \(V_{cmb}\). Distances in Mpc, \(d\), are directly related: \(d=f(\Omega_{m},\Omega_{\Lambda})V_{cmb}/H_{0}\), where \(f(\Omega_{m},\Omega_{\Lambda})\) is a small adjustment dependent on the cosmological model. Restricting the velocity range to the interval \(19,000-26,000\) km s\({}^{-1}\), the domain including the Sloan Great Wall, a view from the third orthogonal direction emphasizes what appears to be a ring structure, shown in the top panel of Figure 2. A reasonable by-eye fit to the structure is given by the red ring of radius 11,300 km s\({}^{-1}\) centered at SGX\({}_{c}=-400\) km s\({}^{-1}\), SGZ\({}_{c}=5000\) km s\({}^{-1}\) in the bottom panel, which we show later in this work is also statistically justified when considering the full 3-dimensional galaxy distribution. This structure has been noted by Einasto et al. (2016) as the most prominent of several shell-like structures revealed in the main SDSS sample. These authors looked for features around clusters and groups of varying richness and found the richer groups provided the more convincing evidence as the centers for shell-like structures. These authors considered, but did not favor, the possibility that any of the features they detect are related to the BAO. As will be discussed further, we find contrary evidence that the ring seen in Figure 2 does indeed form part of a large coherent 'BAO shell' -- the biggest contribution to the overall BAO signal that we will report. In any event, this apparent ring structure at a distance of \(\sim 250\) Mpc from us is one of the largest structures observed in the nearby Universe to date and links together a number of hitherto disconnected components of our cosmic neighborhood. We name this remarkable structure "Ho'oleilana".2 Footnote 2: "Sent murmurs of awakening" from the Hawaiian Kumulipo creation chant: _Ho'oleiliek ka lana a ka Po ululi_, "From deep darkness came murmurs of awakening".

Figure 1: Zoom of Fig. 21 from Tully et al. (2023) in the galactic north sector showing the distribution of Cosmicflows-4 galaxy groups in supergalactic coordinates.

Figure 2: _Top panel_: Supergalactic SGX\(-\)SGZ projection of all northern Cosmicflows-4 galaxies with \(19000<V_{cmb}<26000\) km s\({}^{-1}\). _Bottom panel_: Same as the top panel with the addition in red of a circle of radius 11300 km s\({}^{-1}\) centered at SGX\({}_{c}=-400\) km s\({}^{-1}\), SGZ\({}_{c}=5000\) km s\({}^{-1}\) and blue crosses at the locations of Abell clusters A1781, A1795, A1825, and A1831.

The two histograms in Figure 3 give more insight into the properties of Ho'oleilana. The upper panel gives the numbers of our galaxies in annular rings about the center illustrated in the bottom panel of Figure 2, normalized by the area in each ring. There is an evident peak centered at 11,300 km s\({}^{-1}\). The lower panel shows the numbers of groups of galaxies projected into the ring with radius spanning 10,600\(-\)12,000 km s\({}^{-1}\) as a function of systemic velocity in the CMB frame. Numbers peak at \(23,000\pm 300\) km s\({}^{-1}\).3 Footnote 3: A comment regarding our plots sometimes using all galaxies in our sample and sometimes using the galaxy groupings: statistics are better with all galaxies, but "finger of god" velocity dispersions are suppressed with the compression into groups. There is consistency between group and all-galaxy presentations.
Several major features lie within this annulus. The region of high density from 7 o'clock to 10 o'clock in Figure 2 is the main component of the Sloan Great Wall (Gott et al., 2005). The Corona Borealis supercluster lies at \(\sim 12\) o'clock, the Ursa Majoris supercluster lies at \(\sim 4\) o'clock, and the Virgo-Coma supercluster lies at \(\sim 6{:}30\) (Einasto et al., 2001). Lesser structures include SCL 95 at \(\sim 5\) o'clock and SCL 154 at \(\sim 10{:}30\). The Bootes supercluster with 12 Abell clusters (Einasto et al., 2001) is found to be close to, but not quite at, the center. This supercluster contains the Abell Richness 2 cluster A1795 at \(V_{cmb}=19,204\) km s\({}^{-1}\) plus three Abell Richness 1 clusters. The projected positions of A1781, A1795, A1825, and A1831 are plotted in the lower panel of Figure 2.

## 2 Ho'oleilana in 3D

The BAO are expected to be spherical shells, rather than just the ring that has been identified above. The galaxies in the relevant region of the Cosmicflows-4 collection are overwhelmingly contributed by the SDSS Peculiar Velocity (SDSS PV) catalog (Howlett et al., 2022). For three reasons, the following statistical analysis of Ho'oleilana as a BAO feature is restricted to the elements of this catalog. This subset of the full Cosmicflows-4 collection has 1) a very well defined selection function; 2) a random, ungrouped catalog with the same selection function; and 3) an ensemble of \(2\times 256\) mock galaxy catalogs that also reproduce the large-scale structure, galaxy bias, and selection function of the data, but where in one half the BAO have been suppressed by generating the initial conditions of the simulations using a smoothed 'no-wiggle' linear power spectrum (Hinton et al., 2017). Figure 4 shows the power spectra of the SDSS PV data and mocks, as well as the ratio of the two sets of simulations highlighting the BAO feature/suppression. These additional data products are essential for interpreting the significance of Ho'oleilana and fitting its size and shape, and are only available for the SDSS PV portion of this catalog in the region of the Universe containing Ho'oleilana. We explore the consistency of Ho'oleilana emerging from the physics of the BAO in two different ways: 1) by fitting the 3D radial distribution of galaxy groups in the SDSS Peculiar Velocity catalog with a physical model for the BAO feature; and 2) by blindly searching for similar contributors to the BAO signal in the simulations and data using the wavelet-convolution method of Arnalte-Mur et al. (2012). We report on these tests in separate sections below. Overall, our findings provide evidence that Ho'oleilana is not a chance arrangement of galaxies, but instead a part of the total BAO signal in our nearby Universe.

### Modelling Ho'oleilana as a BAO feature

For our first test of the origin of Ho'oleilana, we begin by converting the SDSS PV catalogue from right ascension, declination and redshift to supergalactic cartesian coordinates, assuming a fiducial flat Lambda Cold Dark Matter (\(\Lambda\)CDM) cosmological model with matter density \(\Omega_{m}=0.31\) and expansion rate \(H_{0}=75\,h_{75}\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\). We use the 'group' redshifts from the data and simulations as this suppresses the impact of non-linear galaxy motions and is expected to make any large-scale coherent structures more prominent.
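A sketch of this conversion using astropy (our illustration; the authors do not specify their tooling) might look like the following:

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.cosmology import FlatLambdaCDM

# Fiducial cosmology from the text: Om = 0.31, H0 = 75 km/s/Mpc.
cosmo = FlatLambdaCDM(H0=75.0, Om0=0.31)

def to_supergalactic_cartesian(ra_deg, dec_deg, z_cmb):
    """Convert (RA, Dec, CMB-frame redshift) to supergalactic SGX, SGY, SGZ in Mpc."""
    d = cosmo.comoving_distance(z_cmb)  # comoving distance for each redshift
    c = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg,
                 distance=d, frame="icrs")
    sg = c.supergalactic.cartesian
    return np.array([sg.x.value, sg.y.value, sg.z.value])
```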
Although each of the SDSS PV galaxies also has a measured distance from the Fundamental Plane, we do not use these for this analysis as their large uncertainties could 'smear' out any apparent large-scale structures. Our use of 'redshift-space' coordinates will do the same thing, but to a much lesser extent as the typical peculiar velocities of galaxies are much less than the typical distance uncertainties (the same reasoning is used when measuring the velocity clustering from such data; Howlett, 2019). From there, we evaluate \[N_{\mathrm{shell}}(r)=\frac{N_{D}(r)}{N_{D,tot}}\frac{N_{R,tot}}{N_{R}(r)}-1 \tag{1}\] in radial bins of width \(\Delta r=5\,h_{75}^{-1}\,\mathrm{Mpc}\) centered on some location. \(N_{D}(r)\) is the number of galaxies in each radial bin centered on \(r\), while \(N_{R}\) is the number of random, unclustered points. The subscript "tot" denotes the total number of galaxies in the data and random catalogs, \(N_{D,tot}=34,059\) and \(N_{R,tot}=4\times 10^{6}\) respectively. \(N_{\mathrm{shell}}\) is hence normalised such that for a completely homogeneous distribution of galaxies we expect \(N_{\mathrm{shell}}(r)\equiv 0\). This normalization is verified by applying the same procedure to our simulations -- \(\langle N_{\mathrm{shell}}(r)\rangle\), computed from 256 histograms of \(N_{\mathrm{shell}}\) using the same central location and then averaged in each bin, is consistent with zero. This test indicates that the selection function (for instance, when approaching the edge of the available survey data) is correctly being mitigated by our method. The standard deviation of the 256 measurements in each bin is used as our uncertainty in the real data. The strength and properties of any features in \(N_{\mathrm{shell}}(r)\) will clearly depend on the assumed origin, so we explore a range of different possible central locations.

Figure 4: _Top panel_: The power spectrum of the SDSS PV data compared to the average from sets of simulations with similar large-scale structure and selection effects, with and without the presence of BAO. Our method for populating the simulations with mock galaxies is tuned to this data, so we also report the \(\chi^{2}\) difference for the two sets of simulations. _Bottom panel_: The average of the ratio of the power spectra for our BAO and no-BAO simulations. For reference, the black dotted line shows the linear theory model smoothed with a non-linear damping of 13 \(h_{75}^{-1}\) Mpc (see Section 2.1.1), which clearly demonstrates that the BAO is being suppressed in the no-BAO simulations. Note that the linear theory model has not been corrected for selection effects and galaxy bias, so it is not expected to match the amplitude of the data.

Based on our initial finding of Ho'oleilana in the 2D distribution of data seen in Figure 1, we choose a grid of 4225 nearby possible coordinates for the presumed center. In supergalactic X, Y and Z coordinates these span from \([-56,208,26]\,h_{75}^{-1}\) Mpc to \([8,272,90]\,h_{75}^{-1}\) Mpc in bins of width \([5,2.5,5]\,h_{75}^{-1}\) Mpc respectively.4 Footnote 4: The spacing of this grid is purposefully narrower in the SGY direction as in our preliminary work we found that the assumed central SGY coordinate was more correlated with the resulting constraints on the radius of Ho'oleilana, which arises from the fact that the most prominent aspect of Ho'oleilana is the 2D ring seen in the SGY plane of Figure 2.
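Equation (1) amounts to comparing radial histograms of the data and random catalogs; a minimal sketch (array names are ours):

```python
import numpy as np

def n_shell(data_xyz, random_xyz, center, r_max=250.0, dr=5.0):
    """Normalized radial overdensity profile about `center` (Eq. 1).

    data_xyz, random_xyz: (N, 3) supergalactic cartesian positions in Mpc/h75.
    Returns bin centers and N_shell(r); 0 corresponds to homogeneity.
    """
    edges = np.arange(0.0, r_max + dr, dr)
    rd = np.linalg.norm(data_xyz - center, axis=1)
    rr = np.linalg.norm(random_xyz - center, axis=1)
    nd, _ = np.histogram(rd, bins=edges)
    nr, _ = np.histogram(rr, bins=edges)
    with np.errstate(divide="ignore", invalid="ignore"):
        profile = (nd / len(data_xyz)) * (len(random_xyz) / nr) - 1.0
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, profile
```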
Figure 5 shows the normalised radial distribution of groups at the central location where the supposed BAO 'bump' is most prominent, \([\mathrm{SGX,SGY,SGZ}]=[-24,242,58]\,h_{75}^{-1}\) Mpc. This same figure shows our best-fitting model of the feature, which is used to define the detection significance as elaborated on below.

Figure 5: A normalized histogram of the number of galaxies as a function of distance from the center of Ho'oleilana. Error bars are computed using simulations fully reproducing the selection function of the data. Ho'oleilana, at a distance \(r\approx 150\,h_{75}^{-1}\) Mpc from the central location, is well fit by a physical BAO model (red line). Compared to the expectations for a random field of galaxies (blue line), Ho'oleilana is detected at greater than \(6\sigma\) significance, as shown by the relative difference in the \(\chi^{2}\) statistic.

#### 2.1.1 BAO model

The detection significance is derived from fitting a model with and without a BAO feature to \(N_{\mathrm{shell}}\) and comparing the difference in best-fitting \(\chi^{2}\) between the two. We generate our model based on the work of Eisenstein et al. (1998, 2007) and Slepian and Eisenstein (2016), modelling the histogram of \(N_{\mathrm{shell}}\) as a mass profile in configuration space generated by Fourier/Hankel transforming a smooth, 'no-wiggle' transfer function describing the central overdensity (\(T_{nw}(k)\)) and a BAO 'wiggle'-only transfer function with non-linear damping (\(T_{w}(k)\)). We model this damping as an additional Gaussian smoothing with a free parameter \(\Sigma_{nl}\) to account for evolution in the positions of galaxies between the time the BAO was frozen in and \(z\approx 0.07\). Our model also needs to include flexibility in the radial distance of the feature from the central location, its amplitude (or equivalently, linear galaxy bias), and the possibility that the entire region we consider is over- or underdense compared to the full SDSS PV data. We do this by including three free parameters, \(\alpha\), \(B\) and \(N_{0}\), for these three characteristics. Finally, in our tests we also found that the relative amplitudes of Ho'oleilana and the central overdensity could not be well described with a single amplitude parameter -- the measured values of \(N_{\mathrm{shell}}\) for small \(r\) and near the peak of Ho'oleilana itself are larger than would be predicted by our linear theory model alone, leading to a poor fit. We attribute this mainly to non-linear clustering and galaxy bias, and to account for this effect we included an additional linear term \(N_{1}\) on top of the constant \(N_{0}\), slightly increasing the order of the polynomial used to marginalise over the overall shape of the mass profile. This same procedure is used in standard BAO fitting, where a cubic or quartic polynomial is typically used to marginalise over the shape of the correlation function rather than attempting to model non-linear effects, which ensures the constraints on the BAO peak position itself are insensitive to the broadband shape of the clustering (e.g., Ross et al., 2015; Hinton et al., 2020; Alam et al., 2021). Overall, our BAO model hence contains five free parameters and can be summarised as \[N_{\mathrm{shell}}^{\mathrm{model}}(r)=Br^{2}\int_{0}^{\infty}\frac{k^{2}dk}{2\pi^{2}}T(k)j_{0}(\alpha kr)+N_{1}r+N_{0}, \tag{2}\] where \[T(k)=T_{w}(k)e^{-\frac{1}{2}k^{2}\Sigma_{nl}^{2}}+T_{nw}(k) \tag{3}\] and \(j_{0}(x)\) is the zeroth-order spherical Bessel function. The model in the absence of BAO can be recognised from the above expressions as the case where we set \(T(k)=T_{nw}(k)\) or, equivalently, \(\Sigma_{nl}\rightarrow\infty\) and fix \(\alpha=1\). The model without BAO hence has three free parameters and should only reproduce the central overdensity. Modelling the 'wiggle' and 'no-wiggle' transfer functions is not quite trivial. Firstly, one is required to assume a 'template' cosmological model to create a BAO feature that can then be dilated by the value of \(\alpha\). To allow for comparison/combination later with the Planck Collaboration et al. (2020) constraints on the sound horizon, we adopt a template cosmology close to the Planck Collaboration et al. (2020) best-fit.
To allow for comparison/combination later with the Planck Collaboration et al. (2020) constraints on the sound horizon, we adopt a template cosmology close to the Figure 5: A normalized histogram of the number of galaxies as a function of distance from the center of Ho’oleilana. Error bars are computed using simulations fully reproducing the selection function of the data. Ho’oleilana, at a distance \(r\approx 150\)\(h_{75}^{-1}\) Mpc from the central location, is well fit by a physical BAO model (red line). Compared to the expectations for a random field of galaxies (blue line), Ho’oleilana is detected at greater than \(6\sigma\) significance as shown by the relative difference in \(\chi^{2}\) statistic. Planck Collaboration et al. (2020) best-fit. Parameter specifications are given in Table 1. Note that this cosmology is not the same as that used to convert our catalog redshifts to distance (nor does it have to be), but it is important that the template cosmology be the one used to compute any cosmological constraints from our fit to \(\alpha\). Secondly, the presence of baryons also introduces Silk damping effects in the transfer function, suppressing it on small scales, as well as adding the BAO. One cannot simply then take a numerical transfer function evaluated for a cosmology with and without baryons and difference the two to extract the wiggles. Eisenstein et al. (1998) provide fitting formulae for the smooth transfer function \(T_{EH}(k)\) that could be used, although the approximations used therein for the sound horizon and equality scales are considered not quite accurate enough for modern BAO analyses (Anderson et al., 2014). Instead, we take a hybrid approach, and compute both numerical and Eisenstein et al. (1998) smooth transfer functions for our fiducial cosmology, and a second 'no-baryon' cosmology (where \(\Omega_{b}\) is reduced by a factor of \(\sim 20\) as also shown in Table 1). We then compute the 'no-wiggle' and 'wiggle' transfer functions as \[T_{nw}(k)=T_{EH}(k)\frac{T_{no\,baryon}(k)}{T_{EH,no\,baryon}(k)},\quad T_{w}( k)=T(k)-T_{nw}(k) \tag{4}\] which effectively corrects the Eisenstein et al. (1998) smooth transfer function \(T_{EH}(k)\) so that its form is more representative of the broadband shape of the numerical transfer function before subtracting the two. We perform our fits with both BAO and no BAO models restricting to scales \(40\,h_{75}^{-1}\) Mpc \(<r<250\,h_{75}^{-1}\) Mpc, avoiding non-linearities at the core of the central overdensity. We perform a full MCMC fit using the dynesty sampler (Speagle, 2020)_at each_ proposed center from Section 2.1 and obtain the best-fit and errors on the five model parameters. #### 2.1.2 Results From the \(\chi^{2}\) difference between our two models, we conclude that Ho'oleilana, centered at the aforementioned coordinates, is detected at greater than \(6\sigma\) significance compared to the expectations of a random field of galaxies, and there is excellent agreement between the data and physical BAO model. Of all the possible centers we tested, 716/4225 and 127/4225 result in greater than \(4\sigma\) and \(5\sigma\) detections respectively. We caution however that our choice of centers was conditioned on our visual identification of a feature in the data and so the relative fraction of 4 and \(5\sigma\) centers should not be interpreted in the usual way chance events are interpreted. 
#### 2.1.2 Results

From the \(\chi^{2}\) difference between our two models, we conclude that Ho'oleilana, centered at the aforementioned coordinates, is detected at greater than \(6\sigma\) significance compared to the expectations of a random field of galaxies, and there is excellent agreement between the data and the physical BAO model. Of all the possible centers we tested, 716/4225 and 127/4225 result in greater than \(4\sigma\) and \(5\sigma\) detections respectively. We caution, however, that our choice of centers was conditioned on our visual identification of a feature in the data, and so the relative fraction of \(4\) and \(5\sigma\) centers should not be interpreted in the usual way chance events are interpreted.

Of the five free parameters we fit for, \(B\), \(N_{0}\) and \(N_{1}\) are nuisance parameters and so are not commented on further in this work. For the others, we perform an average of the posteriors at each proposed center with greater than \(3.25\sigma\) significance (of which there are 1661). This threshold was chosen as it is close to the turnover point in a histogram of the BAO strengths across all the centers we tested. Combining fits in this way allows us to propagate the uncertainty in the central location into our constraints on the radius and damping within the BAO model. From just the fit using our most likely center for Ho'oleilana (see Fig. 5), we find \(\alpha=0.87\pm 0.01\), while averaging over all centers above our threshold gives \(\alpha=0.88^{+0.06}_{-0.09}\), indicating that a substantial portion of our error budget comes from the uncertainty in the central location. Our combined chain also gives \(\Sigma_{nl}=12.8^{+0.8}_{-5.8}\,h_{75}^{-1}\) Mpc. Under the condition that it is a remnant of the BAO, our constraint on \(\Sigma_{nl}\) implies Ho'oleilana has undergone additional Gaussian smoothing due to the bulk motions of galaxies in the intervening years between \(z\simeq 1060\) (when the BAO feature was first frozen in) and the observed \(z\approx 0.07\), with standard deviation \(\sim 13\,h_{75}^{-1}\) Mpc. This value of \(\Sigma_{nl}\) can be compared to the expected dispersion from linear theory. Given we are considering only relative separations from a fixed central point, rather than pairwise separations between objects, we expect this value to be comparable to the dispersion predicted to arise from the linear-scale velocities of galaxies. At \(z=0\) for a \(\Lambda\)CDM cosmological model assuming General Relativity, this is given by \[\frac{\sigma_{v}}{H_{0}}=\frac{\Omega_{m}^{0.55}}{\sqrt{6}\,\pi}\biggl{(}\int_{0}^{\infty}P_{\rm lin}(k)\,dk\biggr{)}^{1/2}\approx 4.2\,h_{75}^{-1}\ {\rm Mpc}, \tag{5}\] where the latter approximation is obtained using our template cosmology.5 One would expect non-linear evolution and redshift-space distortions to increase this value somewhat. We consider our measurement consistent with our expectations of the typical distances traversed by individual galaxies from their formation to the present day. Our recovered value for \(\alpha\) can also be further interpreted, but this is done in Section 3. Footnote 5: Evaluating this expression at the proper redshift of \(z\approx 0.07\) gives the same answer to within the precision quoted here.

\begin{table} \begin{tabular}{c|c|c} \hline \hline Parameter & Template & No Baryon Template \\ \hline \(\Omega_{b}h^{2}\) & 0.0224 & 0.001 \\ \(\Omega_{cdm}h^{2}\) & 0.1199 & 0.1412 \\ \(H_{0}\,({\rm km\ s^{-1}\ Mpc^{-1}})\) & 67.51 & 67.51 \\ \(n_{s}\) & 0.9653 & 0.9653 \\ \(A_{s}\) & \(2.25\times 10^{-9}\) & \(2.25\times 10^{-9}\) \\ \(N_{eff}\) & 3.046 & 3.046 \\ \(\sum M_{\nu}\,({\rm eV})\) & 0.06 & 0.06 \\ \hline \end{tabular} \end{table} Table 1: A summary of the cosmological parameters used in creating our BAO model template, and for converting our constraints on the BAO model parameter \(\alpha\) to cosmological constraints.
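The quadrature in Eq. (5) is straightforward given a linear power spectrum; a sketch using camb (our choice of Boltzmann code, with the template parameters of Table 1) follows.

```python
import numpy as np
import camb

# Template cosmology from Table 1.
pars = camb.set_params(H0=67.51, ombh2=0.0224, omch2=0.1199,
                       As=2.25e-9, ns=0.9653)
pars.set_matter_power(redshifts=[0.0], kmax=100.0)
results = camb.get_results(pars)
kh, _, pk = results.get_matter_power_spectrum(minkh=1e-4, maxkh=100.0,
                                              npoints=2000)

om = (0.0224 + 0.1199) / 0.6751**2  # total matter density (neutrinos neglected)
f = om ** 0.55                      # linear growth rate in General Relativity
sigma_v = f / (np.sqrt(6.0) * np.pi) * np.sqrt(np.trapz(pk[0], kh))
print(f"sigma_v ~ {sigma_v:.1f} Mpc/h")  # expected to be of order a few Mpc/h
```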
### Prevalence in simulations

To quantify the true significance of Ho'oleilana without conditioning on our visual identification, we make use of our simulations with and without BAO. The key questions are 1) whether Ho'oleilana could have been identified without _a priori_ knowledge of such a structure in our nearby Universe; and 2) to what extent we would find similar features in simulations with large-scale structure, but where we know there are no BAO. To answer these questions, we turn to a variant of the algorithm developed in Arnalte-Mur et al. (2012), implemented in the publicly available BAO 'centerfinder' code of Brown et al. (2021). Our algorithm first uses a set of data and the random catalog to compute the overdensity field on a grid of cell size \(5\,h_{75}^{-1}\,\mathrm{Mpc}\). It then convolves this field with a 'BAO-wavelet': a normalised, spherically-symmetric function that has the shape expected of a BAO feature. The wavelet has two free parameters, \(r_{\mathrm{BAO}}\) and \(s_{\mathrm{BAO}}\), which set the radius and width of the BAO feature. The result is, for each cell in the gridded overdensity field, a weight \(W_{r,s}\) that describes its likelihood as the center of a BAO-like feature with the given radius and width. In order to determine whether or not a given field evidences the presence of BAO, Arnalte-Mur et al. (2012) proposed to average the values of \(W_{r,s}\) over a subset of likely central locations (in their case halos, galaxy groups or galaxies likely to be found in the centers of large clusters), to find \(B_{r,s}=\sum_{N_{\mathrm{subset}}}W_{r,s}/N_{\mathrm{subset}}\), claiming that a positive value for this coefficient indicates a positive detection of BAO. We do the same, computing \(W_{r,s}\) for each cell on our grid that contains at least one galaxy. We then construct \(B_{r,s}\) by summing over all these cells (weighted by the number of galaxies in each cell). Figure 6 shows the results of this procedure for a wide variety of wavelet radii and widths. There is evidence that, on average, positive values of \(B_{r,s}\) can be found preferentially in the BAO simulations at a scale around \(r_{\mathrm{BAO}}=135\,h_{75}^{-1}\,\mathrm{Mpc}\). This corresponds well with the expected BAO radius given the cosmology used to generate the simulations (\(132.4\,h_{75}^{-1}\,\mathrm{Mpc}\)), and indicates that the algorithm _can_ be used to identify BAO and the main contributors to the BAO.

Figure 6: BAO detection using the BAO-wavelet with different radii (\(r_{\mathrm{BAO}}\)) and widths (\(s_{\mathrm{BAO}}\)). The top panel of this figure shows the average values of the differences in the detection coefficient \(B_{r,s}\) between the BAO and no-BAO simulations weighted by the variance. The vertical green line is the expected BAO radius in the mocks. The middle panel shows the average over the variance for only the no-BAO simulations. The bottom panel shows the same applied to the data. In all cases, the contours simply follow the color map, where the solid contour corresponds to a value of zero while the dashed (dotted) lines correspond to positive (negative) \(0.5\) and \(1.0\) sigma values. Dashed lines/red regions hence indicate values of the wavelet parameters with stronger BAO-like features. Finally, the green cross in the bottom panel indicates the wavelet parameters with the single largest BAO weight \(W_{r,s}\), which coincides closely with the center, width and radius of Ho'oleilana derived via other means.
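In outline, the weights can be computed with an FFT convolution. The sketch below is a simplified stand-in for the Brown et al. (2021) implementation: a generic zero-mean Gaussian shell replaces the actual wavelet profile, and function names are ours.

```python
import numpy as np
from scipy.signal import fftconvolve

def bao_wavelet(r_bao, s_bao, cell=5.0):
    """A normalised, spherically symmetric shell-shaped kernel."""
    n = int(2 * (r_bao + 3 * s_bao) / cell) + 1
    ax = (np.arange(n) - n // 2) * cell
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    w = np.exp(-0.5 * ((r - r_bao) / s_bao) ** 2)  # shell peaked at r_bao
    w -= w.mean()                    # zero mean: insensitive to uniform density
    return w / np.sqrt((w**2).sum())  # unit norm

def wavelet_weights(delta, r_bao, s_bao, cell=5.0):
    """Convolve a 3D gridded overdensity field with the BAO-wavelet.

    Returns one weight W_{r,s} per grid cell."""
    return fftconvolve(delta, bao_wavelet(r_bao, s_bao, cell), mode="same")

def detection_coefficient(weights, counts):
    """B_{r,s}: average of W over occupied cells, weighted by galaxy counts."""
    occ = counts > 0
    return np.average(weights[occ], weights=counts[occ])
```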
However, it is worth highlighting that the significance is relatively weak -- in an ideal scenario one would use a highly complete sample of galaxies to compute \(W_{r,s}\) before summing this quantity only over objects close to the expected BAO centers. However, we suspect that our use of the SDSS PV sample (consisting only of ellipticals likely to be found in dense clusters) weakens the fluctuations in \(W_{r,s}\) for larger radii and hence also weakens the signal in \(B_{r,s}\). The last panel in Figure 6 shows the same algorithm applied to the data. There is an excess of \(B_{r,s}\) at somewhat larger radii around \(r_{\mathrm{BAO}}=160\,h_{75}^{-1}\,\mathrm{Mpc}\), although this result is also only weakly significant -- it is possible to find similar values of \(B_{r,s}\) at these radii and widths even within simulations without BAO. In this same plot we identify with a green cross the values \(r_{\mathrm{BAO}}=155\,h_{75}^{-1}\,\mathrm{Mpc}\) and \(s_{\mathrm{BAO}}=27\,h_{75}^{-1}\,\mathrm{Mpc}\), which result in the largest single value for \(W_{r,s}\) for all BAO-wavelets and grid cells we consider. This wavelet also returns a positive value of \(B_{r,s}\) -- which is not guaranteed, given \(B_{r,s}\) is the mean across all \(\sim 23,000\) cells in our overdensity grid. Figure 7 shows a histogram of all the values of \(W_{r,s}\) we return (i.e., the value for each grid cell in our catalog containing at least one galaxy), for this choice of wavelet, for both the data and no-BAO simulations. There is a significant excess of grid cells with large weights. The most prominent excess corresponds closely to the center of Ho'oleilana identified in Section 2.1.1. Our conclusion is that, using the independent algorithm of Arnalte-Mur et al. (2012) and without conditioning on our _a priori_ visual inspection, there is evidence of BAO-like features in the SDSS PV catalog, and that the strongest contributor to these features is Ho'oleilana. To answer the second of our proposed questions, we similarly identify the largest single BAO-like feature (the grid cell with the largest \(W_{r,s}\)) within our 256 no-BAO simulations. We then generate radial profiles \(N_{\mathrm{shell}}(r)\) centered at each of these 256 grid centres and fit the apparent BAO feature using our model from Section 2.1.1. The aim is to compare the strength of these BAO-like features (in simulations which we know have heavily suppressed primordial BAO) to that inferred for Ho'oleilana. Figure 8 shows this comparison, as histograms of the \(\chi^{2}\) difference between the BAO model and a straight line fit for each of the no-BAO mocks, in the first instance, allowing for freedom in the values of \(r_{\mathrm{BAO}}\) and \(s_{\mathrm{BAO}}\) and, then, restricting to the radius and scale \(r_{\mathrm{BAO}}=155\,h_{75}^{-1}\,\mathrm{Mpc}\) and \(s_{\mathrm{BAO}}=27\,h_{75}^{-1}\,\mathrm{Mpc}\) identified Figure 8: Histograms of the \(\chi^{2}\) difference between models for the radial distributions of galaxies with and without the BAO feature fit to the central location with the largest value of \(W_{r,s}\) in each no-BAO mock. The dashed line shows the value from fitting Ho’oleilana in Section 2.1.1. The blue histogram shows results allowing for any value of \(r_{\mathrm{BAO}}\) and \(s_{\mathrm{BAO}}\). 
The red histogram is conditioned on \(r_{\mathrm{BAO}}=155\,h_{75}^{-1}\,\mathrm{Mpc}\) and \(s_{\mathrm{BAO}}=27\,h_{75}^{-1}\,\mathrm{Mpc}\), the radius and width which returns the largest value of \(W_{r,s}\) in the data, corresponding Ho’oleilana. Figure 7: A histogram of the weights \(W_{r,s}\) for a BAO-wavelet with radius \(r_{\mathrm{BAO}}=155\,h_{75}^{-1}\,\mathrm{Mpc}\) and width \(s_{\mathrm{BAO}}=27\,h_{75}^{-1}\,\mathrm{Mpc}\). The weights are normalised by the standard deviation across the full sample, and positive values correspond to grid locations that demonstrate an overabundance of galaxies distributed within a BAO-like shell. There is a clear excess of positive weights in the data (black) compared to the average and standard deviation of the no-BAO mocks (blue). The dotted line shows a Gaussian distribution — both the BAO and no-BAO mocks display a non-Gaussian tail of large overdensities. The largest positive weight corresponds to Ho’oleilana. previously. Of the 256 simulations we test, two of them have features of any radius or width that are more significant than Ho'oleilana. We can hence conclude that the probability of Ho'oleilana being a chance alignment, and not associated with the primordial BAO, is \(<1\%\). None of the simulations we test have a feature with the same radius and width as Ho'oleilana that is as significant. These tests provide reasonably strong evidence that Ho'oleilana is _not_ a chance occurrence, although the possibility cannot be ruled out completely. ## 3 Cosmological Constraints Under the assumption that Ho'oleilana is a significant contributor to the true, full, BAO signal, and that its properties are representative of the translationally-averaged BAO, we can use its radius to extract cosmological information. The parameter \(\alpha\) fit in Section 2.1.1 is proportional to the ratio between the size and distance to the BAO, and so encodes this information. From Eisenstein et al. (2005) \[\alpha=\frac{D_{v}}{r_{\rm drag}}\frac{r_{\rm drag}^{\rm fid}}{D_{v}^{\rm fid}}, \tag{6}\] where \(D_{v}\) is a measure of the distance to the center of the BAO, and \(r_{\rm drag}\) is its radius, or more formally, the size of the sound horizon at the baryon-drag epoch after the photons and baryons have decoupled and the baryon inertia has subsided. The superscript "fid" denotes these quantities in our fiducial cosmology. \(D_{v}\) is the 'volume averaged' distance, which is a hybrid of two parts angular diameter distance \(D_{A}\), and one part Hubble parameter \(H(z)\) arising from the spherical symmetry of the BAO, \[\frac{D_{v}(z)}{r_{\rm drag}}=\bigg{(}\frac{cz}{H(z)r_{\rm drag}}\bigg{)}^{1/3} \bigg{(}\frac{1+z}{2\sin^{-1}(r_{\rm drag}/2D_{A}(z))}\bigg{)}^{2/3}. \tag{7}\] This expression differs slightly to the conventional one (Eisenstein et al., 2005) as we have avoided using the small-angle approximation given the low redshift of Ho'oleilana. To test the validity of our approach, we first perform a similar analysis on our simulated catalogs with BAO. As was done for the no-BAO simulations, we identify the highest weighted central locations across the same range of \(r_{\rm BAO}\) and \(s_{\rm BAO}\) used previously in each of our 256 simulations (i.e., in each cell of Figure 6). We then measure \(N_{\rm shell}\) for each center and fit BAO and no-BAO models. Figure 9 shows a composite of a random subset of our detected features in simulations with \(B_{r,s}>0\), scaled by the best-fit value of \(\alpha\) to make the BAO more prominent. 
We also plot the average best-fit BAO model Figure 9: _Top panel_: A composite of single BAO features found in the 256 simulations we explore, restricting to simulations with positive BAO detections \(B_{r,s}>0\), where we have scaled the radius of the BAO shell by the parameter \(\alpha\) to align them for clarity. The red line shows the BAO model averaged over each of the fits to the individual detections. The dashed vertical line is the expected BAO radius given the input cosmology of the simulation. _Middle panel_: A histogram of the recovered \(\alpha\) values from fitting these single BAO features with the model from Section 2.1.1, where the solid line is a Gaussian fit to the distribution with mean given by the dotted line. _Bottom panel_: A histogram of the standard deviation in best-fit \(\alpha\) measured from features _within_ each single mock (i.e., we measure 256 values of the standard deviation from our 256 mocks.). The black shaded region shows our baseline error on \(\alpha\) from Ho’oleilana, while the dashed line is the standard deviation measured from across all features and mocks (i.e., all the measurements shown in the middle panel). and expected BAO radius given the input cosmology of the simulation. In addition, in Figure 9 we also show histogram of these BAO fits, again restricting to mocks with \(B_{r,s}>0\), but also normalising by the number of \(s_{\rm BAO}\) bins at each \(r_{\rm BAO}\).6 The average \(\alpha\) value is close to 1, indicating no bias in our method for identifying and fitting the individual BAO contributors. However, the uncertainty from the mocks both using only detections within a single mock realisation, or compared across realisations is slightly larger than that found from our fit to Ho'oleilana. This may simply be a result of the fact that Ho'oleilana is an exceptionally strong feature, even within mocks containing BAO. Or it may indicate an additional contribution that should be included in our error budget from sample variance. We hence elect to provide constraints and draw conclusions using both our fitted error on \(\alpha\) from Ho'oleilana (\(\alpha=0.88^{+0.06}_{-0.09}\)), and using the uncertainty derived from the scatter in the mocks (\(\alpha=0.88\pm 0.14\)) but fixed to our most likely center for Ho'oleilana. Footnote 6: This is important, because the requirements of the BAO wavelet to have \(r_{\rm BAO}\geq 2s_{\rm BAO}\) means our list of the strongest BAO contributions at each combination of \(r_{\rm BAO}\) and \(s_{\rm BAO}\) contains more features with larger \(r_{\rm BAO}\) and hence smaller \(\alpha\). Normalising the histogram of \(\alpha\) values by the number of \(s_{\rm BAO}\) bins corrects for this selection bias. Turning back to the data, we convert our values of \(\alpha\) to a distance ratio \(D_{v}/r_{\rm drag}\) using our fiducial cosmology. A substantial fraction of our uncertainty on \(\alpha\) comes from marginalising over the possible centers -- similarly we then have a range of possible fiducial distances to the center of Ho'oleilana, which translates to uncertain values for our redshift and fiducial distance ratio of \(z=0.068^{+0.003}_{-0.007}\) and \(D_{v}^{\rm fid}/r_{\rm drag}^{\rm fid}=1.99^{+0.07}_{-0.20}\) respectively. Propagating this constraint properly alongside the constraint on \(\alpha\) we recover \(D_{v}/r_{\rm drag}=1.63^{+0.07}_{-0.08}\).7. 
In the case where we fix the central location of Ho'oleilana but use the mock scatter as our error on \(\alpha\), we have a fixed \(D_{v}^{\rm fid}/r_{\rm drag}^{\rm fid}=1.88\) and find \(D_{v}/r_{\rm drag}=1.66\pm 0.26\). Footnote 7: Note that this propagation is done by converting each point of our MCMC chains — one cannot simply multiply the reported constraints on \(\alpha\) and \(D_{v}^{\rm fid}/r_{\rm drag}^{\rm fid}\) because these are extremely correlated, see Figure 10. Finally, we adopt a prior on the sound horizon \(r_{\rm drag}=147.13\pm 0.26\,\rm Mpc\) from Planck Collaboration et al. (2020), and fit our combined posterior for \(\Omega_{m}\) and \(H_{0}\). Although we have effectively allowed the redshift to vary in incorporating all our proposed centers, the general low redshift of Ho'oleilana makes it an almost pure probe of \(H_{0}\), with little constraining power on \(\Omega_{m}\). This is reflected in our final constraints, shown in Figure 10, where \(\Omega_{m}\) is unconstrained by our methodology. We find \(H_{0}=76.9^{+8.2}_{-4.8}\ \rm km\ s^{-1}\ Mpc^{-1}\) and \(H_{0}=74.7^{+12.4}_{-9.7}\ \rm km\ s^{-1}\ Mpc^{-1}\) using the uncertainties of \(\alpha\) from Ho'oleilana only and from the distribution of our simulations respectively. For the former, the majority of the uncertainty on \(H_{0}\) comes from the expected variance in \(N_{\rm shell}\) as measured from the mocks (reflected in the error bars in Fig. 5) and our uncertainty in the central location/redshift. Our constraints on the expansion rate of the Universe, and comparisons to other measurements, including statistical BAO, are shown in Figures 10 and 11. Being a single feature rather than a statistical average, the errors bars from Ho'oleilana are larger than those from other large-scale structure surveys, however are still constraining enough to provide a preference for larger expansion rates. Given the presence of Ho'oleilana, an interesting question is whether the clustering of the full SDSS PV sam Figure 10: Cosmological constraints from fitting Ho’oleilana with a BAO model using the uncertainty from Ho’oleilana alone allowing for uncertainty in the central location (blue) or using the the scatter seen in BAO simulations as the uncertainty fixing to our most likely center for Ho’oleilana (red). \(\alpha\) is our BAO scaling parameter, \(D_{v}/r_{\rm drag}\) is the ratio between the distance to the center of Ho’oleilana and its size, while \(z\) is the redshift to its center. From these pieces of information, and assuming constraints on \(r_{\rm drag}\) from early Universe physics, we obtain constraints on the matter content \(\Omega_{m}\) and present day expansion rate of the Universe \(H_{0}\) that favor other local direct measurements (orange band) rather than that propagated from models of the early universe (green band). ple contains any evidence of _statistical_ BAO. The power spectrum measurements presented in Fig. 4 were analysed using the BAO fitting code Barry (Hinton et al., 2020) taking into account the survey window function, however, unfortunately, we found no significant statistical BAO detection. Nonetheless, statistical BAO have been detected in both the 6dF and SDSS Main galaxy surveys (Beutler et al., 2011; Ross et al., 2015), which cover an effective volume only a few times larger than the catalogue we analyse here. The latter of these also covers the redshift range \(0.07<z<0.20\) and so partially overlaps with our sample. It is worth noting that the SDSS results of Ross et al. 
(2015) relied on BAO reconstruction to enable their detection. As such, further investigation of the SDSS PV sample, using both the correlation function and BAO reconstruction, is warranted. ## 4 Discussion and Summary Although Ho'oleilana was identified as a two-dimensional feature, a significant component of the signal comes from the foreground part of the three-dimensional shell. (The back side far edge falls slightly beyond the \(z=0.1\) limit of our sample.) Remarkably, the Coma Cluster, the Center for Astrophysics Great Wall (de Lapparent et al., 1986), and structure coursing up to the Hercules supercluster (Einasto et al., 2001; Shapley, 1934) lie along the foreground surface of the posited BAO phenomenon. The famous Bootes Void (Kirshner et al., 1981) lies within the embrace of Ho'oleilana. Near the center of Ho'oleilana is the Bootes supercluster (Einasto et al., 2001), presumed to be the manifestation of the matter concentration that gave birth to the BAO (Weinberg et al., 2013). In detail, the domain of the central supercluster about the dominant A1795 is diffused over \(\sim 50h_{75}^{-1}\) Mpc around the geometric center of the BAO shell The extent of Ho'oleilana is revealed in an accompanying video in the animated Figure 12. The cosmography of Ho'oleilana is further explored in the interactive Figure 13. The Cosmicflows-4 galaxy groups that lie within the Ho'oleilana shell are colored red in the interactive Figure 14. In each of these displays, the galaxy groups are located in 3D by their systemic velocities; ie. in redshift space. The significance of the detection of Ho'oleilana, its shape, its relation to other previously known structures in the local Universe, and the prominence of the feature compared to the expectations of both a random field of galaxies and simulations with large-scale structure but suppressed BAO, strongly suggest Ho'oleilana is itself a part of the BAO feature rather than a chance alignment. Marginalising over the uncertainty in the central position, width and amplitude, we are able to extract a measurement of the ratio of the distance to the center of Ho'oleilana relative to its size predicted by Figure 11: A comparison of the expansion rate of the Universe as a function of redshift from Ho’oleilana compared to other _statistical_ BAO measurements (Beutler et al., 2011; Ross et al., 2015; Alam et al., 2017), assuming a prior on the BAO size from early Universe physics. Bright and faint points represent our measurements using the uncertainty from Ho’oleilana alone, or taking the scatter from BAO simulations as the error, respectively. The solid band shows the predicted expansion rate for a \(\Lambda\)CDM cosmological model from Planck Collaboration et al. (2020). Figure 12: Video visualization of the cosmography of Ho’oleilana. All objects in the north galactic hemisphere of the Cosmicflows-4 collection of galaxy groups are seen as points in gray while those lying within the shell of Ho’oleilana, of radius 11,492 km s\({}^{-1}\) and width \(\pm 2w\) where \(w=837\) km s\({}^{-1}\) are highlighted in red. Major components in proximity to the shell are highlighted and identified by name. The Bootes supercluster lies near the center of Ho’oleilana and the Bootes void lies interior to the shell structure. Our home location is at the origin of the red, green, blue axes. These axes have lengths 10,000 km s\({}^{-1}\) and are directed toward positive SGX, SGY, SGZ respectively. early Universe physics, \(D_{v}=1.63^{+0.07}_{-0.08}\,r_{\rm drag}\). 
Fixing to the most likely central location, but using scatter in BAO simulations to infer the error, we find \(D_{v}=1.66\pm 0.26\,r_{\rm drag}\) Given the low redshift of the feature, and adopting a value of the BAO radius \(r_{\rm drag}\) expected from a Planck Collaboration et al. (2020) cosmological model, these distances can be almost directly converted to constraints on the Hubble Constant, \(H_{0}=76.9^{+8.2}_{-4.8}\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(H_{0}=74.7^{+12.4}_{-9.7}\) km s\({}^{-1}\) Mpc\({}^{-1}\) respectively. These values are more consistent with what is found from other direct local Universe probes -- \(H_{0}=69.8\pm 0.8\pm 1.7\) km s\({}^{-1}\) Mpc\({}^{-1}\)(Freedman et al., 2020); \(H_{0}=73.1\pm 1.0\) km s\({}^{-1}\) Mpc\({}^{-1}\)(Riess et al., 2022); \(74.6\pm 3.0\) km s\({}^{-1}\) Mpc\({}^{-1}\)(Tully et al., 2023) -- rather than the value of \(H_{0}=67.4\pm 0.5\) km s\({}^{-1}\) Mpc\({}^{-1}\) inferred from propagating the early Universe constraints (Planck Collaboration et al., 2020). By implication, if Ho'oleilana is representative of the statistical population of BAO, additional late-time physics may be required to increase the expansion rate of the Universe towards the present day. Future deeper data, such as that from the Dark Energy Spectroscopic Instrument (DESI Collaboration et al., 2023) or the 4MOST Hemisphere Survey (Taylor et al., 2023) may allow for further validation of Ho'oleilana, or the detection of similar structures elsewhere in the nearby Universe. ###### Acknowledgements. We give thanks to collaborators in the assembly of the Cosmicflows-4 catalog of galaxy distances and velocities, with special thanks to Helene Courtois, Ehsan Kourkchi, and Khaled Said. A perceptive referee challenged us to justify our quantitative uncertainties. This paper was completed while RBT attended and benefited from discussions with Nick Kaiser8 and others at the workshop _The Cosmic Web: Connecting Galaxies to Cosmology at High and Low Redshift_ at the Kavli Institute of Theoretical Physics, University of California, Santa Barbara. Funding for the Cosmicflows project has been provided by the US National Science Foundation grant AST09-08846, the National Aeronautics and Space Administration grant NNX12AE70G, and multiple awards to support observations with the Hubble Space Telescope through the Space Telescope Science Institute. CH acknowledges support from the Australian Government through the Australian Research Coun Figure 14: Interactive 3D visualization of the Cosmicflows-4 galaxy groups, highlighting those associated with Ho’oleilana. Individual galaxy groups are seen as points in black, except those lying within the shell of Ho’oleilana, of radius 11,492 km s\({}^{-1}\) and width \(\pm 2w\) where \(w=837\) km s\({}^{-1}\), are highlighted in red. Double-click or single-click on the ”CF4” or ”Ho’oleilana” legends in the upper right corner of the viewer to isolate or hide the corresponding populations. Hover on objects to get positions and PGC ”Principal Galaxy Catalog” identifiers. Figure 13: Interactive 3D visualization of the cosmography of Ho’oleilana. All objects in the north galactic hemisphere of the Cosmicflows-4 collection of galaxy groups are seen as points in gray while those lying within the shell of Ho’oleilana, of radius 11,492 km s\({}^{-1}\) and width \(\pm 2w\) where \(w=837\) km s\({}^{-1}\), are highlighted in red. Major components in proximity to the shell are highlighted and identified by name. 
The Boötes supercluster lies near the center of Ho’oleilana. Our home location is at the origin of the red, green, blue axes. These axes have lengths 10,000 km s\({}^{-1}\) and are directed toward positive SGX, SGY, SGZ respectively. cil's Laureate Fellowship and Discovery Project funding schemes (projects FL180100168 and DP220101395).
2307.08787
Cyclic splittings of pro-p groups
In this paper we prove a pro-p version of the Rips-Sela's Theorems on splittings of a group as an amalgamated free product or HNN-extension over an infinite cyclic subgroup.
Jesus Berdugo, Pavel Zalesskii
2023-07-17T19:05:27Z
http://arxiv.org/abs/2307.08787v3
# Cyclic splittings of pro-\(p\) groups ###### Abstract In this paper, we prove a pro-\(p\) version of the Rips-Sela's Theorems on splittings of a group as an amalgamated free product or HNN-extension over an infinite cyclic subgroup. **Keywords:** pro-\(p\) groups, amalgamated free pro-\(p\) products, pro-\(p\) HNN-extensions ## 1 Introduction In 1997 Rips and Sela published the fundamental paper [7], where they studied infinite cyclic splittings (i.e. \(\mathbb{Z}\)-splittings) of groups as an amalgamated free product or an HNN-extension. They constructed a canonical JSJ decomposition for finitely presented groups that gives a complete description of all \(\mathbb{Z}\)-splittings of these groups. In order to understand all possible \(\mathbb{Z}\)-splittings of a group, they needed to study carefully the "interaction" between any two given elementary \(\mathbb{Z}\)-splittings of it. The objective of this paper is to study the "interaction" between any two given \(\mathbb{Z}_{p}\)-splittings of a pro-\(p\) group. Namely, we prove a pro-\(p\) version of Rips-Sela's theorems on \(\mathbb{Z}\)-splittings ([7, Theorem 2.1 and Theorem 3.6]). A splitting of a pro-\(p\) group \(G\) as an amalgamated free pro-\(p\) product or HNN-extension over \(\mathbb{Z}_{p}\) will be called a \(\mathbb{Z}_{p}\)-splitting in the paper. An element of \(G\) is called elliptic with respect to a splitting \(G=G_{1}\amalg_{\mathbb{Z}_{p}}G_{2}\) as an amalgamated free pro-\(p\) product (resp. pro-\(p\) HNN-extension \(G=HNN(G_{1},\mathbb{Z}_{p},t)\)) if it is conjugate into \(G_{1}\cup G_{2}\) (resp. into \(G_{1}\)) and is called hyperbolic otherwise. A pair of given \(\mathbb{Z}_{p}\)-splittings \(A_{1}\amalg_{C_{1}}B_{1}\) and \(A_{2}\amalg_{C_{2}}B_{2}\) over \(C_{1}=\langle c_{1}\rangle\), \(C_{2}=\langle c_{2}\rangle\) is called: * \(Elliptic-Elliptic:\) If \(c_{1}\) is elliptic in \(A_{2}\amalg_{C_{2}}B_{2}\) and \(c_{2}\) is elliptic in \(A_{1}\amalg_{C_{1}}B_{1}\). * \(Hyperbolic-Hyperbolic:\) If \(c_{1}\) is hyperbolic in \(A_{2}\amalg_{C_{2}}B_{2}\) and \(c_{2}\) is hyperbolic in \(A_{1}\amalg_{C_{1}}B_{1}\). * \(Hyperbolic-Elliptic:\) If \(c_{1}\) is hyperbolic in \(A_{2}\amalg_{C_{2}}B_{2}\) and \(c_{2}\) is elliptic in \(A_{1}\amalg_{C_{1}}B_{1}\). **Definition 1.1**.: _A pro-\(p\) group \(G\) is said to be freely indecomposable group if it can not be written as a free pro-\(p\) product of two non-trivial subgroups._ Our first result is the pro-\(p\) analog of [7, Theorem 2.1]. **Theorem 1.2**.: _Let \(G\) be a finitely generated freely indecomposable pro-\(p\) group. Then any two \(\mathbb{Z}_{p}\)-splittings of \(G\) are either elliptic-elliptic or hyperbolic-hyperbolic._ Next, given two hyperbolic-hyperbolic splittings over \(C_{1}\) and \(C_{2}\), we study the normalizer \(N_{G}(C_{i})\), \(i=1,2\). This study corresponds to the results of Section 3 of Rips-Sela's paper (where normalizers are called anti-centralizers). Note that Sela and Rips [7] use the existence of Tits' axis, on which the hyperbolic element acts. In the pro-\(p\) case, such an axis does not exist, so our argument is different from their argument. We divide our argument into two cases: \(p>2\) and \(p=2\) since for \(p=2\) the study of all possible normalizers is quite detailed and involves a sequence of case studies. We state here the case \(p>2\), since it has only two cases, and refer the reader to Proposition 5.5 for the \(p=2\) case. 
**Proposition 1.3**.: _Let \(p>2\) and suppose that \(G=A_{1}\amalg_{C_{1}}B_{1}\) (or \(G=HNN(A_{1},C_{1},t_{1})\)) and \(G=A_{2}\amalg_{C_{2}}B_{2}\) (or \(G=HNN(A_{2},C_{2},t_{2})\)) are two hyperbolic-hyperbolic \(\mathbb{Z}_{p}\)-splittings. Let \(H_{i}\neq 1\) be a subgroup of \(C_{i}\). Then \(N_{G}(H_{i})\) has one of the following types:_ 1. _cyclic group_ \(\mathbb{Z}_{p}\)_;_ 2. \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\)_;_ Next we show that if \(N_{G}(C_{1})\) is not an infinite cyclic or infinite dihedral group and G does not split over a subgroup of order \(\leq 2\), then \(N_{G}(C_{1})\) is in fact the ambient group \(G\); hence, for \(p>2\)\(G\) is \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) and for \(p=2\)\(G\) is \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) by \(\mathbb{Z}/2\mathbb{Z}\). We state again here the \(p>2\) case; for the \(p=2\) case the reader can consult Theorem 5.11. **Theorem 1.4**.: _Let \(p>2\) and let \(G\) be a finitely generated pro-\(p\) group that does not split as a free pro-\(p\) product. Let \(G=A_{1}\amalg_{C_{1}}B_{1}\) (or \(G=HNN(A_{1},C_{1},t_{1})\)), and \(G=A_{2}\amalg_{C_{2}}B_{2}\) (or \(G=HNN(A_{2},C_{2},t_{2})\)) be two hyperbolic-hyperbolic \(\mathbb{Z}_{p}\)-splittings of \(G\). Suppose that \(N_{G}(C_{1})\) is not cyclic. Then \(G\cong\mathbb{Z}_{p}\times\mathbb{Z}_{p}\)._ We deduce the following **Corollary 1.5**.: _With the hypothesis of Theorem 1.4 suppose \(G\) is non-abelian. Then \(C_{i}\) is malnormal in either \(A_{i}\) or \(B_{i}\) (Resp. \(C_{i}\) or \(C_{i}^{t_{i}}\) is malnormal in \(A_{i}\)), \(i=1,2\)._ As in classical Bass-Serre's theory, a pro-\(p\) group that splits as an amalgamated free pro-\(p\) product or HNN-extension acts on the corresponding pro-\(p\) tree. The action is called \(k\)-acylindrical if any non-trivial element can fix at most \(k\) consecutive edges. We deduce from Corollary 1.5 the 2-acylindricity of the action. **Theorem 1.6**.: _Let \(p>2\) and \(G\) be a non-abelian finitely generated pro-\(p\) group that does not split as a free pro-\(p\) product. Let \(G=A_{1}\amalg_{C_{1}}B_{1}\) (or \(G=HNN(A_{1},C_{1},t_{1})\)), and \(G=A_{2}\amalg_{C_{2}}B_{2}\) (or \(G=HNN(A_{2},C_{2},t_{2})\)) be two hyperbolic-hyperbolic \(\mathbb{Z}_{p}\)-splittings of \(G\). Then the action of \(G\) on the standard pro-\(p\) trees of these splittings is 2-acylindrical._ The proofs of the results use the pro-\(p\) version of the Bass-Serre theory that can be found in [6]. The structure of the paper is as follows: In Section 2 we give basic definitions that will be used throughout the article. In Section 4 we prove Theorem 1.2 and in Section 5 we prove Theorem 1.4. **Conventions.** Throughout the paper, unless otherwise stated, groups are pro-\(p\), subgroups are closed, and homomorphisms are continuous. In particular, \(\langle S\rangle\) will mean the topological generation in the paper and presentations are taking in the category of pro-\(p\) groups; \(a^{g}\) will stand for \(g^{-1}ag\) in the paper. 
## 2 Preliminaries ### Basic definitions **Definition 2.1**.: _A graph \(\Gamma\) is a disjoint union \(E(\Gamma)\cup V(\Gamma)\), with the two maps \(d_{0},d_{1}:\Gamma\longrightarrow V(\Gamma)\), whose restriction to \(V(\Gamma)\) are the identity map and for any element \(e\in E(\Gamma)\), \(d_{0}(e)\) and \(d_{1}(e)\) are the initial and the terminal vertices of \(e\) respectively._ **Definition 2.2**.: _A graph \(\Gamma\) is called a profinite graph if \(\Gamma\) is a profinite space with a non-empty subset \(V(\Gamma)\) such that:_ 1. \(V(\Gamma)\) _is closed,_ 2. _the maps_ \(d_{0},d_{1}:\Gamma\longrightarrow V(\Gamma)\) _are continuous._ We call \(V(\Gamma):=d_{0}(\Gamma)\cup d_{1}(\Gamma)\) the set of vertices of \(\Gamma\) and \(E(\Gamma):=\Gamma\setminus V(\Gamma)\) the set of edges of \(\Gamma\). For \(e\in E(\Gamma)\) we call \(d_{0}(e)\) and \(d_{1}(e)\) the initial and terminal vertices of the edge \(e\). A _morphism_\(\alpha:\Gamma\longrightarrow\Delta\) of profinite graphs is a continuous map with \(\alpha d_{i}=d_{i}\alpha\) for \(i=0,1\). If \(\alpha\) is injective, the image is called a subgraph of profinite graph \(\Delta\), if \(\alpha\) is surjective, then \(\Delta\) is called a quotient graph of \(\Gamma\). A profinite graph is called connected if its every finite quotient graph is connected. Let \(\Gamma\) be connected profinite graph. If \(\Gamma=\varprojlim\Gamma_{i}\) is the inverse limit of the finite graphs \(\Gamma_{i}\), then it induces the inverse system \(\{\pi_{1}(\Gamma_{i})=\widehat{\pi}_{1}^{abs}(\Gamma_{i})\}\) of the pro-\(p\) completions of the abstract (usual) fundamental groups \(\pi_{1}^{abs}(\Gamma_{i})\). So the pro-\(p\) fundamental group \(\pi_{1}(\Gamma)\) can be defined as \(\pi_{1}(\Gamma)=\varprojlim_{i}\pi_{1}(\Gamma_{i})\). **Definition 2.3**.: _If \(\pi_{1}(\Gamma)=1\), then \(\Gamma\) is called a pro-\(p\) tree._ If \(v,w\in V(\Gamma)\), the smallest pro-\(p\) subtree of \(\Gamma\) containing \(\{v,w\}\) is called the \(geodesic\) connecting \(v\) and \(w\), and is denoted \([v,w]\) (the definition is in pag. 83 of [6]). By definition a pro-\(p\) group \(G\)\(acts\) on a profinite graph \(\Gamma\) if we have a continuous action of \(G\) on the profinite space \(\Gamma\), such that \(d_{0}\) and \(d_{1}\) are \(G\)-maps. We shall denote by \(G_{m}\) the stabilizer of \(m\in\Gamma\) in \(G\). The action of a pro-\(p\) group \(G\) on a pro-\(p\) tree \(T\) is \(irreducible\), if \(T\) is the unique minimal \(G\)-invariant pro-\(p\) subtree of \(T\). The action is said to be \(faithful\) if the Kernel \(K\) of the action is trivial. If \(G\) acts on \(T\) irreducibly then the resulting action of \(G/K\) on \(T\) is faithful and irreducible. **Definition 2.4**.: Let \(G\) be a pro-\(p\) group, and \(T\) a pro-\(p\) tree on which \(G\) acts continuously. For \(g\in G\), * \(g\) is \(elliptic\) if it fixes a vertex in \(T\); if a subgroup of \(G\) fixes a vertex we also call it elliptic. * \(g\) is \(hyperbolic\) if it does not fixes a vertex in \(T\). Observe that if \(g\in G\) is hyperbolic, and \(\langle g\rangle\) is the subgroup generated by \(g\), then \(\langle g\rangle\) acts freely on \(T\). By [6, Lemma 3.11] there exists a unique nonempty minimal \(\langle g\rangle\)-invariant pro-\(p\) subtree \(D_{g}\subseteq T\). This holds also for any non-elliptic subgroup of \(G\). 
Note that in the classical Bass-Serre theory a group \(G\) acting on a tree without global fixed point splits as an amalgamated free product or an HNN-extension over the stabilizer of an edge. This is not true in the pro-\(p\) case in general but is true for finitely generated case. As we shall often use this result we state it here. **Theorem 2.5**.: _[_2_, Theorem 4.2]_ _Let \(G\) be a finitely generated pro-\(p\) group acting on a pro-\(p\) tree \(T\) without global fixed points. Then \(G\) splits non-trivially as a free amalgamated pro-\(p\) product or pro-\(p\) HNN-extension over some stabiliser of an edge of \(T\)._ ### Free pro-\(p\) products with amalgamation **Definition 2.6** ([5], Section 9.2).: _Let \(G_{1}\) and \(G_{2}\) be pro-\(p\) groups and let \(f_{i}:H\longrightarrow G_{i}\)\((i=1,2)\) be continuous monomorphisms of pro-\(p\) groups. An amalgamated free pro-\(p\) _product of \(G_{1}\) and \(G_{2}\) with amalgamated subgroup \(H\) is defined to be a pushout of \(f_{i}\)\((i=1,2)\)_ _in the category of pro-\(p\) groups, i.e., a pro-\(p\) group \(G\) together with continuous homomorphisms \(\varphi_{i}:G_{i}\longrightarrow G\)\((i=1,2)\) satisfying the following universal property: for any pair of continuous homomorphisms \(\psi_{i}:G_{i}\longrightarrow K\)\((i=1,2)\) into a pro-\(p\) group \(K\) with \(\psi_{1}f_{1}=\psi_{2}f_{2}\), there exists a unique continuous homomorphism \(\psi:G\longrightarrow K\) such that the following diagram is commutative:_ _An amalgamated free pro-\(p\) product will be denoted by \(G=G_{1}\amalg_{H}G_{2}\)._ Note that an amalgamated free pro-\(p\) product can be also defined by presentation \[G_{1}\amalg_{H}G_{2}=\langle G_{1},G_{2}\mid rel(G_{i}),f_{1}(h)=f_{2}(h),h \in H,i=1,2\rangle.\] Following the abstract notion, we can consider \(H\) as a common subgroup of \(G_{1}\) and \(G_{2}\) and think of \(f_{1}\) and \(f_{2}\) as inclusions. However, unlike the abstract case where the canonical homomorphisms \[\varphi_{i}^{abs}:G_{i}\longrightarrow G_{1}\star_{H}G_{2}\] (\(i=1,2\)) are always monomorphisms (cf. Theorem I.1 in [8]), the corresponding maps in the category of pro-\(p\) groups \[\varphi_{i}:G_{i}\longrightarrow G_{1}\amalg_{H}G_{2}\] (\(i=1,2\)) are not always injective, i.e. it is not always proper (in the terminology of [5]). However, we can make it proper by replacing \(G_{1},G_{2},H\) with their images in \(G\) (as explained in [6, Chapter 4]). If \(G_{1}\amalg_{H}G_{2}\) is proper we shall identify \(G_{1}\), \(G_{2}\) and \(H\) with their images in \(G\) and say that \(G\) splits as the amalgamated free pro-\(p\) product \(G=G_{1}\amalg_{H}G_{2}\). A free pro-\(p\) product with cyclic amalgamation is always proper (see [4]). Throughout the paper all free pro-\(p\) products with amalgamation will be proper. If \(G=G_{1}\amalg_{H}G_{2}\) and \(H=G_{1}\) then \(G=G_{2}\) and we call such splitting fictitious. All considered splittings in the paper will be non-fictitious. We define the standard trees \(S(G)\) on which \(G=G_{1}\amalg_{H}G_{2}\) acts. * Let \(G=G_{1}\amalg_{H}G_{2}\). Then the vertex set is \(V(S(G))=G/G_{1}\cup G/G_{2}\), the edge set is \(E(S(G))=G/H\), and the initial and terminal vertices of an edge \(gH\) are respectively \(gG_{1}\) and \(gG_{2}\). By [6, Theorem 4.1]\(S(G)\) is a pro-\(p\) tree and the quotient graph \(S(G)/G\) is an edge with two vertices. 
### Pro-\(p\) HNN-extensions **Definition 2.7** ([5], Section 9.4).: _Let \(H\) be a pro-\(p\) group and let \(f:A\longrightarrow B\) be a continuous isomorphism between closed subgroups \(A\), \(B\) of \(H\). A pro-\(p\)\(HNN\)-extension of \(H\) with associated groups \(A\), \(B\) consists of a pro-\(p\) group \(G=HNN(H,A,t)\), an element \(t\in G\) called the stable letter, and a continuous homomorphism \(\varphi:H\longrightarrow G\) with \(t(\varphi(a))t^{-1}=\varphi f(a)\) and satisfying the following universal property: for any pro-\(p\) group \(K\), any \(k\in K\) and any continuous homomorphism \(\psi:H\longrightarrow K\) satisfying \(k(\psi(a))k^{-1}=\psi f(a)\) for all \(a\in A\), there is a continuous homomorphism \(\omega:G\longrightarrow K\) with \(\omega(t)=k\) such that the diagram_ _is commutative._ A pro-\(p\) HNN-extension \(HNN((H,A,t)\) has the following presentation \[HNN(H,A,t)=\langle H,t\mid rel(H),a^{t}=f(a),a\in A\rangle.\] In contrast with the abstract situation, the canonical homomorphism \(\varphi:H\longrightarrow G=HNN(H,A,t)\) is not always a monomorphism, i.e. it is not always proper (in terminology of [5]). However, we can make it proper by replacing \(H,A\) and \(f(A)\) with their images in \(G\) (as explained in [6, Chapter 4]). Throughout the paper all pro-\(p\)\(HNN\)-extensions will be proper and in this case we shall idetify \(H,A\) and \(f(A)\) with their images in \(G\) and say that \(G\) splits as a pro-\(p\) HNN-extension \(G=HNN(H,A,t)\). We define a standard pro-\(p\) tree on which HNN-extension acts. Let \(G=HNN(G_{1},H,t)\). Then the vertex set is \(V(S(G))=G/G_{1}\), the edge set is \(E(S(G))=G/H\), and the initial and terminal vertices of an edge \(gH\) are respectively \(gG_{1}\) and \(gtG_{1}\). By [6, Theorem 4.1]\(S(G)\) is a pro-\(p\) tree and the quotient graph \(S/G\) is just a loop. ### \(\mathbb{Z}_{p}\)-splittings Let \(G\) be a pro-\(p\) group, \(C_{1}\) and \(C_{2}\) be subgroups of \(G\) isomorphic to \(\mathbb{Z}_{p}\), the group of \(p\)-adic integers. A \(\mathbb{Z}_{p}\)-splitting of \(G\) is a splitting as non-fictitious free pro-\(p\) product with infinite cyclic amalgamation or as a proper pro-\(p\) HNN-extension with infinite cyclic associated subgroup. Consider the following two \(\mathbb{Z}_{p}\)-splittings for \(G\): * \(G=A_{1}\amalg_{C_{1}}B_{1}\) or \(G=HNN(A_{1},C_{1},t_{1}))\). * \(G=A_{2}\amalg_{C_{2}}B_{2}\) or \(G=HNN(A_{2},C_{2},t_{2}))\). Let \(T_{1},T_{2}\) be the standard pro-\(p\) trees corresponding to the first \(\mathbb{Z}_{p}\)-splitting and the second \(\mathbb{Z}_{p}\)-splitting, respectively. Two given \(\mathbb{Z}_{p}\)-splittings are called: * \(Elliptic-Elliptic:\) if \(c_{1}\) is elliptic in \(T_{2}\) and \(c_{2}\) is elliptic in \(T_{1}\). * \(Hyperbolic-hyperbolic:\) if \(c_{1}\) is hyperbolic in \(T_{2}\) and \(c_{2}\) is hyperbolic in \(T_{1}\). * \(Hyperbolic-elliptic:\) if \(c_{1}\) is hyperbolic in \(T_{2}\) and \(c_{2}\) is elliptic in \(T_{1}\). We shall see in Section 4 that the last possibility in fact does not occur. ### Normalizer The following Proposition was proved in [2, Proposition 8.1]. **Proposition 2.8**.: _Let \(p\) be a prime number and \(C\) an infinite cyclic pro-\(p\) group. 
Then:_ * _If the_ \(\mathbb{Z}_{p}\)_-splitting is_ \(G=A\amalg_{C}B\)_, then_ \(N_{G}(C)=N_{A}(C)\amalg_{C}N_{B}(C)\)_._ * _If the_ \(\mathbb{Z}_{p}\)_-splitting is_ \(G=HNN(A,C,t)\)_, then:_ * _If_ \(C\) _and_ \(C^{t}\) _are conjugate in_ \(A\)_, then_ \(N_{G}(C)=HNN(N_{A}(C),C,t^{\prime})\) _and_ \(G=HNN(A,C,t^{\prime})\)_._ * _If_ \(C\) _and_ \(C^{t}\) _are not conjugate in_ \(A\) _then_ \(N_{G}(C)=N_{A^{t-1}}(C)\amalg_{C}N_{A}(C)\)_._ **Proposition 2.9**.: _([2, Proposition 8.2]) Let \(G\) be a pro-\(p\) group acting on a pro-\(p\) tree \(T\) and \(U\) be a cyclic subgroup of \(G\) that does not stabilize any edge. Then one of the following happens:_ 1. _For some_ \(g\in G\) _and vertex_ \(v\)_,_ \(U\leq G_{v}\)_: then_ \(N_{G}(U)=N_{G_{v}}(U)\)_._ 2. _For all_ \(g\in G\) _and vertex_ \(v\)_,_ \(U\cap G_{v}=\{1\}\)_. Then_ \(N_{G}(U)/K\) _is either isomorphic to_ \(\mathbb{Z}_{p}\) _or to a dihedral pro-2 group_ \(\mathbb{Z}/2\amalg\mathbb{Z}/2\mathbb{Z}\)_, where_ \(K\) _is some normal subgroup of_ \(N_{G}(U)\) _contained in the stabilizer of an edge._ ## 3 Cyclic amalgamation In this section we shall consider a \(\mathbb{Z}_{p}\)-splitting \(G=A\amalg_{C}B\) (resp. \(G=HNN(A,C,t)\)) as an amalgamated free pro-\(p\) product or HNN-extension. Assuming that \(N_{G}(H)\) is cyclic for every nontrivial \(H\leq C\) we show \(2\)-acylidricity of the action of \(G\) on its standard pro-\(p\) tree. **Definition 3.1**.: _Let \(G\) be a pro-\(p\) group and \(H\) a subgroup of \(G\). We say that \(H\) is malnormal in \(G\) if \(H\cap H^{g}=1\), for any \(g\in G-H\)._ **Proposition 3.2**.: _Let \(G\) be a pro-\(p\) group and \(G=A\amalg_{C}B\) (resp. \(G=HNN(A,C,t)\)) be a \(\mathbb{Z}_{p}\)-splitting of \(G\). Suppose \(N_{G}(H)\) is cyclic for every nontrivial open subgroup \(H\) of \(C\). Then \(C\) is malnormal in \(A\) or \(B\) (resp. \(C\) or \(C^{t}\) is malnormal in \(A\))._ Proof.: Case 1. \(G=A\amalg_{C}B\). By cotradiction assume that \(C\) is not malnormal in \(A\) and \(B\). Then there exist \(a\in A-C\), \(b\in B-C\) such that \(C^{a}\cap C\neq 1\neq C\cap C^{b}\). Consider \(H=C\cap C^{a}\cap C^{b}\). Since \([C:H]=[C:H^{a}]=[C:H^{b}]\) (as can be seen looking at finite quotients of \(G\)) one deduces that \(a\in N_{A}(H)\), \(b\in N_{B}(H)\). By Proposition 2.8\(N_{G}(H)=N_{A}(H)\amalg_{H}N_{B}(H)\). Therefore \(N_{G}(H)\) is not cyclic. Case 2. Now suppose that \(G=HNN(A,C,t)\). Assume on the contrary that \(C\) and \(C^{t}\) are both not malnormal in \(A\). Then there exist \(a_{1}\in A-C\), \(a_{2}\in A-C^{t}\) such that \(C^{a_{1}}\cap C\neq 1\neq C^{ta_{2}}\cap C^{t}\). Put \(H=C\cap C^{a_{1}}\cap C^{ta_{2}t^{-1}}\). Since in every finite group conjugate subgroups have the same order, looking at finite quotients of \(G\) one deduces that \([C:H]=[C:H^{a_{1}}]=[C^{t}:H^{ta_{2}}]\) and so \(H=H^{a_{1}}=H^{ta_{2}t^{-1}}\), i.e. \(a_{1},a_{2}^{t^{-1}}\in N_{A}(H)\). By Proposition 2.8\(N_{G}(H)\) can be a free amalgamated product \[N_{G}(H)=N_{A}(H)\amalg_{H}N_{A^{t-1}}(H))\] or a HNN-extension \[N_{G}(H)=HNN(N_{A}(H),H,t).\] In the first case observing that \(a_{2}\in N_{A^{t-1}}(H))\) we see that \[N_{G}(H)=N_{A}(H)\amalg_{H}N_{A^{t-1}}(H)\] is not cyclic. In the second case \[N_{G}(H)=HNN(N_{A}(H),H,t)\] is not cyclic because \(N_{A}(H)\) is non-trivial. Thus we arrived at contradiction with the hypothesis and the proposition is proved. 
**Definition 3.3**.: _The action of a pro-\(p\) group \(G\) on a pro-\(p\) tree \(T\) is said to be k-acylindrical, for \(k\) a constant, if the set of fixed points of \(g\) has diameter at most \(k\) whenever \(g\neq 1\)._ **Corollary 3.4**.: _Let \(G\) be a pro-\(p\) group such that \(G=A\amalg_{C}B\) (resp. \(G=HNN(A,C,t)\)) be a \(\mathbb{Z}_{p}\)-splitting of \(G\). Suppose \(N_{G}(H)\) is cyclic for any nontrivial open subgroup \(H\) of \(C\). Then the action of \(G\) on the standard pro-\(p\) tree \(S(G)\) of this splitting is 2-acylindrical._ Proof.: In case of an amalgamated free pro-\(p\) product this is a direct consequence of Proposition 3.2 and definitions of the standard pro-\(p\) tree for a free amalgamated pro-\(p\) product (cf. Sections 2.2). Indeed, if \(g\) fixes three edges, it fixes three consecutive edges \(e_{1},e_{2},e_{3}\) by [6, Corollary 3.8] and then conjugating \(g\) if necessary we may assume that \(e_{2}=1\cdot C\) (cf. the end of Sections 2.2) and so \(e_{1}=aC\), \(e_{3}=bC\) for some \(a\in A,b\in B\). Then \(g\in C\cap C^{a}\cap C^{b}=1\) by Proposition 3.2. Suppose now \(G\) is a pro-\(p\) HNN-extensions \(G=HNN(A,C,t)\)). If \(C\) and \(C^{t}\) are conjugate in \(A\) then by Proposition 2.8 the normalizer \(N_{G}(C)\leq N_{G}(H)\) is not cyclic. Otherwise, \(C\cap C^{at}=1\) for any \(a\in A\) (since otherwise \(t\) normalizes \(C\cap C^{at}\) and so \(N_{G}(C\cap C^{at})\) is not cyclic). Again if \(g\in G\) fixes three edges, it fixes three consecutive edges \(e_{1},e_{2},e_{3}\) by [6, Corollary 3.8] and then conjugating \(g\) if necessary we may assume that \(e_{2}=1\cdot C\) (cf. the end of Sections 2.3) and so \(e_{1}=a_{1}t^{-\epsilon}C\), \(e_{3}=ta_{2}t^{-\epsilon}C\) for some \(a_{1},a_{2}\in A\), \(\epsilon=0,1\) with \(a_{1}t^{-\epsilon},ta_{2}t^{-\epsilon}\not\in C\). Since as was mentioned above \(C\cap C^{at}=1\) for any \(a\in A\) and \(C\cap C^{a_{1}}\cap C^{ta_{2}t^{-1}}=1\) by Proposition 3.2, \(g\) has to be 1. ## 4 Excluding Hyperbolic-Elliptic \(\mathbb{Z}_{p}\)-Splitting **Proposition 4.1**.: _Let \(G\) be a finitely generated pro-\(p\) group admitting \(\mathbb{Z}_{p}\)-splitting \(G=A\amalg_{C}B\) (resp. \(G=HNN(A,C,t)\)). If \(A\) admit an action on an infinite pro-\(p\) tree with trivial edge stabilizers where \(C\) is elliptic, then so does \(G\)._ Proof.: By [3, Theorem 9.6.1] a finitely generated pro-\(p\) group acts on an infinite pro-\(p\) tree with trivial edge stabilizers if and only if it splits as a non-trivial free pro-\(p\) product; we shall use it freely in this proof. Suppose \(A\) acts on a pro-\(p\) tree \(T\) with trivial edge stabilizers. By [3, Theorem 9.6.1]\(A\) is a non-trivial free pro-\(p\) product \(A=\coprod_{v\in V}A_{v}\amalg F\), where \(V\) is a transversal of \(V(T)/G\) in \(V(T)\) and \(F\) a free pro-\(p\) group acting freely on \(T\). Since \(C\) is elliptic in \(T\), it is conjugate to \(G_{w}\) for some \(w\in V\), so w.l.o.g we may assume that \(C\leq G_{w}\). If \(G=A\amalg_{C}B\), then we can rewrite \(G\) as \((\coprod_{v\in V\setminus\{w\}}A_{v}\amalg F)\amalg(G_{w}\amalg_{C}B)\) and the proposition is proved in this case. Suppose now \(G=HNN(A,C,t)\). Then there exists \(u\in V\) and \(a\in A\) such that \((C^{t})^{a}\) is conjugate into \(A_{u}\) and so replacing \(t\) with \(ta\) we may assume that \(C^{t}\in A_{u}\). Then \(G=(\coprod_{v\in V\setminus\{u,w\}}A_{v}\amalg F)\amalg HNN(A_{w}\amalg A_{u},C,t)\). 
If \(V\neq\{w,u\}\) then we are done, so we may assume that \(G=HNN(A_{w}\amalg A_{u},C,t)\) and so \(u\neq w\) since otherwise \(\amalg_{v\in V(\Gamma)-\{u=w\}}\mathcal{A}(v)\neq 1\) because the splitting of \(A\) into the free product above is non-trivial. Hence \(G=A_{w}\amalg_{C}A_{u}^{t^{-1}}\amalg\left\langle t\right\rangle\) (as follows from the presentation of HNN-extension) and so \(G\) acts on a pro-\(p\) tree with trivial edge stabilizers (cf. [6, Section 4]. The proposition is proved. **Theorem 4.2**.: _Let \(G\) be a finitely generated pro-\(p\) group that does not split as a free pro-\(p\) product. Then any two \(\mathbb{Z}_{p}\)-splittings of \(G\) are either elliptic-elliptic or hyperbolic._ Proof.: We are going to prove it by contradiction. Let \(G=A_{i}\amalg_{C_{i}}B_{i}\) or \(G=HNN(A_{i},C_{i},t_{i})\), \(i=1,2\) be two \(\mathbb{Z}_{p}\)-splittings and \(T_{1}\), \(T_{2}\) their standard pro-\(p\) trees such that w.l.o.g \(c_{1}\) is hyperbolic in \(T_{2}\) and \(c_{2}\) is elliptic in \(T_{1}\). Since \(c_{2}\) is elliptic in \(T_{1}\), \(C_{2}\) stabilizes a vertex of \(T_{1}\). By Proposition 4.1 combined with Theorem 2.5\(A_{2}\) must fix a vertex \(v\) in \(T_{1}\). Now we will consider 2 possible cases. **Case 1:** (The second splittings is an amalgamation, say \(G=A_{2}\amalg_{C_{2}}B_{2}\)) By symmetry \(B_{2}\) stabilize a vertex in \(T_{1}\) as well. Hence \(A_{2},B_{2}\) are contained in some conjugate of \(A_{1}\) or \(B_{1}\), say that \(A_{2}\leq A_{1}\) (note that for HNN-extension case of the first splitting there is no \(B_{1}\)). Then \(B_{2}\) also has to be in \(A_{1}\) (because otherwise \(A_{2}\cap B_{2}=1\) by [6, Corollary 3.8]). Hence \(A_{2}\), \(B_{2}\leq A_{1}\) and so \(G=A_{2}\amalg_{C_{2}}B_{2}\leq A_{1}\), a contradiction. **Case 2:** (The second splitting is a pro-\(p\) HNN-extension) Since \(A_{2}\) fixes a vertex in \(T_{1}\), w.l.o.g we may assume that \(A_{2}\leq A_{1}\), and we are going to prove that \(t_{2}\) is in \(A_{1}\). Since \(c_{2}\) and \(c_{2}^{t_{2}}\) are in \(A_{2}<A_{1}\), \(c_{2}\in(A_{1})^{t_{2}^{-1}}\). Hence \(c_{2}\in A_{1}\cap A_{1}^{t_{2}^{-1}}\). If \(t_{2}\not\in A_{1}\), by [6, Theorem 4.3 (b),(c)] \(c_{2}\in A_{1}\cap A_{1}^{t_{2}^{-1}}<C_{1}^{a}\), for some \(a\in A_{1}\). Then \(c_{2}^{a^{-1}}\in C_{1}\), which is absurd, because \(c_{1}\) is hyperbolic in \(T_{2}\). Thus \(A_{2}\) and \(t_{2}\) are in \(A_{1}\), and \(G=HNN(A_{2},C_{2},t_{2})=\langle A_{2},t_{2}\rangle\leq A_{1}\). So \(G\leq A_{1}\), a final contradiction. ## 5 Hyperbolic-Hyperbolic \(\mathbb{Z}_{p}\)-splitting Our objective in this section is to prove the pro-p case of the Theorem 3.6 of [7]. Thus in this section we fix two hyperbolic-hyperbolic \(\mathbb{Z}_{p}\)-splittings \(G=A_{1}\amalg_{C_{1}}B_{1}\) (\(G=HNN(A_{1},C_{1},t_{1})\)) and \(G=A_{2}\amalg_{C_{2}}B_{2}\) (\(G=HNN(A_{2},C_{2},t_{2})\)). As in the preceding section \(T_{1},T_{2}\) stand for the standard pro-\(p\) trees of the first and second \(\mathbb{Z}_{p}\)-splittings, respectively. Note that if \(c_{1}\) is hyperbolic element of \(T_{2}\) there exist a unique minimal \(C_{1}\)-invariant pro-\(p\)\(D_{1}\) of \(T_{2}\), such that \(C_{1}\) acts irreducibly on \(D_{1}\) (see [6, Lemma 3.11]). Hence \(D_{1}\) is \(N_{G}(C_{1})\)-invariant. Since \(C_{1}\lhd_{C}N_{G}(C_{1})\) and \(C_{1}\) acts irreducibly on \(D_{1}\), so does \(N_{G}(C_{1})\) (by Remark 4.2.1 (b) [3]). Let \(1\neq H_{1}\leq C_{1}\). 
Denoting by \(K_{1}\) the kernel of the action of \(N_{G}(H_{1})\) on \(D_{1}\) we deduce that \(N_{G}(H_{1})/K_{1}\) acts irreducibly and faithfully on \(D_{1}\). Observe that as \(D_{1}\subseteq T_{2}\), then \(K_{1}\leq C_{2}^{g}\) for some \(g\in G\), that is \(K_{1}=1\) or \(K_{1}\cong\mathbb{Z}_{p}\). Moreover, replacing the second splitting by the \(g\)-conjugate we may assume for the rest of the section that \(K_{1}\leq C_{2}\). Of course similarly \(C_{2}\) acts on a unique minimal \(C_{2}\)-invariant subtree \(D_{2}\) of \(T_{1}\) and all said above symmetrically holds here. We split our consideration into two subsections \(p>2\) and \(p=2\). ### \(p>2\) In this subsection we consider \(p>2\); in this case the idea of the proof is more explicit and proofs our more elegant. The following Proposition is the pro-\(p\) analog of the Proposition 3.3 of [7]. **Proposition 5.1**.: _Let \(H_{i}\neq 1\) be a subgroup of \(C_{i}\). Then \(N_{G}(H_{i})\) has one of the following types:_ 1. _cyclic group_ \(\mathbb{Z}_{p}\)_;_ 2. \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\)_; in this case_ \(C_{1}\) _and_ \(C_{2}\) _commute._ Proof.: Put \(N=N_{G}(H_{1})\) and let \(K_{1}\) be the kernel of its action on \(D_{1}\). By Proposition 2.9\(N/K_{1}\cong\mathbb{Z}_{p}\). If \(K_{1}=1\) then we have item (i). Suppose \(K_{1}\cong\mathbb{Z}_{p}\). Since \(K_{1}\) is open normal in \(C_{2}\) it acts freely and so irreducibly on \(D_{2}\). Hence \(N_{G}(K_{1})\) acts irreducibly on \(D_{2}\). Note that \(C_{2},N_{G}(H_{1})\leq N_{G}(K_{1})\) and \(N_{G}(K_{1})\) can not act on \(D_{2}\) faithfully by Proposition 2.9 (2), so there exists a non-trivial kernel \(K\) of this action and \(N_{G}(K_{1})\cong K\rtimes\mathbb{Z}_{p}\). It follows that \(N\) is open in \(N_{G}(K_{1})\) and \(K\cap K_{1}=1\) since \(K_{1}\leq C_{2}\) acts freely on \(D_{2}\). Since \(K_{1}\) is normal in \(N\) it must centralize \(K\cap N\). For \(p>2\) this means \(N_{G}(K_{1})=K\times\mathbb{Z}_{p}\) (indeed, a non-trivial action on \(K\cong\mathbb{Z}_{p}\) is by multiplication by units and since \(Aut(\mathbb{Z}_{p})\cong\mathbb{Z}_{p}\times\mathbb{Z}/(p-1)\mathbb{Z}\) the action is faithful). Hence \(N_{G}(K_{1})\) is abelian; in particular \(C_{1}\) and \(C_{2}\) commute. **Theorem 5.2**.: _Let \(G\) be a finitely generated pro-\(p\) group that does not split as a free pro-\(p\) product. Let \(G=A_{1}\amalg_{C_{1}}B_{1}\) (or \(G=HNN(A_{1},C_{1},t_{1})\)), and \(G=A_{2}\amalg_{C_{2}}B_{2}\) (or \(G=HNN(A_{2},C_{2},t_{2})\)) be two hyperbolic-hyperbolic \(\mathbb{Z}_{p}\)-splittings of \(G\). Suppose that \(N_{G}(C_{1})\) is not cyclic. Then \(G\cong\mathbb{Z}_{p}\times\mathbb{Z}_{p}\)._ Proof.: We begin the proof with showing that the first \(\mathbb{Z}_{p}\)-splitting is an HNN-extension. Indeed, by Proposition 5.1\(C_{2}\leq N_{G_{1}}(C_{1})\cong\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) and so we can write \(N_{G}(C_{i})\) as a pro-\(p\) HNN-extension \(HNN(\mathbb{Z}_{p},\mathbb{Z}_{p},t^{\prime})\cong\mathbb{Z}_{p}\times\mathbb{ Z}_{p}\) for \(i=1,2\), but not as a non-fictitious free amalgamated pro-\(p\) product. Moreover, if the splitting of \(N\) as a pro-\(p\) amalgamated free product is fictiotio, i.e. either \(N_{A_{1}}(C_{1})=C_{1}\) or \(N_{B_{1}}(C_{1})=C_{1}\), say \(N_{B_{1}}(C_{1})=C_{1}\), then \(N_{G}(C_{1})=N_{A_{1}}(C_{1})\) and so \(C_{2}\leq A_{1}\) contradicting that \(C_{2}\) is hyperbolic with rrespect to the first splitting. 
Hence by Proposition 2.8 the first \(\mathbb{Z}_{p}\)-splittings of \(G\) have to be an HNN-extension and we can write \(G=HNN(A_{1},C_{1},t_{1})\) such that \(N_{G}(C_{1})=HNN(N_{A_{1}}(C_{1}),C_{1},t_{1})=N_{A_{1}}(C_{1})\amalg_{C_{1}}(C_ {1}\times\langle t_{1}\rangle)\), i.e. \(C_{1}\) is normalized by \(t_{1}\). Then \[G=A_{1}\amalg_{C_{1}}(C_{1}\times\langle t_{1}\rangle)=A_{1}\amalg_{N_{A_{1}}( C_{1})}N_{A_{1}}(C_{1})\amalg_{C_{1}}(C_{1}\times\langle t_{1}\rangle)=A_{1} \amalg_{N_{A_{1}}(C_{1})}N_{G}(C_{1}).\] Thus \(G\) can be rewritten as a free amalgamated pro-\(p\) product \[G=A_{1}\amalg_{N_{A_{1}}(C_{1})}N_{G}(C_{1}) \tag{1}\] Moreover, since \(C_{2}\leq N_{G}(C_{1})\), \(C_{2}\) is elliptic in \(S_{1}\), where \(S_{1}\) is the standard pro-\(p\) tree of (1). Since \(C_{1}\) acts freely on \(T_{2}\) and \(A_{1}\) does not intersect any conjugate of \(C_{2}\), by Proposition 2.9 the normalizer \(N_{A_{1}}(C_{1})\) is infinite cyclic. Thus if (1) is non-fictitious, then (1) and the second splitting is a pair of elliptic-hyperbolic \(\mathbb{Z}_{p}\)-splittings of \(G\) contradicting Theorem 1.2. Hence the splitting (1) is fictitious, i.e. \(A_{1}=N_{A_{1}}(C_{1})\) and so \(G=N_{G}(C_{1})\). Finally by Proposition 5.1 (ii) \(G\cong\mathbb{Z}_{p}\times\mathbb{Z}_{p}\). Combining this theorem with Proposition 3.2 we deduce **Corollary 5.3**.: _With the hypothesis of Theorem 5.2 suppose \(G\) is non-abelian. Then \(C_{i}\) is malnormal in either \(A_{i}\) or \(B_{i}\) (resp. \(C_{i}\) or \(C_{i}^{t_{i}}\) is malnormal in \(A_{i}\)), \(i=1,2\)._ **Theorem 5.4**.: _Let \(G\) be a non-abelian finitely generated pro-\(p\) group that does not split as a free pro-\(p\) product. Let \(G=A_{1}\am **Remark 5.6**.: _The subcases (a),(b) and (c) of Proposition 5.5 (iii) are the pro-2 completion of the Klein bottle, a euclidean 4-branched sphere and an euclidean 2-branched (real) projective plane respectively. In particular in all this cases \(N\) has a normal subgroup of index 2 isomorphic to \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) and so \(C_{1}\) and \(C_{2}\) virtually commute._ **Corollary 5.7**.: _With the hypothesis of Proposition 5.1 assume \(N_{G}(H_{i})\) is torsion free. Then one of the following holds._ 1. \(N_{G}(H_{i})\cong\mathbb{Z}_{2}\)_;_ 2. \(N_{G}(H_{i})\cong\mathbb{Z}_{2}\times\mathbb{Z}_{2}\)_;_ 3. \(N_{G}(H_{i})\) _is isomorphic to the pro-2 Klein bottle_ \(\mathbb{Z}_{2}\rtimes\mathbb{Z}_{2}\)_._ **Lemma 5.8**.: _Let \(G\) be a finitely generated pro-\(2\) group and \(G=A\amalg_{C}B\) ( resp. \(G=HNN(A,C,t)\)) is its \(\mathbb{Z}_{2}\)-splitting. Suppose \(A\) admits an action on a pro-\(2\) tree \(T\) with edge stabilizers of order \(\leq 2\) and without global fixed point, and \(C\) is elliptic in \(T\) (resp \(C\) and \(C^{t}\) are elliptic in \(T_{2}\)). Then \(G\) splits over a group of order \(\leq 2\). (i.e, \(G\) can be written as \(G=G_{1}\amalg_{H}G_{2}\) or \(HNN(G_{1},H,t)\) with \(|H|\leq 2\))._ Proof.: Since \(G=A\amalg_{C}B\) (resp. \(G=HNN(A,C,t)\)) is finitely generated, the subgroup \(A\) is finitely generated and \(A\) acts on a pro-\(2\) tree \(T\) without global fixed point having edge stabilizers of order at most \(2\) by hypothesis. Then by Theorem 2.5\(A\) splits as an amalgamated free product \(A=J_{0}\amalg_{H}J_{1}\) or as an HNN-extension \(A=HNN(J_{1},H,t)\) over a group \(H\) of order at most \(2\). Moreover by [2, Corollary 4.4]\(C\) (resp. 
\(C\) and \(C^{t}\)) is contained in \(J_{0}\) or \(J_{1}\) up to conjugation, say \(J_{1}\), so w.l.o.g we may assume that \(C\leq J_{1}\). Now if \(G=A\amalg_{C}B\), then \(G=J_{0}\amalg_{H}J_{1}\amalg_{C}B\) or \(G=HNN(J_{1},H,t_{1})\amalg_{C}B=HNN(J_{1}\amalg_{C}B,H,t_{1})\). Suppose now \(G=HNN(A,C,t)\). Since \(C^{tg}\) is contained in \(J_{0}\) or \(J_{1}\) for some \(g\in G\), by replacing \(t\) with \(tg\) we may assume that \(C^{t}\subseteq J_{0}\cup J_{1}\). Then \(G\) has one of the following splittings: 1) If \(A=J_{0}\amalg_{H}J_{1}\) and \(C,C^{t}\leq J_{1}\) then \(G=J_{0}\amalg_{H}HNN(J_{1},C,t)\). 2) If \(A=J_{0}\amalg_{H}J_{1}\) and \(C^{t}\leq J_{0}\) then \(G=HNN(J_{1}\amalg_{C}J_{0}^{t^{-1}}),H,t^{-1})\). 3) If \(A=HNN(J_{1},H,t_{1})\) then \(G=HNN(HNN(J_{1},C,t),H,t_{1})\). The lemma is proved. **Lemma 5.9**.: _Let \(G\) be a pro-\(p\) and \(G=A_{1}\amalg_{C_{1}}B_{1}\) (or \(G=HNN(A_{1},C_{1},t_{1})\)) is a \(\mathbb{Z}_{p}\)-splitting of \(G\). Then_ 1. \(G\) _splits as an amalgamated free pro-_\(p\) _product_ \(G=A_{1}\amalg_{N_{A_{1}}(C_{1})}N\) _over_ \(N_{A_{1}}(C_{1})\)_._ 2. _If_ \(G=A_{2}\amalg_{C_{2}}B_{2}\) _(or_ \(G=HNN(A_{2},C_{2},t_{2})\)_) is the second splitting such that together with_ \(G=A_{1}\amalg_{C_{1}}B_{1}\) _(or_ \(G=HNN(A_{1},C_{1},t_{1})\)_) this pair of splittings is hyperbolic-hyperbolic and_ \(N_{A_{1}}(C_{1})\neq A_{1}\)_, then_ \(A_{2}\) _is not elliptic in the splittig_ \(G=A_{1}\amalg_{N_{A_{1}}(C_{1})}N\)_._ Proof.: (i) Suppose the \(\mathbb{Z}_{p}\)-splitting is an amalgamated free pro-\(p\) product \(G=A_{1}\amalg_{C_{1}}B_{1}\). Then by Proposition 2.8\(N_{G}(C_{1})=N_{A_{1}}(C_{1})\amalg_{C_{1}}N_{B_{1}}(C_{1})\) and thus \(G\) admits a decomposition as follows: \[\begin{array}{ll}G&=A_{1}\amalg_{C_{1}}B_{1}\\ &=\boxed{A_{1}\amalg_{N_{A_{1}}(C_{1})}(N_{G}(C_{1})\amalg_{N_{B_{1}}(C_{1})} B_{1})}\end{array} \tag{2}\] Thus \(G\) splits as a free pro-\(p\) product with \(N_{A_{1}}(C_{1})\) amalgamated in this case. On the other hand if \(G=HNN(A_{1},C_{1},t_{1})\) is an HNN-extension, by Proposition 2.8 the normalizer \(N_{G}(C_{1})\) is an amalgamated free pro-\(p\) product \(N_{G}(C_{1})=N_{A_{1}}(C_{1})\amalg_{C_{1}}N_{A_{1}^{t_{1}^{-1}}}(C_{1})\) or an HNN-extension \(N_{G}(C_{1})=HNN(N_{A_{1}}(C_{1}),C_{1},t_{1})\), and so either \[\begin{array}{ll}G&=HNN(A_{1},C_{1},t_{1})\\ &=HNN(A_{1}\amalg_{N_{A_{1}}(C_{1})}N_{G}(C_{1}),N_{A_{1}^{t_{1}^{-1}}}(C_{1}),t_{1})\\ =&\boxed{A_{1}\amalg_{N_{A_{1}}(C_{1})}HNN(N_{G}(C_{1}),N_{A_{1}^{t_{1}^{-1}}} (C_{1}),t_{1})}\end{array} \tag{3}\] or \[\begin{array}{ll}G&=HNN(A_{1},C_{1},t_{1})\\ &=\overline{\langle A_{1},t_{1}:C_{1}^{t_{1}}=C_{1}\rangle}\\ &=A_{1}\amalg_{C_{1}}C_{1}\times\overline{\langle t_{1}\rangle}\\ &=A_{1}\amalg_{N_{A_{1}}(C_{1})}N_{A_{1}}(C_{1})\amalg_{C_{1}}C_{1}\times \overline{\langle t_{1}\rangle}\\ &=A_{1}\amalg_{N_{A_{1}}(C_{1})}HNN(N_{A_{1}}(C_{1}),C_{1},t_{1})\\ =&\boxed{A_{1}\amalg_{N_{A_{1}}(C_{1})}N_{G}(C_{1})}\end{array} \tag{4}\] In both cases \(G\) splits as a free pro-\(p\) product with \(N_{A_{1}}(C_{1})\) as the amalgamated subgroup. (ii) Consider \(S_{1}\) the standard tree of decompositions (2)-(4). As \(A_{1}\neq N_{A_{1}}(C_{1})\), the second factor of these decompositions is a proper subgroup of \(G\). Assume that the second \(\mathbb{Z}_{2}\)-decomposition is an amalgamated free product pro-\(p\), \(G=A_{2}\amalg_{C_{2}}B_{2}\). Then \(A_{2}\) and \(B_{2}\) act on \(S_{1}\). 
Note that \(A_{2}\) and \(B_{2}\) are not contained in a conjugate of \(A_{1}\), because if one of them were, \(C_{2}\) would be contained in a conjugate of \(A_{1}\), contradicting the hypothesis. On the other hand, if \(A_{2}\) is contained in a conjugate of the second factor of (2)-(4), then by [3, Corollary 4.1.6] \(B_{2}\) must be in the same conjugate as \(A_{2}\), since otherwise \(C_{2}=A_{2}\cap B_{2}\) is contained in a conjugate of \(N_{A_{1}}(C_{1})\leq A_{1}\) by [3, Corollary 4.1.6], contradicting the hypothesis. Then \(G=A_{2}\amalg_{C_{2}}B_{2}\) would be contained in some conjugate of the second factor of (2)-(4), which is absurd because the second factor is a proper subgroup of \(G\). Thus \(A_{2}\) is not elliptic in \(S_{1}\).

On the other hand, if the second \(\mathbb{Z}_{p}\)-decomposition is a pro-\(p\) HNN-extension \(G=HNN(A_{2},C_{2},t_{2})\), then \(A_{2}\) and \((A_{2})^{t_{2}}\) act on \(S_{1}\). We know that \(A_{2}\) is not contained in a conjugate of \(A_{1}\), because if it were, then \(C_{2}\) would be contained in a conjugate of \(A_{1}\), contradicting the hypothesis. If \(A_{2}\) is contained in a conjugate of the second factor of (2)-(4), then by [3, Corollary 4.1.6] \(t_{2}\) must be in the same conjugate as \(A_{2}\) (indeed, if not, then by [3, Corollary 4.1.6] \(C_{2}=A_{2}\cap(A_{2})^{(t_{2})^{-1}}\) is in a conjugate of \(N_{A_{1}}(C_{1})\leq A_{1}\), contradicting the hypothesis). Hence \(G=HNN(A_{2},C_{2},t_{2})\) is contained in some conjugate of the second factor of (2)-(4), which is absurd because the second factor is a proper subgroup of \(G\). Thus \(A_{2}\) is not elliptic in \(S_{1}\).

**Lemma 5.10**.: _Let \(G\) be a finitely generated pro-\(2\) group with \(G=A_{1}\amalg_{C_{1}}B_{1}\) (or \(G=HNN(A_{1},C_{1},t_{1})\)) and \(G=A_{2}\amalg_{C_{2}}B_{2}\) (or \(G=HNN(A_{2},C_{2},t_{2})\)) two hyperbolic-hyperbolic \(\mathbb{Z}_{2}\)-splittings of \(G\), such that \(N_{G}(C_{1})\) is neither cyclic nor dihedral. Assume that \(N_{A_{1}}(C_{1})\neq A_{1}\). Then \(G\) splits over a group of order \(\leq 2\) (i.e., \(G\) can be written as \(G=G_{1}\amalg_{H}G_{2}\) or \(HNN(G_{1},H,t)\) with \(|H|\leq 2\))._

Proof.: By Lemma 5.9, \(G\) splits as an amalgamated free pro-\(2\) product \(G=A_{1}\amalg_{N_{A_{1}}(C_{1})}N\) over \(N_{A_{1}}(C_{1})\), and \(A_{2}\) is not elliptic with respect to this splitting. Let \(S_{1}\) be the standard tree of this splitting. Since \(N_{G}(C_{1})\) is neither cyclic nor dihedral, the kernel \(K_{1}\) of its action on \(D_{2}\) is non-trivial. The group \(K_{1}\) is elliptic in \(S_{1}\) and therefore so is \(C_{2}\). Note that by Proposition 2.9 (2), \(N_{A_{1}}(C_{1})\) is either cyclic or infinite dihedral. If the normalizer \(N_{A_{1}}(C_{1})\) is infinite cyclic, then by Theorem 4.2 \(G\) admits a decomposition as a free pro-\(2\) product. If the normalizer \(N_{A_{1}}(C_{1})\cong\mathbb{Z}_{2}\rtimes\mathbb{Z}/2\mathbb{Z}\), then \(N_{A_{1}}(C_{1})\) contains a cyclic normal subgroup \(C\) of index at most \(2\) generated by a hyperbolic element in \(T_{2}\). Then \([N_{A_{1}}(C_{1}):C]\leq 2\), and so, considering the action of \(A_{2}\) on \(S_{1}\), we see that \(|A_{2}\cap N_{A_{1}}(C_{1})|\leq 2\) since \(A_{2}\cap C=1\). It follows that the \(A_{2}\)-stabilizers of edges of \(S_{1}\) are either trivial or groups of order two. Then by Lemma 5.8 the group \(G\) splits over a group of order \(\leq 2\).

**Theorem 5.11**.: _Let \(G\) be a finitely generated pro-\(2\) group which does not split over a cyclic group of order \(\leq 2\)._
_Let \(G=A_{1}\amalg_{C_{1}}B_{1}\) (or \(G=HNN(A_{1},C_{1},t_{1})\)) and \(G=A_{2}\amalg_{C_{2}}B_{2}\) (or \(G=HNN(A_{2},C_{2},t_{2})\)) be two hyperbolic-hyperbolic \(\mathbb{Z}_{2}\)-splittings of \(G\). Suppose that \(N_{G}(C_{1})\) is neither cyclic nor dihedral. Then \(G=N_{G}(C_{1})=N_{G}(C_{2})\) is virtually abelian and isomorphic to one of the pro-\(2\) groups listed in (iii) or (iv) of Proposition 5.5._

Proof.: If \(G=A_{1}\amalg_{C_{1}}B_{1}\), then \(G\) admits a decomposition \[G=A_{1}\amalg_{N_{A_{1}}(C_{1})}(N_{G}(C_{1})\amalg_{N_{B_{1}}(C_{1})}B_{1}) \tag{5}\] If \(G=HNN(A_{1},C_{1},t)\), then one of the following holds:

* \[G=A_{1}\amalg_{N_{A_{1}}(C_{1})}HNN(N_{G}(C_{1}),N_{A_{1}^{t_{1}^{-1}}}(C_{1}),t_{1})\] (6)
* \[G=A_{1}\amalg_{N_{A_{1}}(C_{1})}N_{G}(C_{1})\] (7)

Therefore, by Lemma 5.10, the splitting over \(N_{A_{1}}(C_{1})\) has to be fictitious, i.e. \(A_{1}=N_{A_{1}}(C_{1})\). Suppose now the first \(\mathbb{Z}_{2}\)-splitting of \(G\) is an amalgamated free pro-\(2\) product \(G=A_{1}\amalg_{C_{1}}B_{1}\). We have \(A_{1}=N_{A_{1}}(C_{1})\), and swapping \(A_{1}\) and \(B_{1}\) in the first splitting of \(G\) we also have \(N_{B_{1}}(C_{1})=B_{1}\). Thus \(G=N_{G}(C_{1})=N_{A_{1}}(C_{1})\amalg_{C_{1}}N_{B_{1}}(C_{1})\) is isomorphic to one of the pro-\(2\) groups of Proposition 5.5 (iv).

If the first \(\mathbb{Z}_{2}\)-splitting of \(G\) is a pro-\(2\) HNN-extension, then in the case \(N_{G}(C_{1})=HNN(N_{A_{1}}(C_{1}),C_{1},t)\) we deduce from (7) that \(G=N_{G}(C_{1})\), since \(N_{A_{1}}(C_{1})=A_{1}\). Otherwise \(G=HNN(N_{G}(C_{1}),N_{A_{1}^{t_{1}^{-1}}}(C_{1}),t)\). Note that \([N_{A_{1}}(C_{1}):C_{1}]\leq 2\geq[N_{A_{1}^{t_{1}^{-1}}}(C_{1}):C_{1}]\), since \(N_{G}(C_{1})=N_{A_{1}}(C_{1})\amalg_{C_{1}}N_{A_{1}^{t_{1}^{-1}}}(C_{1})\) is virtually abelian by Remark 5.6, and otherwise, by Proposition 2.8, it would not be. Then, denoting by \(S_{1}^{\prime}\) the standard pro-\(2\) tree of this decomposition, we deduce as above from Lemma 5.8 that \(A_{2}\) fixes a vertex of \(S_{1}^{\prime}\) and hence is conjugate into \(N_{G}(C_{1})\). Thus w.l.o.g. we may assume that \(A_{2}\leq N_{G}(C_{1})\). If the second splitting is an amalgamation, then by [6, Theorem 4.3] \(C_{2}=A_{2}\cap B_{2}\leq(N_{A_{1}^{t_{1}^{-1}}}(C_{1}))^{h}\) for some \(h\in G\), a contradiction with \(C_{1}\cap C_{2}^{h}=1\). If the second splitting is an HNN-extension, then by [6, Theorem 4.3] \(C_{2}=A_{2}\cap A_{2}^{t_{2}}\leq(N_{A_{1}^{t_{1}^{-1}}}(C_{1}))^{h}\) for some \(h\in G\), contradicting \(C_{1}\cap C_{2}^{h}=1\) again, unless \(t_{2}\in N_{G}(C_{1})\), in which case \(G=N_{G}(C_{1})\). Thus \(G=N_{G}(C_{1})\) and we have either Case (iii) or Case (iv) of Proposition 5.5.
2309.01569
Rail Crack Propagation Forecasting Using Multi-horizons RNNs
The prediction of rail crack length propagation plays a crucial role in the maintenance and safety assessment of materials and structures. Traditional methods rely on physical models and empirical equations such as Paris' law, which often have limitations in capturing the complex nature of crack growth. In recent years, machine learning techniques, particularly Recurrent Neural Networks (RNNs), have emerged as promising methods for time series forecasting. They make it possible to model time series data and to incorporate exogenous variables into the model. The proposed approach involves collecting real data on the French rail network that includes historical crack length measurements, along with relevant exogenous factors that may influence crack growth. First, a pre-processing phase was performed to prepare a consistent data set for learning. Then, a suitable Bayesian multi-horizons recurrent architecture was designed to model the crack propagation phenomenon. Obtained results show that the Multi-horizons model outperforms state-of-the-art models such as LSTM and GRU.
Sara Yasmine Ouerk, Olivier Vo Van, Mouadh Yagoubi
2023-09-04T12:44:21Z
http://arxiv.org/abs/2309.01569v1
# Rail Crack Propagation Forecasting Using Multi-horizons RNNs

###### Abstract

The prediction of rail crack length propagation plays a crucial role in the maintenance and safety assessment of materials and structures. Traditional methods rely on physical models and empirical equations such as Paris' law, which often have limitations in capturing the complex nature of crack growth. In recent years, machine learning techniques, particularly Recurrent Neural Networks (RNNs), have emerged as promising methods for time series forecasting. They make it possible to model time series data and to incorporate exogenous variables into the model. The proposed approach involves collecting real data on the French rail network that includes historical crack length measurements, along with relevant exogenous factors that may influence crack growth. First, a pre-processing phase was performed to prepare a consistent data set for learning. Then, a suitable Bayesian multi-horizons recurrent architecture was designed to model the crack propagation phenomenon. Obtained results show that the Multi-horizons model outperforms state-of-the-art models such as LSTM and GRU.

Keywords: Crack propagation, Machine Learning, Time series.

## 1 Introduction

The French rail network has over 100,000 km of rail, including around 10,000 km for high-speed lines (LGV). The passage of rolling stock over these rails generates stresses in the rail, at the wheel-rail contact zone, which eventually leads to rolling contact fatigue. Defects resulting from this fatigue are monitored, and crack propagation is periodically checked, as a defect can propagate over several decades or a few months. When the length or depth of the crack becomes critical, it is imperative to correct the defect, otherwise there is a risk of rail break and potential derailment. Rolling contact fatigue is thus separated into two distinct phases: first the crack initiation, and then the crack propagation. In this paper, we focus on the latter and propose to build a predictive model that allows evaluating the residual life of an already existing crack before it reaches the critical threshold. This phenomenon can be partially explained by physical models, and many studies have been conducted to understand the impact of various parameters. Bonniot et al. showed that crack propagation in the rail is complex and follows mixed non-proportional propagation modes [1]. Crack propagation speed depends on the Stress Intensity Factor (SIF) identified from laboratory experiments, plastic deformation, friction between crack lips, wear and corrosion, and many other geometrical parameters such as initial crack width and direction, as shown by Fang et al. [2]. Moreover, other in-situ parameters are known to have an impact, such as track flexibility or acceleration and braking, and others are still not quantified, such as material decay over time. To deal with the lack of representativity of physical simulation in crack propagation modeling, we need to consider other parameters and phenomena, which leads to increasingly computationally expensive simulations, thus prohibiting their use to solve real-world problems. At the same time, the mass of real data collected on various characteristics such as "infrastructure" and "traffic" makes it possible to investigate the potential of data-driven models. The problem can be seen as a time series forecasting of the crack length.
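For reference, Paris' law mentioned above is the classical empirical relation linking the crack growth rate per load cycle to the range \(\Delta K\) of the stress intensity factor:

\[\frac{da}{dN}=C\,(\Delta K)^{m},\]

where \(a\) is the crack length, \(N\) the number of load cycles, and \(C\) and \(m\) are material constants fitted experimentally. The data-driven models investigated in this paper avoid committing to such a fixed parametric form.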
In this paper, we propose a multi-horizons approach to predict the propagation of rail cracks based on historical data, which we compare with state-of-the-art time series machine learning methods. The remainder of this paper is organized as follows. In Section 2 we present some recent related works. Section 3 describes the data processing and analysis required to build the different models that are presented in Section 4. The comparative results are discussed in Section 5, and Section 6 summarizes the contribution of this work and suggests directions for future research.

## 2 Related work

Time series forecasting is a fundamental task in various domains, encompassing finance, weather prediction, demand forecasting, and more. Over the years, traditional and deep learning models have played a pivotal role in advancing the accuracy and effectiveness of time series forecasting. Traditional approaches have been widely used, especially for univariate time series forecasting. Holt et al. introduced Exponential Smoothing (ES), a method commonly employed for time series forecasting [3]. ES methods recursively update the forecasted values by assigning exponentially decreasing weights to past observations. Simple Exponential Smoothing [4], Holt's Linear Exponential Smoothing [5], and Holt-Winters' Seasonal Exponential Smoothing [6] are variations of this approach. Autoregressive Integrated Moving Average (ARIMA) [7] is also a popular method for time series forecasting. It models the time series as a combination of autoregressive (AR), differencing (I), and moving average (MA) components. ARIMA models are widely used for stationary time series data. These traditional approaches have been widely used in time series forecasting and have provided valuable insights in various domains. However, they have certain limitations that can impact their effectiveness and accuracy. In fact, many traditional time series forecasting methods assume that the underlying data follows a stationary process, where the statistical properties remain constant over time. However, real-world data often exhibits non-stationarity, such as trends, seasonality, and changing statistical properties. Failing to account for non-stationarity can lead to inaccurate forecasts. Moreover, these methods primarily focus on historical time series data and may not naturally incorporate external factors. However, many forecasting problems benefit from including additional variables, such as weather data. While traditional time series forecasting approaches have their limitations, recent advancements in machine learning, such as deep learning models, aim to address some of these challenges and provide more accurate and flexible forecasting capabilities. Neural Networks (NN) have been widely used for time series forecasting and have achieved state-of-the-art performance in many applications. Neural networks, especially recurrent neural networks (RNNs) and their variants, have proven to be effective in capturing temporal dependencies and patterns in time series data. Moreover, there have been efforts to incorporate external factors or exogenous variables into time series forecasting models. These factors can include contextual information or additional time series that may influence the target variable. One of the most popular RNN architectures for time series forecasting is the Long Short-Term Memory (LSTM) network [8].
LSTMs are designed to address the vanishing gradient problem and are capable of learning long-term dependencies in sequential data. They have been successfully applied to various time series forecasting tasks, including stock market prediction, energy load forecasting, and weather forecasting. In recent years, other advanced variants of RNNs, such as Gated Recurrent Units (GRUs) [9] and Transformers [10], have also shown promising results in time series forecasting. GRUs are similar to LSTMs but have a simpler architecture, which makes them computationally more efficient. Transformers, originally introduced for natural language processing tasks, have been adapted for time series forecasting by leveraging self-attention mechanisms. Transformers have the advantage of parallel processing and have shown competitive performance in several domains.

## 3 Data description and processing

Collected real data can be divided into four different categories. Each time it was possible, categorical data were converted to numerical data.

* **Infrastructure data**: These data correspond to the network description. The interesting features to consider are all parameters that can change the vehicle dynamics, namely the rail linear mass (to take into account rail profile and vertical flexibility), sleeper type, rail grade, radius of curvature, cant, slope and side of the rail (left or right);
* **Traffic data**: These data correspond to the use of the network. The dynamic impact of rolling stock is considered via the maximal velocity allowed and the amount and number of acceleration and braking events. The rail loading is considered using annual tonnage (number of tonnes of vehicles seen by the rail) and the number and type of vehicles (passengers or goods);
* **Environment data**: These are data not related to the railway environment. The only environment data used here are temperatures and rain classified by type (low rain, strong storm, ice, snow,...);
* **Defect**: These data correspond to the state of the network. Here, three different defects were selected, which represent most rail defects on the French railway, namely squats (in three different parts of the rail). Each defect is discovered at a recorded date and regularly visited to check its evolution. Each time, parameters such as crack length and measurement date are recorded.

One last parameter is considered, called "UIC Group". It is strongly correlated with speed limit and tonnage and defines maintenance conditions. Through this parameter are thus included other data unavailable at the time of the study, such as grinding works. These data present a number of anomalies (inconsistent format, missing values, etc.), which necessitated a data preprocessing phase to obtain a consistent database to train the Machine Learning models. Note that crack data was the most challenging to process, for several reasons:

* Crack length values also present anomalies linked to database filling errors (negative values, exceeding certain thresholds, or considerable falls in values);
* Discovery dates happened between 2008 and 2018, and crack life before the defect is removed can vary from several months to several years;
* Visit dates at which the crack length is measured are manually and empirically planned; the duration between two visits can thus vary from one week to a couple of years;
* Cracks perceived as high-risk are frequently visited, which leads to sequences (time series) longer than others;
* Abrupt propagation has been observed for some defects.
This behaviour may be physically explained (e.g., caused by an extremely cold day), or may simply result from a human judgement to merge two spatially close defects;
* Abrupt reduction of the crack length, which can be due to rail grinding;
* Measurement uncertainty, which is a known issue and led to approximating the measured length to the closest multiple of 5.

#### Data processing

All the above information has been cross-referenced to create a single training dataset. The anomalies mentioned above were also addressed based on experts' knowledge of the data. To overcome the problem of irregular time steps in the time series, an interpolation was performed. A frequency of 3 months was chosen and a linear average was computed on all series, resulting in 3-month time-step series with a maximum length of 59 time steps. After this step, defects with a fall in values greater than \(15mm\) were removed from the database, to avoid introducing errors into the learning model. Drops in values of less than \(15mm\) are tolerated, as variations in measurement conditions, such as temperature variation, can lead to crack closure and reduce the measured size, as explained in [11]. The measurement is also subject to operator interpretation of the observed signal and can thus vary between operators.

#### Feature extraction

In the collected data, defect discovery dates vary widely, with some defects being more recent than others. To consider this information in the learning process, we set up an input variable that calculates the elapsed time since the defect was discovered. The crack propagation speed was also calculated between time steps, which can give the learning model an indication of how fast the crack length propagates in a given context. This information can only be used in the past horizon (the notion of horizon will be introduced in Section 4.2) and not in the future horizon, to avoid giving information on the lengths to be predicted. This feature extraction and selection resulted in 37 exogenous features for each time step in the time series.

## 4 Modeling approaches

### Feature based modeling

Initially, crack length values are considered unknown to the model. Only exogenous variables will be taken into account by the model to predict the corresponding crack lengths. As mentioned in the previous section, several variables are available. The time series are therefore multivariate, with several dynamic (evolving over time) or static features for the different time steps. For this configuration, sequences were created using a sliding window of size \(t\). The goal is to model the distribution of the crack length sequence, knowing its current context \(X_{1:t}\), as

\[P(Y_{1:t}|X_{1:t}). \tag{1}\]

where \(X_{1:t}\) represents the exogenous features (static and dynamic) per time step, and \(Y_{1:t}\) their corresponding crack length values to be predicted. Static features are encoded using Fully Connected (FC) layers; the dynamic features are also encoded using Fully Connected (FC) layers and then passed to one type of recurrent layer (RNN, LSTM or GRU), which handles the time dependency between time steps. These models are respectively called **RNN-FC**, **LSTM-FC** and **GRU-FC**.

### Considering the historical crack length values

For this new setup, a dataset containing crack length sequences was created using a sliding window of length \(t+k\) over our time series. Each position of the sliding window yields a sample in our dataset, with the first \(t\) values of crack lengths (the history) and their corresponding contextual features being the input of the past horizon, and the last \(k\) values (the forecasting horizon) being the output. The goal in this case is to model the distribution of the crack length sequence, knowing its historical features \(X_{1:t}\) and measurements \(Y_{1:t}\), as

\[P(Y_{t+1:t+k}|Y_{1:t},X_{1:t}). \tag{2}\]
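As a rough illustration of this windowing, a minimal sketch on one interpolated defect series could look as follows; it assumes NumPy arrays, and all names and shapes are illustrative rather than taken from the authors' code.

```python
import numpy as np

def sliding_window_samples(lengths, features, t=5, k=4):
    """Build (past lengths, past features, future lengths) samples.

    lengths:  (T,) crack length at each 3-month time step
    features: (T, d) exogenous features at each time step
    t, k:     sizes of the past and forecasting horizons
    """
    samples = []
    for start in range(len(lengths) - (t + k) + 1):
        y_past = lengths[start:start + t]              # Y_{1:t}, the history
        x_past = features[start:start + t]             # X_{1:t}, its context
        y_future = lengths[start + t:start + t + k]    # Y_{t+1:t+k}, the target
        samples.append((y_past, x_past, y_future))
    return samples
```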
As mentioned above, interpolation is used to deal with the problem of irregular time series. The interpolated values are calculated using a linear average. For some time series, the past horizon may contain interpolated length values after the last measured value. These values are calculated using crack length values from the prediction horizon, as explained in Figure 1. Introducing them to the learning model would therefore give information about future values that are supposed to be unknown to the model, and thus may introduce a bias into the learning process.

Figure 1: Example of interpolation for the last step before the prediction horizon

To avoid this problem, only interpolated values before the last measured value are included. For time steps interpolated after this step, the last measured value is used to replace the interpolated steps. As an example, in Table 1 we assume that the crack length values of the defect corresponding to the past horizon are the values in the first row. The "Last measured value" variable indicates the last measured (not interpolated) crack value; the "Time step is interpolated" variable indicates whether the time step corresponds to an interpolated or non-interpolated (measured) crack length value. The fifth time step is interpolated and is the last time step before the prediction horizon, so its value can give information about the first value in the prediction horizon. Consequently, this value is replaced by the last measured crack length value. The model input for the "historical crack length values" feature will then be the "Model input" variable in the table.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Crack length & 30 & 32.5 & 35 & 35 & **38.125** \\ \hline Last measured value & 30 & 30 & 35 & 35 & **35** \\ \hline Time step is interpolated & No & Yes & No & No & **Yes** \\ \hline Model input & 30 & 32.5 & 35 & 35 & **35** \\ \hline \end{tabular} \end{table} Table 1: Example of model input of crack length values in the past horizon with last crack length value replacement

It should be noted that the model will be less accurate with this modification, but at least it avoids biasing it with information it is not supposed to know. Some variables have been added to indicate whether the time step is interpolated and, if so, the number of time steps since the last measurement. This reduces the effect of this replacement on performance.

#### Simple Recurrent model

For this model, only historical exogenous characteristics and the corresponding crack length values are considered. These variables are passed to the recurrent layer (LSTM/GRU), then their latent representation is passed to some fully connected layers in order to infer crack length values in the future. These models are called **LSTM-FC-LH** and **GRU-FC-LH**, where LH refers to the historical crack lengths.

#### Multi-horizons recurrent model

In a second step, a model was implemented to consider both the historical context \(X_{1:t}\) and lengths \(Y_{1:t}\), as well as the current context \(X_{t+1:t+k}\). The aim is to model the distribution

\[P(Y_{t+1:t+k}|Y_{1:t},X_{1:t},X_{t+1:t+k}). \tag{3}\]

This model is a recurrent neural network with multiple time horizons. It consists of a past horizon, which takes as input exogenous variables and historical crack length measurements, and a future prediction horizon, which takes as input the encoded output from the past horizon as well as current contextual variables in order to infer future crack length values, as described in Figure 2. The general architecture of the multi-horizons model is shown in Figure 3.

Figure 2: Scheme of the prediction model

Figure 3: Architecture of the multi-horizons recurrent model
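A minimal sketch of this two-horizon design is given below, assuming PyTorch; the layer types and sizes are illustrative assumptions and do not reproduce the authors' exact architecture (the FC feature encoders and the Bayesian head introduced later are omitted).

```python
import torch
import torch.nn as nn

class MultiHorizonsRNN(nn.Module):
    """Past-horizon encoder and future-horizon decoder (illustrative sketch)."""

    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        # Past horizon: encodes exogenous features plus observed crack lengths.
        self.past_rnn = nn.GRU(n_features + 1, hidden_size, batch_first=True)
        # Future horizon: decodes future contextual variables, seeded with
        # the encoded output of the past horizon.
        self.future_rnn = nn.GRU(n_features, hidden_size, batch_first=True)
        self.output_layer = nn.Linear(hidden_size, 1)

    def forward(self, x_past, y_past, x_future):
        # x_past: (B, t, n_features), y_past: (B, t, 1), x_future: (B, k, n_features)
        _, h = self.past_rnn(torch.cat([x_past, y_past], dim=-1))
        out, _ = self.future_rnn(x_future, h)       # past encoding as initial state
        return self.output_layer(out).squeeze(-1)   # (B, k) predicted crack lengths
```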
For all the models described above, a customized Mean Squared Error (\(MSE\)) loss has been used for learning. This loss is an \(MSE\) loss that ignores the padded time steps in order to avoid introducing bias to the model.

#### Bayesian Multi-horizons recurrent model

As mentioned above, crack length measurements are subject to uncertainty. This uncertainty is related to the data quality: it cannot be reduced by adding more data, but it can be quantified. This type of uncertainty is called _aleatoric_ uncertainty and captures inherent noise in the observations. The learning model itself may also be uncertain regarding its predictions, due to a lack of learning data for example. This is called _epistemic_ uncertainty and can be reduced by observing more data. The multi-horizons model described above has been adapted, based on a Bayesian approach suggested by Kendall et al. [12], to allow uncertainty estimation in parallel with model prediction. This model is called the Bayesian Multi-horizons model (B-MH). The B-MH model output is composed of a predictive mean \(\hat{y}\) as well as a predictive variance \(\hat{\sigma}^{2}\). The general architecture of the model remains unchanged, with only the last fully connected layers duplicated in order to output both \(\hat{y}\) and \(\hat{\sigma}^{2}\). \(\hat{y}\) represents the predictive mean crack length and \(\hat{\sigma}^{2}\) its predictive variance. A Gaussian likelihood is used to model the aleatoric uncertainty, as the available crack length values follow a Gaussian distribution. This induces the minimization loss function for a given sequence \(x_{i}\),

\[L_{\text{B\_MH}}\ =\ \frac{1}{N_{i}}\sum_{j=1}^{N_{i}}\frac{1}{2\hat{\sigma}(x_{ij})^{2}}||y_{ij}-\hat{y}_{ij}||^{2}+\frac{1}{2}\log(\hat{\sigma}(x_{ij})^{2}), \tag{4}\]

where \(\hat{\sigma}(x_{ij})^{2}\) is the predictive variance for time step \(j\) of the sequence \(x_{i}\), \(\hat{y}_{ij}\) its predictive mean and \(N_{i}\) the number of time steps in the sequence \(x_{i}\). The variance \(\hat{\sigma}^{2}\) is implicitly learnt from the loss function. The division of the residual loss \(||y_{ij}-\hat{y}_{ij}||^{2}\) (which represents the \(MSE\) loss) by \(\hat{\sigma}(x_{ij})^{2}\) makes the model more robust to noisy data. In fact, data for which the model has learned to predict a high uncertainty will have a lower effect on the loss. The second, regularization term prevents the network from predicting infinite uncertainty. For numerical stability, and to avoid dividing by zero or predicting a negative variance, the term \(\hat{\sigma}(x_{ij})^{2}\) is replaced by the term \(s_{ij}=\log(\hat{\sigma}(x_{ij})^{2})\). The weights of the two terms in the equation have been set to \(\frac{2}{3}\) and \(\frac{1}{3}\) respectively, to give more weight to the \(MSE\) than to the regularization term, resulting in the minimization function

\[L_{\text{B\_MH}}\ =\ \frac{1}{N_{i}}\sum_{j=1}^{N_{i}}\frac{2}{3}\exp(-s_{ij})||y_{ij}-\hat{y}_{ij}||^{2}+\frac{1}{3}s_{ij}. \tag{5}\]
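A minimal sketch of this weighted loss, together with the Monte Carlo dropout sampling described in the next paragraph, could look as follows in PyTorch; the model is assumed to return the predictive mean and \(s=\log\hat{\sigma}^{2}\), and masking of padded steps is omitted for brevity.

```python
import torch

def bmh_loss(y, y_hat, s):
    """Eq. (5): s = log(sigma^2), with term weights 2/3 and 1/3."""
    return ((2.0 / 3.0) * torch.exp(-s) * (y - y_hat) ** 2
            + (1.0 / 3.0) * s).mean()

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Sample stochastic forward passes with dropout kept active at inference."""
    model.train()  # keeps dropout layers active; a real implementation would
                   # force any normalisation layers back into eval mode
    means, variances = [], []
    for _ in range(n_samples):
        y_hat, s = model(x)              # assumed (mean, log-variance) output
        means.append(y_hat)
        variances.append(torch.exp(s))
    means = torch.stack(means)
    epistemic = means.pow(2).mean(0) - means.mean(0) ** 2
    aleatoric = torch.stack(variances).mean(0)
    return means.mean(0), epistemic + aleatoric   # total variance as in Eq. (6)
```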
To quantify the uncertainty, a dropout approach [13] is used as a Bayesian approximation. The model is trained with dropout before every weight layer. Contrary to what is usually done for a network trained with dropout layers, dropout remains activated during inference to generate stochastic rather than deterministic outputs. \(T\) stochastic prediction samples are performed using dropout, which allows approximating the predictive uncertainty for one observation as

\[\text{Var}(y)\ \approx\ \Big(\frac{1}{T}\sum_{t=1}^{T}\hat{y}_{t}^{2}-\big(\frac{1}{T}\sum_{t=1}^{T}\hat{y}_{t}\big)^{2}\Big)+\frac{1}{T}\sum_{t=1}^{T}\hat{\sigma}_{t}^{2}, \tag{6}\]

with \(\{\hat{y}_{t},\hat{\sigma}_{t}^{2}\}_{t=1}^{T}\) the set of \(T\) sampled outputs after each forward pass. The first term of this total variance corresponds to the epistemic uncertainty and the second one corresponds to the aleatoric uncertainty.

## 5 Experiments

### Data preparation for Learning

Whatever the model used for learning, the generated time series have been pre-processed to ensure that the learning models function correctly. Since a maximum size is chosen for the prediction horizon, not all the generated series have the same length. Series shorter than the maximum length have been completed by adding zeros at the end, so that they all have the same length. These padded time steps are ignored when calculating the cost functions, by implementing custom functions that exclude them from backpropagation. After this step, the dataset is divided into three parts: a 60% training set for the learning procedure, a 20% validation set for hyperparameter optimization and convergence control, and a 20% test set for performance evaluation. The adopted division strategy ensures that the subsequences of a given defect series belong to only one of the three previous sets. The time series are then normalized using a custom time series standard scaler, so that their mean is 0 and their standard deviation is 1. This makes the model much more robust to outliers. Min-max normalization has also been tested, but gives slightly poorer results.

### Settings

The work has been implemented in Python using PyTorch. All the experiments are conducted using an NVIDIA A40 GPU. The Adam optimizer is used to perform the gradient descent minimization of the loss function. The activation function used is \(Tanh\) for all hidden layers. The convergence of the models is checked for learning rates from \(10^{-1}\) to \(10^{-4}\), and for different batch sizes. The models perform best with a learning rate of \(0.001\) and a batch size of \(128\). The models are also fitted over a variable number of epochs; the classical recurrent models converge after about \(25\) epochs, and the multi-horizons models converge after \(10\) epochs.

To benchmark the different models, several ML and physical metrics are used to compare their performances. \(MAE\) and \(RMSE\) errors are used as machine learning metrics. Other physical criteria are considered to avoid violations of physical constraints, such as a drop in the crack length, a phenomenon that should not occur physically (the crack can either progress or remain constant). These physical criteria are:

* **MSQNS**, for Mean SeQuence Negative Slope, is the percentage of sequences that contain at least one fall in the predicted values;
* **MSTNS**, for Mean STeps Negative Slope, is the percentage of steps that contain a fall in the predicted values;
* **MLNS**, for Mean Length Negative Slope, is the mean value of the fall in predicted length values.
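A minimal sketch of these criteria on a batch of predicted horizons might look as follows, assuming NumPy arrays and a binary mask marking real (non-padded) steps; this is an illustration, not the LIPS implementation.

```python
import numpy as np

def evaluate_horizon(y_true, y_pred, mask):
    """y_true, y_pred: (N, k) horizons; mask: (N, k), 1 = real step, 0 = padding."""
    err = (y_pred - y_true) * mask
    mae = np.abs(err).sum() / mask.sum()
    rmse = np.sqrt((err ** 2).sum() / mask.sum())
    # A crack should never shrink, so negative slopes violate physics.
    falls = -np.diff(y_pred, axis=1)              # positive entries are drops
    drops = falls > 0
    msqns = 100 * drops.any(axis=1).mean()        # % of sequences with >= 1 fall
    mstns = 100 * drops.mean()                    # % of steps showing a fall
    mlns = falls[drops].mean() if drops.any() else 0.0  # mean fall size (mm)
    return mae, rmse, msqns, mstns, mlns
```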
As a reminder, the observation time series themselves contain drops in values of up to \(15\)mm. The computation of the evaluation criteria for all reported experiments in this paper is performed using the recently proposed LIPS framework for benchmarking learned physical systems [14].

### Experiments with simple configuration (without historical crack length values)

For this modeling, there is no notion of horizons in the generation of sequences. Generated sequences are of size \(4\) (we need to anticipate crack length values over a period of one year with a time step of \(3\) months). As previously stated, only exogenous variables are considered for prediction. Recurrent models were compared using the various ML and physical criteria defined above. This comparison is made in particular for the average score over the \(4\) time steps to be predicted (mean MAE and mean RMSE), as well as for the scores linked to the prediction of the first time step (MAE \(1^{st}\) and RMSE \(1^{st}\)), as shown in Table 2.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Model} & MAE & Mean & RMSE & Mean & \multirow{2}{*}{MLNS} & \multirow{2}{*}{MSQNS} & \multirow{2}{*}{MSTNS} \\ & \(1^{st}\) & MAE & \(1^{st}\) & RMSE & & & \\ \hline **RNN-FC** & 10.48 & 10.47 & 13.67 & 13.66 & 1.72 & 29\% & 8\% \\ \hline **GRU-FC** & **9.65** & **9.45** & **12.60** & **12.38** & 2.59 & 24\% & 6\% \\ \hline **LSTM-FC** & 10.54 & 10.53 & 13.75 & 13.72 & **1.18** & **3\%** & **1\%** \\ \hline \end{tabular} \end{table} Table 2: ML and Physical results for the recurrent models without using historical crack length values

The results show that the GRU-FC model outperforms LSTM-FC and RNN-FC in terms of machine learning criteria. The LSTM-FC and RNN-FC models have quite similar ML results, but the LSTM-FC model gives the best results in terms of physical criteria.

### Experiments considering historical crack length values

**Experiments with recurrent models.** For this modeling, time series were created using a sliding window of size 9, with a past horizon of size 5 and a prediction horizon of size 4. The size of the past horizon containing historical crack values was chosen as 5 time steps, inspired by [15], which suggests that a past horizon of size \(1.25\times k\) (\(k\) being the size of the prediction horizon) gives the best prediction results. Table 3 shows the ML and physical criteria for the recurrent models that consider historical crack length values. ML scores include the MAE for the different time steps in the prediction horizon (from \(t+1\) to \(t+4\)) and their average value, and the RMSE score for the first time step of the prediction horizon and the average score over the entire prediction horizon.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Model} & MAE & MAE & MAE & MAE & Mean & RMSE & Mean & \multirow{2}{*}{MLNS} & \multirow{2}{*}{MSQNS} & \multirow{2}{*}{MSTNS} \\ & 1 & 2 & 3 & 4 & MAE & \(1^{st}\) & RMSE & & & \\ \hline **LSTM-FC-LH** & **2.37** & **3.05** & **3.85** & **4.51** & **3.45** & **4.72** & **6.01** & 1.16 mm & 1\% & 0.15\% \\ \hline **GRU-FC-LH** & **2.37** & 3.11 & **3.85** & 4.58 & 3.49 & 4.77 & 6.06 & **1.07** & **0.5\%** & **0.13\%** \\ \hline \end{tabular} \end{table} Table 3: ML and Physical results for the recurrent models considering historical crack length values

The LSTM-FC-LH model gives slightly better results than the GRU-FC-LH. For the physical criteria, this time it is the GRU-FC-LH model that gives slightly better results.

**Experiments with the Multi-horizons and Bayesian Multi-horizons models.** For this modeling, a number of past horizon sizes were tested to see their effect on the various criteria to be minimized. Tables 4 and 5 show the results of the different ML and physical criteria of the multi-horizons model and the Bayesian multi-horizons model with different past horizon sizes.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{**dim.hp**} & nb\_sequences & MAE & Mean & RMSE & Mean & MSQNS & MSTNS & MLNS \\ & train & \(1^{st}\) & MAE & \(1^{st}\) & RMSE & \% & \% & mm \\ \hline \hline **1** & 294018 & 1.22 & 2.41 & 2.50 & 4.38 & 2.95 & 0.79 & 1.14 \\ \hline **2** & 265519 & 1.15 & 2.29 & 2.39 & 4.22 & 2.85 & 0.76 & 1.13 \\ \hline **3** & 238222 & 1.51 & 2.58 & 2.82 & 4.54 & 4.58 & 1.20 & 1.18 \\ \hline **4** & 216021 & 1.28 & 2.26 & 2.50 & 4.26 & 14.09 & 3.61 & 1.11 \\ \hline **5** & 193901 & 1.54 & 2.64 & 2.62 & 4.33 & 2.87 & 0.74 & 1.13 \\ \hline **6** & 173598 & 1.27 & 2.29 & 2.43 & 4.13 & 6.58 & 1.69 & 1.08 \\ \hline **7** & 158040 & 1.39 & 2.31 & 2.58 & 4.13 & 4.80 & 1.22 & 1.17 \\ \hline **8** & 141407 & 1.33 & 2.17 & 2.43 & 4.05 & 8.61 & 2.22 & 1.08 \\ \hline **9** & 126847 & 1.33 & 2.10 & 2.39 & 3.91 & 6.78 & 1.74 & 1.16 \\ \hline **10** & 113175 & 1.23 & 2.15 & 2.33 & 3.96 & 14.91 & 3.80 & 1.14 \\ \hline \end{tabular} \end{table} Table 4: ML and physical criteria results for the multi-horizons model considering different past horizon lengths for prediction

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{**dim.hp**} & nb\_sequences & MAE & Mean & RMSE & Mean & MSQNS & MSTNS & MLNS \\ & train & \(1^{st}\) & MAE & \(1^{st}\) & RMSE & \% & \% & mm \\ \hline **1** & 294018 & 0.86 & 2.21 & 2.40 & 4.28 & 1.99 & 0.52 & 1.09 \\ \hline **2** & 265519 & 0.94 & 2.26 & 2.32 & 4.22 & 1.51 & 0.39 & 1.14 \\ \hline **3** & 238222 & 0.90 & 2.21 & 2.44 & 4.30 & 1.05 & 0.29 & 1.13 \\ \hline **4** & 216021 & 1.20 & 2.23 & 2.63 & 4.29 & 3.56 & 0.91 & 1.05 \\ \hline **5** & 193901 & 0.94 & 2.19 & 2.44 & 4.19 & 1.30 & 0.35 & 1.09 \\ \hline **6** & 173598 & 0.98 & 2.13 & 2.37 & 4.06 & 2.87 & 0.73 & 1.04 \\ \hline **7** & 158040 & 1.28 & 2.31 & 2.58 & 4.13 & 1.41 & 0.37 & 1.07 \\ \hline **8** & 141407 & 1.13 & 2.15 & 2.38 & 4.00 & 9.83 & 2.49 & 1.04 \\ \hline **9** & 126847 & 1.13 & 2.06 & 2.31 & 3.87 & 6.20 & 1.58 & 1.02 \\ \hline **10** & 113175 & 1.02 & 1.96 & 2.34 & 3.93 & 2.94 & 0.78 & 1.09 \\ \hline \end{tabular} \end{table} Table 5: ML and physical criteria results for the Bayesian multi-horizons model (B-MH) considering different past horizon lengths for prediction

Good results can already be obtained from a single measurement in the past horizon. The size of the training set decreases as the size of the past horizon increases, due to the filtering of sequences to respect the minimum size.
The choice of the size of the past horizon is conditioned both by the criteria to be minimized as far as possible and by industrial use. Indeed, information on historical measurements is sometimes available for just 1 or 2 time steps, which corresponds to three months or less, but we still want to predict crack lengths in the future, because some cracks might have exceeded the security threshold before 6 months. The model must therefore be able to make predictions even with a limited past horizon size.

Figures 4 and 5 show the ML scores (MAE and RMSE) for the multi-horizons and Bayesian multi-horizons models, respectively, using different past horizon lengths; these scores are presented in detail over the entire prediction horizon. Model errors increase with distance from the past horizon. The Bayesian multi-horizons model outperforms the multi-horizons model over the entire forecast horizon.

Figure 4: MAE and RMSE scores for the prediction horizon using the multi-horizons model with different past horizon lengths

Figure 5: MAE and RMSE scores for the prediction horizon using the Bayesian multi-horizons model with different past horizon lengths

Figure 6 shows the scatter plots for each time step in the prediction horizon. The x-axis and the y-axis correspond to the measured and predicted values of crack length, respectively. There is a high density around the \(y=x\) line, which explains the good prediction scores.

Figure 6: Actual vs predicted crack length values over the prediction horizon
There are, however, some miss-predicted values, especially when crack lengths become large, where the model tends to underestimate them. This result is mainly due to the small percentage of large crack length values in the dataset.

#### Uncertainty quantification using the Bayesian multi-horizons model

As described above, uncertainty quantification is performed after the training of the model using Monte Carlo dropout sampling. Dropout is set after each layer (except the last one) and 50 Monte Carlo samples were generated for each time series. Then, the sum of the two types of uncertainty is calculated using equation 6. The dropout rate was varied from 10% to 50%. Aleatoric uncertainty does not vary widely, as it is linked to the inherent noise of the data. Epistemic uncertainty, on the other hand, increases as the dropout rate is increased, since it is linked to the learning model. As a result, total uncertainty increases, as does the size of the confidence interval, resulting in higher coverage.
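As a rough sketch, such a coverage check could be computed as follows, assuming NumPy arrays; the margin argument corresponds to the 5 mm allowance discussed in the next paragraph.

```python
import numpy as np

def interval_coverage(y_true, y_mean, y_var, z=1.96, margin=0.0):
    """Percentage of time steps inside the approximate 95% interval.

    margin widens the band, e.g. margin=5.0 for measurements rounded
    to the closest multiple of 5 mm.
    """
    half_width = z * np.sqrt(y_var) + margin
    covered = np.abs(y_true - y_mean) <= half_width
    return 100 * covered.mean()
```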
For the rest of this study, a dropout rate of 10% is set after each layer, and an approximate 95%-level prediction confidence interval is constructed. Results show that only 48% of time steps are covered by this confidence interval. Indeed, as mentioned above, all the measured length values were approximated to the closest multiple of 5, which led us to add a threshold of 5 to the confidence interval. This time, about 93% of time steps are covered by the new confidence interval. Figure 7 shows some examples of crack length propagation, the corresponding predicted values and the estimated uncertainty values. Example 1 is the case of a crack whose final value becomes significant (around \(80mm\)). The predicted values are very close to the measurements, but the corresponding epistemic uncertainty is quite high. This can be explained by the fact that the training set contains less than 3% of measurements \(\geq 80mm\). Example 2 is an example of propagation with decreasing values. The model underestimates crack lengths for the first few predicted values, then converges to the measured values at the end. However, falling values can be considered as inherent data noise or measurement errors, resulting in a high aleatoric uncertainty for this example.

## 6 Conclusion and future works

Predicting the propagation of cracks in rails is a critical issue for optimizing the maintenance operations across the rail network. This task is intrinsically complex and cannot be handled simply with physical simulations. In this paper, we proposed a deep learning approach based on real data collected on the rail. Obtained results show that the multi-horizons model outperforms conventional recurrent models such as GRU. The Bayesian multi-horizons model performs even better, and makes it possible to quantify both aleatoric and epistemic uncertainties. Several avenues of improvement can be investigated in future work, in particular the calibration of models to predict more accurate uncertainties, as proposed in [16]. We also aim at combining recurrent layers with attention layers that assign different weights to the hidden states based on their significance for forecasting the crack lengths. Finally, the hybridization of ML methods and physical simulations is also part of the work in progress. Indeed, information provided by physical simulation can contribute to enriching the variables of the learned model, such as the wheel load of the vehicle rolling on the rail, and thus improve the prediction performance.
2301.02642
Triple-stream Deep Metric Learning of Great Ape Behavioural Actions
We propose the first metric learning system for the recognition of great ape behavioural actions. Our proposed triple stream embedding architecture works on camera trap videos taken directly in the wild and demonstrates that the utilisation of an explicit DensePose-C chimpanzee body part segmentation stream effectively complements traditional RGB appearance and optical flow streams. We evaluate system variants with different feature fusion techniques and long-tail recognition approaches. Results and ablations show performance improvements of ~12% in top-1 accuracy over previous results achieved on the PanAf-500 dataset containing 180,000 manually annotated frames across nine behavioural actions. Furthermore, we provide a qualitative analysis of our findings and augment the metric learning system with long-tail recognition techniques showing that average per class accuracy -- critical in the domain -- can be improved by ~23% compared to the literature on that dataset. Finally, since our embedding spaces are constructed as metric, we provide first data-driven visualisations of the great ape behavioural action spaces revealing emerging geometry and topology. We hope that the work sparks further interest in this vital application area of computer vision for the benefit of endangered great apes.
Otto Brookes, Majid Mirmehdi, Hjalmar Kühl, Tilo Burghardt
2023-01-06T18:36:04Z
http://arxiv.org/abs/2301.02642v1
# Triple-stream Deep Metric Learning of Great Ape Behavioural Actions

###### Abstract

We propose the first metric learning system for the recognition of great ape behavioural actions. Our proposed triple stream embedding architecture works on camera trap videos taken directly in the wild and demonstrates that the utilisation of an explicit DensePose-C chimpanzee body part segmentation stream effectively complements traditional RGB appearance and optical flow streams. We evaluate system variants with different feature fusion techniques and long-tail recognition approaches. Results and ablations show performance improvements of \(\sim\) 12% in top-1 accuracy over previous results achieved on the PanAf-500 dataset containing 180,000 manually annotated frames across nine behavioural actions. Furthermore, we provide a qualitative analysis of our findings and augment the metric learning system with long-tail recognition techniques showing that average per class accuracy - critical in the domain - can be improved by \(\sim\) 23% compared to the literature on that dataset. Finally, since our embedding spaces are constructed as metric, we provide first data-driven visualisations of the great ape behavioural action spaces revealing emerging geometry and topology. We hope that the work sparks further interest in this vital application area of computer vision for the benefit of endangered great apes. We provide all key source code and network weights alongside this publication.

Animal Biometrics, Multi-stream Deep Metric Learning, Animal Behaviour, Great Apes, PanAf-500 Dataset

## 1 Introduction

As the climate crisis gathers pace, the threat to many endangered species grows ever more perilous (Almond et al., 2022). All species of great apes are, for instance, listed as endangered or critically endangered according to the IUCN Red List (IUCN, 2022). Consequently, there is an urgent need for methods that can help to monitor population status and assess the effectiveness of conservation interventions (Kuhl and Burghardt, 2013; Congdon et al., 2022; Tuia et al., 2022). This includes the recognition of behaviours and variation therein, as an integral part of biological diversity (Dominoni et al., 2020; Carvalho et al., 2022).

Figure 1: **System Overview**. Our proposed triple-stream metric learning approach utilises all RGB appearance, optical flow, and DensePose-C segmentations of chimps in videos. Exploiting hybrid reciprocal triplet and cross entropy losses, the model is then trained to map embeddings representing great ape behavioural actions onto a metric space, where semantically similar representations are geometrically close, forming natural clusters. This pipeline improves on state-of-the-art classification performance and allows for visualisations of the underpinning space of behavioural actions. (best viewed zoomed)

Previous works have employed deep neural networks which leverage multiple modalities, such as RGB, optical flow, and audio (Sakib and Burghardt, 2020; Bain et al., 2021), for the classification of great ape behaviours and actions. However, higher-level abstractions such as _pose_ or _body part_ information have remained unexplored for addressing this task. In response, we propose utilising the latter _together_ with RGB and optical flow in a triple-stream metric learning system (see Fig. 1) for improved classification results and domain visualisations relevant to biologists.
**Great Ape Activities -** This paper will focus on _great ape activity recognition_, where the coarse activity classes used are illustrated in Fig. 2 for the utilised PanAf-500 dataset (see Sec. 3). Note that computer vision would traditionally categorise these classes as actions, whilst in the biological realm they represent behaviour (or aspects thereof) often captured in ethograms (Nishida et al., 1999; Zamma and Matsusaka, 2015). For clarity, in this paper we will refer to these classes as _behavioural actions_, recognising historical traditions in both disciplines. We will approach the classification task via a deep _metric_ learning system (Karaderi et al., 2022) that embeds inputs into a latent space and uses geometric distances to form distributions that align with the semantic similarity captured by the classes (Hermans et al., 2017; Musgrave et al., 2020). A major advantage over standard supervised systems is that sample distances in visualisations of the latent space always relate to learned similarity and, thus, are more naturally interpretable by experts. We will also analyse the role that additional DensePose-Chimp information (Sanakoyeu et al., 2020) can play in improving recognition performance compared to systems that utilise RGB and optical flow only. Lastly, as shown by Sakib and Burghardt (Sakib and Burghardt, 2020), there are significant challenges in correctly classifying behavioural actions which occur infrequently and form the distribution tail (see Fig. 2). To address this, we will employ three long-tailed recognition (LTR) techniques to improve performance on tail classes: (i) logit adjustment (Menon et al., 2020); (ii) class-balanced focal loss (Cui et al., 2019); and (iii) weight balancing (Alshammari et al., 2022).

In summary, our contributions are as follows: (i) we implement the first deep _metric_ learning system for recognising great ape behavioural actions; (ii) we show that utilising explicit pose information has a significant positive effect on recognition performance in this domain; and (iii) we establish that existing LTR techniques can be applied in a metric learning setting to improve performance on tail classes for the problem. The proposed approaches improve the state-of-the-art performance benchmarks with respect to top-1 (\(\sim\) 85%) and average per class (\(\sim\) 65%) accuracy on the PanAf-500 dataset.

## 2 Related Work

Action recognition aims to classify actions observed in video (Kalfaoglu et al., 2020; Shaikh and Chai, 2021). Learning spatio-temporal features characteristic for actions (Simonyan and Zisserman, 2014) via various deep learning paradigms forms the approach of choice in the domain of human action recognition (HAR). We will briefly review concepts from this field, before discussing specific relevant great ape behavioural action recognition and LTR methods.

**Human Action Recognition -** Although there are numerous deep learning approaches to action recognition (Zhou et al., 2018; Lin et al., 2019; Tran et al., 2019; Kalfaoglu et al., 2020; Pan et al., 2019; Majd and Safabakhsh, 2020; Sharir et al., 2021; Zhang et al., 2021), this work focuses on multi-stream architectures, which address key aspects of the action recognition problem (e.g., spatial and temporal) independently and explicitly.

Figure 2: **Behavioural Actions in the PanAf-500 Data**. Examples of each one of the nine behavioural action classes (_top_) and their distribution across the approx. 180k frames in the dataset (_bottom_). Note the imbalance of two orders of magnitude in the distribution. (best viewed zoomed)
Feichtenhofer et al. (Feichtenhofer et al., 2019) introduced the SlowFast architecture, which employs two streams, each operating at a different frame rate; a slow, low frame-rate pathway captures spatial information while the fast, high frame-rate pathway captures fine temporal detail. Other types of multi-stream networks process different visual modalities. Simonyan and Zisserman (Simonyan and Zisserman, 2014) introduced a two-stream network that processes RGB and optical flow to exploit spatial and temporal semantics, respectively. Since then, several networks that utilise additional modalities, such as motion saliency (Zong et al., 2021) and audio (Wang et al., 2021), have been introduced. Recently, the introduction of pose, which is critical for the perception of actions (Le et al., 2022), has shown promising results in multi-stream architectures (Hong et al., 2019; Hayakawa and Dariush, 2020; Duan et al., 2021; Li et al., 2022). In particular, the DensePose format provides an opportunity to exploit fine-grained, segmentation map-based pose representations for action recognition. Hayakawa et al. (Hayakawa and Dariush, 2020) combine RGB and DensePose estimations in a two-stream network and demonstrate strong performance on egocentric footage of humans. Whilst such significant progress has been made in the domain of HAR, research into great ape behavioural action recognition is still in its infancy and few systems have been tested on natural datasets.

**Great Ape Domain -** To date, two systems have attempted automated great ape behavioural action recognition; both are multi-stream architectures. The first (Sakib and Burghardt, 2020) is based on the two-stream convolutional architecture by Simonyan et al. (Simonyan and Zisserman, 2014) and used 3D ResNet-18s for feature extraction and LSTM-based fusion of RGB and optical flow features. They report a top-1 accuracy of 73.52% across the nine behavioural actions in the PanAf-500 dataset (see Sec. 3) and a relatively low average per class accuracy (42.33%), highlighting the issue of tail class performance. The second, proposed by Bain et al. (Bain et al., 2021), is a deep learning system that requires both audio and video inputs and detects two specific behaviours: buttress drumming and nut cracking. Their system utilised a 3D ResNet-18 and a 2D ResNet-18 for the extraction of visual and assisting audio features, respectively, in different streams. They achieved an average precision of 87% for buttress drumming and 85% for nut cracking on their unpublished dataset. However, the multi-modal method is not applicable to all camera trap settings since many older camera models do not provide audio. It cannot be utilised on the PanAf-500 dataset since many clips there do not contain audio.

**Long-tailed Recognition -** Most naturally recorded data exhibits long-tailed class distributions (Liu et al., 2019). This is true of great ape camera-trap footage, which is dominated by commonly occurring behaviours - even with only the nine classes of the PanAf-500 data the distribution shows a clear tail (see Fig. 2). Without addressing this issue, models trained on such data often exhibit poor performance on rare classes. Various counter-measures have been proposed (Verma et al., 2018; Kang et al., 2019; Zhang et al., 2021).
Class-balanced losses assign additional weights, typically determined by inverse class frequencies, to samples from rare classes and have yielded strong results when coupled with techniques to reduce per-class redundancy (Cui et al., 2019). Similarly, logit adjustment uses class frequencies to directly offset output logits in favour of minority classes during training (Menon et al., 2020). An orthogonal approach, based on the observation that weight norms for rare classes are smaller in naively trained classifiers, is to perform weight balancing (Alshammari et al., 2022). These techniques have achieved strong results on several LTR benchmarks. Before detailing how we use triple-stream metric learning with explicit DensePose-Chimp processing and LTR extensions for behavioural action recognition, we will briefly outline the utilised dataset.

## 3 Dataset

The _Pan-African_ dataset, gathered by the Pan African Programme: 'The Cultured Chimpanzee', comprises \(\sim\) 20,000 videos from footage gathered at 39 study sites spanning 15 African countries. Here we utilise a 500-video subset, PanAf-500, specifically ground-truth labelled for use in computer vision under reproducible and comparable benchmarks. It includes frame-by-frame annotations for full-body locations of great apes and nine behavioural actions (Sakib and Burghardt, 2020) across approximately 180k frames (see Fig. 3). Fig. 2 displays the behavioural action classes in focus together with their distribution. We utilised the PanAf-500 dataset for all experiments and employ the same training and test partitions described in (Sakib and Burghardt, 2020).

## 4 Method

The proposed system utilises three visual modalities as input: RGB, optical flow, and DensePose-C estimations (Sanakoyeu et al., 2020), as illustrated in Fig. 1. All optical flow images are pre-computed using OpenCV's implementation of the Dual TV-L1 algorithm (Zach et al., 2007). We employ the model developed by Sanakoyeu et al. (Sanakoyeu et al., 2020) to generate DensePose-C segmentations describing chimpanzee pose. The model predicts dense correspondences between image pixels and a 3D object mesh, where each mesh represents a chimpanzee body part specified by a selector \(I\) and local surface coordinates within each mesh indexed by \(U\) and \(V\). Frame-by-frame application to each of the PanAf-500 videos yields DensePose-C estimates expressed in \(IUV\) coordinates. Each of the three input modalities is fed into a 3D ResNet-50 (Du Tran et al., 2017) backbone; together, these act as a feature extractor (see Fig. 1). The input tensors into the backbones are 3D since inputs are processed in snippets; that is, each stream accepts a sequence of \(n\) consecutive RGB frames, optical flow images, or \(IUV\) coordinates, respectively. The final fully-connected layer outputs an \(n\)-dimensional encoding for each stream. These are fused into a single embedding using three popular approaches: (i) simple averaging across streams; (ii) convolutional fusion, whereby stream features are concatenated and passed to a 3D convolutional layer as a volume; and (iii) element-wise multiplication of all three embedding vectors followed by \(L2\) normalisation. The latter two approaches are illustrated in detail in Fig. 4. A linear layer at the end of the fusion head finally outputs the unified embedding as logits. Whilst this system was trained via metric learning - visually sketched in Fig. 1 (right) - a \(k\)-NN classifier is used to perform inference in the embedding space during evaluation.
Whilst this system was trained via metric learning - visually sketched in Fig. 1 (right) - a \(k\)-NN classifier is used to perform inference in the embedding space during evaluation. Let \(f_{\theta}(\cdot)\) denote this network with parameters \(\theta\), and let \(x\) be shorthand for the embedding \(f_{\theta}(x)\) of an input. Our metric learning objective is, thus, to minimise the distance between anchor-positive embedding pairs \(d(x_{a},x_{p})\) and maximise the distance between anchor-negative embedding pairs \(d(x_{a},x_{n})\), where \(d\) is the Euclidean distance. Instead of using the standard triplet loss (Hermans et al., 2017) \(L_{TL}\), we use an improved version (Andrew et al., 2021), where the model is optimised via a hybrid reciprocal triplet and softmax cross-entropy loss: \[L_{RC}=L_{CE}+\lambda\ L_{RT}. \tag{1}\] It is assembled from two components balanced by \(\lambda=0.1\) as given in (Andrew et al., 2021). The two components themselves are evaluated as: \[L_{RT}=d(x_{a},x_{p})+\frac{1}{d(x_{a},x_{n})}, \tag{2}\] \[L_{CE}=-\log\left(\frac{e^{x_{y}}}{\sum_{i=1}^{C}e^{x_{i}}}\right), \tag{3}\] where \(C\) denotes the total number of classes and \(y\) is the class label. In order to extend this system into the LTR domain, we substitute the softmax cross-entropy term with losses calculated using: (i) cross-entropy softmax with logit adjustment (Menon et al., 2020) \(L_{LA}\); (ii) class-balanced focal loss (Cui et al., 2019) \(L_{CB}\); and (iii) class-balanced focal loss with weight balancing (Alshammari et al., 2022). The first two losses are evaluated as follows: \[L_{LA}=-\log\left(\frac{e^{x_{y}+\tau\cdot\log\pi_{y}}}{\sum_{i=1}^{C}e^{x_{i}+\tau\cdot\log\pi_{i}}}\right), \tag{4}\] \[L_{CB}=-\ \frac{1-\beta}{1-\beta^{n_{y}}}\sum_{i=1}^{C}(1-p_{i})^{\gamma}\log(p_{i}), \tag{5}\] where \(\pi\) represents the class priors (i.e., class frequencies in the training set), the temperature factor is \(\tau=1\), \(\beta=0.99\) is the re-weighting hyper-parameter, \(n_{y}\) is the number of training samples of class \(y\), \(\gamma=1\) is the focal loss hyper-parameter, and \(p_{i}=\sigma(x_{i})\). Balancing the network weights \(\theta\) is performed via a MaxNorm constraint \(\|\theta_{l,i}\|_{2}^{2}\leq\delta^{2},\forall i\) given in (Alshammari et al., 2022), imposed on each class filter \(i\) in the last layer \(l\) of the network, where \(\delta\) is the L2-norm ball radius. We refer to \(L_{CB}\)-based optimisation combined with this weight balancing as \(L_{WB}\). Methodologically, this described architecture approaches the learning of behavioural great ape actions via five key capabilities: 1) utilisation of multiple relevant input modalities across an entire video snippet; 2) effective streamed content encoding; 3) fusion into a single embedding space; 4) metric space optimisation so that distances naturally reflect semantic similarity; and 5) taking into account class imbalances common to the domain content.

Figure 4: **Fusion Head Schematics**. A component breakdown of fusion by element-wise multiplication (_left_) and convolutional fusion (_right_) as applied for our work to explore their impact on performance.

Figure 3: **Frame-by-frame Ground Truth Annotations**. Four still frames from PanAf-500 videos with annotations of location (green boxes) and behavioural actions (visualised as text) of the apes in-frame (best viewed zoomed).
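For illustration, minimal PyTorch-style implementations of the hybrid loss and the two LTR substitutes might look as follows. This is a sketch under assumptions: the function names are hypothetical, and \(L_{CB}\) is written in its common softmax form following Cui et al.'s class-balanced weighting, whereas Eq. (5) is stated with sigmoid activations.

```python
import torch
import torch.nn.functional as F

def reciprocal_triplet(x_a, x_p, x_n):
    """L_RT = d(x_a, x_p) + 1 / d(x_a, x_n) with Euclidean distances (Eq. 2)."""
    return (F.pairwise_distance(x_a, x_p)
            + 1.0 / F.pairwise_distance(x_a, x_n)).mean()

def hybrid_loss(logits, labels, x_a, x_p, x_n, lam=0.1):
    """L_RC = L_CE + lambda * L_RT (Eq. 1), with lambda = 0.1."""
    return F.cross_entropy(logits, labels) + lam * reciprocal_triplet(x_a, x_p, x_n)

def logit_adjusted_ce(logits, labels, class_priors, tau=1.0):
    """L_LA (Eq. 4): offset the logits by tau * log(prior) before the softmax."""
    return F.cross_entropy(logits + tau * torch.log(class_priors), labels)

def class_balanced_focal(logits, labels, samples_per_class, beta=0.99, gamma=1.0):
    """Class-balanced focal loss: focal term re-weighted by (1-beta)/(1-beta^n_y)."""
    weights = (1.0 - beta) / (1.0 - beta ** samples_per_class[labels])
    log_p = F.log_softmax(logits, dim=1).gather(1, labels[:, None]).squeeze(1)
    return (weights * (1.0 - log_p.exp()) ** gamma * (-log_p)).mean()
```

Here `class_priors` and `samples_per_class` are per-class tensors derived from the training label frequencies.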
## 5 Experiments

### General Training Setup

We train our architecture via SGD optimisation using batch size 32 and learning rate \(10^{-4}\). Feature extractor backbones are initialised with Kinetics-400 (Kay et al., 2017) pre-trained weights, and training runs are distributed over 8 Tesla V100 GPUs for 100 epochs.

### Baselines and Stream Ablations

As shown in Tab. 1, we first establish performance benchmarks for one- and two-stream baseline architectures of our system (rows 2-5) against the current state-of-the-art (row 1), which uses a ResNet-18 backbone with focal loss \(L_{FL}\), SGD, and LSTM-based frame fusion (Sakib and Burghardt, 2020). As expected, we confirmed that - using identical setups and losses - adding an optical flow stream is beneficial in the great ape domain, mirroring HAR results (see rows 2 vs 4, and 3 vs 5). Additionally, models trained using \(L_{RC}\) consistently outperformed standard triplet loss \(L_{TL}\) scenarios (see rows 2 vs 3, and 4 vs 5). Finally, a dual-stream version of our proposed architecture trained with \(L_{RC}\) outperforms the state-of-the-art by a small margin (see rows 1 vs 5).

### Triple-Stream Recognition

As given in Tab. 1 rows 6-8, our proposed triple-stream architecture significantly outperforms all baselines with regards to top-1 accuracy, achieving up to 85.86%. Thus, explicit DensePose-C information appears to be a useful information source for boosting behavioural action recognition in great apes. However, without LTR techniques all our triple-stream models are significantly outperformed by a dual-stream setting (row 5) with regards to average per-class accuracy. This reduction is caused by significantly poorer performance on minority classes (see Sec. 5.4). Since the learned behavioural action embeddings are constructed as a metric space from the outset, they can be visualised meaningfully - we note that such data-driven visualisations are novel in the primatology domain. Fig. 5 depicts such learned spaces for our data and architecture where, independent of stream cardinality, embeddings cluster the training data cleanly. This is of course expected given above 99% top-1 _training_ accuracy in all settings. Yet, behavioural actions of great apes are highly intricate as well as variable and, even with approx. \(144,000\) training frames used, the model clearly shows signs of overfitting. As a result, test set embeddings exhibit significant cluster overlap. Sample groups representing sitting, standing, and walking, for instance, blend into one another. In addition to overfitting, this also highlights the transitional nature of these often temporally adjacent and smoothly changing actions. Thus, future temporally transitional ground truth labelling may be needed to represent behavioural great ape action in the PanAf-500 dataset more authentically.
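The 2D projections of the embedding space in Fig. 5 (and later Fig. 6) can be reproduced in a few lines; the sketch below uses scikit-learn's t-SNE as a stand-in, with array names, file names, and parameters as illustrative assumptions rather than the exact settings used for the figures.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# embeddings: (num_clips, 128) float array; labels: (num_clips,) int array
embeddings = np.load("test_embeddings.npy")  # hypothetical file names
labels = np.load("test_labels.npy")

points = TSNE(n_components=2, perplexity=30).fit_transform(embeddings)
plt.scatter(points[:, 0], points[:, 1], c=labels, s=4, cmap="tab10")
plt.title("Behavioural action embedding space (t-SNE)")
plt.show()
```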
### Fusing Streams

When looking at the impact of information fusion methods on performance in more detail, we find that benchmarks vary significantly (see Tab. 1 rows 6-8) when we test averaging, element-wise multiplication, and convolutional fusion, as described in Sec. 4. Results show that convolution and element-wise multiplication improve performance slightly across both metrics when compared with averaging: top-1 accuracy improves by 0.33% and 4.1%, respectively (see rows 6-8). However, the most significant gains are observed with respect to average per-class accuracy, which increases by 3.44% for element-wise multiplication and 9.7% for convolutional fusion. Learnable parameters in the convolution method clearly help blending information even when only fewer samples are available for training. Building on this improvement, we will next investigate the impact of LTR methods in order to benefit tail class performance.

\begin{table}
\begin{tabular}{l l l l l}
\hline \hline
**Models/Streams** & **Fusion** & **Loss** & **Top-1** & **C-Avg** \\
\hline \hline
\multicolumn{5}{l}{**Sakib et al. 2020**} \\
1 _RGB\(+\)OF_ & LSTM & \(L_{FL}\) & **73.52\%** & **42.33\%** \\
\hline
\multicolumn{5}{l}{**Up to Dual-Stream**} \\
2 _RGB only_ & None & \(L_{TL}\) & 55.50\% & 32.67\% \\
3 _RGB only_ & None & \(L_{RC}\) & 74.24\% & 55.76\% \\
4 _RGB\(+\)OF_ & Avg & \(L_{TL}\) & 62.90\% & 39.10\% \\
5 _RGB\(+\)OF_ & Avg & \(L_{RC}\) & **75.02\%** & **61.97\%** \\
\hline \hline
\multicolumn{5}{l}{**Triple-Stream (Ours)**} \\
6 _RGB\(+\)OF\(+\)DP_ & Avg & \(L_{RC}\) & 81.71\% & 46.61\% \\
7 _RGB\(+\)OF\(+\)DP_ & Conv & \(L_{RC}\) & 82.04\% & **56.31\%** \\
8 _RGB\(+\)OF\(+\)DP_ & Elem & \(L_{RC}\) & **85.86\%** & 50.50\% \\
\hline \hline
\end{tabular}
\end{table}
Table 1: **Behavioural Action Recognition Benchmarks**. Top-1 and average per-class (C-Avg) accuracy on the PanAf-500 dataset for the current state-of-the-art (row 1), single- and dual-stream baselines (rows 2-5), and our triple-stream networks (rows 6-8) for the different fusion methodologies and losses tested.

### Long-tail Recognition

When grouping behavioural actions into _head_ (covering sitting, standing, and walking) and remaining _tail_ classes based on frequency in the data (see Fig. 2), a significant performance gap becomes apparent even when using the so far best C-Avg performing model (see Tab. 2 row 1). Employing LTR techniques can, however, reduce this gap and improve average per-class accuracy further, as quantified across rows 2-4 in Tab. 2. Fig. 6 shows t-SNE visualisations of the three LTR triple-stream approaches when trained with convolutional feature fusion. Particularly for the class-balanced approaches and weight-balancing setups (two rightmost), _tail_ class clusters appear more clearly separated and class overlap is generally reduced. Thus, for the great ape domain, underrepresented classes are indeed an effective source of information for improving action separability in general.

## 6 Conclusion

In this work we introduced the first deep metric learning system for great ape behavioural action recognition. We demonstrated that the proposed triple-stream architecture can provide leading state-of-the-art performance when tested on the PanAf-500 camera trap dataset covering 180,000 annotated frames across 500 videos taken in the wild. We demonstrated that the addition of a DensePose-C chimpanzee pose estimation stream into the embedding architecture is highly effective and leads to system performance of 85.86% top-1 accuracy on the data. We also showed that adding LTR techniques that address poor tail class performance to the system can improve the average per-class accuracy to 65.66% on the dataset. Despite these improvements, we note that both larger annotated datasets to counteract overfitting as well as more temporally blended forms of annotation (e.g. action transition annotations) would benefit the authenticity of data-driven great ape behavioural representations.
We hope that the research presented here sparks further interest in this vital application area for the benefit of endangered species such as great apes.

Figure 5: **Visualisations of Great Ape Behavioural Action Spaces**. A 2D t-SNE (Wattenberg et al., 2016) visualisation of the 128-dimensional training (top-right) and test (bottom-right) embeddings produced by the single, dual and three-stream network with convolutional fusion. We can see that training set embeddings from all classes are clustered cleanly. In contrast, test set embeddings show significant overlap and only embeddings from majority classes form distinct clusters. This is consistent with the high top-1 accuracy and relatively low average per-class accuracy reported in Tab. 1.

## Acknowledgements

We thank the Pan African Programme: 'The Cultured Chimpanzee' team and its collaborators for allowing the use of their data for this paper. We thank Amelie Pettrich, Antonio Buzharevski, Eva Martinez Garcia, Ivana Kirchmair, Sebastian Schutte, Linda Gerlach and Fabina Haas. We also thank management and support staff across all sites; specifically Yasmin Moebius, Geoffrey Muhanguzi, Martha Robbins, Henk Eshuis, Sergio Marrocoli and John Hart. Thanks to the team at [https://www.chimpandsee.org](https://www.chimpandsee.org), particularly Briana Harder, Anja Landsmann, Laura K. Lynn, Zuzana Machackova, Heidi Pfund, Kristeena Sigler and Jane Widness. The work that allowed for the collection of the dataset was funded by the Max Planck Society, Max Planck Society Innovation Fund, and Heinz L. Krekeler. In this respect we would like to thank: Ministre des Eaux et Forets, Ministere de l'Enseignement superieur et de la Recherche scientifique in Cote d'Ivoire; Institut Congolais pour la Conservation de la Nature, Ministere de la Recherche Scientifique in Democratic Republic of Congo; Forestry Development Authority in Liberia; Direction Des Eaux Et Forets, Chasses Et Conservation Des Sols in Senegal; Makerere University Biological Field Station, Uganda National Council for Science and Technology, Uganda Wildlife Authority, National Forestry Authority in Uganda; National Institute for Forestry Development and Protected Area Management, Ministry of Agriculture and Forests, Ministry of Fisheries and Environment in Equatorial Guinea. This work was supported by the UKRI CDT in Interactive AI under grant EP/S022937/1.
2304.10498
Regret-Minimizing Double Oracle for Extensive-Form Games
By incorporating regret minimization, double oracle methods have demonstrated rapid convergence to Nash Equilibrium (NE) in normal-form games and extensive-form games, through algorithms such as online double oracle (ODO) and extensive-form double oracle (XDO), respectively. In this study, we further examine the theoretical convergence rate and sample complexity of such regret minimization-based double oracle methods, utilizing a unified framework called Regret-Minimizing Double Oracle. Based on this framework, we extend ODO to extensive-form games and determine its sample complexity. Moreover, we demonstrate that the sample complexity of XDO can be exponential in the number of information sets $|S|$, owing to the exponentially decaying stopping threshold of restricted games. To solve this problem, we propose the Periodic Double Oracle (PDO) method, which has the lowest sample complexity among regret minimization-based double oracle methods, being only polynomial in $|S|$. Empirical evaluations on multiple poker and board games show that PDO achieves significantly faster convergence than previous double oracle algorithms and reaches a competitive level with state-of-the-art regret minimization methods.
Xiaohang Tang, Le Cong Dinh, Stephen Marcus McAleer, Yaodong Yang
2023-04-20T17:39:02Z
http://arxiv.org/abs/2304.10498v2
# Regret-Minimizing Double Oracle for Extensive-Form Games

###### Abstract

By incorporating regret minimization, double oracle methods have demonstrated rapid convergence to Nash Equilibrium (NE) in normal-form games and extensive-form games, through algorithms such as online double oracle (ODO) and extensive-form double oracle (XDO), respectively. In this study, we further examine the theoretical convergence rate and sample complexity of such regret minimization-based double oracle methods, utilizing a unified framework called Regret-Minimizing Double Oracle. Based on this framework, we extend ODO to extensive-form games and determine its sample complexity. Moreover, we demonstrate that the sample complexity of XDO can be exponential in the number of information sets \(|S|\), owing to the exponentially decaying stopping threshold of restricted games. To solve this problem, we propose the Periodic Double Oracle (PDO) method, which has the lowest sample complexity among all existing double oracle methods, being only polynomial in \(|S|\). Empirical evaluations on multiple poker and board games show that PDO achieves significantly faster convergence than previous double oracle algorithms and reaches a competitive level with state-of-the-art regret minimization methods.

## 1 Introduction

Extensive-form games (EFGs) have been extensively employed in modeling sequential decision-making problems, including auctions, security games, and poker. However, solving such games remains challenging in real-world applications due to various complexities, including the game's large size and imperfect information. While linear programming (LP) (Von Neumann & Morgenstern, 1947) and sequence-form LP (Koller & Megiddo, 1992) can be used to compute the Nash equilibrium (NE) of EFGs (Nash Jr, 1950), direct calculation of the exact NE via LP can become infeasible for large real-world games due to memory constraints and the high cost of matrix inversion. Therefore, more efficient methods are required to solve EFGs for real-world applications. The Double Oracle (DO) (McMahan et al., 2003) algorithm family has been developed to address the complexity of solving large Extensive-Form Games (EFGs) by solving a sequence of restricted games, where players can only select actions from a subset of pure strategies in the original game. These restricted games are typically much smaller than the original EFG, especially when the support of the Nash equilibrium (NE) is small (Wilson, 1972; Koller & Megiddo, 1996; Bosansky et al., 2014). For instance, in symmetric normal-form games with random entries, the NE's support size is only half of the game size (Jonasson et al., 2004). By constructing a solution to the large game using only a small subset of pure strategies, the DO algorithm can effectively solve large EFGs. The DO algorithms can also be combined with deep reinforcement learning (Lanctot et al., 2017; Muller et al., 2019; McAleer et al., 2022b;a) to achieve state-of-the-art performance on large games like Stratego (McAleer et al., 2020) and Starcraft (Vinyals et al., 2019). Recent advances in game theory have led to the development of methods that integrate the Double Oracle (DO) algorithm with regret minimization techniques. For example, in large normal-form games, the Online Double Oracle (ODO) algorithm combines the DO approach with the Multiplicative Weights Update (MWU) (Freund & Schapire, 1999) regret minimizer for restricted games (Dinh et al., 2022).
In extensive-form games, the Counterfactual Regret Minimization (CFR) algorithm has been successfully used to achieve superhuman performance in Texas Hold'em Poker and other games (Zinkevich et al., 2007; Brown et al., 2019; Lanctot et al., 2009; Farina et al., 2020; McAleer et al., 2023). To further improve the performance of DO in this setting, the Extensive-Form DO (XDO) algorithm applies CFR to the DO framework and can support games with high-dimensional continuous action spaces (McAleer et al., 2021). However, theoretical questions regarding the convergence speed, expected iterations, and sample complexity of DO methods in EFGs have yet to be thoroughly investigated and remain an open area of research (Bosansky et al., 2014). In this study, a general theoretical framework is presented to investigate algorithms that combine DO and regret minimization, providing their expected number of iterations and sample complexity to reach \(\epsilon\)-NE. This is, to our knowledge, the first work analyzing the theoretical convergence rate of DO in EFGs. The paper presents two applications of the framework. In the first example, we extend ODO to solve EFGs and determine the corresponding sample complexity. In the second example, we prove that the stopping condition for restricted games in the Extensive-Form DO (XDO) algorithm may lead to a worst-case convergence in a number of iterations exponential in the number of information sets \(|S|\). To address this issue, we propose Periodic Double Oracle (PDO), which has only a polynomial sample complexity in \(|S|\), lower than that of both ODO and XDO. Empirical assessments on typical poker and board games demonstrate that PDO achieves much faster convergence compared to XDO and ODO, reaching a level competitive with state-of-the-art regret minimization methods.

## 2 Preliminaries

### Two-Player Zero-Sum Extensive-Form Games

In this paper, we focus on Two-Player Zero-Sum Extensive-Form Games (**EFGs**) with perfect recall. Our notation is based on that of Brown (2020). EFGs are represented by a game tree whose decision nodes each belong to a player \(i\in\mathcal{P}=\{1,2\}\). Imperfect-information EFGs employ a **chance player** \(c\) to model stochastic events such as card dealing in poker. A **history** \(h\) is a sequence of actions and chance events that uniquely identifies a node on the game tree. \(A(h)\) is the set of available actions at \(h\), and \(P(h)\) denotes the player who needs to make a decision at \(h\). Terminal histories, collected in the set \(Z\), are attached to nodes where the game's payoff can be determined. The payoff at a terminal history \(z\in Z\) is treated as the value at \(z\), denoted \(v_{i}(z)\). The range of the payoff is represented by \(\Delta\). The **Information Set (Infoset/Infostate \(s_{i}\))** of each player \(i\in\mathcal{P}\) is a set of histories that are indistinguishable from player \(i\)'s perspective. The set \(S_{i}\) contains the infosets where player \(i\) must make decisions, and \(S\) represents the union of the information sets of all players: \(S=\cup_{i\in\mathcal{P}}S_{i}\). The set \(A(s_{i})\) includes all available actions of player \(i\) at \(s_{i}\). The **Strategy** of player \(i\) is represented by \(\pi_{i}\), with \(\pi_{i}(s_{i},a)\) representing the probability of player \(i\) taking action \(a\in A(s_{i})\) at the information set \(s_{i}\); the joint strategy is \(\pi=(\pi_{1},\pi_{2})\). The **Reaching probability** \(x^{\pi}(h)\) is the probability of reaching \(h\) when players use the joint strategy \(\pi\).
Specifically, \(x^{\pi}(h)=\prod_{i\in\mathcal{P}\cup\{c\}}x_{i}^{\pi}(h)\), where \(x_{i}^{\pi}(h)=\prod_{h^{\prime}\cdot a\sqsubseteq h,\;P(h^{\prime})=i}\pi_{i}(h^{\prime},a)\) is player \(i\)'s contribution. Given a joint strategy \(\pi=(\pi_{1},\pi_{2})\), we can define the **expected value** of history \(h\) for player \(i\), denoted by \(v_{i}^{\pi}(h)\). If the reaching probability \(x^{\pi}(h)=0\), then \(v_{i}^{\pi}(h)=0\); otherwise, we have \[v_{i}^{\pi}(h)=\sum_{z\in Z}\frac{x^{\pi}(z)}{x^{\pi}(h)}v_{i}(z),\;x^{\pi}(h)>0, \tag{1}\] and the _bilinear_ value function of strategy \(\pi\): \[v_{i}(\pi_{i},\pi_{-i})=\sum_{z\in Z}v_{i}^{\pi}(z)x^{\pi}(z). \tag{2}\] Then the **best response (BR)** is \(\mathbb{BR}_{i}(\pi_{-i})=\arg\max_{\pi_{i}}v_{i}(\pi_{i},\pi_{-i})\). An \(\epsilon\)**-Nash Equilibrium (NE)** strategy \(\pi^{*}\) satisfies, for every \(i\in\mathcal{P}\), \(\min_{\pi_{-i}}v_{i}(\pi_{i}^{*},\pi_{-i})+\epsilon\geq v_{i}(\pi^{*})\geq\max_{\pi_{i}}v_{i}(\pi_{i},\pi_{-i}^{*})-\epsilon\). In particular, an exact NE is an \(\epsilon\)-NE with \(\epsilon=0\). **Exploitability** is \(e(\pi)=\sum_{i\in\mathcal{P}}v_{i}(\mathbb{BR}(\pi_{-i}),\pi_{-i})-v_{i}(\pi)\). The **Support Size** of an NE \(\pi^{*}\) denotes the number of actions that have positive probability at infoset \(s\) in the NE \(\pi^{*}\), denoted by \(\text{supp}^{\pi^{*}}(s)\) (Schmid et al., 2014).

### Regret Minimization Algorithms

Regret minimization methods approximate the NE in EFGs if the algorithm has a sublinear regret upper bound, i.e., an average regret converging to zero as the iteration count \(T\) goes to infinity. In this context, we introduce different regret minimization algorithms and demonstrate how these algorithms ensure convergence to NE.

**Definition 2.1** (Regret).: Given a sequence of strategies \(\{\pi^{t}\mid t=1,\cdots,T\}\) delivered by an algorithm, the regret of this algorithm is defined as: \[R_{i}^{T}=\sum_{t=1}^{T}\max_{\pi}v_{i}(\pi,\pi_{-i}^{t})-v_{i}(\pi_{i}^{t},\pi_{-i}^{t}), \tag{3}\] and the _average_ regret is \(\bar{R}_{i}^{T}=R_{i}^{T}/T\).

Vanilla Counterfactual Regret Minimization (CFR) (Zinkevich et al., 2007) is a regret minimization algorithm that aims to minimize counterfactual regret at each infoset by traversing the full game tree depth-first. It achieves this by calculating the players' expected values \(v_{i}(\cdot)\) based on equation 1 and computing instantaneous regrets at iteration \(t\leq T\) of taking action \(a\) in infoset \(s\) following \(r_{i}^{t}(s,a)=\sum_{h\in s}x_{-i}^{\pi^{t}}(h)[v_{i}^{\pi^{t}}(h\cdot a)-v_{i}^{\pi^{t}}(h)]\). The average counterfactual regrets of CFR are computed by a uniform average of \(r_{i}^{t}\) over all iterations: \(R_{i}^{T}(s,a)=\sum_{t=1}^{T}r_{i}^{t}(s,a)/T\). In two-player games, if both players apply regret matching (Zinkevich et al., 2007) to strategy updates, the regret of CFR is bounded by \(\Delta|S_{i}|\sqrt{|A_{i}|}\cdot T^{1/2}\). Discounted regret minimization methods aim to minimize a weighted-average regret in which the strategy in each iteration is discounted: \[\bar{R}_{i}^{T}=\sum_{t=1}^{T}w_{t}\left[\max_{\pi}v_{i}(\pi,\pi_{-i}^{t})-v_{i}(\pi_{i}^{t},\pi_{-i}^{t})\right]/\sum_{t=1}^{T}w_{t}. \tag{4}\] The discounted CFR (DCFR) (Brown and Sandholm, 2019) is a regret minimization framework that belongs to this family and is based on the counterfactual regret minimization algorithm. DCFR employs weighted-average counterfactual regrets and a weighted-average strategy to achieve faster convergence.
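Both vanilla CFR and its discounted variants rely on regret matching to turn the accumulated (counterfactual) regrets at an infoset into a strategy. For illustration only, a minimal sketch of that update:

```python
import numpy as np

def regret_matching(cum_regret):
    """Map cumulative regrets over the actions of one infoset to a strategy:
    play in proportion to positive regret, falling back to uniform play."""
    positive = np.maximum(cum_regret, 0.0)
    total = positive.sum()
    if total > 0.0:
        return positive / total
    return np.full(len(cum_regret), 1.0 / len(cum_regret))

# Example: regrets favour actions 0 and 2; action 1 is never played.
print(regret_matching(np.array([2.0, -1.0, 1.0])))  # -> [0.667, 0., 0.333]
```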
DCFR has an upper bound on weighted-average regret whenever the weights \(w_{t}\) satisfy \(\sum_{t=1}^{\infty}w_{t}=\infty\). The weighted-average regret's upper bound is \(6\Delta|S_{i}|(\sqrt{|A|}+\frac{1}{T})\cdot T^{-1/2}\). DCFR can be generalized to other CFR variants such as vanilla CFR, CFR+, and Linear CFR with appropriate hyperparameters (Tammelin, 2014; Brown and Sandholm, 2019; Brown, 2020). To facilitate analysis, we adopt a uniform notation for the upper bound of regret throughout the remaining sections of this paper. Specifically, we use \(\mathcal{O}(|S_{i}|\sqrt{|A_{i}|}T^{-1/2})\) as the _average_ regret bound of CFR algorithms, including vanilla CFR and DCFR.

#### 2.2.1 Regret to Nash Equilibrium

A widely accepted folk lemma states that if both players in a two-player zero-sum game adopt an algorithm with a sublinear regret bound, then the average strategies of both players converge to a Nash Equilibrium (Cesa-Bianchi and Lugosi, 2006; Blum and Monsour, 2007). Here we prove the discounted-regret version with the same idea.

**Lemma 2.2**.: _Given the weighted-average regret of an algorithm \(\mathcal{O}(|S_{i}|\sqrt{|A_{i}|}/\sqrt{T})\), the weighted-average strategy of this algorithm is a \(\mathcal{O}(|S|\sqrt{|A|}/\sqrt{T})\)-NE._

The proof is in Appendix B.2. Therefore, if the weighted-average regret converges to zero, the resulting weighted-average strategy is a reliable approximation that converges to NE.

### Double Oracle Methods

Double Oracle (DO) (McMahan et al., 2003) is a technique initially developed for solving Normal-form Games (NFGs). It maintains a population of pure strategies for both players, denoted \(\Pi_{t}\) at time \(t\), and creates a restricted game by considering only the actions in \(\Pi_{t}\). The restricted game's NE is then obtained using linear programming, and the best response to this NE is added to the population. This process repeats until the population no longer changes. DO's advantage is that it solves large games by only solving a sequence of small restricted games, since the restricted game usually stops growing while it is still small, particularly when the support of the NE is small (i.e., the NE strategy places non-zero probability on only a few actions) (Wilson, 1972; Koller and Megiddo, 1996). Table 2 in Appendix C includes the support size of the NE in some common games. However, traditional algorithms such as linear programming can still be intractable in large restricted games. To address this issue, Online Double Oracle (ODO) (Dinh et al., 2022) uses regret minimization for strategy updates in the restricted games, leading to better empirical performance than DO in large normal-form games. Converting extensive-form games (EFGs) into normal-form games and solving them with the normal-form Double Oracle (DO) is theoretically feasible. However, representing EFGs in normal form results in an exponential increase in the number of required iterations for convergence (McAleer et al., 2021). Therefore, a specific DO algorithm for EFGs is necessary. The first DO method for EFGs, Sequence-form DO (SDO), was proposed by Bosansky et al. (2014). This approach uses the sequence-form LP to compute the exact NE of the restricted game. Moreover, SDO introduces an efficient technique for expanding the restricted game in extensive-form algorithms, where each iteration involves adding sequences instead of one pure strategy to the population. To address the challenge of solving large restricted games, McAleer et al.
(2021) extends SDO to Extensive-Form DO (XDO), which employs CFR to approximate the restricted game's NE and reinforcement learning methods to approximate the best response. Since XDO adds actions at every information set, it expands the restricted game much faster than SDO. The formal process of XDO is given in Algorithm 1, which is rearranged to count one iteration as the completion of one strategy update in the restricted game, followed by computing a BR if required. ``` Input: initial threshold \(\epsilon_{0}\), time window index \(j=0\), uniform random strategy \(\pi^{0}\). \(\Pi_{1}=\mathbf{B}\mathbf{R}_{i}(\pi^{0})\) for \(i\in\{1,2\}\). Construct restricted game \(\mathbf{G}_{1}\) with \(\Pi_{1}\). for\(t=1,\cdots,\infty\)do Run one iteration of CFR in \(\mathbf{G}_{t}\). if Exploitability in \(\mathbf{G}_{t}\), \(e(\bar{\pi}^{t})\leq\epsilon_{0}/2^{j}\)then \(\Pi_{t+1}=\Pi_{t}\cup\mathbf{B}\mathbf{R}(\bar{\pi}_{-i}^{t})\) for \(i\in\{1,2\}\). \(j=j+1\). Reset \(\pi^{t+1}\). Construct restricted game \(\mathbf{G}_{t+1}\) with \(\Pi_{t+1}\). endif endfor ``` **Algorithm 1** XDO (McAleer et al., 2021) To demonstrate the efficacy of DO for EFGs, we present a basic example of XDO (SDO) in a two-player zero-sum EFG, as depicted in Figure 4. DO methods can compute the NE in EFGs with a small-support NE without solving the original game. XDO has been empirically shown to converge rapidly in small-support games (McAleer et al., 2021), but it lacks a theoretical analysis of the convergence rate for achieving an approximate NE. In the following section, we introduce a general framework that generalizes to both XDO and extensive-form ODO, and provide a theoretical analysis of their convergence rates. Based on this analysis, we propose a more sample-efficient DO algorithm for EFGs.

## 3 Regret-Minimizing Double Oracle

In this section, we propose Regret-Minimizing Double Oracle (RMDO), a novel generic Double Oracle framework combined with regret minimization to approximate the Nash Equilibrium of EFGs. To the best of our knowledge, this is the first study analyzing the convergence rate and sample complexity of regret-minimization-based Double Oracle for EFGs. RMDO consists of the same elements as previous DO methods: a restricted game, constructed by considering only a subset of all pure strategies, and a population \(\Pi_{t}\), containing the pure strategies available in the restricted game. The time window \(T_{j}\), defined as an element of the partition of the set of all iterations within which the population stays the same (\(\forall t_{0},t_{1}\in T_{j},\Pi_{t_{0}}=\Pi_{t_{1}}\)), plays a crucial role in RMDO and contributes to making it a generic framework. The number of time windows, denoted by \(k\), corresponds to the number of restricted games from iteration \(t=0\) to \(T\). However, in contrast to existing DO methods, RMDO has the ability to expand the restricted game at any time. Prior to presenting the formal process, we highlight two key new components of RMDO. The first component is the **frequency function** \(m(\cdot)\) used in the computation of the Best Response, defined as a mapping from the set of time window indices \(\mathcal{N}\cap[0,k-1]\) to \(\mathcal{N}^{+}\); \(m(j)\) represents the frequency of computing the Best Response in the \(j\)-th time window.
Since regret-minimization-based DO alternates between regret minimization and best response computation, balancing the two is critical for rapid convergence. The second component is the **weighted-average scheme**. To accelerate convergence in the restricted game, we incorporate within-window weights \(w_{t}\), for \(t\in T_{j}\), allowing us to utilize a discounted regret minimizer. The within-window weights are exactly the weights of the discounted regret minimizer, normalized so that their sum in the current window \(T_{j}\) equals one. For instance, vanilla CFR employs uniform weights \(1/|T_{j}|\) in window \(T_{j}\), while CFR\(+\) uses linearly increasing weights. Specifically, within the \(j\)-th window \(T_{j}\), \[w_{t}=\frac{2\left(t-\sum_{m=1}^{j-1}|T_{m}|\right)}{|T_{j}|(|T_{j}|+1)}. \tag{5}\] Thus, the weighted-average strategy in the window \(T_{j}\) is: \[\tilde{\pi}_{i}^{t}=\sum_{t^{\prime}\in T_{j}}w_{t^{\prime}}\,\pi_{i}^{t^{\prime}}. \tag{6}\] Presented in Algorithm 2, the formal RMDO procedure is as follows. At each iteration \(t\), assuming the current time window is \(j\), the restricted game \(\mathbf{G}_{t}\) is constructed by restricting the players' pure strategies to the population \(\Pi_{t}\). Within \(\mathbf{G}_{t}\), regret minimization is conducted by traversing the game tree, computing the regret of each infoset (node), and updating the strategy using any Counterfactual Regret Minimization (CFR) algorithm. At the outset of the procedure, when \(t=0\), the construction of the restricted game and the strategy update are bypassed since \(\Pi_{0}\) is empty. The expected value at \(t=0\) is computed based on the joint strategy \(\pi\) following a uniformly random policy. As the procedure progresses, when \(t>0\) and the current time window is \(T_{j}\), the joint average strategy of the current window, \(\tilde{\pi}=(\tilde{\pi}_{1},\tilde{\pi}_{2})\), is expanded to the original game every \(m(j)\) iterations by setting the probabilities of actions not in the population to zero. Then the original-game best response (BR), considering all actions in the original game, is computed against the expanded current-window average strategy, i.e., \(\mathbf{a}_{i}^{t}=\arg\max_{\pi_{i}}v_{i}(\pi_{i},\tilde{\pi}_{-i})\), for both players. The \(\mathbf{a}_{i}^{t}\) for \(i=1,2\) are both merged into the population \(\Pi_{t+1}\). Finally, if the population changes (\(\Pi_{t+1}\neq\Pi_{t}\)), a new time window is initiated, and \(\pi_{i}^{t+1}\) is reset to a uniform random strategy. ``` Input: hyperparameter \(m\), window index \(j=0\), uniform random strategy \(\pi^{0}\). Set population \(\Pi_{1}=\mathbf{BR}_{i}(\pi^{0})\) for \(i\in\{1,2\}\). Construct restricted game \(\mathbf{G}_{1}\) with \(\Pi_{1}\). for\(t=1,\cdots,\infty\)do Run one iteration of CFR in \(\mathbf{G}_{t}\). if\(t\mod m(j)=0\)then Compute \(\tilde{\pi}_{i}^{t}\) with equation (6). \(\Pi_{t+1}=\Pi_{t}\cup\mathbf{BR}_{i}(\tilde{\pi}_{-i}^{t})\) for \(i\in\{1,2\}\). if\(\Pi_{t+1}\neq\Pi_{t}\)then Start new window: \(j=j+1\). Reset strategy \(\pi^{t+1}\). Construct restricted game \(\mathbf{G}_{t+1}\) with \(\Pi_{t+1}\). endif endif endfor ``` **Algorithm 2** Regret-Minimizing Double Oracle We now investigate the convergence guarantee of RMDO. Regret minimization algorithms converge to an \(\epsilon\)-NE by iteratively updating strategies in a static game, but in RMDO the regret minimizer is employed in the restricted game, which expands over time.
Thus, if the restricted game stops expanding at some finite iteration, the convergence of RMDO is guaranteed. The following lemma proves that the number of restricted games is finite, which guarantees RMDO's convergence.

**Lemma 3.1**.: _In an extensive-form game, let \(\Pi^{*}\) represent the set of Nash Equilibrium (NE) strategies of this game. Given that \(k\) is the number of restricted games, we have \(\min_{\pi\in\Pi^{*}}\max_{s\in S}\text{supp}^{\pi}(s)<k\leq\sum_{i}|S_{i}|\). (Refer to Appendix B.1 for proof.)_

RMDO converges by performing regret minimization in the final restricted game, after \(k\) expansions of the restricted game. Additionally, since Lemma 3.1 bounds \(k\), the following assumption can be made in hindsight, without any loss of generality, for the remainder of this paper.

**Assumption 3.2**.: Given an extensive-form game solved by Regret-Minimizing Double Oracle, there are \(k<\infty\) restricted games even when \(T\rightarrow\infty\).

While the value of \(k\) is unknown during training, it is possible to partition the iterations into a limited number of time windows in hindsight to investigate the convergence rate and sample complexity of RMDO. In the following, we examine the two types of strategies generated by RMDO.

### Overall Average Strategy

The convergence rate of the average strategy can be determined by the regret upper bound of the regret minimization algorithm. Thus we first investigate the overall average strategy, taken over iterations \(t=0\) to \(T\). In addition to the within-window weights, global weights are assigned to \(\pi^{t}\) when computing the overall average strategy. Specifically, for all \(t\in T_{j}\) within a time window \(T_{j}\), the weight \(W_{t}\) is defined as \(|T_{j}|w_{t}/T\). It can easily be shown that \(\sum_{t}W_{t}=1\). The overall average strategy for player \(i\) is then obtained as: \[\bar{\pi}_{i}^{T}=\sum_{t=1}^{T}W_{t}\,\pi_{i}^{t}. \tag{7}\] Define the weighted-average regret of RMDO as \[\bar{R}_{i}^{T}=\max_{\pi_{i}^{\prime}}\sum_{t=1}^{T}\left(v_{i}(\pi_{i}^{\prime},\pi_{-i}^{t})-v_{i}(\pi^{t})\right)\cdot W_{t}. \tag{8}\] We then prove that the regret has the following upper bound:

**Theorem 3.3**.: _In RMDO, suppose the regret minimizer has \(\mathcal{O}(|S_{i}|\sqrt{|A|T})\) regret. Then the weighted-average regret bound of RMDO,_ \[\mathcal{O}\left(\sum_{j=0}^{k-2}\frac{|T_{j}|}{T}\cdot[m(j)-1]+\sum_{j=0}^{k-1}\frac{\sqrt{k}|S_{i}||T_{j}|}{T\sqrt{|T_{j}|-m(j)+1}}\right), \tag{9}\] _converges to \(0\) if \(m(j)\) is sublinear in \(T\)._

The proof is in Appendix B.3. If the frequency function \(m(j)\) is sublinear in the total number of iterations \(T\), then the Regret-Minimizing Double Oracle (RMDO) algorithm is an anytime algorithm that converges to Nash Equilibrium (NE) with its overall average strategy. Moreover, with the help of Lemma 2.2, one can easily obtain the expected number of iterations required for the algorithm to converge to \(\epsilon\)-NE.

### Last-Window Average Strategy

The final average strategy obtained from regret minimization in the last time window is an \(\epsilon\)-NE, requiring at least \(\mathcal{O}(|A_{i,k}||S_{i,k}|^{2}/\epsilon^{2})\) iterations to reach, where \(A_{i,k}\) and \(S_{i,k}\) denote the action and information set spaces of player \(i\) in the last time window \(T_{k}\). The regret bound of the regret minimizer is \(\mathcal{O}(|S_{i,k}|\sqrt{|A_{i,k}|/T})\).
During the growth of the population, we observe that in each time window, the regret minimizer does not reach an \(\epsilon\)-NE at any iteration except possibly the last one. Otherwise, in a non-final time window, the global best response would already be in the population, and the window-average strategy at that iteration would already be an \(\epsilon\)-NE, which is a contradiction. Utilizing this idea, we can bound the number of iterations required for each time window and provide a sample complexity to reach \(\epsilon\)-NE.

**Theorem 3.4**.: _The last-window average strategy of RMDO needs the following number of iterations to reach \(\epsilon\)-NE:_ \[\mathcal{O}\left(k|A||S|^{2}/\epsilon^{2}-k+\sum_{j}m(j)\right). \tag{10}\]

The proof is in Appendix B.4. Utilizing this result, we can estimate the sample complexity required for RMDO to achieve \(\epsilon\)-NE. It is noteworthy that, as the exact regret minimizer traverses the entire game tree, the sample complexity per iteration is at most \(\mathcal{O}(|S|)\). Hence, the sample complexity of the regret minimization part is \(\mathcal{O}(k|A||S|^{3}/\epsilon^{2}-k|S|+|S|\sum_{j}m(j))\). In addition, we also need to investigate the sample complexity involved in computing the best response (BR). As BR computation also requires a full tree traversal, we only need to consider the number of times RMDO computes a BR during training. Since this depends on the choice of the frequency function \(m(j)\), we examine the overall sample complexity in the following section, where we introduce RMDO with various schemes of frequency functions and demonstrate how it generalizes to existing methods. Following this analysis, we propose a more sample-efficient algorithm in comparison to existing methods.

## 4 Efficient Schemes of Frequency Function

The complexity of approximating the NE using RMDO is influenced by the choice of the frequency function \(m(j)\) for best response computation, as demonstrated through the theoretical analysis in the previous section. To analyze existing methods, we present various RMDO instantiations with distinct frequency schemes in this section.

### Online Double Oracle for Extensive-Form Games

We propose an extension of the Online Double Oracle (ODO) algorithm, the Extensive-Form Online Double Oracle (XODO), which combines the sequence-form DO framework with Counterfactual Regret Minimization (CFR) to solve extensive-form games. The algorithm is described in detail in Algorithm 3, where the construction and update of the restricted game and strategy are performed in a similar manner to the DO framework used in ODO. However, in each iteration, XODO expands the restricted game by computing the best response against the average strategy in the current window. As XODO computes the best response in each iteration after regret minimization, it is equivalent to the RMDO algorithm with \(m(j)=1\). Based on Theorem 3.3, XODO has the following regret bound.

**Corollary 4.1**.: _In XODO, given a regret minimizer with an \(\mathcal{O}(|S_{i}|\sqrt{|A|T})\) regret upper bound, the weighted-average regret bound of XODO is:_ \[\mathcal{O}\left(\frac{|S_{i}|k}{\sqrt{T}}\right). \tag{11}\]

Proof.: Plugging \(m(j)=1\) into equation 9, the upper bound becomes \(\mathcal{O}(|S_{i}|\sqrt{k}\sum_{j}\sqrt{|T_{j}|}/T)\). By the Cauchy-Schwarz inequality, \(\sum_{j}\sqrt{|T_{j}|}\leq\sqrt{k\sum_{j}|T_{j}|}=\sqrt{kT}\), so the upper bound becomes \(\mathcal{O}(|S_{i}|k/\sqrt{T})\). Then, according to Lemma 3.1, \(k\leq|S|\), so XODO has a sublinear regret upper bound.
According to the regret-to-strategy conversion in Lemma 2.2, the overall average strategy of XODO requires \(\mathcal{O}(|S|^{2}k^{2}/\epsilon^{2})\) iterations to reach \(\epsilon\)-NE. We can further derive the sample complexity:

**Proposition 4.2**.: _Since XODO computes a BR in each iteration, the sample complexity to reach \(\epsilon\)-NE is \(\mathcal{O}(2|S|^{3}k^{2}/\epsilon^{2})\). (Proof in Appendix B.5.)_

### Extensive-Form Double Oracle

The Extensive-form Double Oracle (XDO) algorithm is initialized with a given threshold \(\epsilon_{0}\), which is divided by two each time the local exploitability of the regret minimizer meets the threshold; the local exploitability is the exploitability in the restricted game. In time window \(T_{j}\), the algorithm performs regret minimization for more than \(4^{j}|S_{i,j}|^{2}|A_{i,j}|/\epsilon_{0}^{2}\) iterations before computing the best response, where \(A_{i,j}\) and \(S_{i,j}\) denote the action space and infoset space of player \(i\) in the \(j\)-th time window. Finally, the average strategy in the last window is output when the convergence condition is met. If XDO converges, the last-window average strategy is an \(\epsilon_{0}/2^{k}\)-NE. To investigate the complexity of reaching \(\epsilon\)-NE, it is assumed without loss of generality that \(\epsilon_{0}/2^{k}\leq\epsilon\). Thus, RMDO generalizes to XDO with \(m(j)\geq 4^{j}|S_{i,j}|^{2}|A_{i,j}|/\epsilon_{0}^{2}\) and the last-window average strategy. Based on Theorem 3.4, we can determine the expected iterations and sample complexity for XDO to reach \(\epsilon\)-NE.

**Corollary 4.3**.: _XDO needs at least \(\mathcal{O}(k|A||S|^{2}/\epsilon^{2}+|A||S|^{2}4^{k}/\epsilon_{0}^{2}-k)\) iterations to reach \(\epsilon\)-NE._

**Proposition 4.4**.: _Since XDO computes a BR only at the end of each time window before convergence, the sample complexity to reach \(\epsilon\)-NE is at least \(\mathcal{O}(k|A||S|^{3}/\epsilon^{2}+|A||S|^{3}4^{k}/\epsilon_{0}^{2})\)._

Corollary 4.3 is a specific instance of Theorem 3.4 with an appropriate choice of \(m(j)\), and its proof is provided in Appendix B.6. The sample complexity of XDO is analyzed in Proposition 4.4, and its proof can be found in Appendix B.7. Lemma 3.1 states that \(k\leq|S|\); thus, theoretically, the restricted-game stopping condition of XDO decays exponentially, implying that in the worst-case scenario, when \(k=|S|\), XDO has a sample complexity exponential in the number of infosets. Thus, theoretically, XDO may suffer from a large sample complexity. Empirically, the values of \(k\) observed when executing XDO on common poker games lead to a large sample complexity (Appendix A).

### Periodic Double Oracle

The exponentially growing frequency function \(m(j)\) of XDO leads to an exponential increase in sample complexity with respect to \(k\). On the other hand, XODO is inflexible in that it performs a best response computation in each iteration, neglecting the balance between regret minimization and best response computation. In order to mitigate the large increase in sample complexity caused by a large value of \(k\), and to balance the two computations, we propose the Periodic Double Oracle (PDO) algorithm. PDO computes the best response at a fixed interval in each window and outputs the average strategy in the last window. By setting \(m(j)\) to a constant \(c\), PDO can be viewed as an instantiation of RMDO.
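The full pseudocode is given in Appendix D; for intuition, the sketch below illustrates the PDO loop. It is a minimal illustration, not the paper's exact listing: `initial_population`, `restrict`, `CFRPlus`, and `best_response` are hypothetical helpers standing in for a regret minimizer on the restricted game and an exact best-response oracle.

```python
def pdo(game, c, num_iters):
    """Periodic Double Oracle: regret minimization on the restricted game,
    expanding the population with best responses every c iterations."""
    population = initial_population(game)        # e.g., BRs to a uniform policy
    restricted = restrict(game, population)      # restricted game G_1
    minimizer = CFRPlus(restricted)              # any CFR-family regret minimizer
    for t in range(1, num_iters + 1):
        minimizer.step()                         # one regret-minimization iteration
        if t % c == 0:                           # periodic best-response computation
            avg = minimizer.window_average_strategy()
            new = {best_response(game, avg, player) for player in (0, 1)}
            if not new <= population:            # population changed: new window
                population |= new
                restricted = restrict(game, population)
                minimizer = CFRPlus(restricted)  # reset strategies in the new window
    return minimizer.window_average_strategy()   # last-window average strategy
```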
We can derive the expected number of iterations required for PDO to reach an approximate NE by utilizing Theorem 3.4. The algorithm is presented in Appendix D.

**Corollary 4.5**.: _PDO needs \(\mathcal{O}(k|A||S|^{2}/\epsilon^{2}+(c-1)k)\) iterations to reach \(\epsilon\)-NE._

**Proposition 4.6**.: _Since PDO computes a BR every \(c\) iterations, the sample complexity to reach \(\epsilon\)-NE is \(\mathcal{O}(k|A||S|^{3}/\epsilon^{2}+ck|S|+k|A||S|^{3}/c\epsilon^{2}-k|S|/c)\)._

Proposition 4.6 is proved in Appendix B.8. The periodicity \(m(j)=c\) in PDO reduces the impact of the dominating term \(|S|^{3}\) in the sample complexity, compared to that of XODO. Additionally, compared to XDO, PDO eliminates the term exponential in \(k\) from the sample complexity. While XDO may have a sample complexity _exponential_ in \(|S|\) in the worst-case scenario, PDO has only polynomial complexity in \(|S|\). Hence, theoretically, PDO is more sample-efficient than existing algorithms (refer to Table 3 for a summary of sample complexities). Given that these complexities are worst-case bounds guaranteeing that the algorithm reaches an \(\epsilon\)-NE, we cannot determine the value of \(c\) by merely solving for extreme values of the sample complexity. Instead, we treat it as a hyperparameter and analyze the empirical performance of PDO with different \(c\) in the next section.

## 5 Experiments

We conducted empirical assessments on a variety of extensive-form games, including Sequential Blotto (a perfect-information extensive-form game), Kuhn Poker with an initial spot of \(40\) for each player, Leduc Poker, Leduc Poker Dummy, and Oshi Zumo. Leduc Poker Dummy is of particular interest, as the NE of the game has a small support since the actions are duplicated in each infoset (McAleer et al., 2021). Oshi Zumo is a board game in which players must repeatedly bid to push a token off the other side of the board (Buro, 2004). The full description of the games is in Appendix C. We evaluate performance by exploitability in terms of the number of infosets visited and wall time measured in seconds. The number of visited infosets refers to the total number of nodes traversed by the algorithm, including those encountered during best response (BR) computation; it is equivalent to the number of touched nodes in the experiments of Stochastic Regret Minimization (Farina et al., 2020). It is important to note that the difference between the expanded infosets in the XDO paper (McAleer et al., 2021) and the visited infosets in the present study is that XDO did not include the infosets visited during BR computation. The experiments use the state-of-the-art exact regret minimizer CFR\(+\) (Tammelin, 2014) for all double oracle algorithms, and the regret minimization algorithms are initialized with uniform random policies following the default setting.

Figure 1: Performance of XODO and PDO with periodicity function \(m(j)=1,10,50,100\) on Leduc Poker, Dummy Leduc Poker and Oshi Zumo.

Figure 2: Performance of PDO with periodicity \(50\) and XDO on Sequential Blotto Games and Kuhn Poker with an initial spot of \(40\) for each player. In Sequential Blotto, XDO is still significantly more exploitable than PDO even after visiting more than \(10^{6}\) infosets.

The study begins by analyzing the performance of PDO with various periodicity choices and comparing them with XODO.
Furthermore, the algorithm's performance is compared against baselines, including XDO with the restricted-game solver CFR\(+\), CFR\(+\) itself, and Extensive-form Fictitious Self-play. All experiments and algorithm implementations are based on OpenSpiel (Lanctot et al., 2019). In Figure 1, we present a comparison between XODO and PDO with periodicity values of \(m=1,10,50,100\), in terms of exploitability plotted against wall time in seconds. Our results show that in Kuhn poker, PDO algorithms outperform XODO, with all PDO algorithms exhibiting similar performance; among them, PDO with periodicity \(100\) performs slightly better than the others. In all the other games, PDO algorithms outperformed XODO by a large margin. In Leduc Poker, larger periodicity values led to faster convergence; among the PDO algorithms, periodicity values of \(50\) and \(100\) performed best. In Leduc Poker Dummy, PDO with periodicity \(50\) achieved a small exploitability the fastest. In Oshi Zumo, large periodicity values led to a slow decrease in exploitability in the early stages of training, but reached the lowest exploitability later on. We also compare the performance of PDO with XDO in Figure 2. We find that PDO (\(50\)) outperforms XDO by a large margin in Sequential Blotto and Large Kuhn Poker. In Figure 3, we investigate the performance of PDO (\(50\)) and other baselines. We find that PDO outperforms XDO and Extensive-form Fictitious Self-play (XFP) by a large margin in Leduc Poker, and has a slight improvement over CFR\(+\) in exploitability in the later stages of training. In Leduc Poker Dummy, PDO outperforms all other algorithms by a large margin from the beginning of training. In Oshi Zumo, PDO has a more stable exploitability curve and faster convergence in general compared to XDO, with a lower exploitability level than CFR\(+\) in the later stages of training. Our findings suggest that PDO improves the convergence speed of DO methods in different types of games, and that the choice of periodicity can have a significant impact on the performance of PDO. Furthermore, we find that the last-window average strategy converges faster than the overall average strategy when comparing PDO (\(1\)) and XODO, likely due to the poor performance of the strategy before the restricted games stop expanding. These results contribute to the ongoing effort to improve the efficiency and effectiveness of DO algorithms in solving large-scale imperfect-information games.

## 6 Conclusion

This paper proposes the first generic framework for studying the theoretical convergence speed and algorithmic performance of regret-minimization-based Double Oracle algorithms. Building upon this framework, we propose the Periodic Double Oracle algorithm, which improves the sample complexity. Our numerical simulations demonstrate that PDO achieves superior performance compared to XDO and ODO in extensive-form games. Additionally, PDO exhibits fast convergence in games with small-support NE, and remains robust across other games. Overall, our proposed framework and algorithm offer a significant contribution to the field of Double Oracle methods. A future direction is combining PDO with deep regret-based methods for solving the restricted game (Perolat et al., 2022; McAleer et al., 2023) and finding the BR (Lanctot et al., 2017; McAleer et al., 2022b,a).
Our framework can also be applied to robust and risk-aware reinforcement learning (Zhang et al., 2020; Lanier et al., 2022; Slumbers et al., 2022). Additionally, it could be applied to solving extensive-form continuous games (Adam et al., 2021), where \(k\) can be large but the sample complexity of PDO is only linear in \(k\).

Figure 3: Comparison of PDO with XDO, CFR\(+\) and XFP on Leduc Poker, Dummy Leduc Poker and Oshi Zumo. PDO preserves the strength of DO of performing well in games with small-support NE (Dummy Leduc Poker), while remaining competitive with state-of-the-art regret minimization methods in other games.
2308.01640
Beyond small-scale transients: a closer look at the diffuse quiet solar corona
Within the quiet Sun corona imaged at 1 MK, much of the field of view consists of diffuse emission that appears to lack the spatial structuring that is so evident in coronal loops or bright points. We seek to determine if these diffuse regions are categorically different in terms of their intensity fluctuations and spatial configuration from the more well-studied dynamic coronal features. We analyze a time series of observations from Solar Orbiter's High Resolution Imager in the Extreme Ultraviolet to quantify the characterization of the diffuse corona at high spatial and temporal resolutions. We then compare this to the dynamic features within the field of view, mainly a coronal bright point. We find that the diffuse corona lacks visible structuring, such as small embedded loops, and that this is persistent over the 25 min duration of the observation. The intensity fluctuations of the diffuse corona, which are within +/-5%, are significantly smaller in comparison to the coronal bright point. Yet, the total intensity observed in the diffuse corona is of the same order as the bright point. It seems inconsistent with our data that the diffuse corona is a composition of small loops or jets or that it is driven by discrete small heating events that follow a power-law-like distribution. We speculate that small-scale processes like MHD turbulence might be energizing the diffuse regions, but at this point we cannot offer a conclusive explanation for the nature of this feature.
J. Gorman, L. P. Chitta, H. Peter, D. Berghmans, F. Auchère, R. Aznar Cuadrado, L. Teriaca, S. K. Solanki, C. Verbeeck, E. Kraaikamp, K. Stegen, S. Gissot
2023-08-03T09:17:56Z
http://arxiv.org/abs/2308.01640v1
# Beyond small-scale transients: a closer look at the diffuse quiet solar corona

###### Abstract

Context: Aims: Within the quiet Sun corona imaged at 1 MK, much of the field of view consists of diffuse emission that appears to lack the spatial structuring that is so evident in coronal loops or bright points. We seek to determine if these diffuse regions are categorically different in terms of their intensity fluctuations and spatial configuration from the more well-studied dynamic coronal features. Methods: We analyze a time series of observations from Solar Orbiter's High Resolution Imager in the Extreme Ultraviolet to quantify the characterization of the diffuse corona at high spatial and temporal resolutions. We then compare this to the dynamic features within the field of view, mainly a coronal bright point. Results: We find that the diffuse corona lacks visible structuring, such as small embedded loops, and that this is persistent over the 25 min duration of the observation. The intensity fluctuations of the diffuse corona, which are within \(\pm\)5%, are significantly smaller in comparison to the coronal bright point. Yet, the total intensity observed in the diffuse corona is of the same order as the bright point. Conclusions: It seems inconsistent with our data that the diffuse corona is a composition of small loops or jets or that it is driven by discrete small heating events that follow a power-law-like distribution. We speculate that small-scale processes like MHD turbulence might be energizing the diffuse regions, but at this point we cannot offer a conclusive explanation for the nature of this feature.

## 1 Introduction

In studying the coronal heating problem, three main regions are acknowledged to be observed within the corona. These are active regions (ARs), coronal holes (CHs), and the quiet Sun (QS). Active regions are the brightest and most dynamic portions of the corona, often associated with underlying strong magnetic field patches including sunspots, and seen to consist of coronal loops as observed in the extreme ultraviolet (EUV) and X-rays. In contrast, CHs are the darkest portions of the solar corona in the EUV and are attributed to open magnetic fields that connect the solar surface to the heliosphere. Outside of ARs and CHs, there remains the QS. As the QS makes up the largest proportion of the solar surface, understanding the processes at work within this portion of the corona is crucial to understanding coronal heating. Based on observations, a variety of small-scale, dynamic features, such as nanoflares, coronal bright points, and jets, have all been considered to be of high importance to balance the overall energy losses from the QS corona (e.g., Aschwanden et al. 2000; Hosseini Rad et al. 2021; Shen 2021; Chitta et al. 2021a). More recently, based on high spatial resolution and high cadence EUV observations from the Extreme Ultraviolet Imager (EUI; Rochus et al. 2020) on board Solar Orbiter (Muller et al. 2020), Berghmans et al. (2021) observed compact isolated coronal brightenings termed campfires. A main characteristic of all these nanoflare-type heating events, including campfires, is that they are clearly distinguishable from the local background coronal emission. Whether these observable discrete heating events are sufficient to explain the energy losses from the quiet Sun corona is still an open question (Aschwanden et al. 2000; Chitta et al. 2021a).
While much focus has been placed on these more distinguishable elements or events in the past, there remains much to be learned from the quieter portions of plasma that are devoid of these obvious localized transient brightenings. In particular, we are referring to the areas of seemingly stable and featureless EUV emission in the QS that we label the diffuse corona. Diffuse emission associated with ARs has been studied in the past. Viall & Klimchuk (2011) looked at the diffuse portions of AR emission (i.e., areas not associated with any distinguishable loops or loop footpoints) and determined that these regions encompass a majority of the emitting portion of ARs, are only marginally less bright (10-35%) compared to AR loops, and are dynamically heated, as opposed to being energized by some steady process. Whether or not a similar significance and heating mechanism can be attributed to the diffuse corona in the QS remains to be seen. In most studies, however, this diffuse emission is considered as a background only. Usually, no particular consideration is given to it, apart from the urge to correct for (i.e., subtract) it when looking at features resolved in space and time embedded in this background. In this work, we analyze the evolution of the diffuse quiescent corona observed with the high spatial and temporal resolution allowed by the EUI instrument. We compare the diffuse emission and its fluctuations to those seen in the more dynamic features within the observational field of view (FOV). We find that the diffuse corona is not only an enduring contributor of seemingly stable and unstructured emission, but that it is also widespread and therefore an important factor in the overall energy balance of the solar corona.

## 2 Observations

On March 26, 2021, Solar Orbiter (Muller et al. 2020) was located at a distance of 0.72 AU from the Sun on the far side with respect to Earth. The EUV High Resolution Imager (HRI\({}_{\rm EUV}\)) on the Extreme Ultraviolet Imager was pointed towards the limb at a latitude of about 30\({}^{\circ}\) and recorded images with a very high cadence of 2 s (1.65 s exposure) between 23:32:20 UT and 23:57:18 UT (25 min).1

Footnote 1: Data release 4.0 2021-12. DOI: [https://doi.org/10.24414/s5da-7e78](https://doi.org/10.24414/s5da-7e78)

HRI\({}_{\rm EUV}\) has a plate scale of 0.492\({}^{\prime\prime}\) pixel\({}^{-1}\), which amounts to roughly 260 km pixel\({}^{-1}\) on the Sun (as seen from Solar Orbiter) for this data set. The pass-band of HRI\({}_{\rm EUV}\) is centered at 17.4 nm and its response peaks at temperatures of about 1 MK due to the presence of spectral lines of Fe ix (at 17.11 nm) and Fe x (at 17.45 nm and 17.72 nm). Before conducting our analysis, we aligned the level 2 (L2) data to remove the jitter in the image sequence as described in Chitta et al. (2022). The region observed by EUI is a quiet Sun region outside coronal holes. While the target area was not visible from Earth-based telescopes, the Full Sun Imager (FSI) of EUI acquired images in the 304 Å and the 174 Å channels. These show a north polar coronal hole at latitudes well above 60\({}^{\circ}\) North, while the field of view of HRI\({}_{\rm EUV}\) is below 55\({}^{\circ}\) North, i.e., located far from the coronal hole.

## 3 Results

We focus on the on-disk portion of these observations that also covered the limb and regions off the disk. An overview of the observational FOV, including the primary areas of interest for this study, is shown in Fig. 1.
Panel a depicts the full FOV of HRI\({}_{\rm EUV}\), and panel b shows a zoom into the on-disk portion of the observation that is of interest for this study. For both a and b, the intensity shown has been averaged over the entire 25 min observing time of this EUI sequence, and then it has been normalized by the minimum and maximum intensities within the zoomed FOV shown in b. Thus, the image color scale in the full FOV and the zoom in Fig. 1 is different. With the above normalization, the intensity for panel b is limited between 0 and 1, while for panel a, which contains pixels with both greater and weaker intensities, it ranges from -0.7 to 1.1.

Figure 1: Observation summary. _Panel a_: the full field of view (FOV) covered in the 17.4 nm band of HRI\({}_{\rm EUV}\). The red box outlines the zoomed-in FOV that is shown in _panel b_. Both _panels a–b_ show the time-averaged, normalized intensity (to the minimum and maximum in the area in panel b). _Panel b_: the vertical black line shows the location of the cut that is used in the time-distance plot displayed in Fig. 2 and the intensity-distance plot shown in Fig. 3. The cyan box (i) and the orange box (ii) outline the diffuse region and the bright point used in Fig. 4. The green box (iii) outlines the loop-like features zoomed into in Fig. B.1. See Sect. 3.1.

### Diffuse region, loop-like features, and coronal bright points

We find several distinct regions in the corona at \(\sim\)1 MK. There are (1) dark patches (with normalized intensities of 0.25 or less), (2) bright patches associated with loop-like structures and coronal bright points containing intensities at or near the saturation limit of the detector, and, finally, (3) there appear to be what we refer to as "diffuse" regions. These diffuse regions are areas that seem to lack structuring in both time and space. They are hazy portions seen throughout the FOV with time-averaged, normalized intensities mostly between 0.25-0.6. An example patch of diffuse corona is outlined by the cyan box (panel i) in Fig. 1b. The diffuse corona appears to cover a significant area when compared to the brighter regions in the FOV. However, the diffuse corona lacks any of the obvious structuring that is associated with the well-studied structures of the quiet solar corona, such as coronal bright points that are composed of \(\sim\)10 Mm long loops (e.g., see the review article on coronal bright points by Madjarska 2019). Our analysis aims to compare these differing regions in both a qualitative and quantitative manner to determine the extent of this apparent discrepancy in terms of spatial structuring. For this we investigate a cut through the FOV that runs through these different features (black line in Fig. 1b): diffuse regions, loop-like features, and a coronal bright point. (While many bright features are saturated in the exposure, in particular at the limb, this bright point is not). The intensity along this cut as a function of time is presented in a time-distance plot (see Fig. 2). It is evident from this plot that the bright point stretching from \(y\)=106 Mm to 117 Mm shows intensity structuring in both space and time. The same also applies to the clear loop-like features situated around \(y\)=65 Mm to 100 Mm. The bright point and the loop-like features appear to be continually evolving on spatial scales of roughly a few Mm and timescales on the order of a few minutes, although the loop-like features are comparatively less variable than the bright point.
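For concreteness, the time-averaging and min-max normalization used for Fig. 1 can be sketched in a few lines of numpy; this is a minimal illustration, and the names `cube` (the image sequence) and `zoom` (the sub-FOV slice of panel b) are hypothetical placeholders, not part of the EUI pipeline.

```python
import numpy as np

def normalized_mean_image(cube, zoom):
    """cube: image sequence of shape (n_frames, ny, nx); zoom: slice tuple for panel b."""
    mean_img = cube.mean(axis=0)      # average over the 25 min sequence
    lo = mean_img[zoom].min()         # normalization anchored to the zoomed FOV
    hi = mean_img[zoom].max()
    # 0..1 inside the zoom; pixels outside the zoom may fall below 0 or
    # exceed 1, which is why panel a spans roughly -0.7 to 1.1
    return (mean_img - lo) / (hi - lo)
```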
The bright point is a typical small coronal bright point in appearance and size. It consists of a number of short coronal loops with the core region (Fig. 4 ii) having a size of just over 5 Mm (cf. Madjarska 2019). Such compact small-loop-type bright points are found in abundance in the quiet Sun and have been used, e.g., to determine the coronal rotation (Brajsa et al. 2001, 2002), even though the spatial resolution of the older data was inadequate to sufficiently resolve the internal structure of the bright points. The region with loop-like features does not show clearly distinguishable loops, but (in the time-averaged image) elongated features reminiscent of loops (Fig. B.1 iii). Because we concentrate in this study on the diffuse regions, we do not follow this up further (see Appendix B for a further discussion of the loop-like features). In contrast to these more dynamic features, the diffuse regions seen in Fig. 2 are rather invariable. The time-distance plot reveals the overall temporal stability of the diffuse corona. The emission is diffuse in the \(y\)-direction in a range of 15-65 Mm, with two seemingly distinct regions of differing intensity levels: a brighter portion between 15-42 Mm and a darker portion from 42-65 Mm. For both portions, there is a hazy quality that remains stable throughout the 25 min observing time on spatial scales comparable to that of a supergranule. Both the brighter and darker sections show no obvious jumps in intensity across their width. There is a small brightening at about \(y\)=48 Mm occurring periodically over intervals of time limited to a few minutes (highlighted by arrows in Fig. 2). This could be categorized as an HRI\({}_{\rm EUV}\) campfire (see Berghmans et al. 2021). However, the brightening is only a few pixels in width and makes up only a small fraction of the overall emission in that region. We emphasize that the data shown in the space-time plot in Fig. 2 have a time cadence of only 2 s and a spatial sampling of 260 km on the Sun.

Figure 2: Temporal evolution of cut through diffuse region and coronal bright point. This time-distance plot shows the intensity in HRI\({}_{\rm EUV}\) from along the cut (1 pixel in x) outlined by the black vertical line in Fig. 1b as it evolves over the entire 25 min observing time. The intensity is scaled linearly from 915 DN s\({}^{-1}\) (black) to 2500 DN s\({}^{-1}\) (white). Above the panel the location in the \(y\) direction of three types of regions are marked. The two arrows mark transient brightenings in the diffuse region. See Sects. 3.1 and 4.2.

Figure 3: Spatial variability along a cut through a diffuse region, loop-like features and a coronal bright point at four different times. This intensity-distance plot shows the intensity along the cut outlined by the black vertical line in Fig. 1b for several intervals of time during the observation. For each 30 s interval (15 time-consecutive images), the intensity at each pixel along the cut is averaged over that time period and plotted as a function of distance. Four of these light curves are shown as a stack, with the bottom curve being the intensity-distance 30 s average ending at 1 min (and, therefore, starting at 30 s), the curve directly above it is the 30 s average ending at 10 min, and so on. The black vertical bars with each curve show the maximum error for that curve (see Appendix A). Above the panel the location in the \(y\) direction of three types of regions are marked. See Sects. 3.2 and 4.1.

This implies that transient small-scale
features should be visible if they were present (down to those temporal and spatial scales).

### Spatial and temporal variability

We now dissect the time-distance plot for a closer look at the difference between the coronal bright points and the diffuse regions. To this end, the intensity along the cut from Fig. 1b was averaged over each subsequent 30 s interval and displayed as line plots of time-averaged intensity versus distance, several of which are shown stacked in Fig. 3. Essentially these stacked plots are horizontal cuts through Fig. 2 for a given time span. Across the aforementioned diffuse regions (i.e., \(y\)=15-65 Mm), spatial changes in the intensity are gradual. As will be discussed below, the small fluctuations seen on top of this gradual variation are at the level of the calculated maximum error. The shape of the curve also remains similar for each plot, despite there being 5-10 min of separation between them. This implies that the diffuse corona behaves in a spatially coherent fashion over its entire extent, corresponding to the scale of a supergranule of about 20 Mm. The peak intensity in the brighter diffuse area (at \(y\)\(\approx\)35 Mm) is only slightly less than, or even equal to, the peak intensity seen in the bright point and loop-like features (in particular before \(t\)=15 min). Conversely, the morphology of the loop-like features and coronal bright point changes markedly over time, and the intensity fluctuations within these features at each point in time vary significantly, i.e., well above the maximum error. This further differentiates the characteristics of the diffuse regions from those of the bright points and loop-like features. To quantitatively characterize the temporal stability of the diffuse corona, its relative intensity fluctuations provide clearer insight (Fig. 4). Here, we compared the absolute and relative intensity variations of the diffuse corona (box i in Fig. 1b) to those within the coronal bright point (box ii). The absolute fluctuations are derived by simply spatially averaging the intensity within each of the sub-fields a to d for each snapshot. The relative fluctuations were then calculated by subtracting the overall time-averaged intensity within each sub-field from its spatially-averaged intensity and then dividing this difference by the time-averaged intensity. Figure 4i and ii show which sub-fields of the observation are sampled, either from a diffuse region (a, b) or from a bright point (c, d). The relative fluctuations in the sub-fields of the diffuse region (a, b) remain within a few percentage points (less than \(\pm\)5%) over the entire 25 min (Fig. 4a, b). In contrast, in the sub-fields of the coronal bright point (c, d), the intensity changes by over 10% on shorter timescales of about 5 min and can swing by up to almost 40% over the entire 25 min (Fig. 4c, d). To judge the significance of these fluctuations, one has to compare the observed variability to the measurement errors. In Appendix A we discuss how we calculate the maximum estimated error. As a very rough estimate of this, for our current data set in regions that are moderately bright, the typical error of the intensity is of the order of 3%. In general, the sub-fields of the bright point show trends that rise well above the level of the maximum estimated error, both on shorter and longer timescales. In contrast, in the diffuse regions the variability we see would essentially be consistent with measurement errors.
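The fluctuation measure just described reduces to a few numpy operations; the following is a minimal sketch with a hypothetical variable name (`seq` stands for the image sequence of one of the sub-fields a to d).

```python
import numpy as np

def relative_fluctuations(seq):
    """seq: intensity cube of one sub-field with shape (n_frames, ny, nx)."""
    spatial_mean = seq.mean(axis=(1, 2))  # spatially averaged intensity per snapshot
    time_mean = spatial_mean.mean()       # overall time-averaged intensity
    # stays within about +/-0.05 for the diffuse boxes, but reaches
    # roughly 0.4 for the bright point sub-fields
    return (spatial_mean - time_mean) / time_mean
```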
There might be some dynamics within the diffuse regions with amplitudes just above the noise that persist for more than a few time steps, but these fluctuations are still smaller than the overall level of fluctuations over the observing period (of 25 min). While the fluctuations shown here are only for two sub-sections of each feature, the behavior is similar when more sub-sections are analyzed. This relative temporal stability of the diffuse corona and the larger variability seen in the coronal bright point are further demonstrated using the respective snapshots, at three instances, without time-averaging (see Fig. 5). It is clear from these images that the diffuse region is not only temporally stable compared to the bright point, but it also lacks any distinguishable spatial structuring. This time series analysis underlines that the diffuse corona, often considered as a mere background, contributes a significant amount of the coronal emission in the quiet Sun. For a quantitative estimate of this contribution, see Sect. 4.1.

## 4 Discussion

We note the general pervasiveness of the diffuse corona seen at temperatures of 1 MK with HRI\({}_{\rm EUV}\). These regions remain featureless in both time and space, yet are relatively bright compared to the more dynamic features within the FOV. This naturally leads to questions about the significance of these regions in terms of their contribution to the emission of the 1 MK corona and on the possible heating mechanisms operating in such regions. Similarly, questions arise regarding the reasoning behind the lack of noticeable structuring and how this is related to the overall energy balance within the solar atmosphere.

### Contribution of diffuse regions to radiative losses

The intensity observed in the diffuse regions is relatively strong, even when compared with bright points. This is illustrated in Fig. 4a-d where we show the relative and absolute average intensity values [DN] within sub-fields of the diffuse and bright point regions. We see that, over the entire observing period, the time-averaged diffuse intensity (i.e., radiative flux per area) is about 80% that of the bright point, i.e., of the same order of magnitude. Also, from Fig. 3, the peak intensities along the cut from the brighter diffuse portion (around \(y\) = 35 Mm) are very similar to those from the bright point (around \(y\) = 110 Mm) and also about equal to those from the loop-like features sampled between \(y\) = 65 Mm and 100 Mm. All of this points to the conclusion that the emission coming from the diffuse regions is non-negligible and should be an important consideration in any coronal heating model, in particular when considering that the diffuse corona can cover a large fraction of the quiet Sun. We further investigate the significance of diffuse emission by conducting a rough estimate of the total emission contribution from the diffuse areas compared to the bright points within the FOV. For this we classify the regions in the FOV in Fig. 1b into areas covered by coronal bright points and diffuse regions by a simple by-eye estimate. These regions are marked and labeled in Fig. 6. Some of the bright points host a few pixels that are saturated on the detector. Hence our estimate for the bright point emission will be a lower limit only. However, this effect should be well below a factor of two and thus our calculation should be just fine for the order-of-magnitude estimate we aim for here.
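The order-of-magnitude estimate that follows amounts to summing the time-averaged intensity over the two sets of by-eye regions. A minimal sketch of such a mask-based integration is given below; the boolean arrays `diffuse_mask` and `bp_mask` are hypothetical stand-ins for the regions dc1-dc4 and bp1-bp6 of Fig. 6.

```python
import numpy as np

def emission_ratio(mean_img, diffuse_mask, bp_mask):
    """mean_img: time-averaged intensity map; masks: boolean region arrays."""
    diffuse_total = mean_img[diffuse_mask].sum()  # integrated diffuse emission
    bp_total = mean_img[bp_mask].sum()            # lower limit (saturated pixels)
    return diffuse_total / bp_total               # ~2.7 for this data set
```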
Based on the integration of the emission from the respective diffuse and bright point areas, we find that the seemingly quiet diffuse areas provide almost 2.7 times the emission of the more dynamic features. Thus overall, the diffuse emission would dominate the quiet Sun radiative losses at around 1 MK, while discernible bright features would contribute a minor fraction, maybe half at best. This is not surprising since these by-eye defined diffuse regions also have almost three times the area of the bright points (see Fig. 6).

### Diffuse quiet-Sun corona and small loops

The magnetic field is space-filling and is the driver behind the energetics of the corona. The work of Dowdy et al. (1986) described the solar magnetic scene to be comprised of both (locally) open field lines expanding into funnels with height and small-scale, closed loops that connect back to the surface. Based on studies of magnetic field extrapolations, such loops are expected to be rooted not only in the network regions at the edges of supergranules, but also in the internetwork within a few Mm of the network boundaries (Schrijver & Title 2003; Wiegelmann et al. 2010). Quiet Sun observations reveal the presence of very short loops clearly distinguishable in EUV observations, only a few Mm long. They come at (probably) high coronal temperatures (Peter et al. 2013; Barczynski et al. 2017) as well as at lower transition region temperatures (Hansteen et al. 2014). These short loops have lifetimes of only a (few) minute(s). More recent observations also reveal the dynamic nature and substructure of such small EUV loops including propagating features (Mandal et al. 2021) and small jets associated with them (Chitta et al. 2021b). Also, transition region loops crossing a super-granular cell in the quiet Sun (i.e., with a length of ca. 20 Mm) have been reported (e.g., Fig. 1 of Teriaca et al. 2004). However, all these types of loops are discrete units and far from space-filling. Still, the presence of loops in the quiet Sun would mean that we should expect the presence of a few Mm long, small-scale coronal loops in our observation even within the so-called diffuse areas. As discussed above, these smaller loops are expected to be dynamic, showing brightness fluctuations on timescales on the order of minutes (see also Reale 2010). Similar lifetimes of (magnetic) loops of a few minutes are also found based on magnetic field extrapolations from time series of high resolution quiet Sun magnetograms (Wiegelmann et al. 2010). Case in point, our observations do show such brightness fluctuations in the coronal bright points, which are indeed on shorter timescales compared to the 25 min of observed stability for our diffuse areas. If the diffuse areas are composed of such smaller loops, then we would expect loop-like brightenings at some point during our analysis unless a different loop emission behavior is at play. We do see that there are a limited number of small-scale brightenings occurring within some diffuse portions of Fig. 2 (see white arrows) that last for a few minutes at a time and are only a few pixels in length (i.e., some 500 km in \(y\)-distance). These brightenings, however, are not ubiquitous as would be expected for the typical picture of low-lying loops crisscrossing everywhere in the FOV. Also, these small intensity enhancements barely rise above the level of the local diffuse region (or background).
Figure 4: Intensity fluctuations in diffuse and bright point regions. The two left panels show the zoomed-in FOV covering the diffuse region (panel i) and bright point (panel ii) outlined by the cyan and orange boxes in Fig. 1b, respectively. Each image is the same time-averaged and normalized figure as shown in Fig. 1b, except the intensity range shown is further limited to the minimum and maximum for each zoomed FOV. Within the regions of interest, four smaller sub-fields a to d are outlined. Each of these boxes has the same area. The spatially-averaged relative (left y-axis) and absolute (right y-axis) intensities for the respective sub-fields are plotted as a function of time on the right in panels a–d. The vertical bars shown in each of these panels represent the maximum error for each time-series (see Appendix A).

Figure 5: Snapshots of diffuse region and bright point without time-averaging. The snapshots at three times as indicated by the time stamps are shown in the three columns. The top row shows the diffuse region (marked i) and the bottom row the bright point (marked ii). Images are commonly scaled to the same minimum and maximum values of intensity. Boxes a–d have the same meaning as in Fig. 4. See Sects. 3.2 and 4.1.

When compared to the intensity level local to the area of the brightening in the minutes before and after its enhancement, the brightening only rises to a level of 5% above this background. The coherent spatial intensity variations on supergranular scales in these diffuse regions indicate that all the constituting smaller loops, if present, must evolve in unison, which is unlikely. Based on this, we think that the small (coronal) loops reported before may not be the source of emission from the diffuse regions. In that case we should (occasionally) see a transient brightening caused by one of the transient Mm-scale loops that have been reported before, but we do not see these.

### Diffuse quiet-Sun corona and jets

Chromospheric spicules (de Pontieu et al., 2007) and transition region network jets (Tian et al., 2014) are other common, small-scale jet features whose imprint should be seen everywhere on the quiet Sun. Spicules are chromospheric jets and are categorized into two types (de Pontieu et al., 2007). Type-I spicules are longer-lived and seen to both rise and fall at the limb, remaining at chromospheric temperatures throughout their lifetime. Type-II spicules are more impetuous and often appear to shoot up before disappearing from chromospheric imaging channels, sometimes then appearing in the hotter channels imaging transition region (TR) plasma (Pereira et al., 2014), which can also include the return flows when cooling back down from hotter temperatures (Bose et al., 2021). Spicules have typical speeds ranging from tens of \(\,\mathrm{km\,s^{-1}}\) (type-I) up to \(100\,\mathrm{km\,s^{-1}}\) (type-II), lifetimes of several minutes, and characteristic lengths ranging from a few hundred to several thousand km (e.g., Tsiropoula et al., 2012, and included references). While spicular material is easiest to detect at the cooler temperatures found in the chromosphere and TR (T \(\leq 10^{5}\) K), recent investigations show that there are exceptions to this where spicule signatures can be observed as corrugations in the EUV emission. For example, signatures have been found that the on-disk counterparts of spicules, namely dynamic fibrils, show EUV emission (Mandal et al., 2023). This might indicate higher temperatures of more than \(10^{5}\) K, although this is not conclusive yet
(Henriques et al. 2016; Martinez-Sykora et al. 2018; Samanta et al. 2019). Similarly, coronal counterparts of network jets remain elusive (Kayshap et al. 2018; Gorman et al. 2022). Small coronal jets are, in general, a common phenomenon in quiet Sun regions. In particular, recent EUI observations have shown an abundance of small-scale jets (e.g., Chitta et al. 2021b; Mandal et al. 2022). Small jets and their substructure, e.g., plasmoids, have recently also been observed in radio emission (e.g., Shimojo et al. 2017; Rodger et al. 2019). Still, the question remains if highly dynamic small-scale features such as these jets or spicules can provide an explanation of the diffuse corona. At 2 s cadence and about 500 km resolution, the observation from HRI\({}_{\rm EUV}\) that we analyze in this study certainly could pick up any spicules or network jets that would show counterparts in EUV. Yet, we do not detect any such signatures in the far-reaching diffuse areas. This either means that these jets only very rarely reach coronal temperatures, or at least do not get heated to 1 MK in the magnetic regime that is responsible for the diffuse emission. Another possibility could be that those small-scale jets that do reach coronal temperatures might lose their identity and structuring as their energy gets dissipated.

Figure 6: Grouping of diffuse coronal regions and bright points in quiet Sun. The intensity image is the same as Fig. 1b. The diffuse coronal regions (dc1 to dc4) are outlined in blue, the bright points (bp1 to bp6) are highlighted in yellow. See Sect. 4.1.

### Diffuse quiet-Sun corona and small-scale heating events

The simple presence of a diffuse, seemingly featureless structure has implications for the heating mechanism that has to sustain the hot corona. The energization has to be either continuous or be concentrated in a very large number of small-scale heating events so that it does not leave an imprint at the resolution of our observations. Hence, we explore the applicability of several proposed heating mechanisms for the diffuse corona. Clearly, we cannot relate our findings to all possible heating scenarios; instead we picked those we considered most relevant for our study. The reader is referred to recent reviews that discuss the heating problem in general (e.g., Klimchuk 2006), with respect to 3D models (e.g., Peter 2015), in terms of waves (e.g., Van Doorsselaere et al. 2020), or in the light of field-line braiding (e.g., Pontin & Hornig 2020). One heating scenario involves a power-law distribution of the number of brightening events with energy, characterized by a slope of \(-2\), at least for the smaller flare-like events with less than \(10^{26}\) ergs (Hudson 1991). This requires there to be an increasingly higher number of distinct events at ever smaller spatial and energy scales. In fact, these events would be so small that they have not yet been resolvable by Sun-observing instruments (Parnell & Jupp 2000; Chitta et al. 2021a). Up until now, however, results are mixed regarding whether or not the energy distribution for impulsive heating events has the necessary slope to account for coronal heating (e.g., Berghmans et al. 1998; Krucker & Benz 1998; Aschwanden et al. 2000; Parnell & Jupp 2000; Pauluhn & Solanki 2007; Aschwanden & Shimizu 2013). It has also been pointed out that even if the power-law index is greater than 2, this is not necessarily sufficient to heat the quiet Sun (Berghmans 2002). Joulin et al.
(2016) argue that, regardless of the actual energy distribution slope, it is inherently unlikely to be able to detect high-frequency, small-scale brightenings against the background coronal emission. The authors state that their study, combined with the works of others before them, shows that heating cannot be guaranteed to be observed as broken up into discrete events. Perhaps such unresolved, impulsive brightenings are the cause behind the diffuse corona. If there were a sufficiently high number of small-scale events resulting in the diffuse emission that we see, they would still have to be driven in a spatially coherent fashion to create a diffuse region about 10 to 20 Mm large in size, i.e., a region the size of a supergranule. Considering the significant structuring found at the base below the corona, in the chromosphere, it is not very plausible that the diffuse corona is driven by small-scale events. Should small-scale events drive and heat the diffuse coronal patches, one would also expect larger events in these regions. This is based on the finding that even on the smallest scales resolved so far power-law-like distributions prevail (Berghmans et al. 2021). However, with two exceptions of events barely resolved by HRI\({}_{\rm EUV}\) (arrows in Fig. 2), we do not see any distinguishable transient brightenings in the diffuse regions (above the noise level). A more rigorous statistical analysis is required to draw a final conclusion, but at this point we consider it unlikely that individual events distributed through a power law could create a diffuse corona as we observe here. One might speculate that some form of MHD turbulence might lead to energy dissipation far below the scales resolvable by current observations and by this create a quasi-continuous heating in these regions. Of course, this would raise the question why this should be operating in the diffuse regions while other areas show much higher contrast in the coronal quiet Sun features. As such a discussion would go far beyond this observational study, we refrain from any further speculation on this.

### Diffuse quiet-Sun corona and wave heating

If the energization of the hot, diffuse quiet-Sun corona has to be quasi-continuous, then wave heating could also be an option. In the first theoretical attempts to explain the hot outer atmosphere, heating through waves was already suggested, at that time through the dissipation of sound waves (Biermann 1946; Schwarzschild 1948). On average, an energy flux of about 100 W m\({}^{-2}\) is required to balance the energy losses of the corona in the quiet Sun (e.g., Withbroe & Noyes 1977), and in general upward propagating magneto-acoustic waves have the potential to heat the plasma in magnetically closed structures (e.g., review by Arregui 2015). Such waves have been observed (e.g., Tomczyk et al. 2007) and they carry an energy flux sufficient to heat the quiet Sun corona (e.g., McIntosh et al. 2011). Assuming that the upward-propagating waves are generated by p-mode leakage (e.g., de Moortel 2009), one might expect quite a homogeneous distribution of the energy flux into the upper atmosphere. While in the photosphere we can expect a spatial structure on the scale of granulation, the rapid expansion of the magnetic field with height guiding the wave flux might quickly even out spatial inhomogeneities. However, a detailed investigation of the expansion of the underlying magnetic field in the diffuse coronal regions would be required before drawing any final conclusions on this.
In the time series of small sub-fields, some indications for 3-minute oscillations might be found. A by-eye inspection of Fig. 4a-d suggests the presence of fluctuations on a time scale of a few minutes in the diffuse corona (panels a, b) and the bright point (c, d). If such fluctuations were present, that would indeed support the leakage of wave power from the photosphere into the higher atmospheric regions seen in the diffuse corona. Just as with the expansion of the magnetic field, a detailed analysis of a possible presence of 3-minute oscillations in the diffuse regions will have to be conducted in the future.

## 5 Conclusions

We analyze the diffuse quiescent corona observed by HRI\({}_{\rm EUV}\) on board Solar Orbiter. Here we find large patches of diffuse corona lacking resolvable physical structuring on scales below those of a supergranule, i.e., about 20 Mm. Still, the coronal emission from these diffuse regions is of comparable brightness to the more dynamic features like loop-like features or coronal bright points. The spatial variability in the diffuse corona is below about 5%. The diffuse corona remains temporally stable throughout the observing period, i.e., for at least 25 minutes. This diffuse regime is rather commonplace within the coronal makeup, contributing a large proportion of the emission seen at 1 MK, yet its underlying nature is still unclear. We consider it unlikely to be connected to features such as spicules or small loops. A power-law-like distribution of discrete heating events seems inconsistent with our observations. We speculate that small-scale processes like MHD turbulence or upward-propagating waves might be energizing the diffuse regions, but at this point we cannot offer a conclusive explanation for the nature of the diffuse regions. The lower atmosphere in the quiet Sun shows a high degree of temporal and spatial complexity. This is illustrated, e.g., by the cartoon picture suggested by Wedemeyer-Bohm et al. (2009) in their Fig. 16. The diffuse region we investigated in this study might well be related to the mixture of structures in the inter-network, above and below the canopy domain. This would also fit into the interpretation of Milanovic et al. (2023) of a diffuse region between network patches of the same polarity on opposite sides of a super-granular cell. A more extensive investigation of this diffuse component of the quiet Sun's corona, using more and more diverse datasets, would help to better estimate how common this component is, including observations of plasma at other temperatures. Also, the brightness distribution in the diffuse patches, the lifetime and size distributions of such diffuse patches, and the question of whether wave patterns (of whatever nature) are commonly seen in them need further consideration. Just as important will be studies combining EUV data with magnetic field measurements, e.g., as provided by the SO/PHI instrument (Solanki et al. 2020). Finally, there is also a clear need for studies of possible heating mechanisms leading to such diffuse parts of the corona.

###### Acknowledgements.

This work was supported by the International Max-Planck Research School (IMPRS) for Solar System Science at the University of Gottingen. L.P.C. gratefully acknowledges funding by the European Union (ERC, ORIGIN, 10039844). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council.
Neither the European Union nor the granting authority can be held responsible for them. Solar Orbiter is a mission of international cooperation between ESA and NASA, operated by ESA. The EUI instrument was built by CSL, IAS, MPS, MSSL/UCL, PMOD/WRC, ROB, LCF/IO with funding from the Belgian Federal Science Policy Office (BELSPO/PRODEX PEA 4000134088, 4000112292, 4000117262, and 4000134474); the Centre National d'Etudes Spatiales (CNES); the UK Space Agency (UKSA); the Bundesministerium fur Wirtschaft und Energie (BMWi) through the Deutsches Zentrum fur Luft- und Raumfahrt (DLR); and the Swiss Space Office (SSO).
2307.09499
Variable Independence in Linear Real Arithmetic
Variable independence and decomposability are algorithmic techniques for simplifying logical formulas by tearing apart connections between free variables. These techniques were originally proposed to speed up query evaluation in constraint databases, in particular by representing the query as a Boolean combination of formulas with no interconnected variables. They also have many other applications in SMT, string analysis, databases, automata theory and other areas. However, the precise complexity of variable independence and decomposability has been left open especially for the quantifier-free theory of linear real arithmetic (LRA), which is central in database applications. We introduce a novel characterization of formulas admitting decompositions and use it to show that it is coNP-complete to decide variable decomposability over LRA. As a corollary, we obtain that deciding variable independence is in $ \Sigma_2^p $. These results substantially improve the best known double-exponential time algorithms for variable decomposability and independence. In many practical applications, it is crucial to be able to efficiently eliminate connections between variables whenever possible. We design and implement an algorithm for this problem, which is optimal in theory, exponentially faster compared to the current state-of-the-art algorithm and efficient on various microbenchmarks. In particular, our algorithm is the first one to overcome a fundamental barrier between non-discrete and discrete first-order theories. Formulas arising in practice often have few or even no free variables that are perfectly independent. In this case, our algorithm can compute a best-possible approximation of a decomposition, which can be used to optimize database queries by exploiting partial variable independence, which is present in almost every logical formula or database query constraint.
Alexander Mayorov
2023-07-18T17:37:11Z
http://arxiv.org/abs/2307.09499v1
# Variable Independence in Linear Real Arithmetic

Alexander Mayorov

November 7, 2021

Rheinland-Pfalzische Technische Universitat Kaiserslautern-Landau, Department of Computer Science, Gottlieb-Daimler-Strasse, 67663 Kaiserslautern, Germany

Supervisors: Prof. Dr. Anthony W. Lin, Prof. Matthew Hague
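The thesis gives its algorithms formally; purely as an illustration of the underlying notion, the following Z3 sketch (not the thesis's procedure) checks whether a candidate variable decomposition of an LRA formula is equivalent to the original. The formulas here are invented examples.

```python
from z3 import Real, And, Not, Solver, unsat

x, y = Real("x"), Real("y")

def equivalent(phi, psi):
    """phi and psi are equivalent over LRA iff no assignment distinguishes them."""
    s = Solver()
    s.add(Not(phi == psi))   # on Boolean terms, == builds the bi-implication
    return s.check() == unsat

# x and y are independent in phi1: it equals a conjunction of formulas
# that each mention only one variable.
phi1 = And(2 * x + 2 >= 0, 3 * y <= 6)
print(equivalent(phi1, And(x >= -1, y <= 2)))   # True: a valid decomposition

# In phi2 the variables are genuinely linked; this candidate fails (deciding
# that *no* decomposition exists at all is the hard, coNP-complete part).
phi2 = x >= y
print(equivalent(phi2, And(x >= 0, y <= 0)))    # False
```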
2308.10995
Deep Learning Techniques in Extreme Weather Events: A Review
Extreme weather events pose significant challenges, thereby demanding techniques for accurate analysis and precise forecasting to mitigate its impact. In recent years, deep learning techniques have emerged as a promising approach for weather forecasting and understanding the dynamics of extreme weather events. This review aims to provide a comprehensive overview of the state-of-the-art deep learning in the field. We explore the utilization of deep learning architectures, across various aspects of weather prediction such as thunderstorm, lightning, precipitation, drought, heatwave, cold waves and tropical cyclones. We highlight the potential of deep learning, such as its ability to capture complex patterns and non-linear relationships. Additionally, we discuss the limitations of current approaches and highlight future directions for advancements in the field of meteorology. The insights gained from this systematic review are crucial for the scientific community to make informed decisions and mitigate the impacts of extreme weather events.
Shikha Verma, Kuldeep Srivastava, Akhilesh Tiwari, Shekhar Verma
2023-08-18T08:15:21Z
http://arxiv.org/abs/2308.10995v1
# Deep Learning Techniques in Extreme Weather Events: A Review

###### Abstract

Extreme weather events pose significant challenges, thereby demanding techniques for accurate analysis and precise forecasting to mitigate their impact. In recent years, deep learning techniques have emerged as a promising approach for weather forecasting and understanding the dynamics of extreme weather events. This review aims to provide a comprehensive overview of the state-of-the-art deep learning techniques in the field. We explore the utilization of deep learning architectures across various aspects of weather prediction such as thunderstorm, lightning, precipitation, drought, heatwave, cold waves and tropical cyclones. We highlight the potential of deep learning, such as its ability to capture complex patterns and non-linear relationships. Additionally, we discuss the limitations of current approaches and highlight future directions for advancements in the field of meteorology. The insights gained from this systematic review are crucial for the scientific community to make informed decisions and mitigate the impacts of extreme weather events.

Keywords: Extreme Weather Events, Weather Prediction, Deep Learning

## I Introduction

Weather refers to short-term natural events that occur in a certain location and time, characterized by attributes such as temperature, pressure, humidity, cloud cover, precipitation, wind speed and wind direction [1]. Extreme weather, on the other hand, refers to weather events that deviate significantly from the expected conditions. Some instances of extreme weather include tropical cyclones, heatwaves, intense blizzards with heavy snowfall, excessive rainfall leading to flooding, and droughts [2][3][4][5]. These occurrences pose serious challenges to society and the environment, requiring careful planning and necessary measures to mitigate their detrimental effects. Consequently, predicting weather holds significant importance. Weather prediction relies on gathering information from weather stations, satellites, radar systems, weather balloons, and buoys to assess the current atmospheric conditions [1][6]. Numerical weather prediction (NWP) utilizes mathematical models to simulate the behavior of the atmosphere based on initial conditions derived from observational data [1][6][7]. Ensemble forecasting generates several forecasts with slight modifications in initial variables and model parameters to assess uncertainty and the likelihood of possible outcomes. Climate models utilize data assimilation, which integrates observational data with model output, to generate long-term weather trends with increased forecast precision. Deep learning models are composed of multiple layers of interconnected artificial neurons [8]. The distinguishing feature of deep learning models is their ability to automatically learn and discover intricate patterns and features directly from the data, without the need for explicit feature engineering. This is achieved by passing the data through multiple layers of interconnected neurons, where each layer learns to extract increasingly abstract representations of the input data. The precision of weather forecasts is heavily dependent on historic data. However, the non-linear and complex nature of weather phenomena poses inherent challenges to achieving absolute precision.
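As a minimal, generic illustration of the layered representation learning described above (a hedged sketch not tied to any particular study in this review; all layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

# Each hidden layer re-represents the output of the previous one, so deeper
# layers capture increasingly abstract combinations of the raw inputs
# (here: a generic vector of weather features).
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),   # raw features -> first-level representation
    nn.Linear(64, 32), nn.ReLU(),   # higher-level, more abstract features
    nn.Linear(32, 1),               # e.g. a single forecast target
)
x = torch.randn(8, 16)              # batch of 8 synthetic feature vectors
print(model(x).shape)               # torch.Size([8, 1])
```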
While traditional methods, including statistical, dynamical and numerical models, have proven effective in forecasting weather events with considerable lead time [9], they encounter limitations in capturing intricate patterns and dynamics. As a result, achieving accurate predictions becomes unattainable due to the intricate nature of weather systems. Continuous advancements in research and the utilization of emerging technologies, such as deep learning, offer promising avenues for further enhancing weather prediction capabilities. This transformative approach enables highly accurate weather predictions, including severe weather events, empowering proactive measures to mitigate their impacts effectively. Deep learning facilitates the integration of diverse data sources, including satellites, radars, and weather stations, to provide comprehensive and real-time meteorological insights for improved public safety and resilience. This review is organized as follows: Section I introduces the paper by providing an overview of the challenges, advancements, and applications of weather forecasting using deep learning, as well as outlining the organization of the paper. Section II explores the realm of extreme weather events, highlighting the need for accurate weather prediction. In Section III, an extensive literature review is presented, focusing on the utilization of deep learning for extreme weather events. This section explores the existing research, highlighting the different approaches, models, and findings in the field. Section IV focuses on the challenges encountered in the field of deep learning for meteorology and weather forecasting. This section discusses the limitations, data issues, interpretability concerns, and other obstacles faced when applying deep learning techniques in this domain. Section V highlights how the integration of deep learning in extreme weather research advances predictive capabilities and emphasizes the potential of hybrid models to enhance forecast accuracy and proactive mitigation strategies. Additionally, it outlines potential avenues for future research in weather forecasting using deep learning, discussing promising areas of exploration, methodologies, and potential advancements to enhance the accuracy and efficiency of deep learning models for weather prediction. Finally, Section VI offers a concise conclusion that summarizes the key findings and contributions of the study.

## II Extreme Weather Events

Exploring the fundamental elements that govern weather patterns and shape the dynamics of the atmosphere helps to obtain a broader perspective [1]. Temperature plays a pivotal role in determining the thermal state of the atmosphere, thereby exerting a substantial influence over the behaviors of gases, liquids, and the overall human comfort. Air pressure, commonly referred to as atmospheric pressure, significantly shapes weather patterns by creating low and high-pressure systems. These systems contribute to a wide array of meteorological conditions, including notable adverse conditions in the case of low-pressure systems. The pressure difference gives rise to atmospheric winds that traverse from high-pressure areas to low-pressure areas, enabling the essential mechanism of air circulation. Humidity, the amount of water vapour present in the atmosphere, demonstrates a direct relationship with temperature; higher temperatures allow the air to hold more water vapor, leading to the formation of clouds.
The extent of cloud cover plays a decisive role in modulating the solar radiation reaching the Earth's surface, thereby affecting the temperature profile and atmospheric dynamics. Precipitation, which includes rain, snow, and hail, occurs when moisture in the atmosphere condenses and is then released from clouds. This process constitutes a primary mechanism through which atmospheric water is returned to the Earth's surface [10]. When these atmospheric elements surpass anticipated norms, they can trigger a diverse range of extreme weather events that carry significant consequences. These include heatwaves posing health risks and intensifying wildfires [11], intense precipitation leading to flooding and landslides, warm ocean conditions fueling cyclones with their devastating winds and storm surges, severe thunderstorms generating tornadoes and hailstorms, prolonged droughts affecting agriculture and water supply, snowstorms and blizzards disrupting transportation and infrastructure, and freezing rain causing damaging ice storms. Monitoring and understanding these factors are essential for early detection, preparedness, and effective management to mitigate the potential impacts of these diverse extreme weather phenomena [12]. To achieve this, ensemble forecasting emerges as a powerful tool, generating several forecasts with slight modifications in initial variables and model parameters to assess uncertainty and the likelihood of possible outcomes. This strategic approach is complemented by the utilization of climate models, which employ data assimilation techniques. By integrating observational data with model outputs, these models generate long-term weather trends with improved forecast precision. These forecasts enable decision-makers and emergency response teams to better understand the level of risk associated with different weather events and make informed choices to protect communities and infrastructure.

## III Deep Learning in Extreme Weather Events

Deep learning models are revolutionizing extreme weather prediction by leveraging diverse data sources to accurately forecast events such as cyclones, heatwaves, heavy rainfall, and severe storms. Their ability to analyze complex patterns and relationships enables early warning systems and proactive mitigation strategies, with the potential to minimize the impacts of extreme weather on society and the environment. A Deep Neural Network (DNN) has been developed for an early warning system to predict extreme weather events such as floods, droughts, and heatwaves. The DNN approach effectively downscales and bias-corrects coarse-resolution seasonal forecast ensembles, generating realistic, high-resolution climate information. The study demonstrates that the DNN model accurately predicts extreme values while preserving the physical relationships and trends in the variables [13]. Researchers aim to improve the forecast of severe convective weather (SCW), including thunderstorms, short-duration heavy rain, hail, and convective gusts, by employing a deep-CNN algorithm to effectively extract the characteristics of SCW and achieve better forecast performance compared to traditional machine learning algorithms [14]. Deep learning methods have also been proposed for detecting and forecasting anomalies in spatiotemporal data. The choice of learning task depends on whether the anomalies are known or unknown. Anomalies are often imbalanced and require specific data pre-processing.
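The ensemble idea mentioned in Section II can be made concrete with a toy chaotic system standing in for a full NWP model. This is a hedged sketch only, using the Lorenz-63 equations and invented perturbation sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit Euler step of the Lorenz-63 system."""
    x, y, z = s
    d = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return s + dt * d

# Perturb the analysed initial state slightly and integrate every member
# forward; the spread across members quantifies forecast uncertainty.
base = np.array([1.0, 1.0, 1.0])
members = base + 1e-3 * rng.standard_normal((20, 3))
for _ in range(1500):                       # about 15 model time units
    members = np.array([lorenz_step(m) for m in members])
print("ensemble mean:", members.mean(axis=0))
print("ensemble spread:", members.std(axis=0))
```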
Leveraging diverse data sources and modelling techniques, deep learning models exhibit strong capabilities in accurately forecasting flood occurrence, severity, and spatial distribution. These models play a crucial role in real-time monitoring, enabling timely response and effective mitigation strategies. A Wasserstein Generative Adversarial Network (WGAN) is utilized for downscaling tropical cyclone rainfall to hazard-relevant spatial scales [15]. Additionally, a hybrid approach combining WGAN and Variational Autoencoder GAN (VAEGAN) is introduced to enhance the resolution of rainfall measurements from 100 km to 10 km resolution, showing realistic power spectra for various wave numbers [16]. A deep learning-based technique employing a fully connected neural network is proposed to accurately predict rainfall-induced shallow landslides across Italy [17].

### _Deep Learning in Thunderstorm and Lightning_

Thunderstorms are complex atmospheric phenomena characterized by a combination of thunder, lightning, heavy rainfall, and strong winds, with lightning resulting from electrical discharges within clouds or between clouds and the ground. In one study, two hybrid models, EEMD-ANN and EEMD-SVM, are developed for predicting thunderstorm frequency in Bangladesh. These models utilize ensemble empirical mode decomposition (EEMD) to extract relevant features for accurate prediction. The EEMD-ANN model consists of an input layer with 11 variables, two hidden layers (4 and 2 neurons), a sigmoid activation function, and a 0.1 learning rate. On the other hand, EEMD-SVM employs various kernel functions for effective handling of non-stationary TSF data. The input variables include CAPE, CPRCP, CRR, DP, KI, PRCP, RH, ST, TSD, TT, and WS50. EEMD, based on the Hilbert-Huang transform, mitigates challenges of EMD by introducing Gaussian white noise. This enables precise decomposition of time series data into intrinsic mode functions (IMFs), revealing underlying patterns. ARIMA models effectively handle non-stationary time series data. The hybrid models, EEMD-ANN and EEMD-SVM, capitalize on EEMD's capabilities to handle non-stationary data and capture nonlinear relationships. These models outperform standalone models such as ANN, SVM, and ARIMA in terms of prediction accuracy, with improvements ranging from 8.02% to 22.48% across TSF categories [18]. In another study, the aim is to predict severe thunderstorm occurrences through an innovative approach using lightning and radar video data in the Liguria region of Italy. The ensemble technique outperforms traditional methods that optimized standard quality-based scores. The architecture involves an LRCN, which combines a CNN for extracting spatial features and an LSTM network for analyzing sequential aspects. The training process spans 100 epochs, employing the Adam optimizer with a learning rate of 0.001 and a mini-batch size of 72. The training process incorporates a class-balanced cross-entropy loss function to fine-tune the model's performance. The model's reliability is validated using a historical radar video dataset comprising CAPPI images at 2 km, 3 km, and 5 km above sea level, demonstrating its effectiveness in probabilistic forecasting of severe thunderstorms [19].
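A hedged PyTorch sketch of the EEMD-ANN regressor described above (11 inputs, hidden layers of 4 and 2 neurons, sigmoid activations, 0.1 learning rate): the linear output layer, the SGD optimizer and the MSE loss are assumptions not stated in the study, and the EEMD stage that would supply the IMF-based features is taken as a preprocessing step.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(11, 4), nn.Sigmoid(),  # 11 inputs: CAPE, CPRCP, ..., WS50
    nn.Linear(4, 2), nn.Sigmoid(),
    nn.Linear(2, 1),                 # predicted thunderstorm frequency
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.randn(32, 11)              # synthetic stand-in for EEMD/IMF features
y = torch.randn(32, 1)               # synthetic targets
optimizer.zero_grad()
loss = loss_fn(model(x), y)          # one illustrative training step
loss.backward()
optimizer.step()
print(float(loss))
```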
In a study focused on assessing the accuracy of various LSTM neural network variants in predicting thunderstorm severity through the utilization of remote sensing weather data, the primary objective is to quantitatively forecast the intensity of thunderstorms by analyzing the frequency of lightning flashes using deep learning models. The study employs two main datasets: SALDN lightning detection network data and SAWS weather station data. These datasets are used to train and evaluate different LSTM neural network variants, including LSTM-FC, CNN-LSTM, and ConvLSTM models. The LSTM-FC model consists of three LSTM layers and one dense layer, with an optimizer based on the Adam algorithm. The activation function used is the Leaky-Rectified Linear Unit with an alpha value of 0.15. Similarly, the CNN-LSTM model comprises two Conv2D layers, one LSTM layer, and one dense layer. This model also employs the Adam optimizer and utilizes the Leaky-Rectified Linear Unit as the activation function, with an alpha value of 0.05. The ConvLSTM model is structured with two ConvLSTM2D layers and two dense layers. The same Adam optimizer is employed, and the activation functions are set to Leaky-Rectified Linear Unit (with an alpha value of 0.05) and Rectified Linear Unit. The models are trained and evaluated using hourly lightning flash data and weather variables based on the MAE and MSE. Among the various LSTM model variants, the CNN-LSTM model outperforms the other models with a MAE of 51 flashes per hour because of its ability to capture spatio-temporal features, leading to more accurate predictions of thunderstorm severity [20]. To predict the occurrence of lightning, an innovative data-driven neural network model called the Attention-Based Dual-Source Spatiotemporal Neural Network (ADSNet) is introduced. ADSNet is designed for accurate hourly lightning forecasting and utilizes both numerical simulations and historical lightning observations, resulting in a comprehensive and effective approach. A diverse dataset, combining WRF simulation data with Cloud-to-Ground Lightning Location System (CGLLS) observations from North China, is employed. The model consists of dual RNN encoder-decoder units, several CNN modules, DCNN modules, and attention mechanisms. ConvLSTM is chosen for its adeptness in capturing intricate spatiotemporal dependencies. This intricate framework is tailored for conducting 12-hour lightning forecasts in the North China region. The model adopts the Adam optimizer with an initial learning rate of 0.0001 and Weighted Binary Cross-Entropy as a loss function. Experimental results validate the superiority of ADSNet over baseline methods in terms of lightning forecast accuracy [21]. In another study, an innovative approach known as the Lightning Monitoring Residual Network (LM-ResNet) is introduced, leveraging deep learning for effective lightning location monitoring in Ningbo, China. By transforming the task into binary classification, radar data (PPI, CR, ET, V) and essential land attributes (DEM, aspect, slope, land use, NDVI) are harnessed to create a comprehensive lightning feature dataset. LM-ResNet employs Rectified Linear Unit (ReLU) activation for effective learning and addresses data imbalances through Focal Loss, a specialized cross-entropy-based loss function.
The model's training configuration includes an initial learning rate of 0.1 and utilizes the SGD optimizer with a batch size of 64, incorporating a momentum of 0.9 and a weight decay of 0.0004 to enhance learning while mitigating overfitting. The study demonstrates LM-ResNet's superiority over competing architectures like GoogLeNet and DenseNet, highlighting its potential for accurate and reliable lightning incident tracking [22].

\begin{table} \begin{tabular}{l l l} \hline \hline Task & Approach & Ref \\ \hline Thunderstorm Prediction & EEMD-ANN, EEMD-SVM, ARIMA & [18] \\ & LRCN-CNN, LSTM & [19] \\ Thunderstorm Severity Prediction & LSTM-FC, CNN-LSTM, ConvLSTM & [20] \\ \hline Lightning Prediction & RNN & [21] \\ & ResNet & [22] \\ Lightning Identification & CNN & [23] \\ \hline \hline \end{tabular} \end{table} TABLE I: Deep Learning in Thunderstorm and Lightning

An approach called Lightning-SN is introduced, designed for precise cloud-to-ground (CG) lightning identification using deep learning techniques. This model effectively utilizes S-band Doppler radar data and CG lightning records of the Ningbo area in Zhejiang Province, China, collected from August 2009 to December 2021 via the ADTD lightning positioning system. Lightning-SN leverages an encoder-decoder structure with 25 convolutional layers, five pooling layers, five upsampling layers, and a sigmoid activation function layer. The architecture capitalizes on symmetry, boundary preservation techniques, and a 1x1 convolution kernel in the final layer. The model's optimization is driven by the Adam optimizer and guided by the GHM loss function. Training involves the BP algorithm, employing iterative refinement and validation testing. Additionally, the study includes a comprehensive comparative analysis with other semantic segmentation algorithms (FCNN, DeepLab-V3, and BiSeNet) evaluated under identical conditions. Lightning-SN demonstrates substantial performance improvements over traditional threshold-based methods, particularly in scenarios involving high-resolution radar data [23].

### _Deep Learning in Precipitation_

Precipitation is the process by which water, in either liquid or solid form, falls from the atmosphere to the Earth's surface. Hail, snow, and rainfall are the three most common types of precipitation. The most frequent type of precipitation is rain, which occurs when water droplets congregate and become heavy enough to fall to the ground. When raindrops are pushed higher into the freezing parts of the sky during violent thunderstorms, they freeze and pile in layers, resulting in hailstones of varied sizes that can cause property and crop damage. Snow is formed when water vapour condenses straight into ice crystals in cold atmospheric conditions. Accurate forecasting and understanding of precipitation patterns are critical for many industries, including agriculture, water resource management, and transportation.

#### III-B1 Rainfall

Deep learning uses meteorological data such as historical rainfall records, satellite images, and atmospheric conditions. Rainfall forecasts give essential information for disaster preparedness, agricultural planning, water resource management, and climate modelling. A nowcasting model is designed to address extreme weather phenomena encompassing both precipitation and landfalling hurricanes. The research employs a comprehensive dataset spanning five years (2015-2020) of radar observations over South Texas, including 22 hurricane events that occurred in the United States.
### _Deep Learning in Precipitation_

Precipitation is the process by which water, in either liquid or solid form, falls from the atmosphere to the Earth's surface. Hail, snow, and rainfall are the three most common types of precipitation. The most frequent type is rain, which occurs when water droplets congregate and become heavy enough to fall to the ground. When raindrops are pushed higher into the freezing parts of the sky during violent thunderstorms, they freeze and accumulate in layers, resulting in hailstones of varied sizes that can cause property and crop damage. Snow is formed when water vapour condenses directly into ice crystals in cold atmospheric conditions. Accurate forecasting and understanding of precipitation patterns are critical for many industries, including agriculture, water resource management, and transportation.

#### IV-B1 Rainfall

Deep learning for rainfall draws on meteorological data such as historical rainfall records, satellite images, and atmospheric conditions. Rainfall forecasts give essential information for disaster preparedness, agricultural planning, water resource management, and climate modelling. A nowcasting model is designed to address extreme weather phenomena encompassing both precipitation and landfalling hurricanes. The research employs a comprehensive dataset spanning five years (2015-2020) of radar observations over South Texas, including 22 hurricane events that occurred in the United States. The model's architecture comprises four core components: RNN, up-sample, down-sample, and convolution. It is built upon a three-layer encoder-decoder structure, incorporating distinct filter arrangements for the RNN while seamlessly integrating convolution and deconvolution operations. GRU is selected as the foundational RNN unit, organized in multiple layers to effectively capture intricate spatiotemporal patterns. The model effectively predicts future radar reflectivity echo maps from five preceding observations, enabling forecasts for up to a 3-hour lead time. Model parameters are optimized using the Adam optimizer, fine-tuned with a learning rate of \(10^{-4}\) and a momentum of 0.5. To further enhance predictive performance, the research incorporates Balanced Mean Squared Error (B-MSE) and Balanced Mean Absolute Error (B-MAE) as loss functions. The model's forecasting capabilities are evaluated using established metrics--HSS, CSI, POD, and FAR--all of which collectively highlight its proficiency in precipitation nowcasting [24].

MetNet-2, a deep neural network-based weather model, outperforms existing physics-based models in predicting high-resolution precipitation up to 12 hours ahead. The study utilizes data sources such as the MRMS, GOES-16, and HRRR datasets. Input observations from various sources, including radar, satellite, and assimilation features, are processed through a CNN to capture temporal dynamics. Efficient computation is achieved through model parallelism across 16 interconnected TPU cores, allowing accurate forecasts over a 512 km x 512 km target patch. The model's architecture consists of three stacks of 8 residual blocks with exponentially increasing dilation factors. Operating within the Continental United States, MetNet-2 generates forecasts at a 2-minute frequency with a spatial resolution of 1 km. It operates within a probabilistic framework, producing categorical predictions across 512 precipitation levels for each target position. The model's performance exceeds that of the High-Resolution Ensemble Forecast (HREF) when assessed using the Cumulative Ranked Probability Score (CRPS) [25].

Deep learning techniques have also been used to merge precipitation data from diverse sources across the Tibetan Plateau, with the aim of enhancing data precision. The study explores three methodologies: ANN, CNN, and a statistical Extended Triple Collocation (ETC) method. The neural network architecture employed consists of an ANN with four fully connected layers and a CNN enhanced by two additional convolutional layers to capture spatial features. To mitigate overfitting, dropout layers with a 0.1 dropout rate follow each fully connected or convolutional layer. The optimization employs the Adam algorithm with a learning rate of 0.0001, the RMSE serves as the loss function, and the ReLU function acts as the activation function. The hyperparameters consist of 500 epochs and a batch size of 2500 for effective training. Meteorological and hydrological evaluations reveal that the CNN approach consistently demonstrates superior performance compared to the others, showcasing enhanced spatial distribution and heightened accuracy. The meteorological evaluation employs eight metrics: CC, BIAS, STDRATIO, MAE, RMSE, POD, FAR, and CSI. The hydrological assessment utilizes NSE and PBIAS for model parameter validation, with KGE employed to counter NSE's flow peak bias and emphasize runoff variability [26].
Using the U.S. Weather Surveillance Radar-1988 Doppler (WSR-88D) observations dataset, researchers developed four DL models for radar quantitative precipitation estimation (QPE) with a CNN-VGG architecture. These models, named RQPENetD1, RQPENetD2, RQPENetV, and RQPENetR, incorporate dense blocks, RepVGG blocks, and residual blocks. The architecture of RQPENetD1 features an initial convolution layer, four dense blocks with varying bottleneck layers, and transition layers for spatial reduction. It processes 3-D radar data from two elevation angles to estimate the rainfall rate using a fully connected layer with adaptive average pooling and utilizes MSE as the loss function. RQPENetD2 shares a similar structure, with dense blocks featuring (24, 16) and (36, 24) bottleneck layers, along with transition layers involving 1x1 convolution and average pooling. RQPENetV incorporates RepVGG blocks in a multi-branch structure across five stages, while RQPENetR utilizes residual modules in four sequential blocks with varying bottleneck layers for feature enhancement from 3-D radar data. The evaluation of RQPENet's radar precipitation estimation includes metrics such as RMSE, MAE, CC, and NSE, along with additional atmospheric science metrics: POD, FAR, CSI, HSS, and GSS. The findings indicate the superior performance of the dense-blocks-based models, particularly RQPENetD1 and RQPENetD2, compared to the residual-blocks- and RepVGG-blocks-based models, as well as five traditional Z-R relations [27].

In a recent study, researchers propose an ANN model with incremental learning to derive total precipitable water (TPW) and convective available potential energy (CAPE) from GEO-KOMPSAT-2A satellite imagery over Northeast Asia. The study utilizes AMI satellite imagery, ERA5 data, and radiosonde observations for training and evaluation. An MLP feedforward backpropagation ANN model is employed for the retrieval algorithm. The model architecture includes an input layer with 20 neurons, a hidden layer with 40 neurons using a hyperbolic tangent activation function, and an output layer with a linear activation function. The optimization process utilizes the Adam optimizer with a mean squared error loss function. The accuracy assessment involves statistical metrics including the correlation coefficient, bias, and RMSE. The incremental ANN model demonstrates improved accuracy and stability compared to static learning methods, indicating its potential to accurately estimate TPW and CAPE [28].
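The reported retrieval network is small enough to state in full; the sketch below follows the 20-40-linear description above, with the only assumption being a two-unit output head (one unit each for TPW and CAPE), since the output dimensionality is not spelled out here.

```python
import tensorflow as tf
from tensorflow.keras import layers

# 20-input, 40-tanh-hidden, linear-output MLP as described for the
# TPW/CAPE retrieval; the two-unit output (TPW, CAPE) is an assumption.
model = tf.keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(40, activation="tanh"),
    layers.Dense(2, activation="linear"),
])
# Adam optimizer with an MSE loss, per the description above.
model.compile(optimizer="adam", loss="mse")
model.summary()
```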
\begin{table}
\begin{tabular}{l l l}
\hline \hline
Task & Approach & Ref \\
\hline
Precipitation Forecast & RNN & [24] \\
 & CNN & [25] \\
Precipitation Data Merging & CNN, ANN & [26] \\
Quantitative Precipitation Estimation & CNN-based & [27] \\
TPW and CAPE Estimation & MLP & [28] \\
\hline
Hailstorm Detection & CNN, DNN & [29] \\
Hailstorm Forecast & Autoencoder, CNN & [30] \\
 & CNN & [31] \\
 & PCA, BPNN & [32] \\
\hline
Cloud or Snow Identification & DeepLab-CRF & [33] \\
 & CNN & [34] \\
 & U-Net & [35] \\
 & U-Net & [36] \\
 & U-Net & [37] \\
 & CNN & [38] \\
Snow Depth Estimation & BPNN & [39] \\
 & CNN, ResNet & [40] \\
 & deep CNN & [41] \\
Snow Water Equivalent Estimation & ANN, ANFIS & [42] \\
 & MNLR, NNGA & [43] \\
\hline \hline
\end{tabular}
\end{table}
TABLE II: Deep Learning in Precipitation

#### IV-B2 Hail

Hail prediction helps to improve our understanding of and readiness for this dangerous weather phenomenon. Improving hail forecasting accuracy provides advance warnings that can prevent damage to infrastructure, agriculture, and communities. Because hail storms may have significant socioeconomic consequences, incorporating deep learning techniques into hail prediction models is critical for timely and effective risk management and disaster response. In a test case study, researchers applied deep learning networks for hailstorm detection using CNN and DNN architectures. The approach involves training these networks on GOES satellite imagery and MERRA-2 atmospheric parameters to identify hail storms. Different architectures are utilized, including a CNN for processing satellite imagery and a DNN for atmospheric parameters, aimed at capturing pertinent features. The CNN for satellite imagery uses four convolutional layers with ReLU activation functions, combined with max-pooling layers for downsizing and a batch normalization layer for streamlined training; fully connected layers are also integrated into the architecture to enable classification. Concurrently, the DNN for atmospheric parameters features four fully connected layers with ReLU activation, utilizing a Softmax function for classification. Both architectures converge within a merged network, amalgamating outputs from the CNN and DNN via concatenation and incorporating additional fully connected layers for the final classification. This approach harnesses the capabilities of deep learning to enhance hail detection by merging multi-source data and recognizing spatial patterns. The CNN model achieves heightened precision by accurately identifying the decreased infrared brightness temperatures linked to hail storms [29].
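The merged-network design just described maps naturally onto the Keras functional API; the sketch below wires an image branch and a parameter branch into a concatenated classification head, with all patch sizes, feature counts, and layer widths as assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Imagery branch: conv + max-pooling + batch norm, as described above.
img_in = tf.keras.Input(shape=(64, 64, 1))      # patch size assumed
x = img_in
for f in (32, 64, 128, 256):                    # four conv layers
    x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
x = layers.BatchNormalization()(x)
x = layers.Flatten()(x)

# Atmospheric-parameter branch: four fully connected ReLU layers.
par_in = tf.keras.Input(shape=(16,))            # feature count assumed
y = par_in
for u in (64, 64, 32, 32):
    y = layers.Dense(u, activation="relu")(y)

# Merged network: concatenate both branches, then classify hail / no hail.
z = layers.Concatenate()([x, y])
z = layers.Dense(64, activation="relu")(z)
out = layers.Dense(2, activation="softmax")(z)
model = tf.keras.Model([img_in, par_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```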
In an effort to forecast hail storms, researchers introduced an architecture that comprises three distinct models: an Autoencoder (AE) with encoder and decoder layers, each containing 32 neurons; a CNN constructed with CNN layers featuring 64 and 32 filters; and an RF model characterized by an ensemble of decision trees with decision-tree aggregation through majority voting. Both the AE and CNN are optimized using the Adam optimizer and MSE as the loss function. The dataset utilized in the study consists of observations from the TRMM and reanalysis data from the ECMWF spanning one year. The selected attributes for training the models include convective potential energy, convective inhibition, wind shear within the 1-3 km range, and warm cloud depth. The study aims to predict global hail storms using these models and to compare their performance in terms of accuracy, precision, and error rates. Surprisingly, contrary to expectations, the RF outperforms the deep learning methods in terms of hailstorm prediction performance [30].

In a study focusing on severe hail prediction, researchers employed a CNN to encode spatial weather data and compared its performance with traditional statistical approaches like Logistic Mean and Logistic PCA. The dataset utilized in this study includes geopotential height, temperature, dewpoint, zonal wind, and meridional wind variables from the NCAR ensemble model output. These variables are collected at different pressure levels: 500 hPa, 700 hPa, and 800 hPa. The study uses upper-air dynamic and thermodynamic fields from an NCAR NWP model. The CNN architecture comprises three strided convolutional layers with 5x5 grid-cell filters. A range of hyperparameters is tested, including the initial number of filters, dropout rates, activation functions (ReLU and Leaky ReLU), L2 norm regularization coefficients, and optimizers (Stochastic Gradient Descent and Adam) with different learning rates. The model's evaluation is conducted using the Brier Score as the prediction error function, and standard probabilistic verification metrics are employed to assess the quality of probabilistic forecasts. The results demonstrate a significant enhancement in various measures of prediction skill achieved by the CNN architecture, leading to improved probabilistic predictions when compared to the logistic PCA approach [31].

Accurately estimating hail size remains crucial for evaluating the potential damage caused by hail storms. To address this challenge, a model consisting of two main components has been proposed: a PCA-based technique selects 18 features that strongly correlate with hail size, while a BPNN regression model with a two-layer architecture and 35 hidden-layer neurons is employed to estimate the size of hailstones from satellite images. Using an MSE loss function, the BPNN regression model achieves an R-squared value of 0.52 through linear fitting when assessing the correspondence between predicted and observed Maximum Hail Diameters on the test set [32].
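This two-stage hail-size estimator can be approximated in a few lines; in the sketch below the paper's PCA-based feature selection is stood in for by an 18-component PCA transform, the hidden activation is assumed, and random arrays replace the satellite-derived features.

```python
import numpy as np
from sklearn.decomposition import PCA
import tensorflow as tf
from tensorflow.keras import layers

# Synthetic stand-in data: rows = satellite-derived feature vectors.
X = np.random.rand(1000, 60).astype("float32")
y = np.random.rand(1000, 1).astype("float32")   # max hail diameter

# Step 1: reduce to 18 PCA components, approximating the paper's
# PCA-based selection of 18 hail-correlated features.
X18 = PCA(n_components=18).fit_transform(X)

# Step 2: two-layer BPNN with 35 hidden neurons and an MSE loss.
model = tf.keras.Sequential([
    layers.Input(shape=(18,)),
    layers.Dense(35, activation="tanh"),   # hidden activation assumed
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X18, y, epochs=5, batch_size=32, verbose=0)
```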
#### IV-B3 Snow

Understanding snowfall patterns is critical for many industries, including transportation, agriculture, and disaster planning, enabling more effective resource management and risk mitigation. However, accurately differentiating between snow and clouds is an intricate task, since the two appear similarly white in satellite imagery, which makes precise discrimination all the more important. To accurately identify cloud and snow in high-resolution remote sensing images, a study introduced the DeepLab v3+ neural network with a CRF model. The research utilized data from the China Gaofen-1 (GF-1) satellite's Wide Field View (WFV) sensor, comprising four bands and a spatial resolution of 16 m, encompassing a total of ten images spanning three years. DeepLab v3+ adopts an encoder-decoder architecture and employs the Adam optimizer with a learning rate of 0.001, a batch size of 5, and 200 epochs. The study analyzes accuracy variations resulting from distinct loss functions, including the Cross Entropy (CE) loss, Dice loss, and Focal loss. Evaluation metrics encompass Mean Intersection over Union (MIoU) and Mean Pixel Accuracy (MPA). This methodology effectively mitigates misclassification issues, enhancing cloud and snow identification precision through refined boundary delineation and reduced isolated patches [33].

An end-to-end fully-convolutional network with a multiscale prediction approach is proposed to differentiate cloud and snow using a dataset of 50 high-resolution Gaofen satellite images (13400x12000 pixels each), meticulously labeled for cloud and snow regions. The network adopts the VGG architecture with stride reduction and atrous convolution techniques. Due to the frequent co-occurrence of snow and cloud in images, a pixel-level approach is employed, involving the replacement of the last two fully connected layers in the VGG model with two convolutional layers. The final layer employs a three-class softmax loss for classifying snow, cloud, and other land types, using batch normalization and rectified linear units. The Multiscale Prediction Module merges feature maps from diverse intermediate layers, functioning as an ensemble learning approach. This allows the simultaneous utilization of low-level spatial information and high-level semantic information, enabling accurate differentiation between cloud and snow [34].

A deep learning-based method is developed utilizing the UNet3+ network with ResNet50 and the Convolutional Block Attention Module (CBAM) to accurately detect cloud and snow in remote sensing images, effectively eliminating interference information. The feature extraction process of UNet3+ includes five encoders with effective convolutional and pooling layers. In an enhanced version, multiple convolutional layers, regularization, ReLU activation, and residual modules are added to each of the five encoders, producing feature maps. The decoders consist of convolution and activation layers. To address bias, a weighted cross-entropy loss is employed, emphasizing cloud and snow regions. For enhanced focus and deeper feature extraction, CBAM is incorporated into ResNet50; it harnesses channel attention through global pooling, multi-layer perceptron processing, and sigmoid activation to generate attention feature maps. The model's performance is assessed using various metrics, including Mean Intersection over Union (mIoU), Mean Pixel Accuracy (mPA), Mean Precision (mPrecision), and Estimated Total Size. The approach successfully mitigates interference, resulting in accurate cloud and snow extraction from diverse landforms within remote sensing images [35].

The effectiveness of U-Net based deep learning models in delineating glacier boundaries and identifying snow/ice is demonstrated in a study that developed ENVINet5 and ENVINet-Multi deep learning classifiers to analyze Landsat-8 satellite data over the Bara Shigri glacier region in Himachal Pradesh, India. The ENVINet5 architecture, based on a mask-based encoder-decoder U-Net model, is employed for single-class categorization, while ENVINet-Multi is used for multi-class classification of features like snow, ice, and barren areas. The ENVINet5 architecture comprises five levels with twenty-three convolutional layers, incorporating input patches, feature maps, various convolutions, feature fusion, max-pooling, co-convolution, and 1x1 convolutions. For ENVINet-Multi, training parameters include 25 epochs, a patch sampling rate of 16, a class weight of 2.5, a loss weight of 0.5, 200 patches per epoch, a 464x464 pixel patch size, and 2 patches per batch [36].
The performance of U-Net, RF, and Sen2Cor models for snow coverage mapping is compared in a study using Sentinel-2 satellite multispectral images across 40 diverse sites spanning all continents except Antarctica. A Random Forest model is built using Bayesian hyperparameter optimization for improved performance. The Sentinel-2 Level-2A product incorporates cloud and snow confidence masks derived from Sen2Cor, which employs threshold tests on spectral bands, ratios, and indices like NDVI and NDSI. The U-Net architecture features an encoding path with repeated 3x3 convolutions, batch normalization, and ReLU activation, followed by 2x2 max pooling for downsampling. The decoding path utilizes transpose convolutions for upsampling, concatenating with the corresponding encoding path features and applying 3x3 convolutions with BN and ReLU. The final layer comprises a 1x1 convolution for class prediction. Training involves a weighted cross-entropy loss and stochastic gradient descent with a learning rate of 0.01 and momentum of 0.9, resulting in effective semantic segmentation. The model's performance is assessed using precision, recall, F1 score, Intersection over Union (IoU), and accuracy metrics. The results demonstrate that the U-Net models exhibit superior performance compared to RF and Sen2Cor in accurately mapping snow coverage [37].
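A tf.keras sketch of this U-Net pattern is given below; the band count (13 Sentinel-2 bands), filter counts, and two-class output are assumptions, and the paper's class weighting of the cross-entropy loss is omitted for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers

def double_conv(x, filters):
    # Repeated 3x3 convolution + batch norm + ReLU, per the U-Net above.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x

def build_unet(input_shape=(128, 128, 13), n_classes=2):
    inp = tf.keras.Input(shape=input_shape)
    skips, x = [], inp
    for f in (32, 64, 128):                  # encoding path
        x = double_conv(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)        # 2x2 max pooling
    x = double_conv(x, 256)
    for f in (128, 64, 32):                  # decoding path
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips.pop()])
        x = double_conv(x, f)
    out = layers.Conv2D(n_classes, 1, activation="softmax")(x)  # 1x1 conv
    model = tf.keras.Model(inp, out)
    # SGD with lr 0.01 and momentum 0.9, as reported; unweighted
    # cross-entropy stands in for the paper's weighted version.
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01,
                                                    momentum=0.9),
                  loss="sparse_categorical_crossentropy")
    return model
```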
An open-source machine learning-based system for snow mapping, AutoSMILE, was developed to automate the process using image processing, machine learning, deep learning, and visual inspection. It was applied in a mountainous area in the northern Tibetan Plateau using RF and CNN algorithms, achieving accurate snow cover mapping. The CNN architecture comprises four kinds of layers: convolutional layers for feature extraction, activation layers like ReLU to expedite training, pooling layers for non-linear downsampling, and fundamental components like fully connected and flatten layers. For model evaluation, key metrics include producer's accuracy (PA), user's accuracy (UA), intersection over union (IoU), and overall accuracy (OA) [38].

A deep learning approach is introduced to downscale snow depth retrieval across an alpine region by integrating satellite remote-sensing data with diverse spatial scales and characteristics. The study focuses on collaborative snow parameter retrieval in Northern Xinjiang, China, utilizing MODIS and MWRI data. A three-hidden-layer neural network is designed with 20, 20, and 10 neurons in its hidden layers. The network processes resampled BTD, topographic, and meteorological data at a 500 m resolution, utilizing a sigmoid function to capture nonlinear patterns. Backpropagation, guided by MSE and SGD with a learning rate of 0.001, adjusts the weights and biases to boost the precision of snow depth observations through deep neural network downscaling. Reference data from ground-station snow depth measurements are employed to evaluate the downscaling model's performance and retrieval accuracy, with assessment metrics encompassing R2, RMSE, PME, NME, MAE, and BIAS [39].

A deep learning model is presented for 'area-to-point' snow depth estimation, which integrates AMSR2 TB, MODIS, and NDSI data, achieving high accuracy with a spatial resolution of 0.005\({}^{\circ}\). The model utilizes a CNN and residual blocks to capture spatial heterogeneity and leverage high-resolution snow information from MODIS. The CNN comprises convolutional and ReLU activation layers, pooling for downsampling, measures to prevent overfitting, and a concluding fully connected layer for output. The proposed deep residual network takes a 35x35 input patch and applies convolutions with batch normalization and max pooling, followed by 4 residual blocks for feature extraction. After adaptive average pooling and fully connected layers, it predicts the snow depth at the patch center. In total it has 9 convolutions and 4 fully connected layers, using ReLU activation except for the linear output layer. The model is trained for 50 epochs, using a learning rate of 0.0001 with an exponential decay of 0.5 every 20 epochs and a batch size of 32, while employing stochastic gradient descent (SGD) as the optimizer. The evaluation metrics in this study include RMSE, MAE, MBE, and R2. The results demonstrate that by incorporating spatial heterogeneity and leveraging high-resolution MODIS snow cover data, the proposed model achieves promising accuracy in snow depth estimation, with potential applicability to other regions [40].

A novel inverse method is presented for extracting snow layer thickness and temperature from passive microwave remote sensing data. Utilizing convolutional, pooling, and fully-connected layers, the study employs a ConvNet to inversely estimate the thickness and temperature of a snowpack from its corresponding vertical- and horizontal-polarization brightness temperatures. The model uses the Adam optimizer with a learning rate of 0.01 to optimize a half mean squared error loss function, and L2 regularization is employed to enhance prediction accuracy by mitigating over-fitting. Furthermore, a comparative analysis is conducted between the ConvNet outcomes and those of a conventional ANN and SVM. The model assessment is carried out using RMSE and R2 metrics, underscoring the effectiveness of the ConvNet approach. The ANN architecture used for comparison comprises three layers - input, hidden, and output - with 20 hidden-layer units utilizing hyperbolic tangent basis functions [41].

A study on predicting snow water equivalent (SWE) in a semi-arid region of Iran was conducted using regression, ANN, and adaptive neuro-fuzzy inference system (ANFIS) models. The study proposes a three-layer ANN alongside ANFIS, which integrates an ANN with fuzzy logic, utilizing a five-layer structure based on the Sugeno model featuring two fuzzy if-then rules. In the ANFIS architecture, a hyperbolic tangent activation function is utilized in the hidden layer, and optimal neuron counts are ascertained for both hidden and input layers through iterative refinement. The handling of numerous independent input variables in the ANFIS approach is accomplished using backpropagation training and a sub-clustering method. The assessment of the ANN, ANFIS, and regression models involves statistical metrics such as MBE, MAE, RMSE, the correlation coefficient, relative error percentage, and the Nash-Sutcliffe coefficient of efficiency. The results demonstrate the superior performance of both the ANN and ANFIS models compared to the regression method, with ANN and ANFIS exhibiting similar prediction accuracy for SWE [42].

In another study, researchers focused on estimating SWE in the Samsami basin of Iran using MNLR, NNGA, and ANN architectures. The study aims to estimate snow water equivalent, a critical component of water resources in mountainous areas, based on climatic and topographic parameters such as elevation, slope, aspect, longitude, and latitude. The MNLR architecture models the complex non-linear relationship between SWE and a set of independent parameters. Moreover, four different ANN architectures are investigated: MLP for supervised prediction with input, hidden, and output layers; GFF for efficient problem-solving through multi-layer connections; RBF for rapid learning via self-organizing hidden layers; and MNN, a specialized MLP with parallel sub-modules for specialized function and faster training. The NNGA model utilizes genetic algorithms to optimize neural network parameters, enhancing accuracy by iteratively refining weights through selection, crossover, and mutation. Additionally, six diverse learning algorithms are examined for training the neural network components of the NNGA model: Levenberg-Marquardt for adaptive MSE minimization, Delta-Bar-Delta for efficient step size adaptation, Step for gradient descent with step size adjustment, Momentum for inertia-infused gradient descent, Conjugate Gradient for second-order optimization, and Quickprop for error-surface-curvature-based weight adjustments. The study evaluates three activation functions: Sigmoid, Tanh, and Linear, and assesses all models using standard statistical criteria, including the correlation coefficient, RMSE, the ratio of average estimated to observed values, and MAE. The NNGA model, specifically the NNGA5 variant, proves to be the most effective approach, offering valuable insights for water resource management in mountainous regions [43].
### _Deep Learning in Drought_

Drought refers to an extended period of unusually low precipitation within the natural climate cycle, caused by a deficiency in rainfall. It can have far-reaching impacts, including water shortages, crop failure, livestock losses, increased wildfire risk, and ecosystem degradation. Drought can be categorized into four types: meteorological, hydrological, agricultural, and socio-economic [44][45], with each type influenced by critical climatic factors such as increased evaporation, transpiration, and insufficient precipitation. In this review, we consider only meteorological drought. Meteorological drought forecasting is a complex process that involves analyzing various climatic and environmental factors to anticipate future drought conditions. Statistical models and drought indices, such as the Standardized Precipitation Index (SPI) or the Palmer Drought Severity Index (PDSI) [46], are utilized to quantify drought severity and monitor changes over time. These methodologies contribute to a comprehensive understanding of drought dynamics and aid in the formulation of effective drought management and adaptation measures. However, they often face challenges in capturing the complexities of drought dynamics.
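Since most of the studies below forecast the SPI, it is worth making the index itself concrete. A common recipe, sketched here under simplifying assumptions (a single gamma fit over the whole record, with no separate handling of zero-precipitation months or per-calendar-month fitting), aggregates precipitation over the chosen time scale, fits a gamma distribution, and maps the fitted CDF through the standard-normal quantile function.

```python
import numpy as np
from scipy import stats

def spi(precip, window=3):
    """Standardized Precipitation Index: aggregate precipitation over
    `window` months, fit a gamma distribution, and transform the fitted
    CDF through the standard-normal quantile function."""
    p = np.convolve(precip, np.ones(window), mode="valid")  # running sums
    a, loc, scale = stats.gamma.fit(p, floc=0)              # gamma fit
    cdf = stats.gamma.cdf(p, a, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)                              # SPI values

rain = np.random.gamma(2.0, 40.0, size=240)   # 20 years of monthly totals
print(spi(rain, window=3)[:5])
```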
The first study utilized ANFIS to forecast SPI-based drought indices using rainfall data from 10 stations in Central Anatolia, Turkey. The ANFIS architecture consists of five layers: input, rule, average, consequent, and output nodes. The models, named SPI-1, SPI-3, SPI-6, SPI-9, and SPI-12 for the different time scales, are designed to capture diverse drought patterns by integrating SPI and precipitation data, addressing short-term, seasonal, and long-term variations. For each phase, a total of 20 distinct models with varied input combinations are developed. The evaluation includes performance metrics such as Root Mean Square Error (RMSE), Efficiency (E), and Correlation (CORR), and comparisons are made against Feed Forward Neural Network (FFNN) and Multiple Linear Regression (MLR) models. Significantly, the ANFIS models exhibit exceptional performance, showcasing a notable ability to accurately identify dry and wet periods, particularly across extended time scales [47].

In a different study, the focus is on long-term SPI drought forecasting (6- and 12-month lead times) in the Awash River Basin, Ethiopia. The study compares the efficacy of five data-driven models: ARIMA, ANN, SVR, WA-ANN, and WA-SVR. The ARIMA model involves three essential steps: model identification, parameterization, and validation. Significant lags are determined using ACF and PACF, guiding the selection of accurate and precise parameters. The ANN model employs an MLP structure with input, hidden, and output layers, trained using the Levenberg-Marquardt (LM) backpropagation algorithm. Lagged SPI values are utilized as input, with the optimal number of input-layer neurons determined via trial and error, while hidden-layer neurons are selected using empirical methods. The SVR model employs a non-linear RBF kernel. The wavelet decomposition process encompasses CWT for time-frequency representation and DWT for efficient computation; the transformed time series serves as input for both the ANN and SVR models. The performance of the models is evaluated using RMSE, MAE, R2, and persistence. It is found that the WA-ANN model outperforms the other models for forecasting SPI values over lead times of 6 and 12 months [48].

In another study, WA-ANN models likewise outperform alternative approaches, providing the most accurate forecasts for SPI 3 and SPI 6 values over lead times of 1 and 3 months. This underscores the effectiveness of the WA-ANN architecture, characterized by 3 to 5 neurons in the input layer, 4 to 7 neurons in the hidden layer, and a single neuron in the output layer, contributing to improved short-term drought forecasting [49].
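The WA-ANN variants share a common preprocessing step: decompose the SPI series with a discrete wavelet transform and feed the resulting subseries coefficients to a small network. A sketch using the PyWavelets package is given below; the db4 mother wavelet and level-3 decomposition match choices reported later in this subsection, while the network sizes and the use of a single flattened coefficient vector (rather than per-window features) are simplifying assumptions.

```python
import numpy as np
import pywt
import tensorflow as tf
from tensorflow.keras import layers

spi_series = np.random.randn(360)        # stand-in SPI time series

# DWT preprocessing as used by the WA-ANN models: decompose the SPI
# series and use the coefficients of each subseries as predictors.
coeffs = pywt.wavedec(spi_series, "db4", level=3)  # [cA3, cD3, cD2, cD1]
features = np.concatenate(coeffs)                  # flatten into one vector

# Small MLP forecaster over the wavelet features; sizes are assumptions.
model = tf.keras.Sequential([
    layers.Input(shape=(features.size,)),
    layers.Dense(16, activation="tanh"),
    layers.Dense(1),                               # SPI at the lead time
])
model.compile(optimizer="adam", loss="mse")
```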
A comparative approach is employed to evaluate the performance of various forecasting models for drought using the SPI as the indicator. Three primary model families are investigated: ANN, WANN, and traditional stochastic models, namely ARIMA and seasonal ARIMA (SARIMA). The focus is on the SPI-3, SPI-6, and SPI-12 time scales, and the impact of wavelet preprocessing on model accuracy is explored for the Algerois Basin in North Algeria. ARIMA and SARIMA models offer an empirical framework for modeling and predicting complex hydrologic systems, with nonseasonal ARIMA addressing stationary data and seasonal ARIMA handling nonstationarity through AR and MA operators and differentiation parameters. The ANN-MLP implementation involves a network structure comprising interlinked input, hidden, and output layers. The WA-ANN model utilizes discrete wavelet (DW) inputs derived from the original SPI time series and corresponding un-decomposed SPI outputs, with a focus on assessing the impact of various mother wavelets to enhance model efficiency. Model performance is assessed using the Nash-Sutcliffe model efficiency coefficient (NSE), RMSE, and MAE as evaluation metrics. The results demonstrate that the WANN model outperforms the ANN model for SPI-3 forecasts over up to six months, while the SARIMA model shows satisfactory results for SPI-12 forecasts with a one-month lead time. However, all models experience reduced accuracy as lead times increase [50].

In another study, three data-driven models, namely ARIMA, ANN, and WANN, are employed for drought forecasting based on the SPI at two time scales (SPI-6 and SPI-12) in the north of the Haihe River Basin. The effectiveness of the models is assessed using statistical tests such as the Kolmogorov-Smirnov (K-S) test, Kendall rank correlation, and correlation coefficients (R2). ARIMA models with varying parameter combinations are employed, selecting the model that minimizes the K-S distance and maximizes the Kendall rank correlation; the chosen model's efficacy is validated through ACF and PACF plots. The ANN model underestimates certain instances of extreme drought or extreme precipitation, where the observed SPI values correspond to such extreme conditions. The WANN model outperforms the other models, exhibiting higher correlation, lower K-S distance, and enhanced Kendall rank correlation. The comparison shows that the WANN model is the most suitable and effective for forecasting SPI-6 and SPI-12 values in the study area [51].

Furthermore, a hybrid predictive model is presented, combining EMD with a DBN for drought forecasting using the SSI across the Colorado River basin. The DBN is constructed by stacking multiple RBMs on top of each other, and the RBMs are trained using the contrastive divergence algorithm. EMD is utilized to decompose the data into IMFs with varying frequencies. Some IMFs are found to contain noise or irrelevant information, so a denoising technique is proposed involving Detrended Fluctuation Analysis (DFA) scaling exponents: a threshold (Hurst exponent 0.5) is applied to identify noisy IMFs, which are subsequently eliminated, and the relevant IMFs are aggregated for reconstruction. The DBN model, along with other models (MLP, SVR, EMD-MLP, EMD-SVR), is used to predict SSI-12 with lead times of one and two months. The evaluation metrics include RMSE, MAE, and NSE, and the EMD-DBN model outperforms all other models in the two-step-ahead prediction [52].

Another study focuses on drought forecasting with lead times of 1 month and 6 months for the Gulbarga district in Karnataka, India, using the SPI as the drought-quantifying parameter. WPT is employed to preprocess the SPI time series, generating decomposed coefficients used as inputs for ANN and SVR models. The SPI time series forecasting utilizes a BPNN with a 3-4-1 architecture. The network's weights and biases are determined using the gradient descent optimization algorithm, incorporating an adaptive learning rate of 0.45, a momentum rate of 0.15, and 5000 learning cycles. The SVR utilizes a loss function based on Vapnik's \(\epsilon\)-insensitive approach and incorporates the Gaussian radial basis function (RBF) kernel. This approach creates hybrid WP-ANN and WP-SVR models for drought forecasting, with the Daubechies 4 (db4) wavelet chosen as the mother wavelet. It is observed that the hybrid WP-ANN model performs better than the standalone approaches, with the forecast accuracy decreasing as the lead time increases [53].

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Task & Approach & Ref \\
\hline
SPI Forecast & ANFIS, FFNN, MLR & [47] \\
 & ARIMA, ANN, SVR, WA-ANN, WA-SVR & [48] \\
 & ANN, SVR, WANN & [49] \\
 & ANN, WANN, ARIMA, SARIMA & [50] \\
 & ARIMA, ANN, WANN & [51] \\
 & EMD-DBN & [52] \\
 & WP-ANN, WP-SVR & [53] \\
SWSI and SIAP Forecast & ANN & [54] \\
SPEI Forecast & ANFIS, hybrid WT-ARIMA-ANN & [55] \\
\hline \hline
\end{tabular}
\end{table}
TABLE III: Deep Learning in Drought

In another study, an ANN model is used to forecast drought indices, including the Standardised Water Storage Index (SWSI) and the Standard Index of Annual Precipitation (SIAP). The dataset encompasses rainfall and water level information originating from the Langat River Basin in Malaysia, covering a time span of three decades (1986-2016). A feed-forward multilayer perceptron (MLP) structure is employed, comprising input, hidden, and output layers. This architecture is trained using the Levenberg-Marquardt (LM) back-propagation algorithm for both the traditional ANN models and the wavelet-based artificial neural network (W-ANN) models. In the W-ANN approach, the discrete wavelet transform (DWT) is applied to the input data, yielding subseries components.
Subsequently, pertinent components are chosen from these subseries and integrated into the MLP to enhance forecasting accuracy. The outcomes demonstrate that the W-ANN model showcases improved performance, achieving heightened correlation coefficients [54].

Another study employs two hybrid models, namely Wavelet-ARIMA-ANN (WAANN) and Wavelet-Adaptive Neuro-Fuzzy Inference System (WANFIS), to predict the Standardized Precipitation Evapotranspiration Index (SPEI) at the Langat River Basin for different time scales (1 month, 3 months, and 6 months). The input data are subjected to wavelet decomposition at level three, and the resulting components are employed as inputs for both the ANN and ANFIS models. The ANN models are constructed using Bayesian regularization backpropagation with a total of 1000 training epochs, and the optimal number of hidden neurons is identified through trial and error. The WANFIS involves normalizing the decomposed historical SPEI series as input for a Sugeno-type Fuzzy Inference System (FIS), chosen for its computational efficiency and compatibility with optimization techniques, followed by applying the ANFIS algorithm with determined training parameters to enhance model performance. It is found that the hybrid WT-ARIMA-ANN technique outperforms the other models, providing better forecasts for both short-term and mid-term drought indices (SPEI-1, SPEI-3, and SPEI-6) [55].

### _Deep Learning in Heatwave and Cold waves_

Heatwaves are extreme weather events characterized by prolonged periods of excessively hot weather [56], often accompanied by high humidity. During a heatwave, temperatures rise significantly above the average for a particular location and persist for an extended period, typically several days. Conversely, a cold wave is a meteorological phenomenon marked by a sudden and significant decrease in air temperature at the Earth's surface, resulting in extremely low temperatures that can give rise to hazardous weather conditions, including frost and ice formation. Both events can have profound impacts on human health [12, 57], particularly in vulnerable populations such as the elderly, children, and individuals with pre-existing health conditions, as well as on infrastructure, agriculture, and ecosystems, and can even result in mortality for human beings and livestock. Additionally, heatwaves strain power grids due to increased air conditioning use, resulting in power outages, and can cause crop failures, wildfires, and damage to infrastructure like roads, bridges, railways, and airports [11]. The World Meteorological Organization's (WMO) annual report for 2023 highlighted the unprecedented heatwaves experienced in Europe during the summer, exacerbated by abnormally dry conditions. Tragically, these extreme heat events resulted in over 15,000 excess deaths across several countries, including Spain, Germany, the United Kingdom, France, and Portugal. These alarming findings underscore the pressing need for urgent and effective heatwave mitigation strategies and adaptive measures to safeguard vulnerable populations in the face of escalating climate challenges [58]. Monitoring and predicting heatwaves and cold waves are crucial for preparedness [59] and for mitigating potential risks, such as implementing appropriate measures to protect vulnerable populations and ensuring the efficient functioning of critical systems during extreme cold episodes.
Both events can be predicted using a range of approaches, including statistical models [60], dynamical models such as GCMs [61][62] and RCMs [63][64][65], and machine learning techniques that analyze extensive datasets to identify patterns associated with heatwave occurrences. Deep learning can be utilized to extract features from various meteorological variables, such as temperature, humidity, wind patterns, and atmospheric pressure, to forecast the likelihood, frequency, duration, and intensity of both heatwaves and cold waves.

One study focuses on heatwave monitoring and prediction in northern India, employing index-based monitoring and an LSTM-based prediction model for forecasts up to 5-6 days ahead. The study employs IMD daily mean gridded surface temperature data (1951-2020) and the NCMRWF-IMDAA reanalysis dataset for humidity and wind data (1979-2020). The objective is to develop an operational framework that can monitor, track, and predict heatwaves in real time over the Indian region, utilizing a combination of temperature indices, synoptic information, and an LSTM-based prediction model. The model's performance is evaluated using the correlation coefficient and root mean square error (RMSE). The results demonstrate that the proposed approach offers a promising way to enhance heatwave preparedness and response strategies [66].

In another study, a GNN model is developed to predict regional summer heatwaves in North America. By utilizing daily weather data from 91 stations across CONUS and analyzing key meteorological variables, the model reduces computational burdens for immediate heatwave warnings and facilitates fast decision-making. The model utilizes an encoder-processor-decoder architecture for binary classification of heatwave events. Each node within the graph corresponds to a weather station, while the model employs a GAL - a form of nonlinear graph convolution. The GAL dynamically adjusts the adjacency matrix based on node features via attention mechanisms, thereby enhancing its expressiveness, and computes specialized attention weights (AW) to capture interactions between nodes, encompassing influences from neighbors, to neighbors, and historical data. Moreover, the model's training incorporates a soft F1-score metric, effectively combining recall and precision to mitigate bias and maximize the F1-score. As a result, the GNN model achieves an impressive 90% accuracy in predicting regional heatwave occurrences [67].

Another study employs a ConvNet and a CapsNet to predict heatwaves and cold waves, forecasting the occurrence and region of extreme temperature patterns in North America. The study employs daily data from the Large-Ensemble Community Project (LENS) for surface air temperature (T2m) and geopotential height at 500 mb (Z500) during boreal summer and winter months from 1920 to 2005. The ConvNet architecture comprises 4 convolutional layers with ReLU activation, where the last two layers are followed by max-pooling (2x2, stride 1). The output feeds into a fully connected neural network with 200 neurons, featuring dropout regularization and L2 regularization to prevent overfitting. An adaptive learning rate is implemented through the ADAM optimizer, while a softmax layer assigns patterns to cluster indices based on the highest probability.
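Before turning to the CapsNet branch, a tf.keras sketch of the ConvNet just described is given below; the input grid size, filter counts, and number of cluster classes are assumptions, while the layer arrangement, 200-neuron dense head, dropout, L2 regularization, Adam optimizer, and softmax output follow the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_convnet(input_shape=(41, 83, 1), n_clusters=4):
    # Four ReLU conv layers; the last two are each followed by
    # 2x2 max pooling with stride 1, per the description above.
    model = tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=1),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=1),
        layers.Flatten(),
        layers.Dense(200, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4)),
        layers.Dropout(0.5),
        layers.Dense(n_clusters, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```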
The companion CapsNet, on the other hand, includes two convolutional layers with ReLU activation, followed by a primary capsule layer (eight capsules with eight convolution layers), utilizing the routing-by-agreement algorithm to convey information to a secondary capsule layer for cluster probability prediction. The squash function introduces nonlinearity, and a decoding layer with three fully connected layers aids pattern reconstruction. The framework's performance is evaluated using accuracy and recall metrics and compared against a CNN and logistic regression. The CapsNet-based framework achieves notable accuracy and recall in predicting extreme temperature events based on Z500 patterns [68].

The challenge of long-term air temperature prediction in summer using AI techniques is addressed in another study. ECMWF's ERA5 reanalysis data spanning 1950 to 2021 for Paris (France) and Cordoba (Spain) is employed, incorporating nine vital meteorological variables, including 2 m air temperature, sea surface temperature, wind components (10 m and 100 m), mean sea level pressure, soil water layer, and geopotential pressure. For each region, two experiments are carried out: one for shorter-term prediction and another for prolonged prediction time-horizons, the latter possibly indicating a heatwave or cold wave occurrence. The research explores a diverse array of nine modeling approaches, encompassing Linear Regression (LR), Lasso Regression (Lasso), Polynomial Regression (Poly), AdaBoost, Decision Trees (DT), Random Forest (RF), Convolutional Neural Network (CNN), CNN with Recurrence Plots (RP+CNN), and RP+CNN with binary fusion (RP+CNN+BIN). Each method is assessed using MSE, MAE, Pearson and Spearman rank correlation coefficients, and optimal predictor-variable subsets from an exhaustive search. The CNN combined with RP approaches accurately detects maximum temperatures, indicating heatwaves, outperforming the classical CNN and other machine learning techniques [69].

In another study, the association between surface air temperature (SAT) and land surface temperature (LST) is investigated, considering land use, during heat and cold wave events. The authors employ an LSTM with a memory block containing forget, input, and output gates. These gates utilize sigmoid layers and pointwise multiplication to govern the flow of data across the cell and neural networks, effectively managing data dynamics. The study uses Terra and Aqua MODIS daytime and nighttime LST data, along with observed air temperature data obtained from 79 weather stations under the Korea Meteorological Administration spanning the years 2008 to 2018. The performance of the model is assessed using metrics such as R-squared, Root Mean Square Error (RMSE), and the Index of Agreement (IoA) [70].

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Task & Approach & Ref \\
\hline
Heatwave Forecast & LSTM & [66] \\
 & GNN & [67] \\
Heatwave and Cold wave Forecast & ConvNet, CapsNet & [68] \\
SAT Forecast & CNN, CNN-RP, CNN-RP-BIN & [69] \\
SAT and LST Forecast & LSTM & [70] \\
\hline \hline
\end{tabular}
\end{table}
TABLE IV: Deep Learning in Heatwaves and Cold waves

### _Deep Learning in Tropical Cyclone_

Tropical cyclones (TCs) are low-pressure weather systems that form over warm tropical oceans between latitudes 23.5 degrees North and South, except in the South Atlantic Ocean region [71].

#### IV-E1 Frequency and Identification

A study focuses on predicting TC frequency during the post-monsoon season based on large-scale climate variables such as geopotential height, relative humidity, sea-level pressure, and zonal wind. Three types of artificial neural networks, namely MLP, RBF, and GRNN, are employed to develop prediction models. The research methodology involves selecting significant predictors using correlation analysis and utilizing historical TC frequency data from 1971 to 2013.
The models are trained with data from 1971 to 2002 and evaluated with independent data from 2003 to 2013. The MLP architecture consists of two hidden layers, with five nodes in the first layer and three nodes in the second, plus an output layer. The RBF network employs radial basis functions with an optimized spread parameter of 0.6, while the GRNN employs a parallel structure with a spread factor of 0.2. Results demonstrate that the MLP model outperforms the RBF and GRNN models across various evaluation metrics, showing lower RMSE, higher correlation, and better agreement between predicted and observed TC counts [72].

Furthermore, a multistaged deep learning framework is proposed, incorporating a Mask R-CNN detector, a wind speed filter, and a CNN-based classifier to detect TCs. The Mask R-CNN detector with the R50 FPN model predicts TC locations, trained on RGB satellite images, and generates predictions with class labels, scores, segmentation masks, and bounding box coordinates. A wind speed filter is applied to reduce false positives using a threshold of 34 kt or higher. Cropped images based on bounding box coordinates from the detector are fed to a DenseNet169 CNN classifier to differentiate between true TCs and non-TCs. This methodology is optimized using Bayesian optimization techniques. The study uses Meteosat Visible Infra-Red Imager (MVIRI) IR satellite images from Meteosat 5 and Meteosat 7 in the Indian Ocean Data Coverage (IODC) region. The model is tested on a dataset of 171 images, including 88 TCs, indicating promising performance [73].

#### IV-E2 Genesis Forecast

Traditionally, meteorologists have relied on various physical models and statistical techniques to predict tropical cyclone (TC) genesis. However, these physical models often involve limitations and simplifications, which can affect their accuracy in capturing the complex interactions and dynamics involved in TC genesis. Statistical models, on the other hand, have been used to analyze historical data and identify patterns and relationships between different meteorological variables and TC genesis. While statistical models can provide valuable insights and correlations, they may struggle to capture the nonlinear and complex relationships present in the data. Short-term tropical cyclogenesis forecasting plays a critical role in predicting the formation and development of TCs within a relatively short time frame. A study investigated the detectability of TCs and their precursors using a CNN model across different basins, seasons, and lead times. The CNN architecture consists of four convolutional layers, three pooling layers, three fully connected layers, and, finally, an output layer with two units for binary classification. The Adam optimizer updates the network parameters to minimize a binary cross-entropy loss function. In the western North Pacific, the CNN successfully detects TCs and their precursors during the period of July to November, achieving a high POD ranging from 79.0% to 89.1%, along with a relatively low FAR ranging from 32.8% to 53.4%.
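A minimal version of this binary TC/pre-TC classifier can be written directly from the stated layer counts; the filter sizes, dense widths, and input patch geometry are assumptions, and a two-unit softmax with categorical cross-entropy is used, which is equivalent to the described binary cross-entropy for one-hot labels.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_tc_detector(input_shape=(64, 64, 3)):
    # Four conv layers, three pooling layers, three fully connected
    # layers, and a two-unit output, per the description above.
    model = tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(2, activation="softmax"),   # TC / non-TC
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```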
Notably, the CNN exhibits impressive performance in detecting precursors, with detection rates of 91.2%, 77.8%, and 74.8% for precursors occurring 2, 5, and 7 days before formation, respectively. This method displays promise for studying tropical cyclogenesis and exhibits robust performance even in regions with limited training data and short TC lifetimes. However, the detection of TCs and their precursors is found to be limited in cases where cloud cover is extremely small (<30%) or extremely large (>95%). Considering developing TCs and precursors as one category potentially affects the ability to detect pre-TCs. Additionally, model-specific biases are identified because the CNN is trained solely on the Nonhydrostatic Icosahedral Atmospheric Model (NICAM) dataset. Notably, the detection performance in the North Atlantic is relatively lower, which could be attributed to the scarcity of training data and the shorter lifetimes of TCs in that particular region [74].

In the realm of long-term cyclogenesis forecasting, various approaches have been employed to improve the accuracy of predictions and provide insights into the behavior of TCs over an extended period. Taking a distinctive route, a study combines SOMs and FFNNs to investigate changes in TCs' genesis potential index (GPI) and its contributing factors for a global climate model. This study introduces a comprehensive methodology employing two types of artificial neural networks to project changes in North Atlantic tropical cyclone genesis potential under the warming climate of the twenty-first century. Through the SOMs, archetypal patterns of GPI-related environmental variables are captured and arranged on a two-dimensional grid that retains the data topology. Concurrently, the FFNNs identify the relative importance of these variables in driving the projected GP changes: the SOMs' training ensures the preservation of data relationships, while the FFNNs calculate variable relevance for GP outcomes. The FFNNs' training involves conveying input signals to hidden-layer nodes and generating output via sigmoid functions, with the neural network framework NEVPROP4 employed for the implementation. This dual-network approach yields significant insights into the intricate trends of TC genesis potential as they respond to evolving environmental conditions [75].

#### IV-E3 Track Forecast

The accurate prediction of TC tracks is crucial for effective disaster preparedness and response. In recent years, deep learning techniques have emerged as promising tools for improving TC track forecasting. In one study, researchers utilized the neural oscillatory elastic graph matching (NOEGM) technique for tropical cyclone pattern identification, and a hybrid radial basis function (HRBF) network integrated with a time difference and structural learning (TDSL) algorithm for TC track prediction. The HRBF network employs three layers, with past network outputs introduced through time-delay units and influenced by a decay factor. The evaluation encompassed 120 TC cases spanning 1985 to 1998. The NOEGM model achieved noteworthy results, with a 98% correct segmentation rate and a 97% correct classification rate for TC pattern recognition, while the HRBF model showed over 86% accuracy in TC track and intensity mining. In comparison with prevailing TC prediction models, the proposed approach demonstrated substantial enhancements, reducing forecast errors by more than 30% and achieving a remarkable 14% improvement in 48-hour track forecast accuracy [76].
Utilizing an extensive dataset spanning 32 years of cyclone best-track analysis, researchers constructed an ANN model to forecast TC positions 24 hours in advance. Notably, this model incorporates inputs from the two most recent 6-hourly positions, along with the present latitude and longitude, while predicting positions for a 24-hour lead time. Through a systematic exploration, a range of both linear and nonlinear transfer functions, such as radial basis functions and linear least-squares optimization, are evaluated, and different configurations of hidden layers and neurons are experimented with to optimize performance. The chosen linear neural network architecture, driven by a pseudo-inverse learning algorithm, yields remarkable results, achieving an MAE as low as 0.75 degrees for latitude and 0.87 degrees for longitude. The model's effectiveness is reinforced by a comparison of average errors: the Limited Area Model (LAM), the National Centre for Environmental Prediction based Quasi-Lagrangian Model (QLM), and the ANN model exhibit errors of 132.6 km, 142.0 km, and 127.5 km, respectively. These findings underscore the potential accuracy of the ANN-based approach in cyclone tracking, particularly within a 24-hour prediction window [77].

Rüttgers et al. leveraged a Generative Adversarial Network (GAN) to anticipate the paths of typhoons, merging satellite images from the KMA and reanalysis data from the ECMWF dataset covering 1993 to 2017, with a focus on typhoons that could impact the Korean Peninsula. Training data comprised cropped segments of historical typhoon images, while full-scale images were employed for testing purposes. The GAN framework consisted of a generator, which utilized multi-scale capabilities to generate diverse images, and a discriminator to differentiate between authentic and generated images. Inputs encompass meteorological variables such as Sea Surface Temperature (SST), sea pressure, Relative Humidity (RH), the surface velocity field (zonal and meridional components), the velocity field at the 950 mb pressure level, and vertical wind shear (at the 850 mb and 200 mb pressure levels). The training process involved iteratively optimizing both networks using distinct loss functions: an L2 loss quantifying image disparities, a gradient difference loss to amplify image clarity, and an adversarial loss to challenge the discriminator's ability to distinguish real from generated images. The findings underscored the GAN's efficacy in predicting typhoon trajectories and cloud formations. Accuracy in predicting typhoon center positions was assessed, revealing that a majority fell within 80 km (65.5%), a notable portion within 80-120 km (31.5%), and a smaller fraction exceeded 120 km (3.0%). The overall prediction error was significantly reduced to 67.2 km, compared to 95.6 km when relying solely on observational data. The GAN's ability to anticipate cloud movement patterns underscored its potential in capturing dynamic phenomena [78].

An LSTM-based algorithm is employed for 6-24 hour nowcasting of typhoon tracks using historical typhoon data from 1949 to 2011 in mainland China. The model's architecture encompasses three layers: input, hidden, and output, featuring 20 LSTM cells in the hidden layer and 2 neurons in the output layer. Through backpropagation utilizing the BPTT algorithm, errors are minimized by comparing predictions with the actual observed tracks, offering a substantial advancement in typhoon track prediction [79].
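Because the reported topology is so compact (one hidden layer of 20 LSTM cells and a 2-neuron output), it can be stated almost verbatim; the input window of four past 6-hourly fixes and the three input features per fix are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# LSTM track nowcaster following the description above: a single hidden
# layer of 20 LSTM cells and a 2-neuron output (latitude, longitude).
model = tf.keras.Sequential([
    layers.Input(shape=(4, 3)),   # 4 past fixes x (lat, lon, intensity)
    layers.LSTM(20),
    layers.Dense(2),              # predicted latitude and longitude
])
model.compile(optimizer="adam", loss="mse")
```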
An innovative approach is introduced to predict tropical cyclone movement over a 24-hour timeframe by combining historical trajectory data and reanalysis atmospheric images, particularly wind and pressure fields. The technique adopts a dynamic frame of reference that follows the storm center, thus enhancing the precision of forecasts. The model's versatility is demonstrated by its capability to rapidly provide forecasts for newly emerging storms, a crucial asset for real-time predictions. Leveraging an extensive database spanning more than three decades and over 3,000 storms, sampled at six-hour intervals, the approach integrates past displacement data, metadata, wind fields, and geopotential height fields to capture diverse information. The methodology involves separate training of the Wind CNN, the Pressure CNN, and the Past Tracks + Meta NN, followed by their integration into a fused network. Training incorporates the root mean square error (RMSE) as the loss function, with regularization to prevent overfitting. This fusion network not only enhances prediction accuracy but also significantly reduces testing time, making it a promising advancement for real-time forecasting in the realm of tropical cyclones [80].

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Task & Approach & Ref \\
\hline
Frequency & MLP, RBF, GRNN & [72] \\
Identification & CNN, Mask R-CNN & [73] \\
\hline
Cyclogenesis Forecast & CNN & [74] \\
 & SOM-FFNN & [75] \\
\hline
Track Forecast & HRBF & [76] \\
 & ANN & [77] \\
 & GAN & [78] \\
 & LSTM & [79] \\
 & CNN & [80] \\
\hline
Intensity Forecast & NN & [81] \\
 & MLP & [82] \\
 & CNN & [83] \\
 & RNN & [84] \\
 & CNN & [85] \\
 & CNN & [86] \\
 & double cascade CNN & [87] \\
 & CNN, VGG-19 & [88] \\
 & CNN & [89] \\
 & CNN & [90] \\
 & CNN, VGG & [91] \\
RI Prediction & CNN & [92] \\
 & RNN, LSTM & [93] \\
\hline \hline
\end{tabular}
\end{table}
TABLE V: Deep Learning in Tropical Cyclone

#### IV-E4 Intensity Prediction

Tropical cyclone intensity prediction is a critical aspect of forecasting, and various approaches have been explored in recent research. In one study, an advanced neural network model is employed to predict tropical cyclone intensity changes in the western North Pacific, incorporating climatology, persistence, and synoptic factors. The neural network architecture consists of three layers: an input layer with 11 units representing climatology, persistence, and synoptic predictors; a hidden layer with 11 units capturing complex relationships; and an output layer predicting intensity changes. The models analyzed include a multiple linear regression model with climatology and persistence predictors (R-CP), a neural network model with the same predictors (N-CP), a multiple linear regression model with climatology, persistence, and synoptic predictors (R-CPS), and a neural network model incorporating all predictors (N-CPS). The performance of these models is assessed through average intensity prediction errors across different prediction intervals. The N-CPS model demonstrates superior performance in predicting tropical cyclone intensity changes, especially over shorter time intervals, while the N-CP model shows slight superiority over the R-CPS model [81]. Another study explores the use of MLP models capable of forecasting intensity changes at 3-hour intervals beyond 72 hours in the North Indian Ocean, specifically in the Bay of Bengal and the Arabian Sea.
The architecture of the MLP model incorporates central pressure (CP), maximum sustained wind speed (MSWS), pressure drop (PD), total ozone column (TOC), and sea surface temperature (SST) as inputs for predicting cyclone intensity. The model's effectiveness is assessed using metrics like RMSE and MAE, revealing the MLP's superior performance compared to other models like RBFN, MLR, and OLR for forecasting cyclone intensity. The models' individual performances are evaluated for various cyclones, accounting for varying sea surface temperatures over the Arabian Sea and Bay of Bengal [82]. In a study, a CNN model is used to estimate the intensity of tropical cyclones in the Atlantic and Pacific regions. The proposed model uses a comprehensive dataset comprising two distinct components: a collection of 48,828 infrared (IR) hurricane images sourced from the Marine Meteorology Division of the U.S. Naval Research Laboratory, and HURDAT2 data to label these images. The model's architecture integrates convolutional layers with varying filter sizes and strides, followed by strategic max-pooling for down-sampling. Complemented by local response normalization and fully connected layers with ReLU activation, the model incorporates regularization techniques like dropout to prevent overfitting. Weight updates are fine-tuned through SGD optimization with momentum, and a specialized Softmax loss layer facilitates accurate multi-class classification. By autonomously extracting pivotal features from TC images, this methodology achieves better accuracy and reduced RMSE, indicating a significant advancement in tropical cyclone intensity estimation [83]. Additionally, a study investigates the application of RNN for forecasting TC intensity by leveraging historical observation data in the Western North Pacific since 1949. The RNN architecture captures intricate relationships among sequential elements--longitude, latitude, and intensity--across input, hidden, and output layers. Integrating a backpropagation through time optimization algorithm, the model refines weights and biases. Employing a cross-entropy loss function, it gauges disparities between predicted and actual TC intensity, with the hidden layer employing the tanh activation function. Notably, the model excels, achieving a compelling 5.1 m/s error in 24-hour forecasts, outperforming select dynamical models and closely approximating subjective predictions [84]. A dual-branch CNN model is proposed for estimating tropical cyclone (TC) intensity in the Northwest Pacific. The model exhibits strong performance for tropical storm and super typhoon categories but demonstrates reduced accuracy for moderate intensity and the weakest tropical depression category. The architecture of the TCIENet model comprises two parallel CNN branches designed for processing infrared and water vapor images. Each branch includes essential modules for feature extraction, water vapor attention, and intensity regression, with the overall goal of capturing the intricate relationship between image patterns and TC intensity. The training is facilitated by the Adam optimizer, utilizing techniques such as Softmax operation, dropout regularization, and L1 and L2 loss functions to enhance its predictive capability. The research also delves into the impact of diverse image sizes and model components on intensity estimation accuracy, leveraging metrics like RMSE, MAE, bias, and absolute error to evaluate the model's effectiveness [85]. Tian et al.
presented a novel CNN-based hybrid model designed to accurately estimate tropical cyclone intensity by harnessing 46,919 infrared images sourced from the Northwest Pacific and Atlantic Ocean. This architecture incorporates a classification model, fine-grained regression models, and a Back-propagation neural network. The classification model effectively categorizes TC samples into distinct intensity levels, thereby guiding the selection of appropriate regression models. The model's optimization is carried out using the Adam optimization algorithm, while a cross-validation loss function is employed for both classification and regression tasks. Notably, the model achieves exceptional accuracy and remarkably low RMSE, outperforming the existing methodologies [86]. Another cascading deep-CNN model offers a novel approach to accurately classifying and estimating tropical cyclone intensity using infrared satellite images from the northwest Pacific Ocean basin. Its architecture consists of two essential components: TC intensity classification (TCIC) and TC intensity estimation (TCIE). The TCIC module employs convolutional layers to categorize TC intensity into three specific classes, while the TCIE module, inspired by a modified AlexNet structure, predicts intensity values across different TC intensity categories. Notably, the TCIC module employs a cross-entropy loss with L2 regularization, and the TCIE module employs a SmoothL1 loss function for precise intensity estimation. The model's effectiveness is validated using a dataset encompassing 1001 TCs from 1981 to 2019, partitioned into distinct sets for training, validation, and testing. Evaluation based on intensity estimation metrics reveals impressive performance, achieving an overall root mean square error of 8.60 kt and a mean absolute error of 6.67 kt in comparison to best track data [87]. In a separate study, a CNN model is utilized to predict the intensity levels of hurricanes using IR satellite imagery data from HURSAT and wind speed data from the HURDAT2 of the Greater Houston region. The architecture involves sequential layers: input, convolution, pooling, and fully connected, guided by ReLU activations, MSE loss, and RmsProp/Adam optimizers. This facilitates accurate hurricane intensity estimation and pattern recognition for storm categorization by severity, achieving lower RMSE (7.6 knots) and MAE (6.68 knots) through batch normalization and dropout layers. Additionally, a VGG19 model is employed to evaluate the extent of damage and automate annotation of satellite imagery data. The VGG19 model undergoes fine-tuning for hurricane damage prediction and classification of severe weather events. The optimization process is guided by the Adam Optimizer, utilizing MSE as the foundational loss function. The models are subjected to rigorous evaluation, encompassing a diverse set of metrics including RMSE, MAE, MSE, and relative RMSE. Notably, the model demonstrates remarkable performance, achieving a 98% accuracy in predicting hurricane damage and a 97% accuracy in classifying severe weather events [88]. Furthermore, a model called DeepTCNet is proposed, specifically designed for TC intensity and size estimation in the North Atlantic, using IBTrACS and the Hurricane Satellite dataset. The study harnesses CNN as the core architecture within DeepTCNet to estimate TC intensity and wind radii from IR imagery.
Extensive experimentation establishes VGGNet with 13 layers and compact (3 \(\times\) 3) convolutional filters as the optimal configuration, forming the foundational structure for DeepTCNet. The evaluation presents MAE for TC intensity estimation (measured in knots) on the test dataset across various depths and kernel sizes in VGGNet's initial convolutional layer. Leveraging the Adam optimization with default parameters, learning occurs through the adoption of MAE as the loss function. This holistic approach exemplifies the seamless fusion of physics-augmented deep learning, culminating in enhanced TC analysis and prediction capabilities [89]. The study focuses on estimating TC intensity using a CNN model. Satellite IR imagery and Best Track data are employed to analyze 97 TC cases over the Northwest Pacific Ocean from 2015 to 2018. The CNN architecture encompasses an input layer, four convolutional layers, four pooling layers, two fully connected layers, and an output layer, resulting in the derivation of eight intensity values. Notably, the multicategory CNN achieves an accuracy of 84.8% for TC intensity estimation, which further improves to 88.9% through conversion to a binary classification task [90]. Another study proposes a CNN model for estimating TC intensity using Himawari-8 satellite cloud products, including cloud optical thickness (CLOT), cloud top temperature (CLT), cloud top height (CLTH), cloud effective radius (CLER), and cloud type (CLTY). The model's architecture is based on the VGG framework, enhanced with attention mechanisms and residual learning to improve precision while reducing parameter count. The CNN comprises four convolutional blocks with progressively larger filter sizes, integrating residual learning at different levels and a Convolutional Block Attention Module (CBAM) after a maximum pooling layer. Batch normalization and dropout layers are employed to counter overfitting. The model is optimized using the Adam optimizer and MAE loss function. It undergoes training and tuning through six-fold cross-validation and is evaluated on independent test data, utilizing real-time typhoon track information from the western North Pacific basin alongside Himawari-8 cloud products [91]. These studies contribute valuable insights into improving tropical cyclone intensity forecasts through the utilization of advanced neural network models. In addition to general tropical cyclone intensity forecasting, a particularly challenging aspect is predicting Rapid Intensification (RI), where a tropical cyclone undergoes a sudden and significant strengthening over a short timeframe. RI is a critical phenomenon due to its potential to escalate a relatively mild storm into a highly destructive force, posing severe threats to coastal communities and infrastructure. To address the complexities of RI prediction, a CNN model called TCNET was developed to enhance the prediction of RI in tropical cyclones by extracting features from large-scale environmental conditions. The study used ECMWF ERA-Interim reanalysis data and the SHIPS database. TCNET's architecture consists of data filters, a customized sampler (GMM-SMOTE), an XGBoost classifier, and hyperparameter tuning. This model outperforms COR-SHIPS and LLE-SHIPS in RI prediction, yielding superior results in terms of kappa, PSS, POD, and FAR metrics. Moreover, TCNET identifies previously unexplored variables, such as ozone mass mixing ratio, that influence RI.
The training of TCNET involves backpropagation, utilizing mean square error as the loss function and the Adam optimizer for weight updates of the filters [92]. In another study, deep learning models including RNN and LSTM were explored for predicting tropical cyclone intensity and rapid intensification (RI). The proposed approach involved convolutional layers for autonomous feature extraction from satellite images, an RNN block with ConvLSTM cells for feature evolution, and a final output regressor composed of convolutional and dense layers to forecast tropical cyclone intensity (Vmax) at +24 hours. Additionally, the study introduced a deep learning ensemble strategy involving 20 models with diverse designs, effectively improving TC intensity and RI prediction by incorporating both conventional and satellite-derived features. This ensemble method offered intensity distributions for deterministic predictions, RI likelihood estimation, and prediction uncertainty assessment, yielding improved RI detection probabilities and reduced false-alarm rates compared to operational forecasts for western Pacific TCs [93]. ## IV Challenges The effective utilization of DL models in weather forecasting is accompanied by several challenges that require careful consideration. In this section, we explore several key challenges associated with the application of deep learning in the field of weather forecasting. #### Iv-1 Data Availability Data availability is crucial for advancing the capabilities of deep learning models in meteorological applications. Limited access to historical records, real-time observations, and specialized data sources can hamper model development and evaluation [94]. Addressing data availability challenges requires establishing robust data-sharing frameworks, promoting data collaboration between meteorological organizations, and exploring innovative approaches to gather and enhance meteorological data. #### Iv-2 Data Quality Ensuring data quality poses a significant challenge for deep learning models in meteorology. Weather data, obtained from various sources like weather stations, satellites, and radars, may have limitations in terms of spatial coverage, temporal resolution, and accuracy. Missing or inaccurate observations can introduce biases and errors. For example, inadequate temperature measurements in remote regions due to limited weather station distribution can lead to incomplete climate models, potentially affecting the accuracy of long-term weather predictions. #### Iv-3 Model Architecture DL models often consist of complex architectures with numerous layers and parameters, posing challenges in their design and optimization for weather forecasting. Determining the optimal network architecture, selecting appropriate activation functions, and managing computational resources are critical tasks in developing efficient DL models for meteorological applications [8]. #### Iv-4 Hybrid Approach Combining DL techniques with traditional physical models can leverage the strengths of both approaches, leading to more accurate and reliable predictions. DL models excel at learning complex patterns and capturing nonlinear relationships in large datasets [94], while traditional physical models provide valuable insights into the underlying physical processes. For instance, coupling a deep learning algorithm with an NWP model can allow the DL component to capture intricate spatial patterns in satellite imagery, while the physical model contributes its understanding of atmospheric physics.
This collaborative approach offers the potential for more precise predictions of complex meteorological events. However, integrating DL models with existing frameworks raises challenges such as resource requirements, data quality, and interpretability. #### Iv-5 Data Heterogeneity Data heterogeneity refers to the diversity of data sources, formats, and features, which can complicate the integration and analysis of different data types. For instance, in the development of a deep learning model for weather prediction, information must be accumulated from various sources such as satellites, radar systems, automatic weather stations, numerical models, and manual observations. However, each of these sources employs its own distinct method of storing data. Satellites often employ formats like TIFF or GeoTIFF, while radar data may utilize formats such as HDF5 or NetCDF, and other sources could have unique formats. To ensure the seamless operation of a weather prediction model, these different types of data must be harmonized, enabling the model to effectively learn from the combined data and resulting in accurate and reliable weather forecasts. #### Iv-6 Model Explainability Model explainability refers to how a model processes and transforms inputs into corresponding outputs, making the process transparent and easy to comprehend [95]. For instance, if the model predicts upcoming rain, meteorologists need to comprehend the specific meteorological variables and potential biases influencing its predictions. This understanding becomes crucial as the model transitions to real-world application, allowing its developers to provide insights into its functioning. ## V Discussion and Future Directions The integration of deep learning methods into the study of extreme weather events brings about a significant transformation, expanding our understanding and predictive abilities. These advanced models not only excel in deciphering intricate spatial relationships but also stand out in unraveling the complex timing patterns inherent in meteorological phenomena. This advancement holds the potential to greatly improve weather forecasting accuracy across a wide range of events, from thunderstorms and lightning occurrences to the tracking of tropical cyclones. At the heart of this innovation lies the natural capacity of deep learning models to identify and replicate non-linear relationships within the complex fabric of atmospheric data. By analyzing extensive sets of information, these models unearth hidden patterns and interactions that conventional methods struggle to capture. However, as we move forward with these promising advancements, it becomes crucial to address certain key challenges that must be overcome to fully realize the potential of deep learning in meteorology. One such challenge revolves around the continuous need for high-quality and comprehensive data. The effectiveness of deep learning models relies on their exposure to a diverse array of carefully curated data points. This emphasizes the importance of creating robust data pipelines and well-organized datasets. Furthermore, the inherently complex nature of deep learning architectures presents a dilemma regarding their interpretability. Ensuring that the decisions made by these intricate models can be comprehended and validated by experts in meteorology remains an ongoing endeavor.
The future of weather prediction calls for the exploration of ensemble techniques that combine the strengths of various models to produce more comprehensive and accurate forecasts. This pursuit involves developing innovative approaches that seamlessly integrate deep learning models with traditional numerical weather prediction methods, drawing on the well-established physical understanding of atmospheric processes. The convergence of deep learning capabilities with specialized insights in meteorology emerges as a fertile area for further exploration. Hybrid models that blend empirical meteorological knowledge with the computational power of deep learning offer a promising path to enhancing forecast accuracy and reinforcing our ability to handle the multifaceted impacts of extreme weather events. In the broader context of this review, a clear message underscores the persistent drive for progress, necessitating ongoing collaboration and interdisciplinary synergy. By harnessing the capabilities of deep learning and pushing the boundaries of meteorological understanding, we are positioned to empower decision-makers and stakeholders with invaluable tools to proactively mitigate the far-reaching consequences of the ever-evolving realm of extreme weather events. In weather forecasting, there exist several research gaps that need to be addressed to enhance the capabilities and effectiveness of these models. Firstly, in drought prediction, current studies lack long-term forecasting capabilities and are limited in spatial resolution. Improving these aspects is crucial to provide accurate and detailed information on drought conditions, enabling proactive mitigation measures. In the case of tropical cyclones, there is a notable absence of studies focused on pattern identification. Efforts should be directed towards reducing the cone of uncertainty by improving track accuracy, size estimation, and spatial distribution of cyclones. More research is needed in predicting RI and associated weather phenomena such as storm surge, floods, and quantitative precipitation forecasts. The lack of practical success stories in this area underscores the need for further investigation and advancements. In heatwave prediction, there is a paucity of research focused on the frequency and duration of heatwaves. The prediction of severe thunderstorms poses its own set of challenges. To improve forecasts in this domain, exploring other ensemble techniques, incorporating feature selection methods, and leveraging dynamic graph modeling approaches can be beneficial. Integrating data from multiple NWP models, along with high-resolution NWP models, holds promise for enhancing thunderstorm forecasts. The intensity, frequency, and location prediction of lightning strikes requires further attention. Monitoring and predicting lightning strikes, especially in discretely distributed scenarios, remain complex tasks that require advanced techniques and data integration. Radar and satellite data play a crucial role in weather forecasting. However, challenges persist in utilizing radar data to make predictions without clear indications of initial convections. Exploiting the early-stage signals of convections using radar and satellite data can aid in improving forecast accuracy, particularly in mitigating false alarms. Cloud-related weather forecasting also faces challenges, including high computation time and resource requirements.
Inefficient observations due to rain, strong winds, foggy conditions, and sunsets further hinder the efficiency of cloud-related forecasting methods. Addressing these challenges is essential to unlock the full potential of deep learning in cloud prediction. Further research is required to enhance hail detection, size estimation, forecasting, and damage assessment methods using deep learning techniques, despite recent advancements in hail-related studies. Improving models, such as NWP, remains a substantial challenge. ## VI Conclusion This review highlighted the significant advancements and promising potential of deep learning techniques in the field of meteorology, specifically in extreme weather events. Deep learning models, such as CNN and RNN, demonstrated their effectiveness in various applications, including cyclone prediction, severe rainfall and hail prediction, cloud and snow detection, rainfall-induced flood, landslide forecasting, and more. The utilization of deep learning algorithms allowed researchers to extract intricate patterns and features from complex meteorological datasets, leading to improved accuracy and performance in weather prediction and analysis. These models showed remarkable skill in capturing spatial and temporal dependencies in weather data, enabling more accurate predictions of extreme events and enhancing our understanding of their underlying dynamics. Furthermore, deep learning methods offered advantages over traditional statistical approaches by automatically learning representations and hierarchies of features, eliminating the need for manual feature engineering. This allowed for more efficient and effective analysis of large-scale meteorological datasets, facilitating the development of advanced forecasting models and decision-support systems. Deep learning models also provided an alternative approach by directly learning the relationships between input observations and output variables from data, circumventing the computational bottlenecks and time lags associated with physics-based models. However, further advancements are needed to enhance the performance and efficiency of these models in weather forecasting applications. Closing these research gaps and advancing the field of deep learning in weather forecasting will contribute to more accurate, reliable, and timely predictions, ultimately benefiting various sectors and society as a whole. ## Acknowledgment The author would like to express heartfelt gratitude to the India Meteorological Department (IMD) and the Indian Institute of Information Technology, Allahabad (IIIT Allahabad) for their invaluable support and contributions to this journal. Their guidance, resources, and assistance have been instrumental in the successful completion of this research. ## Conflicts of Interest The authors declare no conflict of interest.
2310.04572
LIVE: Lidar Informed Visual Search for Multiple Objects with Multiple Robots
This paper introduces LIVE: Lidar Informed Visual Search focused on the problem of multi-robot (MR) planning and execution for robust visual detection of multiple objects. We perform extensive real-world experiments with a two-robot team in an indoor apartment setting. LIVE acts as a perception module that detects unmapped obstacles, or Short Term Features (STFs), in Lidar observations. STFs are filtered, resulting in regions to be visually inspected by modifying plans online. Lidar Coverage Path Planning (CPP) is employed for generating highly efficient global plans for heterogeneous robot teams. Finally, we present a data model and a demonstration dataset, which can be found by visiting our project website https://sites.google.com/view/live-iros2023/home.
Ryan Gupta, Minkyu Kim, Juliana T Rodriguez, Kyle Morgenstein, Luis Sentis
2023-10-06T20:27:30Z
http://arxiv.org/abs/2310.04572v1
# LIVE: Lidar Informed Visual Search for Multiple Objects with Multiple Robots ###### Abstract This paper introduces LIVE: Lidar Informed Visual Search focused on the problem of multi-robot (MR) planning and execution for robust visual detection of multiple objects. We perform extensive real-world experiments with a two-robot team in an indoor apartment setting. LIVE acts as a perception module that detects unmapped obstacles, or Short Term Features (STFs), in Lidar observations. STFs are filtered, resulting in regions to be visually inspected by modifying plans online. Lidar Coverage Path Planning (CPP) is employed for generating highly efficient global plans for heterogeneous robot teams. Finally, we present a data model and a demonstration dataset, which can be found by visiting our project website [https://sites.google.com/view/live-iros2023/home](https://sites.google.com/view/live-iros2023/home). ## I Introduction Autonomous planning and real-world execution for multi-robot (MR) CPP, Active Object Search, and Exploration are receiving significant attention from the robotics community due to their relevance in several real-world scenarios including cleaning, lawn mowing, inspection, surveillance, and SAR [19]. This paper addresses efficient and robust path planning for real-world MR teams performing multi-object visual detection by combining Coverage Path Planning (CPP) with active sensing. The goal in CPP is to generate path plans such that a sensor footprint, or Field of View (FoV), covers the region of interest [10]. Efficiency is commonly measured by coverage time, path length, or FoV overlap [19]. Early work considers various cost functions for continuous and varying-rate area sweeping with a single robot [2] and multiple robots [3]. MR CPP offers several benefits including speed and resilience to robot failure and is a common feature in search and rescue (SAR) and other critical applications. State-of-the-art work in multi-aerial-vehicle exploration in confined spaces [6] considers the problem of multi-sensor exploration for the purpose of visual mapping or inspection of surfaces. They leverage range scan data for guiding visual surface exploration. Instead, we propose to leverage range scan information for guiding visual search. Active Sensing was first proposed in [5] as a method of providing a control strategy based on the current world state, updated by observations, and is frequently employed in information gathering missions. Active sensing has proven to be an effective tool for information gathering missions including SLAM [9], search [4, 21], and object tracking [15]. In the MR setting, Gosrich et al. [12] enable robot teams to position themselves to observe events using a Graph Neural Network for non-local information during decision making. In [14], MR multi-object search in unknown environments is cast as a reinforcement learning problem that addresses non-myopic planning. While MR CPP offers several benefits, computing optimal paths for multiple agents is NP-Hard [13]. It remains an ongoing problem to improve path efficiency and computational load in MR CPP and exploration [19]. Kim et al. [16] provide high-efficiency plans by combining sampling and optimization. However, in visual CPP with a small-FoV RGB camera, it becomes impractical to plan online due to the increased number of visitation sites. Furthermore, [11] notes the lack of multi-robot systems deployed for real-world autonomous search, despite the growing number of publications.
They indicate a need for research focused on the realistic evaluation of methods for real-world search. Mobile robots are frequently equipped with Lidar, which casts a significantly wider FoV than an RGB camera. LIVE leverages the wide-FoV Lidar sensor to inform visual inspection in real-world experiments. LIVE classifies incoming Lidar observations to remove dynamic features and static map obstacles, leaving Short Term Features (STFs), defined as static, unmapped obstacles. Raw STFs are filtered into inspection regions, defined as possible target object locations. Agents select among inspection regions, inspect them visually, then continue along global plans. An overview is shown in Fig. 1. This online modification of global plans enables the fast planning of efficient paths based on wide-FoV Lidar scans at the global stage, while still achieving robust visual results from the active sensing approach. The contributions of this work can be summarized as follows: * Propose a new method for incorporating lidar information into real-world multi-robot multi-object visual search * Uniquely combine Lidar CPP and Active Sensing for visual object search * Deploy extensively in a heterogeneous two-robot system for verification and baseline comparison ## II Methods This paper focuses on efficient and robust multi-robot multi-object detection in indoor environments with static map information known. The approach leverages global multi-robot CPP for efficient global plans and a perception module capable of modifying those plans online. An overview can be seen in Fig. 1. First, global path plans are generated for heterogeneous multi-robot teams [16]. ### _Search Map (Entropy Map)_ Bayesian filtering is employed to maintain a target estimate over the map. Each cell is assigned a probability of occupancy between \(0\) and \(1\). A cell is initialized to \(0.5\), except those corresponding to the static map, which are assigned \(1\). Cells that have been visited by the sensor FoV and are free are assigned \(0\). The search map is updated at each step when the central server receives local costmap observations from each robot that represent the sensor FoV. In this work, local costmaps are rectangular, representing Lidar FoV, or triangular, representing visual FoV. Entropy over the map is measured by considering all cells in the 2D global costmap and is computed as \[H(M_{t})=-\sum_{i=1}^{N}(m_{t}^{i}\log(m_{t}^{i})+(1-m_{t}^{i})\log(1-m_{t}^{i})) \tag{1}\] where \(m_{t}^{i}\) is the occupancy variable of cell \(i\) at time step \(t\) and \(N\) denotes the total number of cells. For further details refer to [16]. ### _Inspection Region Detection_ Computing inspection regions begins with the assumption that the objects of interest belong to the set of unmapped obstacles. Incoming Lidar observations are classified based on the current robot pose estimate. Each point in the 2D scan is classified as a Long Term Feature (LTF), Short Term Feature (STF), or Dynamic Feature (DF) [8]. Let \(x_{i}\) denote the pose of the robot, and \(s_{i}\) denote the observation at time step \(t_{i}\). Each observation \(s_{i}\) consists of \(n_{i}\) 2D points, \(s_{i}=\{p_{i}^{j}\}_{j=1:n_{i}}\). Observations are transformed from the robot local frame into the global frame using an affine transformation \(T_{i}\in SE(3)\). Finally, let map \(M\) be represented as a set of lines \(\{l_{i}\}_{1:n}\).
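As a concrete reference for the search-map update above, the following is a minimal NumPy sketch of the entropy computation in Eq. (1). The grid size and the clipping constant guarding \(\log(0)\) at fully observed cells are illustrative assumptions, not values from the paper.

```python
import numpy as np

def map_entropy(m: np.ndarray, eps: float = 1e-9) -> float:
    """Entropy of a 2D occupancy grid, as in Eq. (1).

    m holds per-cell occupancy probabilities: 0.5 for unseen cells,
    1.0 for static-map obstacles, 0.0 for visited free cells.
    """
    p = np.clip(m, eps, 1.0 - eps)  # guard against log(0)
    return float(-np.sum(p * np.log(p) + (1.0 - p) * np.log(1.0 - p)))

grid = np.full((100, 100), 0.5)   # fully unexplored map
h0 = map_entropy(grid)            # maximal entropy
grid[:10, :10] = 0.0              # a sensor FoV marks some cells free
assert map_entropy(grid) < h0     # entropy drops as coverage grows
```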
#### Ii-A1 Ltf First, an analytic ray cast is performed [7] to determine the expected laser scan based on map \(M\) and the current robot position \(x_{i}\). Given observations, the probability that points correspond to one of the lines of the static map can be written \[P(p_{i}^{j}|x_{i},M)=\exp\left(-\frac{\mathrm{dist}(T_{i}p_{i}^{j},l_{j})^{2}}{\Sigma_{s}}\right) \tag{2}\] where \(\Sigma_{s}\) is the scalar variance of observations, which comes from sensor accuracy. If Eq. 2 is greater than a threshold, point \(p_{i}^{j}\) is classified as an LTF. #### Ii-A2 Stf Remaining points will be classified as STF or DF. Observations at current time \(i\), \(p_{i}^{j}\), are compared with prior observations at time \(k\), \(p_{k}^{l}\), to determine correspondence between points in subsequent observations. The likelihood of the remaining points corresponding to the same point as in a previous laser scan is computed as \[P(p_{i}^{j},p_{k}^{l}|x_{i},x_{k})=\exp\left(-\frac{||T_{i}p_{i}^{j}-T_{k}p_{k}^{l}||^{2}}{\Sigma_{s}}\right) \tag{3}\] where \(p_{k}^{l}\) is the nearest point to \(p_{i}^{j}\) among points from other timesteps that do not belong to the LTFs, defined as \[p_{k}^{l}=\arg\min||T_{i}p_{i}^{j}-T_{k}p_{k}^{l}|| \tag{4}\] When Eq. 3 is greater than some threshold, point \(p_{i}^{j}\) is classified as an STF. Remaining points in \(s_{i}\) are classified as DFs, which are ignored in this study. #### Ii-A3 Inspection Region Selection STFs obtained in the previous subsection are generated stochastically using the entire set of Lidar points from the estimated robot pose. As a result, the set of raw STFs is large and filtering steps are critical. First, the pooling operator is used to reduce duplicates within a radius. Second, pooled STFs within a certain distance of LTFs are removed to eliminate false positives caused by localization drift. Finally, points inside visually observed regions of the map are removed. The remaining points are inspection regions. During waypoint generation, the nearest inspection region is selected to generate a priority waypoint. The result of this process is shown in Fig. 2.

Fig. 2: The lidar costmap is shown over a portion of the map to demonstrate the Lidar FoV detecting unmapped obstacles. Unmapped obstacles, or STFs, are filtered into inspection regions. This figure shows the moment A1 is given a priority waypoint from LIVE for viewing an inspection region.

### _Waypoint Manager_ The Waypoint Manager takes global paths as input and acts as a finite state machine to determine when to send the robot an updated navigation waypoint. This node also receives priority waypoints for viewing inspection regions at each localization timestep. Due to the high update frequency and the stochastic nature of inspection regions, exhaustively visiting all priority waypoints is inefficient. As a result, this node periodically incorporates priority waypoints when they are available. The maximum rate of occurrence of priority waypoints is a tunable parameter that will impact robustness and target detection time. After visiting the priority waypoint, the manager will resume along the global path plans. A priority waypoint being selected is shown in Fig. 2. ## III Task Description and Experiment Setup A team composed of the Unitree A1 Quadruped [1] and the Toyota HSR [20] must visually detect two static objects of interest. Specifically, robots must find two small suitcases in a 20 m \(\times\) 30 m apartment setting with an attached hall.
The goal of the team is to robustly detect objects, where efficiency is measured as path length. Robots with their sensors and instrumentation are described in Fig. 3. The HSR employs Toyota move base to generate movement commands from given waypoints. The A1 leverages a carrot planner to navigate to waypoints. LIVE and waypoint manager nodes run aboard each robot. Two maps are maintained: the first is the Search Map, which is implemented as a 2D global costmap in ROS, and the second is a vectormap used for localization [8], shown in Fig. 4. Both maps can be found alongside the dataset at the project website. A central search server generates path plans and maintains the search map by sending global plans to robots and receiving position and costmap information back. Communication between the central server and each robot is performed using Robofleet [17]. A local network covers the full region using an ASUS AC1900 WiFi router. The laptop and onboard computers run Ubuntu 18.04 and ROS Melodic [18] to implement all of the capabilities. Three planner settings are compared: \(1)\) Lidar CPP, \(2)\) Heuristic Visual CPP, and \(3)\) Lidar CPP + LIVE. The three planner settings are described in Section II. In each of the three planning settings, 15 trials are performed, for a total of 30 potential objects to be found. The 15 trials are composed of five trials from each of three different initial conditions (IC), depicted on the static map in Fig. 4. Object locations change between trials with varying difficulty. The same five sets of object locations are tested from each IC. There are seven total object locations used, categorized as easy, medium, or hard, and they are also shown in Fig. 4. ## IV Results & Discussion Complete tabular results can be found on the project website, including trial-by-trial results, path lengths compared by initial condition, failure mode analysis, and success rate versus object difficulty. Videos and robot trajectories for all trials can be found via the Google Drive link on the website. Figure 5 shows the success rate for each of the three planner settings as well as the relative frequency of each failure mode. A trial is classified 'Path Failure' if the agents successfully follow the path plans output by the planner and fail to visually detect at least one object. 'HSR Navigation Failure' indicates the HSR bumped into an object and triggered the emergency stop. 'A1 Locomotion Failure' occurs if the A1 falls while walking. 'Object Detection Failure' occurs when at least a portion of the target object is in the RGB camera feed, yet the object detection algorithm misses it. Notably, the occurrence of path failure in the Heuristic Visual CPP is similar to the proposed method, while Lidar CPP fails at a significantly higher rate, generating insufficient paths 40% of the time.

\begin{table} \begin{tabular}{|c||c|c|c|} \hline \multicolumn{4}{|c|}{**Heuristic Visual CPP**} \\ \hline & **HSR Avg.** & **A1 Avg.** & **Overall Avg.** \\ \hline **IC1** & \(36.9\) & \(46.5\) & \(41.7\) \\ \hline **IC2** & \(15.0\) & \(32.2\) & \(23.6\) \\ \hline **IC3** & \(30.2\) & \(44.1\) & \(37.2\) \\ \hline & \(27.4\) & \(40.9\) & \(34.2\) \\ \hline \hline \multicolumn{4}{|c|}{**Lidar CPP + LIVE (Our Method)**} \\ \hline & **HSR Avg.** & **A1 Avg.** & **Overall Avg.** \\ \hline **IC1** & \(19.4\) & \(32.9\) & \(26.1\) \\ \hline **IC2** & \(17.0\) & \(22.5\) & \(19.7\) \\ \hline **IC3** & \(25.2\) & \(46.0\) & \(35.6\) \\ \hline & \(20.5\) & \(33.8\) & \(27.2\) \\ \hline \end{tabular} \end{table} TABLE I: Average path lengths for each robot and combined, in meters, as a function of initial condition.

Fig. 3: The two-robot team with key components labeled.

Fig. 4: Top view of the static map with each of the three robot initial conditions (IC) and seven object locations labeled. A1 ICs are circles and HSR ICs are squares, color-coordinated for the trials. The object locations are shown by stars whose color indicates difficulty, with location names included.

Path lengths are computed in each setting, with results from the proposed method and visual CPP in Table I; Lidar CPP results are on the website for brevity. In trials where one or more objects are missed, the total length of the paths traversed by the robots is considered. While Visual CPP rarely fails due to inadequate global paths, it suffers from 'HSR Navigation Failures' in 16.7% of trials. This can be attributed to the near 50% increase in path length over the other methods, resulting in higher localization drift and ultimately navigation failure. While the Heuristic Visual CPP path plans are successful, Lidar CPP + LIVE proves more robust in finding the objects in real-world scenarios due to this reduction in path length. Trials throughout all three planning methods suffer from object detection shortcomings. In some cases, the robot turns too quickly for the algorithm to detect the suitcase. In others, detection was impacted by reflections of the sun and partial object views; however, the occurrence is nearly equal over the three planner settings. The results indicate the addition of LIVE improves real-world task performance with a success rate boost of 20% as compared with Heuristic Visual CPP and 50% as compared with the Lidar CPP baselines. We further inspect success rate for each method based on object difficulty. In each of the three settings, nine objects are located in Easy, 12 in Medium, and nine in Hard locations. Fig. 4 depicts named object locations with color-coded difficulty level. Results in tabular form are omitted for brevity but can be found on the project website. Easy object locations are found nearly 100% of the time across all three experiment settings. When object locations are Medium or Hard, however, the Lidar CPP baseline performs poorly, with a combined success rate of 14.3%. This result is expected given that Lidar CPP does not account for the visual sensor. In particular, Lidar CPP + LIVE detects 100% of Medium objects and 67% of Hard objects compared to Heuristic Visual CPP's 67% and 44%, respectively. Priority waypoints from LIVE are shown to result in successful object detection in trials 4, 5, 8, 9, and 15. Specifically, these trials have objects in Medium and Hard locations. Fig. 6 is a representative trajectory plot from LIVE trial 9. Both objects in trial 9 are found with a priority waypoint, and the moment of detection is overlaid on the trajectories. Time data for the experiments can be found on the project website. The data displays a trend that LIVE improves path length efficiency and success rate, but must trade off detection time to explore inspection regions to achieve such success. This can be seen in particular in LIVE trial 13 before the HSR finds the object located 'Behind Fridge.'
We note the following on comparing times for the three methods tested: 1) due to the relative success rates, the data is skewed towards the more difficult trials where the baseline methods are less successful; 2) the variance of the time data is high, indicating the initial conditions and object locations are well dispersed; and 3) during some trials there are pauses in robot navigation, confounding the relationship between actual search time and the quality of the generated paths. ## V Conclusion We present an algorithm that leverages efficient global path plans, 2D range data, and map information to efficiently find objects in known environments. We present results supporting that LIVE is more robust and efficient than global planning methods alone for real-world multi-object search. Ongoing work involves extending this method to unknown environments with supervised learning, as well as incorporating inspection regions into a utility function for viewpoint selection.

Fig. 5: Success / Failure Modes for each of 15 trials in all three path planning variants. The addition of LIVE significantly improves overall success rate and reduces the frequency of HSR Navigation Failure with more efficient path lengths.

Fig. 6: Top view of the static map with robot trajectories from LIVE trial 9 overlaid. Arrows represent robot pose as recorded using rosbag during execution. Included is the moment of object detection for each of the robots, connected with the robot pose at that moment.

## Acknowledgments This research was supported in part by NSF Award #2219236 and Living and Working with Robots, a core research project of Good Systems, a UT Grand Challenge. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation. Research was in part sponsored by the Army Research Office and was accomplished under Cooperative Agreement Number W911NF-19-2-0333. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
2303.13136
Approximation of Functions of Several Variables by Multidimensional A- and J-fractions with Independent Variables
The paper deals with the problem of approximating the functions of several variables by branched continued fractions, in particular, multidimensional A- and J-fractions with independent variables. A generalization of Gragg's algorithm is constructed that enables us to compute, by the coefficients of the given formal multiple power series, the coefficients of the corresponding multidimensional A-fraction with independent variables. This algorithm can also be used to construct the multidimensional J-fraction with independent variables corresponding to a given formal multiple Laurent series. Some numerical experiments of approximating the functions of several variables by these branched continued fractions are given.
Roman Dmytryshyn, Serhii Sharyn
2023-03-23T09:38:24Z
http://arxiv.org/abs/2303.13136v1
# Approximation of Functions of Several Variables by Multidimensional \(A\)- and \(J\)-fractions with Independent Variables ###### Abstract The paper deals with the problem of approximating the functions of several variables by branched continued fractions, in particular, multidimensional \(A\)- and \(J\)-fractions with independent variables. A generalization of Gragg's algorithm is constructed that enables us to compute, by the coefficients of the given formal multiple power series, the coefficients of the corresponding multidimensional \(A\)-fraction with independent variables. This algorithm can also be used to construct the multidimensional \(J\)-fraction with independent variables corresponding to a given formal multiple Laurent series. Some numerical experiments of approximating the functions of several variables by these branched continued fractions are given. Keywords: Holomorphic function of several complex variables; Branched continued fraction; Numerical approximation. MSC: 32A10; 32A17; 33F05 ## 1 Introduction The problem of representing functions of several variables, which arises in particular when solving various functional equations, drives the development and implementation of effective methods and algorithms, up to the construction of special software. Currently, various tools are used to represent and/or approximate such functions. Possibly one of the most effective is the multidimensional generalization of continued fractions: branched continued fractions [7]. The construction of the rational approximations of a function of several variables is based on a correspondence between the approximants of the branched continued fraction and the formal multiple power series, which represents this function.
2302.04763
On Sampling with Approximate Transport Maps
Transport maps can ease the sampling of distributions with non-trivial geometries by transforming them into distributions that are easier to handle. The potential of this approach has risen with the development of Normalizing Flows (NF) which are maps parameterized with deep neural networks trained to push a reference distribution towards a target. NF-enhanced samplers recently proposed blend (Markov chain) Monte Carlo methods with either (i) proposal draws from the flow or (ii) a flow-based reparametrization. In both cases, the quality of the learned transport conditions performance. The present work clarifies for the first time the relative strengths and weaknesses of these two approaches. Our study concludes that multimodal targets can be reliably handled with flow-based proposals up to moderately high dimensions. In contrast, methods relying on reparametrization struggle with multimodality but are more robust otherwise in high-dimensional settings and under poor training. To further illustrate the influence of target-proposal adequacy, we also derive a new quantitative bound for the mixing time of the Independent Metropolis-Hastings sampler.
Louis Grenioux, Alain Durmus, Éric Moulines, Marylou Gabrié
2023-02-09T16:52:52Z
http://arxiv.org/abs/2302.04763v3
# On Sampling with Approximate Transport Maps ###### Abstract Transport maps can ease the sampling of distributions with non-trivial geometries by transforming them into distributions that are easier to handle. The potential of this approach has risen with the development of Normalizing Flows (NF) which are maps parameterized with deep neural networks trained to push a reference distribution towards a target. NF-enhanced samplers recently proposed blend (Markov chain) Monte Carlo methods with either (i) proposal draws from the flow or (ii) a flow-based reparametrization. In both cases, the quality of the learned transport conditions performance. The present work clarifies for the first time the relative strengths and weaknesses of these two approaches. Our study concludes that multimodal targets can reliably be handled with flow-based proposals up to moderately high dimensions. In contrast, methods relying on reparametrization struggle with multimodality but are more robust otherwise in high-dimensional settings and under poor training. To further illustrate the influence of target-proposal adequacy, we also derive a new quantitative bound for the mixing time of the Independent Metropolis-Hastings sampler. The code to reproduce the experiments is available at [https://github.com/h2o64/flow_mcmc](https://github.com/h2o64/flow_mcmc). ## 1 Introduction Creating a transport map between an intractable distribution of interest and a tractable reference distribution can be a powerful strategy for facilitating inference. Namely, if a bijective map \(T:\mathbb{R}^{d}\to\mathbb{R}^{d}\) transports a tractable distribution \(\rho\) on \(\mathbb{R}^{d}\) to a target \(\pi\) on the same space, then the expectation of any test function \(f:\mathbb{R}^{d}\to\mathbb{R}\) under the target distribution can also be written as an expectation under the reference distribution \[\pi(f)=\int_{\mathbb{R}^{d}}f(x)\mathrm{d}\pi(x)=\int_{\mathbb{R}^{d}}f(T(x))\,\mathrm{d}\rho(x)\,.\] However, the intractability of the target is traded here with the difficult task of finding the map \(T\). According to Brenier's theorem (Brenier, 1991), if \(\rho\) is absolutely continuous then an exact mapping \(T\) between \(\rho\) and \(\pi\) always exists. Such a map may be known analytically, as in some field theories in physics (Luscher, 2009). Otherwise, it can be approximated by optimizing a parameterized version of the map. While learned maps typically suffer from approximation and estimation errors, a sufficiently accurate approximate map is still valuable when combined with a reweighting scheme such as a Markov chain Monte Carlo (MCMC) or Importance Sampling (IS). The choice of the parametrization must ensure that the mapping is invertible and that the Jacobian determinant remains easy to compute. Among the first works combining approximate transport and Monte Carlo methods, (Parno and Marzouk, 2018) proposed using triangular maps. Over the years, the term Normalising Flow, introduced initially to refer to a Gaussianizing map (Tabak and Vanden-Eijnden, 2010), has become a common name for highly flexible transport maps, usually parameterized with neural networks, that allow efficient computations of inverses and Jacobians (Papamakarios et al., 2021; Kobyzev et al., 2021). NFs were developed in particular for generative modelling and are now also a central tool for Monte Carlo algorithms based on transport maps.
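To make the transport identity concrete, here is a minimal sketch with an exactly known map: \(T(z)=\mu+\sigma z\) pushes \(\mathcal{N}(0,1)\) to \(\mathcal{N}(\mu,\sigma^{2})\), so an expectation under the target can be estimated with draws from the reference. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5          # target pi = N(mu, sigma^2)
T = lambda z: mu + sigma * z  # exact transport from rho = N(0, 1)

z = rng.standard_normal(100_000)  # i.i.d. samples from the reference rho
est = np.mean(T(z) ** 2)          # Monte Carlo estimate of E_pi[X^2]
exact = mu**2 + sigma**2          # closed form for comparison
print(est, exact)                 # the two values agree closely
```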
While the issue of estimating the map is of great interest in the context of sampling, it is not the focus of this paper; see e.g. (Rezende and Mohamed, 2015; Parno and Marzouk, 2018; Muller et al., 2019; Noe et al., 2019). Instead, we focus on comparing the performance of algorithmic trends among NF-enhanced samplers developed simultaneously. On the one hand, flows have been used as reparametrization maps that improve the geometry of the target before running local traditional samplers such as Hamiltonian Monte Carlo (HMC) (Parno and Marzouk, 2018; Hoffman et al., 2019; Noe et al., 2019; Cabezas and Nemeth, 2022). We refer to these strategies as _neutra-MCMC_ methods. On the other hand, the push-forward of the NF base distribution through the map has also been used as an independent proposal in IS (Muller et al., 2019; Noe et al., 2019), an approach coined _neural-IS_, and in MCMC updates (Albergo et al., 2019; Gabrie et al., 2022; Samsonov et al., 2022) among others. We refer to the latter as _flow-MCMC_ methods. Despite the growing number of applications of NF-enhanced samplers, such as in Bayesian inference (Karamanis et al., 2022; Wong et al., 2022), Energy Based Model learning (Nijkamp et al., 2021), statistical mechanics (McNaughton et al., 2020), lattice QCD (Abbott et al., 2022) or chemistry (Mahmoud et al., 2022), a comparative study of the methods of neutra-MCMC, flow-MCMC and neural-IS is lacking. The present work fills this gap: * We systematically compare the robustness of algorithms with respect to key performance factors: imperfect flow learning, poor conditioning and complex geometries of the target distribution, multimodality, and high dimensions (Section 3). * We show that flow-MCMC and neural-IS can handle multimodal distributions up to moderately high dimensions, while neutra-MCMC is hindered in mixing between modes by the approximate nature of learned flows. * For unimodal targets, we find that neutra-MCMC is more reliable than flow-MCMC and neural-IS given low-quality flows. * We provide a new theoretical result on the mixing time of the independent Metropolis-Hastings (IMH) sampler by leveraging for the first time, to the best of our knowledge, a local approximation condition on the importance weights (Section 4). * Intuitions formed on synthetic controlled cases are confirmed in real-world applications (Section 6). ## 2 Background ### Normalizing flows Normalizing flows are a class of probabilistic models combining a \(C^{1}\)-diffeomorphism \(T:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) and a fully tractable probability distribution \(\rho\) on \(\mathbb{R}^{d}\). The _push-forward_ of \(\rho\) by \(T\) is defined as the distribution of \(X=T(Z)\) for \(Z\sim\rho\) and has a density given by the change of variables as \[\lambda_{T}^{\rho}(x)=\rho(T^{-1}(x))|J_{T^{-1}}(x)|\,, \tag{1}\] where \(|J_{T}|\) denotes the Jacobian determinant of \(T\). Similarly, given a probability density \(\pi\) on \(\mathbb{R}^{d}\), the _push-backward_ of \(\pi\) through \(T\) is defined as the distribution of \(Z=T^{-1}(X)\) for \(X\sim\pi\) and has a density given by \(\lambda_{T^{-1}}^{\pi}\). A parameterized family of \(C^{1}\)-diffeomorphisms \(\{T_{\alpha}\}_{\alpha\in\mathbb{A}}\) then defines a family of distributions \(\{\lambda_{T_{\alpha}}^{\rho}\}_{\alpha\in\mathbb{A}}\). This construction has recently been popularised by the use of neural networks (Kobyzev et al., 2021; Papamakarios et al., 2021) for generative modelling and sampling applications.
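The change of variables in Eq. (1) can be checked on the affine toy map from the previous sketch: with \(T(z)=\mu+\sigma z\), one has \(T^{-1}(x)=(x-\mu)/\sigma\) and \(|J_{T^{-1}}|=1/\sigma\), so the push-forward of \(\mathcal{N}(0,1)\) recovers the \(\mathcal{N}(\mu,\sigma^{2})\) density. A minimal sketch, with illustrative parameters:

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 2.0, 0.5

def pushforward_logpdf(x):
    """log lambda_T^rho(x) = log rho(T^{-1}(x)) + log|J_{T^{-1}}(x)| (Eq. 1)."""
    z = (x - mu) / sigma              # T^{-1}(x)
    log_jac = -np.log(sigma)          # log|J_{T^{-1}}| for the affine map
    return norm.logpdf(z) + log_jac   # rho = N(0, 1)

x = np.linspace(0.0, 4.0, 5)
# matches the N(mu, sigma^2) log-density exactly
assert np.allclose(pushforward_logpdf(x), norm.logpdf(x, mu, sigma))
```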
In the context of sampling, the flow is usually trained so that the push-forward distribution approximates the target distribution, i.e., \(\lambda_{T_{\alpha}}^{\rho}\approx\pi\). As will be discussed in more detail below, the flow can then be used as a reparametrization map or for drawing proposals. Note that we are interested in situations where samples from \(\pi\) are not available a priori when training flows for sampling applications. In that case, the reverse Kullback-Leibler divergence (KL) \[\mathrm{KL}(\lambda_{T_{\alpha}}^{\rho}||\pi)=\int\log(\lambda_{T_{\alpha}}^{\rho}(x)/\pi(x))\lambda_{T_{\alpha}}^{\rho}(x)\mathrm{d}x \tag{2}\] can serve as a training target, since it can be efficiently estimated with i.i.d. samples from \(\rho\). Minimizing the reverse KL amounts to variational inference with an NF candidate model (Rezende & Mohamed, 2015). This objective is also referred to in the literature as _self-training_ or _training by energy_; it is notoriously prone to mode collapse (Noe et al., 2019; Jerfel et al., 2021; Hackett et al., 2021). On the other hand, the forward KL \[\mathrm{KL}(\pi||\lambda_{T_{\alpha}}^{\rho})=\int\log(\pi(x)/\lambda_{T_{\alpha}}^{\rho}(x))\pi(x)\mathrm{d}x \tag{3}\] is more manageable but more difficult to estimate because it is an expectation over \(\pi\). Remedies include importance reweighting (Muller et al., 2019), adaptive MCMC training (Parno and Marzouk, 2018; Gabrie et al., 2022), and sequential approaches to the target distribution (McNaughton et al., 2020; Arbel et al., 2021; Karamanis et al., 2022; Midgley et al., 2022). Regardless of which training strategy is used, the learned model \(\lambda_{T_{\alpha}}^{\rho}\) always suffers from approximation and estimation errors with respect to the target \(\pi\). However, the approximate transport map \(T_{\alpha}\) can be used to produce high quality Monte Carlo estimators using the strategies described in the next Section. ### Sampling with transport maps Since NFs can be efficiently sampled from, they can easily be integrated into Monte Carlo methods relying on proposal distributions, such as IS and certain MCMCs. _neural-IS._ Importance Sampling uses a tractable proposal distribution, here denoted by \(\lambda\), to calculate expected values with respect to \(\pi\) (Tokdar and Kass, 2010). Assuming that the support of \(\pi\) is included in the support of \(\lambda\), we denote \[w(x)=\pi(x)/\lambda(x) \tag{4}\] the importance weight function, and define the self-normalized importance sampling estimator (SNIS) (Robert & Casella, 2005) of the expectation of \(f\) under \(\pi\) as \[\hat{\pi}_{N}(f)=\sum_{i=1}^{N}w_{N}^{i}f(X^{i})\] where \(X^{1:N}\) are \(N\) i.i.d. samples from \(\lambda\) and \[w_{N}^{i}=\left.w(X^{i})\right/\sum_{j=1}^{N}w(X^{j}) \tag{5}\] are the self-normalized importance weights. For IS to be effective, the proposal \(\lambda\) must be close enough to \(\pi\) in \(\chi\)-square distance (see (Agapiou et al., 2017, Theorem 1)), which makes IS also notably affected by the curse of dimensionality (e.g., (Agapiou et al., 2017, Section 2.4.1)). Adaptive IS considers parametric proposals adjusted to match the target as closely as possible. NFs are suited to achieve this goal: they define a manageable push-forward density, can be easily sampled, and are very expressive. IS using flows as proposals is known as Neural-IS (Muller et al., 2019) or Boltzmann Generator (Noe et al., 2019).
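A minimal sketch of the self-normalized estimator of Eqs. (4)-(5), computed with log-weights for numerical stability; the Gaussian proposal/target pair is an illustrative stand-in for a flow push-forward and its target.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
target = norm(loc=1.0, scale=1.0)    # pi (known here for illustration)
proposal = norm(loc=0.0, scale=2.0)  # lambda, stand-in for a flow

x = proposal.rvs(size=50_000, random_state=rng)
log_w = target.logpdf(x) - proposal.logpdf(x)  # log importance weights (Eq. 4)
w = np.exp(log_w - log_w.max())                # stabilized unnormalized weights
w_sn = w / w.sum()                             # self-normalized weights (Eq. 5)

est = np.sum(w_sn * x)         # SNIS estimate of E_pi[X], close to 1.0
ess = 1.0 / np.sum(w_sn ** 2)  # effective sample size diagnostic
```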
NFs were also used in methods building on IS to specifically estimate normalization constants (Jia & Seljak, 2019; Ding & Zhang, 2021; Wirnsberger et al., 2020). **Flow-based independent proposal MCMCs** Another way to leverage a tractable \(\lambda_{T_{\alpha}}^{\rho}\approx\pi\) is to use it as a proposal in an MCMC with invariant distribution \(\pi\). In particular, the flow can be used as a proposal for IMH. Metropolis-Hastings is a two-step iterative algorithm relying on a proposal Markov kernel, here denoted by \(Q(x^{(k)},\mathrm{d}x)=q(x^{(k)},x)\mathrm{d}x\). At iteration \(k+1\) a candidate \(x\) is sampled from \(Q\) conditionally to the previous state \(x^{(k)}\) and the next state is set according to the rule \[x^{(k+1)}=\left\{\begin{array}{ll}x&\text{w. prob. }\mathrm{acc}(x^{(k)},x)\\ x^{(k)}&\text{w. prob. }1-\mathrm{acc}(x^{(k)},x)\end{array}\right., \tag{6}\] where, given a target \(\pi\), the acceptance probability is \[\mathrm{acc}(x^{(k)},x)=\min\left(1,\frac{q(x,x^{(k)})\pi(x)}{q(x^{(k)},x)\pi (x^{(k)})}\right). \tag{7}\] To avoid vanishing acceptance probabilities, the Markov kernel is usually chosen to propose local updates, as in Metropolis-adjusted Langevin (MALA) (Roberts & Tweedie, 1996) or Hamiltonian Monte Carlo (HMC) (Duane et al., 1987; Neal et al., 2011), which exploit the local geometry of the target. These local MCMCs exhibit a tradeoff between update size and acceptance probability, leading in many cases to a long decorrelation time for the resulting chain. Conversely, NFs can serve as non-local proposals in the _independent_ Metropolis-Hastings setting \(q(x,x^{\prime})=\lambda_{T_{\alpha}}^{\rho}\left(x^{\prime}\right)\). Since modern computer hardware allows a high degree of parallelism, it may also be advantageous to consider Markov chains with multiple proposals at each iteration, such as multiple-try Metropolis (Liu et al., 2000; Craiu & Lemieux, 2007) or iterative sampling-importance resampling (i-SIR) (Tjelmeland, 2004; Andrieu et al., 2010). The latter has been combined with NFs in (Samsonov et al., 2022). Setting the number of parallel trials to \(N>1\) in i-SIR, \(N-1\) proposals are drawn at iteration \((k+1)\): \[x_{l}\sim\lambda_{T_{\alpha}}^{\rho}\text{ for }l=2\cdots N\] and \(x_{1}\) is set equal to the previous state \(x^{(k)}\). The next state \(x^{(k+1)}\) is drawn from the set \(\{x_{l}\}_{l=1}^{N}\) according to the self-normalized weights \(w_{N}^{l}\) computed as in (5). If \(x^{(k+1)}\) is not equal to \(x^{(k)}\), the MCMC state has been fully refreshed. In what follows, we refer to MCMC methods that use NFs as independent proposals, in IMH or in i-SIR, as _flow-MCMC_ methods (one such update is sketched below). These methods suffer from similar pitfalls as IS. If the proposal is not "close enough" to the target, the importance function \(w(x)=\pi(x)/\lambda_{T_{\alpha}}^{\rho}(x)\) fluctuates widely, leading to long rejection streaks that become all the more difficult to avoid as the dimension increases. **Flow-reparametrized local-MCMCs** A third strategy of NF-enhanced samplers is to use the reverse map \(T_{\alpha}^{-1}\) defined by the flow to transport \(\pi\) into a distribution which, it is hoped, will be easier to capture with standard local samplers such as MALA or HMC. This strategy was discussed by (Parno & Marzouk, 2018) in the context of triangular flows and by (Noe et al., 2019) and (Hoffman et al., 2019) with modern normalizing flows; we keep the denomination of _neutra-MCMC_ from the latter.
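For concreteness, here is a minimal sketch of the flow-MCMC update referenced above: an IMH step with the flow push-forward as independent proposal, for which the acceptance ratio of Eq. (7) reduces to \(w(x)/w(x^{(k)})\) with \(w=\pi/\lambda_{T_{\alpha}}^{\rho}\). All names are illustrative, and the target may be unnormalized.

```python
import numpy as np

# A minimal sketch of one flow-MCMC step: IMH with the flow push-forward as
# independent proposal, q(x, x') = lambda(x'). With an independent proposal
# the acceptance ratio of Eq. (7) reduces to w(x')/w(x^(k)), w = pi/lambda.

def flow_imh_step(x_curr, log_w_curr, sample_flow, log_flow, log_target, rng):
    x_prop = sample_flow()                               # x' ~ lambda_{T_alpha}^rho
    log_w_prop = log_target(x_prop) - log_flow(x_prop)   # log w(x')
    if np.log(rng.uniform()) < log_w_prop - log_w_curr:  # accept w.p. min(1, w'/w)
        return x_prop, log_w_prop
    return x_curr, log_w_curr

# usage on a 1-d toy problem: proposal N(0, 1.5^2), target N(0, 1)
rng = np.random.default_rng(0)
log_flow = lambda x: -0.5 * (x / 1.5) ** 2 - np.log(1.5)
log_target = lambda x: -0.5 * x**2
x = rng.normal()
lw = log_target(x) - log_flow(x)
for _ in range(1000):
    x, lw = flow_imh_step(x, lw, lambda: 1.5 * rng.normal(), log_flow, log_target, rng)
```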
neutra-MCMC amounts to sampling the push-backward of the target \(\lambda_{T_{\alpha}^{-1}}^{\pi}\) with local MCMCs before mapping the samples back through \(T_{\alpha}\). It can be viewed as a reparametrization of the space or a spatially-dependent preconditioning step that has similarities to Riemannian MCMC (Girolami & Calderhead, 2011; Hoffman et al., 2019). Indeed, local MCMCs notoriously suffer from poor conditioning. For example, one can show that MALA has a mixing time\({}^{1}\) scaling as \(O(\kappa\sqrt{d})\), where \(\kappa\) is the conditioning number of the target distribution, provided the target distribution is log-concave (Wu et al., 2022; Chewi et al., 2021; Dwivedi et al., 2018). Footnote 1: See Section 4 for a definition of the mixing time. Nevertheless, neutra-MCMC does not benefit from the fast decorrelation of flow-MCMC methods, since updates remain local. Still, local updates might be precisely the ingredient making neutra-MCMC escape the curse of dimensionality in some scenarios. Indeed, if the distribution targeted is log-concave, the mixing time of MALA mentioned above depends only sub-linearly on the dimension. The question becomes: when does the robustness provided by locality allow neutra-MCMC to beat neural-IS and flow-MCMCs? In a recent work, (Cabezas and Nemeth, 2022) also used a flow reparametrization for the Elliptical Slice Sampler (ESS) (Murray et al., 2010), which is gradient-free and parameter-free\({}^{2}\). Notably, ESS is able to cross energy barriers to tackle multimodal targets more efficiently than MALA or HMC (Natarovskii et al., 2021). Footnote 2: (Cabezas and Nemeth, 2022) uses ESS with a fixed covariance parameter \(\Sigma=I_{d}\). This choice is justified by the fact that using the neutra-MCMC trick amounts to sampling something close to the base of the flow, which is \(\mathcal{N}(0,I_{d})\). In our experiments we include different versions of neutra-MCMCs and indicate in parentheses the sampler used on the push-backward: either MALA, HMC or ESS. ## 3 Synthetic case studies neural-IS, neutra-MCMC and flow-MCMC build on well-studied Monte Carlo schemes with known strengths and weaknesses (see (Rubinstein and Kroese, 2017) for a textbook). Most of their limitations would be lifted if an exact transport between the base distribution and the target were available. However, learned maps are imperfect, which leaves open a number of questions about the expected performance of NF-enhanced samplers: Which of the methods is most sensitive to the quality of the transport approximation? How do they work on challenging multimodal targets? And how do they scale with dimension? In this Section, we present systematic synthetic case studies answering the questions above. In all of our experiments, we selected the samplers' hyper-parameters by optimizing case-specific performance metrics. The length of chains was chosen to be twice the number of steps required to satisfy the \(\hat{R}\) diagnosis (Gelman and Rubin, 1992) at the 1.1-threshold for the fastest converging algorithm. We used MALA as local sampler as it is suited for the log-concave distributions considered, faster and easier to tune than HMC. Evaluation metrics are fully described in App. D.1. ### neutra-MCMCs are robust to imperfect training Provided an exact transport is available, drawing independent samples from the base and pushing them through the map generates i.i.d. samples from the target. This is equivalent to running neural-IS and finding that importance weights (4) are uniformly equal.
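For reference in the comparisons below, here is the matching sketch of one neutra-MCMC (MALA) step: a MALA move on the push-backward log-density \(\log\pi(T_{\alpha}(z))+\log|J_{T_{\alpha}}(z)|\), after which samples are mapped back through \(T_{\alpha}\). The flow interface (forward map and log-det) is an illustrative assumption, and finite differences stand in for automatic differentiation.

```python
import numpy as np

# A minimal sketch of one neutra-MCMC (MALA) step on the push-backward
# log-density log pi(T(z)) + log|J_T(z)|; the flow interface is assumed.

def pushbackward_logp(z, flow_forward, flow_logdet, log_target):
    # log lambda_{T^{-1}}^pi(z) up to the target's normalizing constant
    return log_target(flow_forward(z)) + flow_logdet(z)

def num_grad(logp, z, eps=1e-5):
    g = np.zeros_like(z)
    for i in range(z.size):              # sketch only: O(d) extra evaluations
        dz = np.zeros_like(z); dz[i] = eps
        g[i] = (logp(z + dz) - logp(z - dz)) / (2.0 * eps)
    return g

def neutra_mala_step(z, logp, step, rng):
    g = num_grad(logp, z)
    z_prop = z + step * g + np.sqrt(2.0 * step) * rng.normal(size=z.shape)
    g_prop = num_grad(logp, z_prop)
    # Metropolis correction with the asymmetric Langevin proposal densities
    log_q_fwd = -np.sum((z_prop - z - step * g) ** 2) / (4.0 * step)
    log_q_bwd = -np.sum((z - z_prop - step * g_prop) ** 2) / (4.0 * step)
    if np.log(rng.uniform()) < logp(z_prop) - logp(z) + log_q_bwd - log_q_fwd:
        return z_prop
    return z          # accepted samples are mapped back through T at the end
```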
With an approximate transport map, it is not clear which sampling strategy to prefer. In practice, the quality of the learned flow depends on many parameters: expressiveness, training objective, optimization procedure, etc. In the first experiments that we now present, we design a framework in which the quality of the flow can be adjusted manually. Our first target distribution is a multivariate Gaussian (\(d=128\)) with an ill-conditioned covariance. We define an analytical flow with a scalar quality parameter \(t\in[0,1]\) leading to a perfect transport at \(t=0.5\) and an over-concentrated (respectively under-concentrated) push-forward at \(t=0\) (resp. \(t=1\))\({}^{3}\), as shown in Fig. 1. All the experimental details are reported in App. D.2. Footnote 3: This mimics the behavior when fitting the closest Gaussian of type \(\mathcal{N}(0,\sigma I_{d})\) with the forward KL if \(t=0\) and with the backward KL if \(t=1\). For the perfect flow, neural-IS yields the most accurate samples as expected, closely followed by flow-MCMCs. More interestingly, neutra-MCMC (MALA) quickly outperforms flow-MCMC as the flow shifts away from the exact transport, both towards over-spreading and over-concentrating the mass (Fig. 1). A low-quality flow leads rapidly to zero acceptance of NF proposals or very poor participation ratios\({}^{4}\) for IS, which translates into neural-IS and flow-MCMC being even less efficient than MALA (see Fig. 11 in App. D.4). Conversely, neutra-MCMCs are found to be robust as imperfect pre-conditioning is still an improvement over a simple MALA. These findings are confirmed by repeating the experiment on Neal's Funnel distribution in App. D.3 (Fig. 10), for which the conditioning number of the target distribution highly fluctuates over its support. Moreover, the robustness of neutra-MCMC (MALA) is also consistently observed when using RealNVP flows trained on the Banana distribution, for which we report the performance of NF-enhanced samplers as a function of training completion (see Fig. 16 in App. D.6). Footnote 4: The participation ratio of a sample of \(N\) IS proposals \(1/\sum_{i=1}^{N}{w_{N}^{i}}^{2}\in[1,N]\) tracks the number of samples contributing in practice to the computation of an SNIS estimator. Finally, note that flow-MCMC methods using a multiple-try scheme, here i-SIR, remain more efficient than neutra-MCMC for a larger range of flow imperfections compared to the single-try IMH scheme. This advantage is understandable: an acceptable proposal is more likely to be available among multiple tries (see Fig. 11 in App. D.2). While multiple-try schemes are more costly per iteration, wall-clock times may be comparable thanks to parallelization (see Fig. 17 in App. F.1). ### neutra-MCMC may not mix between modes MCMCs with global updates or IS can effectively capture multiple modes, provided the proposal distribution is well matched to the target. MCMCs with local updates, on the other hand, usually cannot properly sample multimodal targets due to a prohibitive mixing time\({}^{5}\). Therefore, the performance in multimodal environments of neutra-MCMC methods coupled with local samplers depends on the ability of the flow to lower energy barriers while keeping a simple geometry in the push-backward of the target. Footnote 5: Indeed, the Eyring-Kramers law shows that the exit time of a basin of attraction of the Langevin diffusion has an expectation which scales exponentially in the depth of the mode; see e.g. (Bovier et al., 2005) and the references therein.
We first examined the push-backward of common flow architectures trained by likelihood maximization on 2d target distributions. Both for a mixture of 4 isotropic Gaussians (Fig. 2 left) and for the Two-moons distribution (Fig. 8 in App. C), the approximate transport map is unable to erase energy barriers and creates an intricate push-backward landscape. Visualizing chains in latent and direct spaces makes it clear that: neutra-MCMC (MALA) mixing is hindered by the complex geometry, the gradient-free neutra-MCMC (ESS) mixes more successfully, while flow-MCMC (i-SIR) is even more efficient. We systematically extended the experiment on 4-Gaussians by training a RealNVP (Dinh et al., 2016) with maximum likelihood for a range of increasing dimensions (see App. D.5 for all experiment details). We tested the relative ability of the neural-IS, neutra-MCMC and flow-MCMC algorithms to represent the relative weights of each Gaussian component by building histograms of the visited modes within a chain and comparing them with the perfect uniform histogram using a median squared error (Fig. 2 middle). As dimension increases and the quality of the transport map presumably decreases, the ability of neutra-MCMC (MALA) to change basin worsens. Using neutra-MCMC with ESS enables an approximate mixing between modes, but only flow-MCMCs recover the exact histograms robustly up to \(d=256\). In other words, dimension only heightens the performance gap between NF-enhanced samplers observed in small dimension. Note however that the acceptance of independent proposals drops with dimension, such that flow-MCMC is also eventually limited by dimension (Fig. 12 in App. D.5). Not included in the plots for readability, neural-IS behaves similarly to flow-MCMC. In fact, it is expected that exact flows mapping a unimodal base distribution to a multimodal target distribution are difficult to learn (Cornish et al., 2020). More precisely, Theorem 2.1 of this reference shows that the bi-Lipschitz constant\({}^{6}\) of a transport map between two distributions with different supports is infinite. Here we provide a complementary result on the illustrative one-dimensional case of a Gaussian mixture target \(\pi\) and standard normal base \(\rho\): Footnote 6: The bi-Lipschitz constant \(\mathrm{BiLip}(f)\) of \(f\) is defined as the infimum over \(M\in[1,\infty]\) such that \(M^{-1}\|z-z^{\prime}\|\leq\|f(z)-f(z^{\prime})\|\leq M\|z-z^{\prime}\|\) for all distinct \(z\) and \(z^{\prime}\). **Proposition 3.1**.: _Let \(\pi=\mathcal{N}(-a,\sigma^{2})/2+\mathcal{N}(a,\sigma^{2})/2\) with \(a>0\), \(\sigma>0\) and \(\rho=\mathcal{N}(0,1)\). The unique flow mapping \(\pi\) to \(\rho\) denoted \(T_{\pi,\rho}\) verifies that_ \[\mathrm{BiLip}(T_{\pi,\rho})\geq\frac{\mathrm{d}T_{\pi,\rho}^{-1}}{\mathrm{d} z}(0)=\sigma\exp\left(\frac{a^{2}}{2\sigma^{2}}\right)\,. \tag{8}\] The proof of Proposition 3.1, showing the exponential scaling of the bi-Lipschitz constant in the distance between modes, is postponed to Appendix A. Overall, these results show that neural-IS and flow-MCMC are typically more effective for multimodal targets than neutra-MCMC. Note further that it has also been proposed to use mixture base distributions (Izmailov et al., 2020) or mixtures of NFs (Noe et al., 2019; Gabrie et al., 2022; Hackett et al., 2021) to accommodate multimodal targets, provided that prior knowledge of the modes' structure is available.
These mixture models can be easily plugged into neural-IS and flow-MCMC; however, it is unclear how to combine them with neutra-MCMC. Finally, mixing neutra-MCMC and flow-MCMC schemes by alternating between global updates (crossing energy barriers) and flow-preconditioned local steps (robust within a mode) seems promising, in particular when properties of the target distribution are not known a priori. It will be referred to below as _neutra-flow-MCMC_. A proposition along these lines was also made in (Grumitt et al., 2022).

Figure 1: **(Left)** Push-forwards \(\lambda_{T_{t}}^{\rho}\) and push-backwards \(\lambda_{T_{t}^{-1}}^{\pi}\) as a function of the flow imperfection parameter \(t\). **(Right)** Sliced TV distances of different samplers depending on the quality of the flow \(t\), using 256 chains of length 1400 initialized with draws from the NF with \(T_{t}\). neural-IS was evaluated with 14000 samples. Results were qualitatively unchanged for \(d=16,32,64,256\).

### flow-MCMCs are the most impacted by dimension To investigate the effect of dimension, we ran a systematic experiment on the Banana distribution, a unimodal distribution with complicated geometry (details in App. D.6). We compared NF-enhanced samplers powered by RealNVPs trained to optimize the backward KL and found that a crossover occurs in moderate dimensions: neural-IS and flow-MCMC algorithms are more efficient in small dimension but are more affected by the increase in dimensions compared to neutra-MCMC algorithms. ## 4 New mixing rates for IMH As illustrated by the previous experiment, learning and sampling are expected to be more challenging when dimension increases. To better understand the interplay between flow quality and dimension, we examined the case of the IMH sampler applied to a strongly log-concave target \(\pi\) and proposal \(\mathcal{N}(0,\sigma^{2}I_{d})\)\({}^{7}\), \(\sigma>0\), with density denoted by \(q_{\sigma}\). Footnote 7: Nevertheless, we develop a theory for a generic proposal in App. E through Theorem E.4. To this end, we consider the following assumption on the importance weight function \(w_{\sigma}(x)=\pi(x)/q_{\sigma}(x)\): **Assumption 4.1**.: For any \(R\geq 0\), there exists \(C_{R}\geq 0\) such that for any \(x,y\in\mathrm{B}(0,R)=\{z\,:\,\|z\|<R\}\): \[|\log w_{\sigma}(x)-\log w_{\sigma}(y)|\leq C_{R}\,\|x-y\|. \tag{9}\] The constant \(C_{R}\) appearing in Assumption 4.1, for \(R\geq 0\), represents the quality of the proposal with respect to the target \(\pi\) locally on \(\mathrm{B}(0,R)\). Indeed, if \(q_{\sigma}=\pi\) on \(\mathrm{B}(0,R)\), this constant is zero. In particular, \(q_{\sigma}\approx\pi\) with \(q_{\sigma}\) and \(\pi\) smooth and \(\nabla\log w_{\sigma}(x)\approx 0\) on \(\mathrm{B}(0,R)\) would result in Assumption 4.1 holding with a small constant \(C_{R}\). In contrast to existing analyses of IMH, which assume \(w_{\sigma}\) to be uniformly bounded to obtain explicit convergence rates, here we only assume a smooth local condition on \(\log(w_{\sigma})\), namely that it is locally Lipschitz. Note that this latter condition is milder than the former. To the best of our knowledge, it is the first time that such a condition is considered for IMH; a thorough comparison of our contribution with the literature is postponed to Section 5.
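As a concrete check of Assumption 4.1 (and of the Gaussian example used after Theorem 4.3 below), take \(\pi=\mathcal{N}(0,I_{d})\) and \(q_{\sigma}=\mathcal{N}(0,\sigma^{2}I_{d})\). A direct computation gives \[\log w_{\sigma}(x)=d\log\sigma+\frac{\|x\|^{2}}{2}\Big{(}\frac{1}{\sigma^{2}}-1\Big{)},\qquad\nabla\log w_{\sigma}(x)=\Big{(}\frac{1}{\sigma^{2}}-1\Big{)}x\,,\] so on \(\mathrm{B}(0,R)\) Assumption 4.1 holds with \(C_{R}=R\,|\sigma^{-2}-1|\). For \(\sigma=1+\lambda\) with small \(|\lambda|\), \(C_{R}\approx 2R|\lambda|\); since the radii relevant below are of order \(\sqrt{d}\), keeping \(C_{R}\) below a fixed threshold forces \(\lambda\) to shrink with the dimension.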
While we relax existing conditions on \(w_{\sigma}\), we restrict our study to the following particular class of targets: **Assumption 4.2**.: The target \(\pi\) is positive and \(-\log\pi\) is \(m\)-strongly convex on \(\mathbb{R}^{d}\) and attains its minimum at \(0\). Denote by \(P_{\sigma}\) the IMH Markov kernel with target \(\pi\) and proposal \(\mathcal{N}(0,\sigma^{2}I_{d})\). In our next result, we analyze the mixing time of \(P_{\sigma}\), defined for an accuracy \(\epsilon>0\) and an initial distribution \(\mu\) as \[\tau_{mix}(\mu,\epsilon)=\inf\{n\in\mathbb{N}\ :\ \|\mu P_{\sigma}^{n}-\pi\|_{ \mathrm{TV}}\leq\epsilon\}\,. \tag{10}\] \(\tau_{mix}\) quantifies the number of MCMC steps needed to bring the total variation distance\({}^{8}\) between the Markov chain and its invariant distribution below \(\epsilon\). Footnote 8: For two distributions \(\mu,\nu\) on \(\mathbb{R}^{d}\), \(\|\mu-\nu\|_{\mathrm{TV}}=\sup_{\mathsf{A}\in\mathcal{B}(\mathbb{R}^{d})}|\mu( \mathsf{A})-\nu(\mathsf{A})|\). **Theorem 4.3** (Explicit mixing time bounds for IMH).: _Assume Assumptions 4.1 and 4.2 hold. Let \(0<\epsilon<1\) and \(\mu\) be a \(\beta\)-warm distribution with respect to \(\pi\)\({}^{9}\). Suppose in addition that \(C_{R}\leq(\log 2)\sqrt{m}/32\) with_ Footnote 9: For any Borel set \(\mathsf{E}\), \(\mu(\mathsf{E})\leq\beta\pi(\mathsf{E})\). \[R\geq C\sqrt{d}\max\left(\sigma,\frac{1}{\sqrt{m}}\right)(1+ \lvert\log^{\alpha}(\epsilon/\beta)\rvert\,/d^{\alpha/2})\,, \tag{11}\] _for some explicit numerical constant \(C\geq 0\) and exponent \(\alpha>0\). Then the mixing time of IMH is bounded as_ \[\tau_{mix}(\mu,\epsilon)\leq 128\log\left(\frac{2\beta}{\epsilon}\right)\max \left(1,\frac{128^{2}C_{R}^{2}}{\log(2)^{2}m}\right)\,. \tag{12}\] The proof of Theorem 4.3 is postponed to App. E. It shows that if \(C_{R}\) is bounded by a constant independent of the dimension for \(R\) of order at least \(\sqrt{d}\), then the mixing time is also independent of the dimension, which recovers easy consequences of existing analyses (Roberts and Rosenthal, 2011; Wang, 2022). In contrast to these works, Theorem 4.3 can be applied to the illustrative case where \(\pi=\mathcal{N}(0,I_{d})\) and \(\sigma=1+\lambda\), considering the error term \(\lambda\) either positive _or_ negative (for which \(w_{\sigma}\) is not uniformly bounded). In that case, Theorem 4.3 shows that reaching a precision \(\epsilon\) with a fixed number of MCMC steps \(n\) requires \(\lambda\) to decrease as \(\mathcal{O}(1/d)\) (the detailed derivation is postponed to App. E). Finally, note that in Theorem 4.3 we do not assume that \(\pi\) is \(L\)-smooth, i.e., that \(\nabla\log\pi\) is Lipschitz. This condition is generally considered in existing results on MALA and HMC for strongly log-concave target distributions; see (Dwivedi et al., 2018; Chen et al., 2020). ## 5 Related Works **Comparison of NF-enhanced samplers** Several papers have investigated the difficulty of flow-MCMC algorithms in scaling with dimension (Del Debbio et al., 2021; Abbott et al., 2022). Hurdles arising from multimodality were also discussed in (Hackett et al., 2021) in the context of flow-MCMC methods. Meanwhile, the authors of (Hoffman et al., 2019) argued that the success of their neutra-MCMC was tied to the quality of the flow but did not provide experiments in this direction. To the best of our knowledge, no thorough comparative study of the different NF-enhanced samplers was performed prior to this work.
As previously mentioned, (Grumitt et al., 2022) proposed to mix local NF-preconditioned steps with NF Metropolis-Hastings steps, i.e., to combine neutra-MCMC and flow-MCMC methods. However, the focus of these authors was on performing deterministic local updates using an instantaneous estimate of the density of walkers provided by the flow. More related to the present work, they present a brief ablation study in their Appendix D. Enhancing Sequential Monte Carlo (Del Moral et al., 2006) with NFs has also been investigated by (Arbel et al., 2021; Karamanis et al., 2022). These methods are more involved and require the definition of a collection of target distributions approaching the final distribution of interest. They cannot be directly compared to neural-IS, neutra-MCMC and flow-MCMC. **IMH analysis** Most analyses establishing quantitative convergence bounds rely on the condition that the ratio \(\pi/q\) be uniformly bounded (Yang and Liu, 2021; Brown and Jones, 2021; Wang, 2022). In these works, it is shown that IMH is uniformly geometric in total variation or Wasserstein distances. Our contribution relaxes the uniform boundedness condition on \(\pi/q\) by restricting our study to the class of strongly log-concave targets. The analysis of local MCMC samplers, such as MALA or HMC, for sampling from a strongly log-concave target is now well developed; see e.g., (Dwivedi et al., 2018; Chen et al., 2020; Chewi et al., 2021; Wu et al., 2022). These works rely on the notion of \(s\)-conductance for a reversible Markov kernel and on the results developed in (Lovasz and Simonovits, 1993) connecting this notion to the kernel's mixing time. This strategy has been successfully applied to numerous MCMC algorithms since then; e.g., (Lovasz, 1999; Vempala, 2005; Lovasz and Vempala, 2007; Chandrasekaran et al., 2019; Mou et al., 2019; Cousins and Vempala, 2020; Narayanan and Srivastava, 2022). We follow the same path in the proof of Theorem 4.3. Finally, while (Roberts and Rosenthal, 2011) establish a general convergence result for IMH under mild assumptions, exploiting this result turns out to be difficult. In particular, we believe it cannot be made quantitative if \(\pi/q\) is unbounded, since their main convergence result involves an intractable expectation with respect to the target distribution. ## 6 Benchmarks on real tasks In this Section we compare NF-enhanced samplers beyond the previously discussed synthetic examples. Our main findings hold for real-world use-cases. ### Molecular system Our first experiment is the alanine dipeptide molecular system, which consists of 22 atoms in an implicit solvent. Our goal is to capture the Boltzmann distribution at temperature \(T=300K\) of the atomic 3D coordinates, which is known to be multimodal. We have used the flow trained in (Midgley et al., 2022) to drive the samplers and generated 2d projections of the outputs in Figure 3. neutra-MCMC methods do not mix perfectly between modes, while flow-MCMC properly explores the weaker modes. Figure 2: **(Left)** Example chains of NF-enhanced walkers with a 2d target mixture of 4 Gaussians. The 128-step MCMC chain is colored according to the closest mode in the data space (bottom row) with corresponding location in the latent space (top row). The complex geometry of the push-backward \(\lambda_{T_{\alpha}^{-1}}^{\pi}\) hinders the mixing of local-update algorithms. MALA's step-size was chosen to reach a 75% acceptance rate.
**(Middle)** Median squared error of the histograms of visited modes of the 4 Gaussians per chain against the perfect uniform histogram, as a function of dimension; 512 chains of 1000 steps on average were used. **(Right)** Sliced total variation in sampling the Banana distribution in increasing dimension using a RealNVP; 128 chains of 1024 steps were used. For more details, see App. F.3. ### Sparse logistic regression Our second experiment is a sparse Bayesian hierarchical logistic regression on the German credit dataset (Dua and Graff, 2017), which has been used as a benchmark in recent papers (Hoffman et al., 2019; Grumitt et al., 2022; Cabezas and Nemeth, 2022). We trained an Inverse Autoregressive Flow (IAF) (Papamakarios et al., 2017) using the procedure described in (Hoffman et al., 2019). More details about the sampled distribution and the construction of the flow are given in App. F.2. We sampled the posterior predictive distribution on a test dataset and report the log-posterior predictive density values for these samples in Table 1. neutra-MCMC methods achieve higher posterior predictive values compared to flow-MCMC methods, which differ little from HMC. Note that neutra-flow-MCMC, alternating between flow-MCMC and neutra-MCMC, does not improve upon neutra-MCMC. ### Field system In our last experiment we investigate the 1-d \(\phi^{4}\) model used as a benchmark in (Gabrie et al., 2022). This field system has two well-separated modes at the chosen temperature. Defined at the continuous level, the field can be discretized with different grid sizes, leading to practical implementations in different dimensions. We trained a RealNVP in \(64\), \(128\), and \(256\) dimensions by minimizing an approximated forward KL (more details on this procedure in App. F.4). Consistent with the results of Section 3.2, neutra-MCMC (MALA) chains remain in the modes in which they were initialized, neutra-MCMC (ESS) crosses over to the other mode only rarely, while flow-MCMC is able to mix properly (see Fig. 4 left). To further examine performance as a function of dimension, we considered the distribution restricted to the initial mode only and calculated the sliced total variation of the samplers' chains compared to exact samples (Fig. 4 right). neutra-MCMC methods appear to be less accurate here than flow-MCMC. Even within a mode, the global updates appear to allow for more effective exploration. Both approaches suffer as dimension grows. ### Run time considerations In all experiments, algorithms were compared at a fixed sample size, yet wall-clock time and computational costs per iteration vary between samplers: neural-IS and single-try flow-MCMC require two passes through the flow per iteration, while neutra-MCMCs typically require more. The computational cost of multiple-try flow-MCMC scales linearly with the number of trials yet can be parallelized. In App. F.1 we report the run-time per iteration for the experiments of this section. Results show that neutra-MCMCs are usually significantly slower per iteration than the other methods. Nevertheless, expensive target evaluations, such as in molecular dynamics, particularly impact multiple-try flow-MCMC.
\begin{table} \begin{tabular}{l c} \hline \hline \multirow{2}{*}{Sampler} & Average predictive \\ & log-posterior density \\ \hline neutra-MCMC (HMC) & -191.1 \(\pm\) 0.1 \\ neutra-flow-MCMC (i-SIR + HMC) & -194.1 \(\pm\) 1.8 \\ flow-MCMC (i-SIR) & -207.5 \(\pm\) 2.9 \\ HMC & -209.7 \(\pm\) 0.7 \\ \hline \hline \end{tabular} \end{table} Table 1: Predictive posterior distribution for Bayesian sparse logistic regression on the German credit dataset. Figure 4: **(Left)** Sampled \(\phi^{4}\) configurations in dimension 128. **(Right)** Within-mode sliced TV as a function of dimension. Figure 3: **Sampled configurations of alanine-dipeptide projected from 66 Cartesian coordinates to 2 dihedral angles \(\phi\) and \(\psi\)** (see App. F.3). **(Top)** Samples from the flow (left) and samples from a single MCMC chain of the different NF-samplers are shown as bright-colored points on a colored background displaying the log-histogram of exact samples at \(T=300K\) obtained by a Replica Exchange Molecular Dynamics simulation of (Stimper et al., 2022). **(Bottom)** Log-histograms of samples from the flow (left) and from 256 MCMC chains started at the same location. ## Acknowledgements L.G. and M.G. acknowledge funding from Hi! Paris. The work was partly supported by ANR-19-CHIA-0002-01 "SCAI". Part of this research has been carried out under the auspices of the Lagrange Center for Mathematics and Computing.
2308.11453
Quantitative global well-posedness of Boltzmann-Bose-Einstein equation and incompressible Navier-Stokes-Fourier limit
In the diffusive scaling and in the whole space, we prove the global well-posedness of the scaled Boltzmann-Bose-Einstein (briefly, BBE) equation with high temperature in the low regularity space $H^2_xL^2$. In particular, we quantify the fluctuation around the Bose-Einstein equilibrium $\mathcal{M}_{\lambda,T}(v)$ with respect to the parameters $\lambda$ and temperature $T$. Furthermore, the estimate for the diffusively scaled BBE equation is uniform in the Knudsen number $\epsilon$. As a consequence, we rigorously justify the hydrodynamic limit to the incompressible Navier-Stokes-Fourier equations. This is the first rigorous fluid limit result for BBE.
Ling-Bing He, Ning Jiang, Yu-long Zhou
2023-08-22T14:02:15Z
http://arxiv.org/abs/2308.11453v1
Quantitative global well-posedness of Boltzmann-Bose-Einstein equation and incompressible Navier-Stokes-Fourier limit ###### Abstract. In the diffusive scaling and in the whole space, we prove the global well-posedness of the scaled Boltzmann-Bose-Einstein (briefly, BBE) equation with high temperature in the low regularity space \(H_{x}^{2}L^{2}\). In particular, we quantify the fluctuation around the Bose-Einstein equilibrium \(\mathcal{M}_{\lambda,T}(v)\) with respect to the parameters \(\lambda\) and temperature \(T\). Furthermore, the estimate for the diffusively scaled BBE equation is uniform in the Knudsen number \(\epsilon\). As a consequence, we rigorously justify the hydrodynamic limit to the incompressible Navier-Stokes-Fourier equations. This is the first rigorous fluid limit result for BBE. ###### Contents * 1 Introduction * 2 Scaled Boltzmann-Bose-Einstein equation * 3 Linear and nonlinear collision operators analysis * 4 A priori estimate and global well-posedness * 5 Hydrodynamic limits * 6 Appendix AMS Subject Classification (2020): 35Q20, 82C40. ## 1. Introduction Quantum Boltzmann equations were proposed to describe the time evolution of a dilute system of weakly interacting bosons or fermions, which obey the Bose-Einstein or Fermi-Dirac statistics, respectively. Consequently, these equations are named Boltzmann-Fermi-Dirac (briefly, BFD) or Boltzmann-Bose-Einstein (briefly, BBE) equations, respectively. The derivation of such equations dates back to as early as the 1920s by Nordheim [47] and 1933 by Uehling-Uhlenbeck [50]. Consequently, the quantum Boltzmann equations are also called Boltzmann-Nordheim equations or Uehling-Uhlenbeck equations in the literature. Later on, further developments were made by Erdos-Salmhofer-Yau [18], Benedetto-Castella-Esposito-Pulvirenti [7], [10], [8] and a short review [9], and Lukkarinen-Spohn [45]. One can refer to the classical book [15] for physical background. When the quantum effects are not considered, the evolution of dilute gas particles is governed by the classical Boltzmann equation: \[\partial_{t}f+v\cdot\nabla_{x}f=Q_{B}(f)\,,\] where the Boltzmann collision term is \[Q_{B}(f)(v)=\int_{\mathbb{S}^{2}\times\mathbb{R}^{3}}B(v-v_{*},\sigma)\big{\{}f (v_{*}^{\prime})f(v^{\prime})-f(v_{*})f(v)\big{\}}\mathrm{d}\sigma\mathrm{d}v_ {*}\,.\] In the above equation, the unknown non-negative function \(f\equiv f(t,x,v)\) is the so-called number density of the particles. It describes the evolution of the particles at time \(t\geq 0\), at position \(x\in\Omega\), with velocity \(v\in\mathbb{R}^{3}\). The domain \(\Omega\) could be the whole space, the torus, or other domains with boundaries. In this paper, we assume \(\Omega=\mathbb{R}^{3}\). In this setting, we assume all the particles have the same mass, with velocities \(v\) and \(v_{*}\) before the collisions, and \(v^{\prime}\) and \(v_{*}^{\prime}\) after the collisions. Here we only consider elastic collisions, which conserve momentum and kinetic energy, i.e. \[v+v_{*}=v^{\prime}+v_{*}^{\prime}\,,\quad|v|^{2}+|v_{*}|^{2}=|v^{\prime}|^{2}+| v_{*}^{\prime}|^{2}\,. \tag{1.1}\] The above conservation laws comprise four equations, while \(v^{\prime}\) and \(v^{\prime}_{*}\) contain six unknowns. So we need a parameter on the two-dimensional manifold \(\mathbb{S}^{2}\) to represent \(v^{\prime}\) and \(v^{\prime}_{*}\) in terms of \(v\) and \(v_{*}\): \[v^{\prime}=\frac{v+v_{*}}{2}+\frac{|v-v_{*}|}{2}\sigma,\quad v^{\prime}_{*}= \frac{v+v_{*}}{2}-\frac{|v-v_{*}|}{2}\sigma\,,\] where \(\sigma\in\mathbb{S}^{2}\).
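For completeness, one can check directly that this parametrization satisfies the conservation laws (1.1): clearly \(v^{\prime}+v^{\prime}_{*}=v+v_{*}\), and since \(|\sigma|=1\), the parallelogram identity gives \[|v^{\prime}|^{2}+|v^{\prime}_{*}|^{2}=2\Big{|}\frac{v+v_{*}}{2}\Big{|}^{2}+2\Big{|}\frac{|v-v_{*}|}{2}\sigma\Big{|}^{2}=\frac{|v+v_{*}|^{2}+|v-v_{*}|^{2}}{2}=|v|^{2}+|v_{*}|^{2}\,.\]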
In fact, it is easy to see that \(\sigma=\frac{v^{\prime}-v^{\prime}_{*}}{|v^{\prime}-v^{\prime}_{*}|}\,.\) For a general introduction to the Boltzmann equation, see the standard references [14, 48, 49]. ### Quantum Boltzmann equations Now we introduce the so-called quantum Boltzmann equations: \[\partial_{t}F+v\cdot\nabla_{x}F=Q_{\Phi,\hbar}(F,F),\ t>0,x\in\mathbb{R}^{3},v \in\mathbb{R}^{3};\quad F|_{t=0}(x,v)=F_{0}(x,v). \tag{1.2}\] Here \(F(t,x,v)\geq 0\) is the density function of particles with velocity \(v\in\mathbb{R}^{3}\) at time \(t\geq 0\) and position \(x\in\mathbb{R}^{3}\). The quantum Boltzmann collision operator \(Q_{\Phi,\hbar}\), acting only on the velocity variable \(v\), is defined by \[Q_{\Phi,\hbar}(g,h)(v):=\int_{\mathbb{S}^{2}\times\mathbb{R}^{3}}B_{\Phi, \hbar}(v-v_{*},\sigma)\mathrm{D}\big{(}g^{\prime}_{*}h^{\prime}(1+\delta\hbar ^{3}g_{*})(1+\delta\hbar^{3}h)\big{)}\mathrm{d}\sigma\mathrm{d}v_{*}, \tag{1.3}\] where, according to [18] and [10], the quantum Boltzmann collision kernel \(B_{\Phi,\hbar}(v-v_{*},\sigma)\) has the following form: \[B_{\Phi,\hbar}(v-v_{*},\sigma):=\hbar^{-4}|v-v_{*}|\big{(}\hat{\Phi}(\hbar^{- 1}|v-v^{\prime}|)+\hat{\Phi}(\hbar^{-1}|v-v^{\prime}_{*}|)\big{)}^{2}\,. \tag{1.4}\] Here the radial function \(\hat{\Phi}(|\xi|):=\hat{\Phi}(\xi)=\int_{\mathbb{R}^{3}}e^{-\mathrm{i}x\cdot \xi}\Phi(x)\mathrm{d}x\) is the Fourier transform of a radial potential function \(\Phi(x)\). Furthermore, in (1.3), \(\hbar\) is the Planck constant, and \(\delta=1\) or \(-1\). Specifically, \(\delta=1\) corresponds to Bose-Einstein statistics, while \(\delta=-1\) corresponds to Fermi-Dirac statistics. In (1.3) and the rest of the article, we use the convenient shorthands \(h=h(v)\), \(g_{*}=g(v_{*})\), \(h^{\prime}=h(v^{\prime})\), \(g^{\prime}_{*}=g(v^{\prime}_{*})\), where \(v^{\prime}\), \(v^{\prime}_{*}\) are given by \[v^{\prime}=\frac{v+v_{*}}{2}+\frac{|v-v_{*}|}{2}\sigma,\quad v^{\prime}_{*}= \frac{v+v_{*}}{2}-\frac{|v-v_{*}|}{2}\sigma,\quad\sigma\in\mathbb{S}^{2}\,. \tag{1.5}\] Now we explain the notation \(\mathrm{D}(\cdot)\) in (1.3). For \(n=1\) or \(n=2\), we denote \[\mathrm{D}^{n}\big{(}f(v,v_{*},v^{\prime},v^{\prime}_{*})\big{)}:=\left(f(v,v_ {*},v^{\prime},v^{\prime}_{*})-f(v^{\prime},v^{\prime}_{*},v,v_{*})\right)^{n}. \tag{1.6}\] If \(n=1\), we write \(\mathrm{D}(\cdot)=\mathrm{D}^{1}(\cdot)\). The term \(\mathrm{D}\) is interpreted as the "difference" before and after a collision. In particular, in (1.3), \[\mathrm{D}\big{(}g^{\prime}_{*}h^{\prime}(1+\delta\hbar^{3}g_{*})(1+\delta \hbar^{3}h)\big{)}=g^{\prime}_{*}h^{\prime}(1+\delta\hbar^{3}g_{*})(1+\delta \hbar^{3}h)-g_{*}h(1+\delta\hbar^{3}g^{\prime}_{*})(1+\delta\hbar^{3}h^{ \prime})\,. \tag{1.7}\] By the following scaling \[\tilde{F}(t,x,v)=\hbar^{3}F(\hbar^{3}t,x,\hbar^{-3}v),\quad\phi(|x|)=\hbar^{4 }\Phi(\hbar^{4}|x|), \tag{1.8}\] we can normalize the Planck constant \(\hbar\).
Indeed, it is easy to check that \(F\) is a solution to (1.2) if and only if \(\tilde{F}\) is a solution of the following normalized equation \[\partial_{t}F+v\cdot\nabla_{x}F=Q_{\phi}(F,F),\ t>0,x\in\mathbb{R}^{3},v\in \mathbb{R}^{3};\quad F|_{t=0}(x,v)=\hbar^{3}F_{0}(x,\hbar^{-3}v), \tag{1.9}\] where the operator \(Q_{\phi}\) is defined by \[Q_{\phi}(g,h)(v):=\int_{\mathbb{S}^{2}\times\mathbb{R}^{3}}B_{\phi}(v-v_{*}, \sigma)\mathrm{D}\big{(}g^{\prime}_{*}h^{\prime}(1+\delta g_{*})(1+\delta h) \big{)}\mathrm{d}\sigma\mathrm{d}v_{*} \tag{1.10}\] with the kernel \(B_{\phi}(v-v_{*},\sigma)\) given by \[B_{\phi}(v-v_{*},\sigma):=|v-v_{*}|\big{(}\hat{\phi}(|v-v^{\prime}|)+\hat{ \phi}(|v-v^{\prime}_{*}|)\big{)}^{2}. \tag{1.11}\] In the current paper, we will take the potential \(\phi(x)=\frac{1}{2}\delta(x)\). Then the kernel in (1.11) reduces to that of the _hard sphere_ model \[B(v-v_{*},\sigma):=|v-v_{*}| \tag{1.12}\] and the collision operator reduces to \[Q(g,h)(v):=\int_{\mathbb{S}^{2}\times\mathbb{R}^{3}}B(v-v_{*},\sigma)\mathrm{D} \big{(}g^{\prime}_{*}h^{\prime}(1+\delta g_{*})(1+\delta h)\big{)}\mathrm{d} \sigma\mathrm{d}v_{*}\,. \tag{1.13}\] As introduced before, when \(\delta=1\) in the collision operator (1.13), the corresponding quantum Boltzmann equation (1.9) is called the Boltzmann-Bose-Einstein equation, briefly BBE. When \(\delta=-1\), the equation (1.9) is called the Boltzmann-Fermi-Dirac equation, briefly BFD. In this paper, we focus on the BBE equation, i.e. \(\delta=1\). More specifically, we study the fluid dynamic limit from BBE to the incompressible Navier-Stokes-Fourier system. The same limit from BFD (i.e. the case \(\delta=-1\)) was treated in [30]. ### Well-posedness of quantum Boltzmann equations As mentioned above, quantum Boltzmann equations include the BFD and BBE equations. The first mathematical question is the well-posedness of these equations, in the framework of corresponding functional spaces of regularity, such as weak solutions, smooth solutions, or in between. Compared to the extensive studies on the classical Boltzmann equation, much less has been done on quantum Boltzmann equations. Mathematically, BFD is relatively easier. We first review the studies on BFD in the past three decades. For the mathematical theory of well-posedness of the BFD equation, early results were obtained by Dolbeault [16] and Lions [34]. They studied the global existence of weak solutions in the mild or distributional sense in the whole space \(\mathbb{R}^{3}\) under some assumptions on the collision kernel. Furthermore, Dolbeault [16] showed that the solution of the BFD equation converges to the solution of the Boltzmann equation as \(\delta\to 0\) (the \(\delta\) appearing in (1.3)) for a very special bounded collision kernel. Allemand [3] extended the results of [16] to bounded domains with specular reflection boundary condition for integrable collision kernels. Alexandre [1] obtained another kind of weak solutions satisfying the entropy inequality, the so-called \(H\)-solutions. Up to now, the best results on global weak solutions to the BFD equation belong to Lu. For general initial data, Lu [35, 43, 38] studied the global existence and stability of weak solutions on the torus for very soft potentials with a weak angular cutoff. More results on weak solutions can be found in [19, 20]. We remark that in [30], the authors derived a uniform-in-\(\epsilon\in(0,1)\) energy estimate in the incompressible regime. This also gave a global-in-time smooth solution near the global Fermi-Dirac distribution.
For the more difficult BBE equation, which exhibits more interesting physical phenomena such as the famous Bose-Einstein condensation, the mathematical literature is even sparser. In a series of papers, Lu [36, 37, 44, 39, 40, 41, 42, 13] made major contributions to the systematic analytical study of weak solutions of (mainly homogeneous) BBE, including the existence of weak solutions, convergence to equilibrium (i.e. the global Bose-Einstein distribution), long time behavior, etc. In particular, the condensation at low temperature was investigated. The focus of the current paper is on classical solutions of BBE near the equilibrium (i.e. the global Bose-Einstein distribution), at high temperature. We will explain the relation between the equilibria of BBE and the temperature later in this section. ### Hydrodynamic limits of quantum Boltzmann equations In the other direction, the hydrodynamic limits from kinetic equations to fluid equations have been a very active field in recent decades. One of the important features of kinetic equations is their connection to the fluid equations. The so-called hydrodynamic limit is the process in which the Knudsen number \(\epsilon\) goes to zero; here \(\epsilon>0\) is the dimensionless quantity defined as the ratio of the mean free path to the macroscopic length scale. Depending on the physical scalings, different fluid equations (incompressible or compressible Navier-Stokes, Euler, etc.) can be derived from kinetic equations. Bardos and Ukai [6] proved the global existence of classical solutions to the diffusively scaled Boltzmann equation uniformly in \(0<\epsilon<1\) for hard potentials with cutoff collision kernels. Consequently, they justified the limit to the incompressible Navier-Stokes equations with small initial data. By employing a semigroup approach, Briant [11] also proved the same limit on the torus for hard cutoff potentials, in particular with a convergence rate. Jiang, Xu and Zhao [31] proved again the same limit for a more general class of collision kernels. Starting from the solutions to the limiting fluid equations, Caflisch [12] and Nishida [46] proved the compressible Euler limit from the Boltzmann equation in the context of classical solutions by the Hilbert expansion, and of analytic solutions, respectively. Caflisch's approach was applied to the acoustic limit by Guo, Jang and Jiang [26, 27, 28] by combining it with the nonlinear energy method. We also mention some more results using Hilbert expansions [25, 29]. For the fluid limits of the BFD equation, Zakrevskiy [53, 52] formally derived the compressible Euler and Navier-Stokes limits and the incompressible Navier-Stokes limit. We also mention that Filbet, Hu and Jin [22] introduced a new scheme for the quantum Boltzmann equation to capture the Euler limit by numerical computations. In [30], Jiang-Xiong-Zhou proved the global existence of classical solutions near equilibrium and, in addition, obtained uniform-in-\(\epsilon\) energy estimates. As a consequence, the incompressible Navier-Stokes-Fourier limit from the BFD equation was justified. Recently, Jiang-Zhou [32] studied the compressible Euler limit from the BFD equation using the Hilbert expansion method. The main theme of this paper is the incompressible Navier-Stokes-Fourier limit from the BBE equation. The key feature is to obtain a global-in-time and uniform-in-Knudsen-number estimate for the scaled BBE equation. However, at this stage this can only be achieved in the case of high temperature.
We start from the scaled BBE in the diffusive scaling, from which the incompressible Navier-Stokes-Fourier system can be derived. For the high temperature case, the formal derivation is similar to that in [53] for BFD and [5] for the classical Boltzmann equation. We emphasize that our result can be considered as the analogue of the corresponding limits for the classical Boltzmann equation [31] and the BFD equation [30]. All of these results belong to the so-called "bottom-up" type fluid limits, as classified in [31]. More specifically, these limits do not rely on the existence of the limiting equations. In fact, these limits provide the solutions of the limiting equations from the solutions of the kinetic equations. We will start from the following scaled BBE equation (in diffusive scaling): \[\partial_{t}F+\frac{1}{\epsilon}v\cdot\nabla_{x}F=\frac{1}{\epsilon^{2}}Q(F, F),\ t>0,x\in\mathbb{R}^{3},v\in\mathbb{R}^{3}, \tag{1.14}\] where the operator \(Q\) is defined through (1.13) with \(B\) given in (1.12), and \(\delta=1\) since we study the BBE equation in this paper. In the above expression, \(\epsilon>0\) is the Knudsen number. The so-called _hydrodynamic limit_ is the process in which the Knudsen number \(\epsilon\to 0\). Our goal in this paper is to rigorously justify the limit as \(\epsilon\to 0\), from solutions of the BBE (1.14) to solutions of the incompressible Navier-Stokes-Fourier (NSF) equations (1.38). Precise definitions of solutions to these equations will be given soon. ### Temperature in the Boltzmann-Bose-Einstein equation Temperature plays an important role in the study of quantum Boltzmann equations. For example, for particles obeying Bose-Einstein statistics, the so-called Bose-Einstein condensation (BEC) only happens at low temperature; see, for example, [21]. We now introduce some basic facts about temperature in the quantum context. Let us consider a homogeneous density \(f=f(v)\) with zero mean \(\int vf(v)\mathrm{d}v=0\). For \(k\geq 0\), we recall the moment function \[M_{k}(f):=\int|v|^{k}f(v)\mathrm{d}v.\] Let \(M_{0}=M_{0}(f),M_{2}=M_{2}(f)\) for simplicity. Let \(m\) be the mass of a particle; then \(mM_{0}\) and \(\frac{1}{2}mM_{2}\) are the total mass and kinetic energy per unit space volume. Following [36], for a given density function \(f\), the kinetic temperature \(\bar{T}\) and the critical temperature \(\bar{T}_{c}\) of the particle system are defined by \[\bar{T}=\frac{1}{3k_{B}}\frac{mM_{2}}{M_{0}},\quad\bar{T}_{c}=\frac{m\zeta(5/ 2)}{2\pi k_{B}\zeta(3/2)}\big{(}\frac{M_{0}}{\zeta(3/2)}\big{)}^{\frac{2}{3}}, \tag{1.15}\] where \(k_{B}\) is the Boltzmann constant and \(\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}}\) is the Riemann zeta function. The ratio \(\bar{T}/\bar{T}_{c}\) quantifies high and low temperature. More precisely: high temperature means \(\bar{T}/\bar{T}_{c}>1\); the critical case is \(\bar{T}/\bar{T}_{c}=1\); low temperature means \(\bar{T}/\bar{T}_{c}<1\). We now recall some known results about equilibrium distributions. The famous Bose-Einstein distribution has density function \[\mathcal{M}_{\lambda,T}(v):=\frac{1}{\exp(\frac{|v|^{2}}{2T}+\lambda)-1}; \quad\lambda\geq 0,T>0. \tag{1.16}\] The ratio \(\bar{T}/\bar{T}_{c}\) of \(\mathcal{M}_{\lambda,T}\) depends only on \(\lambda\). The critical value \(\lambda=0\) corresponds to the critical temperature \(\bar{T}/\bar{T}_{c}=1\), while values \(\lambda>0\) correspond to the high temperature case \(\bar{T}/\bar{T}_{c}>1\).
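The fact that \(\bar{T}/\bar{T}_{c}\) depends only on \(\lambda\) can be verified directly. Writing \(\mathrm{Li}_{s}(z)=\sum_{n=1}^{\infty}z^{n}/n^{s}\) for the polylogarithm and expanding \((e^{\frac{|v|^{2}}{2T}+\lambda}-1)^{-1}\) as a geometric series, one computes \[M_{0}(\mathcal{M}_{\lambda,T})=(2\pi T)^{\frac{3}{2}}\mathrm{Li}_{\frac{3}{2}}(e^{-\lambda}),\qquad M_{2}(\mathcal{M}_{\lambda,T})=3T(2\pi T)^{\frac{3}{2}}\mathrm{Li}_{\frac{5}{2}}(e^{-\lambda}),\] and substituting into (1.15) yields \[\frac{\bar{T}}{\bar{T}_{c}}=\frac{\zeta(3/2)^{5/3}}{\zeta(5/2)}\cdot\frac{\mathrm{Li}_{5/2}(e^{-\lambda})}{\mathrm{Li}_{3/2}(e^{-\lambda})^{5/3}},\] which indeed depends only on \(\lambda\); it equals \(1\) at \(\lambda=0\) (since \(\mathrm{Li}_{s}(1)=\zeta(s)\)), exceeds \(1\) for \(\lambda>0\), and tends to infinity as \(\lambda\to\infty\).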
At low temperature \(\bar{T}/\bar{T}_{c}<1\), the equilibrium of the BBE equation is the Bose-Einstein distribution (1.16) with \(\lambda=0\) plus some Dirac delta function. That is, the equilibrium contains a Dirac measure. One can refer to [37] for the classification of equilibria. In this article, we work with high temperature and consider the equilibrium \(\mathcal{M}_{\lambda,T}\) with \(\lambda>0\). Note that a very high temperature assumption is imposed in [33] to prove global well-posedness of the homogeneous BBE equation with (slightly more general than) hard sphere collisions. We consider the inhomogeneous case and give a careful analysis of the dependence on \(\lambda,T>0\). In particular, we pay much attention to the behavior as \(\lambda,T\to 0\) or \(\lambda,T\to\infty\). Our analysis illustrates that for BBE, the incompressible fluid limits (at least for smooth solutions) only happen at high temperature. ### Perturbation around equilibrium and main results Similarly to the classical Boltzmann equation and BFD, the incompressible fluid regimes arise near global equilibrium and on a long time scale. For a detailed explanation, see [5] for Boltzmann and [53] for BFD. For the perturbation around the equilibrium, we define \[\mathcal{N}_{\lambda,T}(v):=\sqrt{\mathcal{M}_{\lambda,T}(v)(1+\mathcal{M}_{ \lambda,T}(v))}=\frac{\exp(\frac{|v|^{2}}{4T}+\frac{\lambda}{2})}{\exp(\frac{|v|^{2}}{2T}+\lambda)-1}. \tag{1.17}\] We remark that the function \(\mathcal{N}_{\lambda,T}\) serves as the multiplier in the expansion \(F=\mathcal{M}_{\lambda,T}+\epsilon\mathcal{N}_{\lambda,T}f\). For simplicity, let \(\mathcal{M}:=\mathcal{M}_{\lambda,T}\) and \(\mathcal{N}:=\mathcal{N}_{\lambda,T}\). With the expansion \(F=\mathcal{M}+\epsilon\mathcal{N}f\), the perturbed quantum Boltzmann equation corresponding to (1.14) reads \[\partial_{t}f+\frac{1}{\epsilon}v\cdot\nabla_{x}f+\frac{1}{\epsilon^{2}} \mathcal{L}^{\lambda,T}f=\frac{1}{\epsilon}\Gamma_{2}^{\lambda,T}(f,f)+\Gamma _{3}^{\lambda,T}(f,f,f),\quad f|_{t=0}=f_{0}^{\epsilon}. \tag{1.18}\] Here the linearized quantum Boltzmann operator \(\mathcal{L}^{\lambda,T}\) is defined by \[(\mathcal{L}^{\lambda,T}f)(v):=\int B\mathcal{N}_{*}\mathcal{N}^{\prime} \mathcal{N}_{*}^{\prime}\mathrm{S}(\mathcal{N}^{-1}f)\mathrm{d}\sigma \mathrm{d}v_{*}, \tag{1.19}\] where \(\mathrm{S}(\cdot)\) is defined by \[\mathrm{S}(g):=g+g_{*}-g^{\prime}-g_{*}^{\prime}. \tag{1.20}\] The bilinear term \(\Gamma_{2}^{\lambda,T}(\cdot,\cdot)\) and the trilinear term \(\Gamma_{3}^{\lambda,T}(\cdot,\cdot,\cdot)\) are defined by \[\Gamma_{2}^{\lambda,T}(g,h) := \mathcal{N}^{-1}\int B\Pi_{2}(g,h)\mathrm{d}\sigma\mathrm{d}v_{*}, \tag{1.21}\] \[\Gamma_{3}^{\lambda,T}(g,h,\varrho) := \mathcal{N}^{-1}\int B\mathrm{D}\big{(}(\mathcal{N}g)_{*}^{ \prime}(\mathcal{N}h)^{\prime}((\mathcal{N}\varrho)_{*}+\mathcal{N}\varrho) \big{)}\mathrm{d}\sigma\mathrm{d}v_{*}. \tag{1.22}\] The notation \(\Pi_{2}\) in (1.21) is defined by \[\Pi_{2}(g,h) := \mathrm{D}\big{(}(\mathcal{N}g)_{*}^{\prime}(\mathcal{N}h)^{ \prime}\big{)} \tag{1.23}\] \[+\mathrm{D}\big{(}(\mathcal{N}g)_{*}^{\prime}(\mathcal{N}h)^{ \prime}(\mathcal{M}+\mathcal{M}_{*})\big{)} \tag{1.24}\] \[+\mathrm{D}\big{(}(\mathcal{N}g)_{*}(\mathcal{N}h)^{\prime}( \mathcal{M}_{*}^{\prime}-\mathcal{M})\big{)} \tag{1.25}\] \[+\big{(}(\mathcal{N}g)^{\prime}(\mathcal{N}h)\mathrm{D}(\mathcal{ M}_{*}^{\prime})+(\mathcal{N}g)_{*}^{\prime}(\mathcal{N}h)_{*}\mathrm{D}( \mathcal{M}^{\prime})\big{)}.
\tag{1.26}\] Remark that the three operators \(\mathcal{L}^{\lambda,T}\), \(\Gamma_{2}^{\lambda,T}(\cdot,\cdot)\) and \(\Gamma_{3}^{\lambda,T}(\cdot,\cdot,\cdot)\) depend on \(\lambda,T\) through \(\mathcal{M}=\mathcal{M}_{\lambda,T}\) and \(\mathcal{N}=\mathcal{N}_{\lambda,T}\). The main result of this paper is to prove global well-posedness of (1.18) _uniformly_ in \(\epsilon\) in the Sobolev space \(H_{x}^{N}L^{2}\), which is defined as \[\|f\|_{H_{x}^{N}L^{2}}^{2}:=\sum_{|\alpha|\leq N}\|\partial_{x}^{\alpha}f\|_{L _{x}^{2}L^{2}}^{2}. \tag{1.27}\] Note that the functional only involves \(x\)-derivatives. Correspondingly, the dissipation functional reads \[\mathcal{D}_{N,T}(f):=\mathcal{D}_{N,T,1}(f)+\mathcal{D}_{N,T,2}(f), \tag{1.28}\] where \[\mathcal{D}_{N,T,1}(f):=\|\nabla_{x}\mathbb{P}_{\lambda,T}f\|_{H_{x}^{N-1}L^{2 }}^{2},\quad\mathcal{D}_{N,T,2}(f):=\|f-\mathbb{P}_{\lambda,T}f\|_{H_{x}^{N}L^ {2}}^{2}+T^{-\frac{1}{2}}\||v|^{\frac{1}{2}}(f-\mathbb{P}_{\lambda,T}f)\|_{H_{ x}^{N}L^{2}}^{2}. \tag{1.29}\] Here \(\mathbb{P}_{\lambda,T}\) is the projection onto the kernel space of \(\mathcal{L}^{\lambda,T}\), which will be defined in (2.23). **Theorem 1.1**.: _Let \(0<\epsilon\leq 1\), \(\lambda,T>0\), and \(N\geq 2\). Let_ \[C_{*}(\lambda,T):=e^{-2\lambda}(1-e^{-\lambda})^{\frac{3}{2}}\min\{T^{ 3/2},T^{-3/2}\}. \tag{1.30}\] _Then there exists a universal constant \(\delta_{*}>0\), independent of \(\epsilon,\lambda\) and \(T\), such that if the initial datum \(f_{0}\) satisfies_ \[\mathcal{M}_{\lambda,T}+\mathcal{N}_{\lambda,T}f_{0}\geq 0,\quad\|f_{0}\|_{H_{ x}^{2}L^{2}}^{2}\leq\delta_{*}C_{*}(\lambda,T),\quad\|f_{0}\|_{H_{x}^{N}L^{2}} <\infty\,, \tag{1.31}\] _then the Cauchy problem (1.18) with initial datum \(f_{0}\) has a unique global solution \(f=f_{\epsilon}^{\lambda,T}\in L^{\infty}([0,\infty);H_{x}^{N}L^{2})\) satisfying \(\mathcal{M}_{\lambda,T}+\mathcal{N}_{\lambda,T}f(t)\geq 0\) and_ \[\sup_{t}\|f(t)\|_{H_{x}^{N}L^{2}}^{2}+\frac{1}{K(\lambda,T)}\int_{0}^{\infty} \mathcal{D}_{N,T}(f)\mathrm{d}\tau+\frac{\mathrm{C}_{1}(\lambda,T)}{K(\lambda,T )}\frac{1}{\epsilon^{2}}\int_{0}^{\infty}\mathcal{D}_{N,T,2}(f)\mathrm{d}\tau \leq O_{N}(f_{0})\|f_{0}\|_{H_{x}^{N}L^{2}}^{2}, \tag{1.32}\] _where_ \[O_{2}(f_{0})\equiv 12,\quad O_{N}(f_{0}):=24\exp\Big{(}Q_{3}(\lambda,T,N,f_{0}) O_{N-1}(f_{0})T^{-\frac{3}{2}}\|f_{0}\|_{H_{x}^{N-1}L^{2}}^{2}\Big{)}\ \text{for}\ N \geq 3, \tag{1.33}\] \[Q_{3}(\lambda,T,N,f_{0}):=2(Q_{1}(\lambda,T,N)+Q_{2}(\lambda,T,N) O_{N-1}(f_{0})T^{-\frac{3}{2}}\|f_{0}\|_{H_{x}^{N-1}L^{2}}^{2}), \tag{1.34}\] _where \(Q_{1}(\lambda,T,N)\) and \(Q_{2}(\lambda,T,N)\) are constants defined in (4.64). Here the constants \(K(\lambda,T)\) and \(\mathrm{C}_{1}(\lambda,T)\) are defined in (4.40) and (4.41)._ We make some remarks on the above theorem. **Remark 1.1**.: We emphasize that the constant \(\delta_{*}\) in Theorem 1.1 is universal and does not depend on anything. We give an explicit smallness assumption in terms of the two parameters \(\lambda,T>0\). Note that the well-posedness region vanishes (i.e. the constant \(C_{*}(\lambda,T)\) defined in (1.30) tends to \(0\)) as \(\lambda\to 0\) or \(\lambda\to\infty\) or \(T\to 0\) or \(T\to\infty\). **Remark 1.2**.: Recall that \(H^{2}_{x}L^{2}\) might be the largest Sobolev space (with integer index) in which global well-posedness can be established for the classical Boltzmann equation in the whole space.
We manage to construct the global well-posedness theory in this space for the bosonic Nordheim Boltzmann equation. **Remark 1.3**.: Notice that in the condition (1.31), only a smallness assumption on \(\|f^{\epsilon}_{0}\|_{H^{2}_{x}L^{2}}\) is imposed. In other words, \(\|f^{\epsilon}_{0}\|_{H^{N}_{x}L^{2}}\) (\(N\geq 3\)) could be arbitrarily large (and bounded). Based on Theorem 1.1, we can prove the hydrodynamic limit of (1.18) to the incompressible Navier-Stokes-Fourier equations. **Theorem 1.2**.: _Let \(\delta_{*}\) and \(C_{*}(\lambda,T)\) be the constants in Theorem 1.1. Let \(0<\epsilon\leq 1\), \(\lambda,T>0\), and \(N\geq 2\). Let \(f^{\epsilon}_{0}\) be a family of initial data satisfying \(\mathcal{M}_{\lambda,T}+\mathcal{N}_{\lambda,T}f^{\epsilon}_{0}\geq 0\) and_ \[\sup_{0<\epsilon<1}\|f^{\epsilon}_{0}\|^{2}_{H^{2}_{x}L^{2}}\leq\delta_{*}C_{ *}(\lambda,T),\quad M_{0}:=\sup_{0<\epsilon<1}\|f^{\epsilon}_{0}\|_{H^{N}_{x}L ^{2}}<\infty, \tag{1.35}\] \[\mathbb{P}_{\lambda,T}f^{\epsilon}_{0}\to f_{0}=(\rho_{0}+u_{0}\cdot\frac{v}{ T^{1/2}}+\theta_{0}(\frac{|v|^{2}}{2T}-K_{\lambda}))\mathcal{N}_{\lambda,T}\text{ as }\epsilon\to 0,\text{ strongly in }H^{N}_{x}L^{2} \tag{1.36}\] _for some \((\rho_{0},u_{0},\theta_{0})\in H^{N}_{x}\) with \(\rho_{0}+\theta_{0}=0\). Here \(K_{\lambda}=K_{A}-1\) is a constant depending only on \(\lambda\). Let \(f^{\epsilon}\) be the solution to the Cauchy problem (1.18) with initial datum \(f^{\epsilon}_{0}\). Then there is a subsequence of \(\{f^{\epsilon}\}\), still denoted by \(\{f^{\epsilon}\}\), such that_ \[f^{\epsilon}\rightarrow(\rho+u\cdot\frac{v}{T^{1/2}}+\theta(\frac{|v|^{2}}{2T} -K_{\lambda}))\mathcal{N}_{\lambda,T}\text{ as }\epsilon\to 0,\text{ weakly-* in }L^{\infty}(\mathbb{R}_{+};H^{N}_{x}L^{2}) \tag{1.37}\] _for some \((\rho,u,\theta)\in L^{\infty}(\mathbb{R}_{+};H^{N}_{x})\cap C(\mathbb{R}_{+}; H^{N-1}_{x})\) satisfying_ \[\begin{cases}\rho+\theta=0,\quad\nabla_{x}\cdot u=0,\\ \partial_{t}u+T^{\frac{1}{2}}\mathcal{P}\nabla_{x}\cdot(u\otimes u)=\mu_{ \lambda,T}\Delta_{x}u,\\ \partial_{t}\theta+T^{\frac{1}{2}}u\cdot\nabla_{x}\theta=\kappa_{\lambda,T} \Delta_{x}\theta,\\ \rho|_{t=0}=\rho_{0},\quad u|_{t=0}=\mathcal{P}u_{0},\quad\theta|_{t=0}=\frac{ K_{\lambda}\theta_{0}-\rho_{0}}{K_{\lambda}+1}.\end{cases} \tag{1.38}\] _In addition,_ \[\frac{3T^{-\frac{3}{2}}}{m_{2}}\mathcal{P}\langle f^{\epsilon},\frac{v}{T^{1/2}} \mathcal{N}_{\lambda,T}\rangle\to u\text{ strongly in }C(\mathbb{R}_{+};H^{N-1}_{x})\text{ and weakly-* in }L^{\infty}(\mathbb{R}_{+};H^{N}_{x}), \tag{1.39}\] \[\frac{T^{-\frac{3}{2}}}{C_{A}}\langle f^{\epsilon},(\frac{|v|^{2}}{2T}-K_{A} )\mathcal{N}_{\lambda,T}\rangle\rightarrow\theta\text{ strongly in }C(\mathbb{R}_{+};H^{N-1}_{x})\text{ and weakly-* in }L^{\infty}(\mathbb{R}_{+};H^{N}_{x}). \tag{1.40}\] **Remark 1.4**.: Only a smallness assumption on \(|(\rho_{0},u_{0},\theta_{0})|_{H^{2}_{x}}\) is imposed. That is, \(|(\rho_{0},u_{0},\theta_{0})|_{H^{N}_{x}}\) (\(N\geq 3\)) can be arbitrarily large. This point is new in the hydrodynamic limit literature. ### Novelties of the results In terms of conditions and conclusions, Theorem 1.1 is closest to the main result (Theorem 1.4) of [33]. More precisely, both of these results validate global existence of anisotropic solutions. The main differences between these two results are also obvious. To reiterate, [33] considers the spatially homogeneous case under a very high temperature condition, whereas we work in the whole space \(x\in\mathbb{R}^{3}\) under any temperature higher than the critical one.
In terms of mathematical methods, this article is closer to [4] and [30] since all of these works fall into the close-to-equilibrium framework well established for the classical Boltzmann equation. There are many works that contribute to this mathematically satisfactory theory for global well-posedness of the classical Boltzmann equation. For readers reference, we mention [51, 24] for angular cutoff kernels and [23, 2] for non-cutoff kernels. Each of [4] and [30] has their own features and focuses. The work [4] is the first to investigate both relativistic and quantum effect. The article [30] studies the hydrodynamic limit from BFD (but not BBE) to incompressible Navier-Stokes-Fourier equation. These works contribute to the literature of quantum Boltzmann equation from different aspects. Our main results Theorem 1.1 and Theorem 1.2 have some unique features that may better our understanding of quantum Boltzmann equation. Besides the low regularity requirement of the solution space \(H^{2}_{x}L^{2}\), in particular, this article may be the first to investigate * well-posedness theory of BBE for any temperature higher than the critical one in terms of the two parameters \(\lambda,T>0\). * hydrodynamic limit from BBE to incompressible Navier-Stokes-Fourier equation. Specifically, Theorem 1.1 precisely state the dependence of the existence regime on the parameters \(\lambda\) and the temperature \(T\). Furthermore, technically, we obtain the uniform in Knudsen number \(\epsilon\) estimate. This can only be achieved under the incompressible Navier-Stokes scaling. In this sense, our limit is a "bottom-up" type, i.e. we start from the solutions of microscopic kinetic equation, and take the limit \(\epsilon\to 0\) to automatically prove the global existence of the limiting equations. In the whole process, we do not need any information of the limiting equations. This is quite different with the expansion method. The compressible Euler and acoustic limits will be treated in a separate paper which is under preparation. ### Notations In this subsection, we give a list of notations. \(\bullet\) Given a set \(A\), \(1_{A}\) is the characteristic function of \(A\). \(\bullet\) The notation \(a\lesssim b\) means that there is a universal constant \(C\) such that \(a\leq Cb\). \(\bullet\) If both \(a\lesssim b\) and \(b\lesssim a\), we write \(a\sim b\). \(\bullet\) We denote \(C(\lambda_{1},\lambda_{2},\cdots,\lambda_{n})\) or \(C_{\lambda_{1},\lambda_{2},\cdots,\lambda_{n}}\) by a constant depending on \(\lambda_{1},\lambda_{2},\cdots,\lambda_{n}\). \(\bullet\) The bracket \(\langle\cdot\rangle\) is defined by \(\langle v\rangle:=(1+|v|^{2})^{\frac{1}{2}}\). The weight function \(W_{l}(v):=\langle v\rangle^{l}\). \(\bullet\) For \(f,g\in L^{2}(\mathbb{R}^{3})\), \(\langle f,g\rangle:=\int_{\mathbb{R}^{3}}f(v)g(v)\mathrm{d}v\) and \(|f|^{2}_{L^{2}}:=\langle f,f\rangle\). \(\bullet\) For \(f,g\in L^{2}(\mathbb{R}^{3})\), \(\langle f,g\rangle_{x}:=\int_{\mathbb{R}^{3}}f(x)g(x)\mathrm{d}x\) and \(|f|^{2}_{L^{2}_{x}}:=\langle f,f\rangle_{x}\). \(\bullet\) For \(f,g\in L^{2}(\mathbb{R}^{3}\times\mathbb{R}^{3})\), \(\langle f,g\rangle:=\int_{\mathbb{R}^{3}}\times\mathbb{R}^{3}\,f(x,v)g(x,v) \mathrm{d}x\mathrm{d}v\) and \(\|f\|^{2}_{L^{2}_{x}L^{2}}:=(f,f)\). \(\bullet\) For a multi-index \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\in\mathbb{N}^{3}\), define \(|\alpha|:=\alpha_{1}+\alpha_{2}+\alpha_{3}\). \(\bullet\) For \(\alpha\in\mathbb{N}^{3}\) denote \(\partial^{\alpha}:=\partial^{\alpha}_{x}\). 
We now introduce some norm. \(\bullet\) For \(l\geq 0\) and a function \(f(v)\) on \(\mathbb{R}^{3}\), define \[|f|^{2}_{L^{2}_{l}}:=|f|^{2}_{L^{2}}+1_{l>0}||\cdot|^{l}f|^{2}_{L^{2}},\quad|f |_{L^{2}}=|f|_{L^{2}_{0}}. \tag{1.41}\] \(\bullet\) For \(n\in\mathbb{N}\) and a function \(f(x)\) on \(\mathbb{R}^{3}\), define \[|f|^{2}_{H^{n}_{x}}:=\sum_{|\alpha|\leq n}|\partial^{\alpha}f|^{2}_{L^{2}_{x}}, \quad|f|_{L^{2}_{x}}:=|f|_{H^{0}_{x}},\quad|f|_{L^{\infty}_{x}}:=\operatorname {ess\,sup}_{x\in\mathbb{R}^{3}}|f(x)|.\] \(\bullet\) For \(m\in\mathbb{N},l\geq 0\) and a function \(f(x,v)\) on \(\mathbb{R}^{3}\times\mathbb{R}^{3}\), define \[\|f\|^{2}_{H^{m}_{x}L^{2}_{l}}:=\sum_{|\alpha|\leq m}||\partial^{\alpha}f|_{L^ {2}_{x}}|^{2}_{L^{2}_{x}},\quad\|f\|_{L^{2}_{x}L^{2}_{l}}:=\|f\|_{H^{0}_{x}L^{ 2}_{l}},\quad\|f\|_{H^{m}_{x}L^{2}}:=\|f\|_{H^{m}_{x}L^{2}_{0}}, \tag{1.42}\] ### Organization of this paper Section 2 introduces a simple scaling. (We may also put this section in the introduction) Section 3 contains estimates of linear operators and non-linear operators, including coercivity and upper bound estimate. In Section 4, we first prove a priori estimate and then establish global well-posedness. In Section 5, we give hydrodynamic limit. Section 6 is an appendix in which we put some elementary proof for the sake of completeness. ## 2. Scaled Boltzmann-Bose-Einstein equation In this section, we introduce a simple scaling to free us from the parameter \(T>0\). Fix \(a>0\). Let us define an operator \(A_{a}\) by \(A_{a}f(v):=f(av)\) for all \(v\in\mathbb{R}^{3}\). Making \(A_{T^{1/2}}\) to \(\mathcal{M}_{\lambda,T}\) and \(\mathcal{N}_{\lambda,T}\), we get \[M_{\lambda}(v):=(A_{T^{1/2}}\mathcal{M}_{\lambda,T})(v)=\frac{1}{\exp(\frac{|v |^{2}}{2}+\lambda)-1},\quad N_{\lambda}(v):=(A_{T^{1/2}}\mathcal{N}_{\lambda,T })(v)=\frac{\exp(\frac{|v|^{2}}{4}+\frac{\lambda}{2})}{\exp(\frac{|v|^{2}}{2}+ \lambda)-1}, \tag{2.1}\] Let us make the action \(A_{T^{1/2}}\) to (1.18). Then the equation (1.18) becomes \[\partial_{t}\tilde{f}+\frac{1}{\epsilon}T^{1/2}v\cdot\nabla_{x}\tilde{f}+\frac {1}{\epsilon^{2}}\tilde{\mathcal{L}}^{\lambda,T}\tilde{f}=\frac{1}{\epsilon} \tilde{\Gamma}^{\lambda,T}_{2}(\tilde{f},\tilde{f})+\tilde{\Gamma}^{\lambda,T}_{3} (\tilde{f},\tilde{f},\tilde{f}),\quad\tilde{f}|_{t=0}=\tilde{f}_{0}. \tag{2.2}\] where \(\tilde{f}=A_{T^{1/2}}f,\tilde{f}_{0}=A_{T^{1/2}}f_{0}^{*}\). Here the linear operator \(\tilde{\mathcal{L}}^{\lambda,T}\) is defined by \[(\tilde{\mathcal{L}}^{\lambda,T}f)(v):=\int B_{T}(v-v_{*},\sigma)(N_{\lambda})_{* }(N_{\lambda})^{\prime}(N_{\lambda})^{\prime}_{*}\mathcal{S}(N_{\lambda}^{-1}f )\mathrm{d}\sigma\mathrm{d}v_{*}. \tag{2.3}\] with \[B_{T}(v-v_{*},\sigma)=T^{\frac{3}{2}}B(T^{1/2}(v-v_{*}),\sigma)=T^{2}|v-v_{*}|.\] The bilinear operator \(\tilde{\Gamma}_{2}^{\lambda,T}\) is defined by \[\tilde{\Gamma}_{2}^{\lambda,T}(g,h):=N_{\lambda}^{-1}\int B_{T} \tilde{\Pi}_{2}(g,h)\mathrm{d}\sigma\mathrm{d}v_{*}. \tag{2.4}\] with \(\tilde{\Pi}_{2}\) given by \[\tilde{\Pi}_{2}(g,h) := \mathrm{D}\big{(}(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)^{ \prime}\big{)} \tag{2.5}\] \[+\mathrm{D}\big{(}(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)^{ \prime}(M_{\lambda}+(M_{\lambda})_{*})\big{)}\] (2.6) \[+\mathrm{D}\big{(}(N_{\lambda}g)_{*}(N_{\lambda}h)^{\prime}((M_{ \lambda})^{\prime}_{*}-M_{\lambda})\big{)}\] (2.7) \[+\big{(}(N_{\lambda}g)^{\prime}(N_{\lambda}h)\mathrm{D}((M_{ \lambda})^{\prime}_{*})+(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)_{*}\mathrm{ D}(M^{\prime}_{\lambda})\big{)}. 
\tag{2.8}\] The trilinear operator \(\tilde{\Gamma}_{3}^{\lambda,T}\) is defined by \[\tilde{\Gamma}_{3}^{\lambda,T}(g,h,\varrho)(v):=N_{\lambda}^{-1} \int B_{T}(v-v_{*},\sigma)\mathrm{D}\big{(}(N_{\lambda}g)^{\prime}_{*}(N_{ \lambda}h)^{\prime}((N_{\lambda}\varrho)_{*}+N_{\lambda}\varrho)\big{)} \mathrm{d}\sigma\mathrm{d}v_{*}. \tag{2.9}\] Indeed, by the change of variable \(v_{*}\to T^{-1/2}v_{*}\), \(\mathrm{d}v_{*}=T^{3/2}\mathrm{d}(T^{-1/2}v_{*})\), it is easy to find \[A_{T^{1/2}}\mathcal{L}^{\lambda,T}f=\tilde{\mathcal{L}}^{ \lambda,T}A_{T^{1/2}}f,\] \[A_{T^{1/2}}\Gamma_{2}^{\lambda,T}(g,h)=\tilde{\Gamma}_{2}^{ \lambda,T}(A_{T^{1/2}}g,A_{T^{1/2}}h),\] \[A_{T^{1/2}}\Gamma_{3}^{\lambda,T}(g,h,\varrho)=\tilde{\Gamma}_{ 3}^{\lambda,T}(A_{T^{1/2}}g,A_{T^{1/2}}h,A_{T^{1/2}}\varrho),\] **Proposition 2.1**.: \(f\) _is a solution to (1.18) with initial datum \(f_{0}\) if and only if \(A_{T^{1/2}}f\) is a solution to (2.2) with initial datum \(A_{T^{1/2}}f_{0}\)._ **Remark 2.1**.: Another scaling choice is to consider the equation of \(g(x,v):=f(T^{1/2}x,T^{1/2}v)\). Note that \[(v\cdot\nabla_{x}g)(x,v)=(v\cdot\nabla_{x}f)(T^{1/2}x,T^{1/2}v).\] So the equation for \(g\) is \[\partial_{t}g+\frac{1}{\epsilon}v\cdot\nabla_{x}g+\frac{1}{ \epsilon^{2}}\tilde{\mathcal{L}}^{\lambda,T}g=\frac{1}{\epsilon}\tilde{\Gamma }_{2}^{\lambda,T}(g,g)+\tilde{\Gamma}_{3}^{\lambda,T}(g,g,g),\quad g|_{t=0}=g _{0}:=f_{0}(T^{1/2}x,T^{1/2}v). \tag{2.10}\] This choice gives a simple streaming term because the factor \(T^{1/2}\) in (2.2) disappears. However, this choice also has some drawbacks. First, the energy functional contains derivatives of the \(x\) variable and so some additional factor \(T^{1/2}\) comes out. Second, such choice may change the spatial domain if one wants to deal with bounded domain such as the torus \([0,l_{x}]^{3}\). Recalling (1.19) and (2.3), the kernel spaces of \(\mathcal{L}^{\lambda,T}\) and \(\tilde{\mathcal{L}}^{\lambda,T}\) are \[\ker\mathcal{L}^{\lambda,T}=\mathrm{span}\{\mathcal{N}_{\lambda, T},\mathcal{N}_{\lambda,T}v_{1},\mathcal{N}_{\lambda,T}v_{2},\mathcal{N}_{ \lambda,T}v_{3},\mathcal{N}_{\lambda,T}|v|^{2}\}, \tag{2.11}\] \[\ker\tilde{\mathcal{L}}^{\lambda,T}=\mathrm{span}\{N_{\lambda},N _{\lambda}v_{1},N_{\lambda}v_{2},N_{\lambda}v_{3},N_{\lambda}|v|^{2}\}. \tag{2.12}\] Observe that \(\ker\tilde{\mathcal{L}}^{\lambda,T}\) depends only on the parameter \(\lambda\), while \(\ker\mathcal{L}^{\lambda,T}\) depends on the two parameters \(\lambda,T\). For notational simplicity, for \(k\geq 0\), we denote the \(k\)-th moments of the density \(M_{\lambda}(1+M_{\lambda})=N_{\lambda}^{2}\) by \(m_{k}\). More precisely, \[m_{k}:=\int_{\mathbb{R}^{3}}|v|^{k}M_{\lambda}(v)(1+M_{\lambda}(v ))\mathrm{d}v=\int_{\mathbb{R}^{3}}|v|^{k}N_{\lambda}^{2}(v)\mathrm{d}v. \tag{2.13}\] **Remark 2.2**.: Note that some of the above moments is infinite if \(\lambda=0\) and \(k\) is small. Indeed, if \(\lambda=0\), then near \(|v|=0\), it holds that \[M_{\lambda}(v)\sim|v|^{-2},\quad M_{\lambda}(v)(1+M_{\lambda}(v ))\sim|v|^{-4}.\] We remark that \(m_{0}=|N_{\lambda}|_{L^{2}}^{2}=\infty\) if \(\lambda=0\). This fact means the \(L^{2}\) framework may not be suitable for the critical temperature \(\lambda=0\). In the following Lemma, we give some estimates of \(m_{k}\). **Lemma 2.1**.: _We have for \(k\geq 1\),_ \[m_{0}=|N_{\lambda}|_{L^{2}}^{2}\sim e^{-\lambda}(1-e^{-\lambda})^{ -1/2}. \tag{2.14}\] \[m_{2k}=||\cdot|^{k}N_{\lambda}|_{L^{2}}^{2}\sim C_{k}e^{-\lambda}. 
\tag{2.15}\] _Here \(C_{k}\sim 1\) for \(1\leq k\leq 3\)._ Proof.: Recalling (2.1), we have \[m_{0}=|N_{\lambda}|_{L^{2}}^{2}=\int\frac{\exp(\frac{|v|^{2}}{2}+\lambda)}{( \exp(\frac{|v|^{2}}{2}+\lambda)-1)^{2}}\mathrm{d}v=\exp(-\lambda)\int\frac{ \exp(-\frac{|v|^{2}}{2})}{(1-\exp(-\frac{|v|^{2}}{2}-\lambda))^{2}}\mathrm{d}v :=\exp(-\lambda)h(\lambda) \tag{2.16}\] For \(\lambda\geq 1\), we have \[\int\exp(-\frac{|v|^{2}}{2})\mathrm{d}v\leq h(\lambda)\leq(1-\exp(-1))^{-2} \int\exp(-\frac{|v|^{2}}{2})\mathrm{d}v.\] That is \[h(\lambda)\sim 1 \tag{2.17}\] Now consider \(\lambda\leq 1\). For \(|v|\geq 1\), we have \[\int 1_{|v|\geq 1}\frac{\exp(-\frac{|v|^{2}}{2})}{(1-\exp(-\frac{|v|^{2}}{2} -\lambda))^{2}}\mathrm{d}v\lesssim\int 1_{|v|\geq 1}\frac{\exp(-\frac{|v|^{2}}{2})} {(1-\exp(-\frac{1}{2}))^{2}}\mathrm{d}v\lesssim 1\leq\lambda^{-\frac{1}{2}}.\] For \(|v|\leq 1\), we have \(1-\exp(-\frac{|v|^{2}}{2}-\lambda)\sim\frac{|v|^{2}}{2}+\lambda\) and so \[\int 1_{|v|\leq 1}\frac{\exp(-\frac{|v|^{2}}{2})}{(1-\exp(-\frac{|v|^{2}}{2} -\lambda))^{2}}\mathrm{d}v\sim\int 1_{|v|\leq 1}\frac{1}{(\frac{|v|^{2}}{2}+ \lambda)^{2}}\mathrm{d}v.\] Note that \[\int 1_{|v|\leq 1}\frac{1}{(\frac{|v|^{2}}{2}+\lambda)^{2}}\mathrm{d}v= \lambda^{-\frac{1}{2}}\int 1_{|v|\leq\lambda^{-\frac{1}{2}}}\frac{1}{(\frac{|v|^{2}}{2} +1)^{2}}\mathrm{d}v\sim\lambda^{-\frac{1}{2}}\] By these estimates, we find \[h(\lambda)\sim\lambda^{-\frac{1}{2}}. \tag{2.18}\] Patching together (2.17) and (2.18), we conclude that \[h(\lambda)\sim\max\{\lambda^{-\frac{1}{2}},1\}\sim(1-e^{-\lambda})^{-1/2}.\] Recalling (2.16), we arrive at (2.14). Similarly to (2.16) \[||\cdot|^{k}N_{\lambda}|_{L^{2}}^{2}=\int\frac{\exp(\frac{|v|^{2}}{2}+\lambda )}{(\exp(\frac{|v|^{2}}{2}+\lambda)-1)^{2}}\mathrm{d}v=\exp(-\lambda)\int\frac {|v|^{2k}\exp(-\frac{|v|^{2}}{2})}{(1-\exp(-\frac{|v|^{2}}{2}-\lambda))^{2}} \mathrm{d}v:=\exp(-\lambda)h_{k}(\lambda) \tag{2.19}\] For \(|v|\geq 1\), we have \[\int 1_{|v|\geq 1}\frac{|v|^{2k}\exp(-\frac{|v|^{2}}{2})}{(1-\exp(-\frac{|v| ^{2}}{2}-\lambda))^{2}}\mathrm{d}v\sim\int 1_{|v|\geq 1}|v|^{2k}\exp(-\frac{|v|^{2}}{2 })\mathrm{d}v\sim C_{k}.\] For \(|v|\leq 1\) and \(k\geq 1\), we have \[\int 1_{|v|\leq 1}\frac{|v|^{2k}\exp(-\frac{|v|^{2}}{2})}{(1-\exp(-\frac{|v| ^{2}}{2}-\lambda))^{2}}\mathrm{d}v\leq\int 1_{|v|\leq 1}\frac{|v|^{2}\exp(- \frac{|v|^{2}}{2})}{(1-\exp(-\frac{|v|^{2}}{2}))^{2}}\mathrm{d}v\sim\int 1_{|v| \leq 1}|v|^{-2}\sim 1.\] As a result \[h_{k}(\lambda)\sim C_{k}.\] Recalling (2.19), we arrive at (2.15). We construct orthogonal basis for \(\ker\mathcal{L}^{\lambda,T}\) and \(\ker\tilde{\mathcal{L}}^{\lambda,T}\) for \(\lambda,T>0\) as follows \[\{d_{i}^{\lambda,T}\}_{1\leq i\leq 5}:=\{\mathcal{N}_{ \lambda,T},\mathcal{N}_{\lambda,T}v_{1}/T,\mathcal{N}_{\lambda,T}v_{2}/T, \mathcal{N}_{\lambda,T}v_{3}/T,\mathcal{N}_{\lambda,T}(|v|^{2}/T-\tilde{C}_{ \lambda})\}. \tag{2.20}\] \[\{\tilde{d}_{i}^{\lambda}\}_{1\leq i\leq 5}:=\{N_{\lambda},N_{ \lambda}v_{1},N_{\lambda}v_{2},N_{\lambda}v_{3},N_{\lambda}(|v|^{2}-\tilde{C}_ {\lambda})\}, \tag{2.21}\] where \(\tilde{C}_{\lambda}:=m_{2}/m_{0}\) which depends only on \(\lambda\). Note that \(\langle\mathcal{N}_{\lambda,T}|v|^{2},\mathcal{N}_{\lambda,T}\rangle|\mathcal{ N}_{\lambda,T}|_{L^{2}}^{2}\mathcal{N}_{\lambda,T}\) is the projection of \(\mathcal{N}_{\lambda,T}|v|^{2}\) on \(\mathcal{N}_{\lambda,T}\). 
By normalization, we get orthonormal basis of \(\ker\mathcal{L}^{\lambda,T}\) and \(\ker\tilde{\mathcal{L}}^{\lambda,T}\) \[\{e_{i}^{\lambda,T}\}_{1\leq i\leq 5}:=\{\frac{d_{i}^{\lambda,T}}{|d_{i}^{ \lambda,T}|_{L^{2}}}\}_{1\leq i\leq 5},\quad\{\tilde{e}_{i}^{\lambda}\}_{1\leq i \leq 5}:=\{\frac{\tilde{d}_{i}^{\lambda}}{|\tilde{d}_{i}^{\lambda}|_{L^{2}}}\}_{1 \leq i\leq 5}. \tag{2.22}\] With these orthonormal basis, the projection \(\mathbb{P}_{\lambda,T}\) on \(\ker\mathcal{L}^{\lambda,T}\) and \(\tilde{\mathbb{P}}_{\lambda}\) on \(\ker\tilde{\mathcal{L}}^{\lambda,T}\) are defined by \[\mathbb{P}_{\lambda,T}f:=\sum_{i=1}^{5}\langle f,e_{i}^{\lambda,T} \rangle e_{i}^{\lambda,T},\quad\tilde{\mathbb{P}}_{\lambda}f:=\sum_{i=1}^{5} \langle f,\tilde{e}_{i}^{\lambda}\rangle\tilde{e}_{i}^{\lambda}. \tag{2.23}\] Recalling (2.1), it is elementary to check that \[A_{T^{1/2}}\mathbb{P}_{\lambda,T}f=\tilde{\mathbb{P}}_{\lambda}A_{T^{1/2}}f. \tag{2.24}\] Let us see \(\tilde{\mathbb{P}}_{\lambda}\) more clearly. By (2.23) and rearrangement, we have \[\tilde{\mathbb{P}}_{\lambda}f = \langle f,\frac{N}{|N|_{L^{2}}}\rangle\frac{N}{|N|_{L^{2}}}+\sum_ {i=1}^{3}\langle f,\frac{Nv_{i}}{|Nv_{i}|_{L^{2}}}\rangle\frac{Nv_{i}}{|Nv_{i} |_{L^{2}}}+\langle f,\frac{N(|v|^{2}-\tilde{C}_{\lambda})}{|N(|v|^{2}-\tilde{C }_{\lambda})|_{L^{2}}}\rangle\frac{N(|v|^{2}-\tilde{C}_{\lambda})}{|N(|v|^{2} -\tilde{C}_{\lambda})|_{L^{2}}} \tag{2.25}\] \[= (a_{\lambda}^{f}+b_{\lambda}^{f}\cdot v+c_{\lambda}^{f}|v|^{2})N,\] where \[a_{\lambda}^{f}:=\langle f,(\frac{1}{|N|_{L^{2}}^{2}}+\frac{ \tilde{C}_{\lambda}^{2}}{|N(|v|^{2}-\tilde{C}_{\lambda})|_{L^{2}}^{2}})N-\frac {\tilde{C}_{\lambda}}{|N(|v|^{2}-\tilde{C}_{\lambda})|_{L^{2}}^{2}}N|v|^{2} \rangle,\quad b_{\lambda}^{f}:=\langle f,\frac{Nv}{|Nv_{i}|_{L^{2}}^{2}}\rangle,\] \[c_{\lambda}^{f}:=\langle f,\frac{1}{|N(|v|^{2}-\tilde{C}_{ \lambda})|_{L^{2}}^{2}}N|v|^{2}-\frac{\tilde{C}_{\lambda}}{|N(|v|^{2}-\tilde{ C}_{\lambda})|_{L^{2}}^{2}}N\rangle.\] Note that \(b_{\lambda}^{f}\) is a vector of length \(3\). Note that \[\tilde{C}_{\lambda}=\frac{m_{2}}{m_{0}},\quad|N|_{L^{2}}^{2}=m_{0 },\quad|Nv_{i}|_{L^{2}}^{2}=\frac{1}{3}m_{2},\quad|N(|v|^{2}-\tilde{C}_{\lambda} )|_{L^{2}}^{2}=\frac{m_{0}m_{4}-m_{2}^{2}}{m_{0}}\] Let us define \[l_{\lambda,1}:=\frac{m_{4}}{m_{0}m_{4}-m_{2}^{2}},\quad l_{\lambda,2}:=\frac {m_{2}}{m_{0}m_{4}-m_{2}^{2}},\quad l_{\lambda,3}:=\frac{3}{m_{2}},\quad l_{ \lambda,4}:=\frac{m_{0}}{m_{0}m_{4}-m_{2}^{2}}. \tag{2.26}\] For simplicity, let \(l_{i}=l_{\lambda,i}\). Then there holds \[a_{\lambda}^{f}=\langle f,l_{1}N-l_{2}N|v|^{2}\rangle,\quad b_{ \lambda}^{f}=\langle f,l_{3}Nv\rangle,\quad c_{\lambda}^{f}=\langle f,l_{4}N|v| ^{2}-l_{2}N\rangle. \tag{2.27}\] For simplicity, let \(f_{1}=\tilde{\mathbb{P}}_{\lambda}f,f_{2}=f-\tilde{\mathbb{P}}_{\lambda}f\), and \(\mathcal{A}=(a_{\lambda}^{f},b_{\lambda}^{f},c_{\lambda}^{f})\). By Lemma 2.1, we have \[|\partial^{\alpha}f_{1}|_{L^{2}_{1/2}}\lesssim e^{-\lambda/2}(1-e^{-\lambda})^{ -1/4}|\partial^{\alpha}\mathcal{A}|. \tag{2.28}\] As a result, we can choose \(C_{0}\) is large enough such that \(C_{0}e^{-\lambda}(1-e^{-\lambda})^{-1/2}|\nabla_{x}\mathcal{A}|_{H^{N-1}_{x}}^{2} \geq\|\nabla_{x}f_{1}\|_{H^{N-1}_{x}L^{2}_{1/2}}^{2}\). Then we define the dissipation functional \(\mathcal{D}_{N}(\cdot)\) as \[\mathcal{D}_{N}(f):=C_{0}e^{-\lambda}(1-e^{-\lambda})^{-1/2}| \nabla_{x}\mathcal{A}|_{H^{N-1}_{x}}^{2}+\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}^{2}. 
\tag{2.29}\] Note that \[\mathcal{D}_{N}(f)\geq\|\nabla_{x}f_{1}\|_{H^{N-1}_{x}L^{2}_{1/2}}^{2}+\|f_{2} \|_{H^{N}_{x}L^{2}_{1/2}}^{2}. \tag{2.30}\] Global well-posedness of (2.2) is presented as the following Theorem which is sufficient to derive Theorem 1.1. **Theorem 2.1**.: _Let \(0<\epsilon\leq 1\). Let \(\lambda,T>0\). Let \(N\geq 2\). Let_ \[\tilde{C}_{*}(\lambda,T):=e^{-2\lambda}(1-e^{-\lambda})^{\frac{3 3}{2}}\min\{T^{-3},1\}. \tag{2.31}\] _There exist a universal constant \(\delta_{*}>0\) such that if_ \[M_{\lambda}+N_{\lambda}\tilde{f}_{0}\geq 0,\quad\|\tilde{f}_{0} \|_{H^{2}_{x}L^{2}}\leq\delta_{*}\tilde{C}_{*}(\lambda,T),\quad\|\tilde{f}_{0} \|_{H^{N}_{x}L^{2}}<\infty, \tag{2.32}\] _then the Cauchy problem (2.2) with initial datum \(\tilde{f}_{0}\) has a unique global solution \(\tilde{f}=\tilde{f}_{\epsilon}^{J,T}\in L^{\infty}([0,\infty);H^{N}_{x}L^{2})\) satisfying \(M_{\lambda}+N_{\lambda}\tilde{f}(t)\geq 0\) and_ \[\sup_{t}\|\tilde{f}(t)\|_{H^{N}_{x}L^{2}}^{2}+\frac{1}{K(\lambda,T )}\int_{0}^{\infty}\mathcal{D}_{N}(\tilde{f})\mathrm{d}\tau+\frac{C_{1}( \lambda,T)}{K(\lambda,T)}\frac{1}{\epsilon^{2}}\int_{0}^{\infty}\|\tilde{f}_ {2}\|_{H^{N}_{x}L^{2}}^{2}\mathrm{d}\tau\leq P_{N}(\tilde{f}_{0})\|f_{0}\|_{H ^{N}_{x}L^{2}}^{2}. \tag{2.33}\] _where the formula of \(P_{N}\) is explicitly given (4.49) in Theorem 4.1. Here the constants \(K(\lambda,T),\mathrm{C}_{1}(\lambda,T)\) are defined defined in (4.40), (4.41)._ Proof of Theorem 1.1.: Let \(f_{0}\) be an initial datum satisfying (1.31). Then \(\tilde{f}_{0}=A_{T^{1/2}}f_{0}\) is an initial datum satisfying (2.32). Then by Theorem 2.1, the Cauchy problem (2.2) with initial datum \(\tilde{f}_{0}\) has a unique global solution \(\tilde{f}\) satisfying (2.33). By Proposition 2.1, \(f=A_{T^{-1/2}}\tilde{f}\) is a solution to (1.18) with the initial datum \(f_{0}\). Recalling (2.24) and using the following identity \[a^{3}\langle A_{a}g,A_{a}h\rangle=\langle g,h\rangle, \tag{2.34}\] we have \[\|\tilde{f}\|_{H^{N}_{x}L^{2}}^{2}=T^{-\frac{3}{2}}\|f\|_{H^{N}_ {x}L^{2}}^{2},\quad\|\nabla_{x}\tilde{\mathrm{P}}_{\lambda}\tilde{f}\|_{H^{N -1}_{x}L^{2}}^{2}=T^{-\frac{3}{2}}\|\nabla_{x}\mathbb{P}_{\lambda,T}f\|_{H^{N -1}_{x}L^{2}}^{2},\] \[\|\tilde{f}-\tilde{\mathbb{P}}_{\lambda}\tilde{f}\|_{H^{N}_{x}L^ {2}}^{2}=T^{-\frac{3}{2}}\|f-\mathbb{P}_{\lambda,T}f\|_{H^{N}_{x}L^{2}}^{2}, \quad\|v|^{2}(\tilde{f}-\tilde{\mathbb{P}}_{\lambda}\tilde{f})\|_{H^{N}_{x}L^ {2}}^{2}=T^{-\frac{3}{2}}T^{-\frac{1}{2}}\||v|^{\frac{1}{2}}(f-\mathbb{P}_{ \lambda,T}f)\|_{H^{N}_{x}L^{2}}^{2},\] which gives \[\mathcal{D}_{N}(\tilde{f})\geq T^{-\frac{3}{2}}\mathcal{D}_{N,T}(f), \quad\|\tilde{f}_{2}\|_{H^{N}_{x}L^{2}_{1/2}}^{2}\geq T^{-\frac{3}{2}}\mathcal{ D}_{N,T,2}(f). \tag{2.35}\] From these relations, we obtain all the estimates on \(f\) in Theorem 1.1 and finish the proof. Hydrodynamics limit of (2.2) is presented as the following Theorem which is sufficient to derive Theorem 1.2. **Theorem 2.2**.: _Recall the constant \(\delta_{*}\) in Theorem 2.1. Let \(0<\epsilon<1\). Let \(\lambda,T>0\). Let \(N\geq 2\). 
Let \(\{f_{0}^{\epsilon}\}_{0<\epsilon<1}\) be a family of initial datum satisfying \(M_{\lambda}+N_{\lambda}f_{0}^{\epsilon}\geq 0\) and_ \[\sup_{0<\epsilon<1}\|f_{0}^{\epsilon}\|_{H^{2}_{x}L^{2}}^{2}\leq \delta_{*}\tilde{C}_{*}(\lambda,T),\quad M_{0}:=\sup_{0<\epsilon<1}\|f_{0}^{ \epsilon}\|_{H^{N}_{x}L^{2}}<\infty, \tag{2.36}\] \[\mathbb{P}_{\lambda}f_{0}^{\epsilon}\to f_{0}=(\rho_{0}+u_{0} \cdot v+\theta_{0}(\frac{|v|^{2}}{2}-K_{\lambda}))N_{\lambda}\text{ as }\epsilon\to 0,\text{ strongly in }H^{N}_{x}L^{2}, \tag{2.37}\] _for some \((\rho_{0},u_{0},\theta_{0})\in H^{N}_{x}\) with \(\rho_{0}+\theta_{0}=0\). Here \(K_{\lambda}=K_{A}-1\) is a constant depending only on \(\lambda\). Let \(f^{\epsilon}\) be the solution to the Cauchy problem (2.2) with initial datum \(f_{0}^{\epsilon}\). Then there is a subsequence of \(\{f^{\epsilon}\}\) still denoting it by \(\{f^{\epsilon}\}\) such that,_ \[f^{\epsilon}\rightarrow(\rho+u\cdot v+\theta(\frac{|v|^{2}}{2}-K_{ \lambda}))N_{\lambda}\text{ as }\epsilon\to 0,\text{ weakly-* in }L^{\infty}(\mathbb{R}_{+};H^{N}_{x}L^{2}), \tag{2.38}\] _for some \((\rho,u,\theta)\in L^{\infty}(\mathbb{R}_{+};H^{N}_{x})\cap C(\mathbb{R}_{+};H^ {N-1}_{x})\). Moreover \((\rho,u,\theta)\) is a weak solution of (1.38). In addition,_ \[\frac{3}{m_{2}}\mathcal{P}\langle f^{\epsilon},vN_{\lambda} \rangle\to u\text{ strongly in }C(\mathbb{R}_{+};H^{N-1}_{x})\text{ and weakly-* in }L^{\infty}(\mathbb{R}_{+};H^{N}_{x}), \tag{2.39}\] \[\frac{1}{C_{A}}\langle f^{\epsilon},(\frac{|v|^{2}}{2}-K_{A})N_{ \lambda}\rangle\rightarrow\theta\text{ strongly in }C(\mathbb{R}_{+};H^{N-1}_{x})\text{ and weakly-* in }L^{\infty}(\mathbb{R}_{+};H^{N}_{x}). \tag{2.40}\] Proof of Theorem 1.2.: By Proposition 2.1 and Theorem 2.2, using the formula (2.34), we get Theorem 1.2. Now in the rest of the article it remains to derive Theorem 2.1 and Theorem 2.2 on the Cauchy problem (2.2). ## 3. Linear and nonlinear collision operators analysis In the rest of the article, in the various functional estimates, the involved functions \(g,h,\varrho,f\) are assumed to be functions on \(\mathbb{R}^{3}\) or \(\mathbb{R}^{3}\times\mathbb{R}^{3}\) such that the corresponding norms of them are well-defined. For simplicity, we use the notation \(\mathrm{d}V:=\mathrm{d}\sigma\mathrm{d}v_{*}\mathrm{d}v\). ### Coercivity estimate We first give some basic properties of \(M_{\lambda}\) and \(N_{\lambda}\) defined in (2.1). **Lemma 3.1**.: _For simplicity, let \(\mu:=\exp(-\frac{|v|^{2}}{2})\). For \(\lambda>0\), it holds that_ \[e^{-\lambda}\mu\leq M_{\lambda}\leq\frac{e^{-\lambda}\mu}{1-e^{- \lambda}},\quad e^{-\lambda/2}\mu^{\frac{1}{2}}\leq N_{\lambda}\leq\frac{e^{- \lambda/2}\mu^{\frac{1}{2}}}{1-e^{-\lambda}}. \tag{3.1}\] _As a direct result, since \(\mu\mu_{*}=\mu^{\prime}\mu_{*}^{\prime}\), it holds that_ \[e^{-2\lambda}\mu\mu_{*}\leq N_{\lambda}(N_{\lambda})_{*}(N_{ \lambda})^{\prime}(N_{\lambda})^{\prime}_{*}\leq(1-e^{-\lambda})^{-4}e^{-2 \lambda}\mu\mu_{*}. \tag{3.2}\] _It holds that_ \[1-e^{-\lambda}\leq\frac{N_{\lambda}(N_{\lambda})_{*}}{(N_{\lambda })^{\prime}(N_{\lambda})^{\prime}_{*}}\leq(1-e^{-\lambda})^{-1}. \tag{3.3}\] Proof.: We only prove (3.3) as the other two results are obvious. 
Since \(|v|^{2}+|v_{*}|^{2}=|v^{\prime}|^{2}+|v^{\prime}_{*}|^{2}\), \[\frac{N_{\lambda}(N_{\lambda})_{*}}{(N_{\lambda})^{\prime}(N_{ \lambda})^{\prime}_{*}}=\frac{(\exp(\frac{|v^{\prime}|^{2}}{2}+\lambda)-1)( \exp(\frac{|v^{\prime}_{*}|^{2}}{2}+\lambda)-1)}{(\exp(\frac{|v|^{2}}{2}+ \lambda)-1)(\exp(\frac{|v_{*}|^{2}}{2}+\lambda)-1)}.\] Now it suffices to consider for some \(s>1\) the following quantity \[\frac{(w_{1}s-1)(w_{2}s-1)}{(w_{3}s-1)(w_{4}s-1)},\] subject to \(w_{1}w_{2}=w_{3}w_{4},w_{1},w_{2},w_{3},w_{4}\geq 1\). Let \(k^{2}=w_{1}w_{2}=w_{3}w_{4}\) for some \(k\geq 1\), then the numerator and denominator enjoy the same bounds \((k^{2}s-1)(s-1)\leq(w_{1}s-1)(w_{2}s-1),(w_{3}s-1)(w_{4}s-1)\leq(ks-1)^{2}\). Therefore, \[\frac{(k^{2}s-1)(s-1)}{(ks-1)^{2}}\leq\frac{(w_{1}s-1)(w_{2}s-1)}{(w_{3}s-1)( w_{4}s-1)}\leq\frac{(ks-1)^{2}}{(k^{2}s-1)(s-1)}\] It is easy to see \(\frac{(ks-1)^{2}}{(k^{2}s-1)(s-1)}\) is increasing w.r.t. \(k\) and so achieves its maximum \(\frac{s}{s-1}\) at \(k\to\infty\). Therefore \[\frac{s-1}{s}\leq\frac{(w_{1}s-1)(w_{2}s-1)}{(w_{3}s-1)(w_{4}s-1)}\leq\frac{s }{s-1}.\] As a result, by taking \(s=e^{\lambda}\), we get (3.3). Recall that the classical linearized Boltzmann operator \(\mathcal{L}\) with hard sphere kernel is defined by \[(\mathcal{L}f)(v):=\int|v-v_{*}|\mu_{*}\mu^{\frac{1}{2}}\mathrm{S }(\mu^{-\frac{1}{2}}f)\mathrm{d}\sigma\mathrm{d}v_{*}, \tag{3.4}\] Define the functional \(\mathcal{H}(\cdot)\) as \[\mathcal{H}(f)=\int|v-v_{*}|\mu\mu_{*}\mathrm{S}^{2}(\mu^{-1/2}f )\mathrm{d}V. \tag{3.5}\] The coercivity estimate of \(\mathcal{L}\) is that for \(f\in(\ker\mathcal{L})^{\perp}\), \[C_{1}|f|^{2}_{L^{2}_{1/2}}\geq\mathcal{H}(f)=4\langle\mathcal{L}f,f\rangle\geq C_{0}|f|^{2}_{L^{2}_{1/2}}, \tag{3.6}\] where \(0<C_{0}<C_{1}\) are two universal constants. We are ready to get the key coercivity estimate of \(\tilde{\mathcal{L}}^{\lambda,T}\) by using (3.6). **Theorem 3.1**.: _Let \(\lambda,T>0\). Recall (2.12). For \(f\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}\), it holds that_ \[\tilde{C}_{1,\lambda,T}|f|^{2}_{L^{2}_{1/2}}\geq\langle\tilde{ \mathcal{L}}^{\lambda,T}f,f\rangle\geq C_{1,\lambda,T}|f|^{2}_{L^{2}_{1/2}}, \tag{3.7}\] _where_ \[C_{1,\lambda,T}:=T^{2}\frac{C_{0}e^{-\lambda}(1-e^{-\lambda})^{5/2}}{C_{2}}, \quad\tilde{C}_{1,\lambda,T}:=T^{2}C_{3}C_{1}(1-e^{-\lambda})^{-4}e^{-\lambda}, \tag{3.8}\] _for some universal constant \(C_{0},C_{1},C_{2},C_{3}\). Safely speaking \(\frac{1}{100}\leq C_{0}\leq 1\leq C_{1},C_{2},C_{3}\leq 100\). Here \(C_{0},C_{1}\) are the constants appearing in (3.6)._ Proof.: Note that \[\langle\tilde{\mathcal{L}}^{\lambda,T}f,f\rangle=\frac{1}{4}\int B_{T}N_{ \lambda}(N_{\lambda})_{*}(N_{\lambda})^{\prime}(N_{\lambda})_{*}^{\prime}{\rm S }^{2}(N_{\lambda}^{-1}f){\rm d}V. \tag{3.9}\] Thanks to (3.2), we have \[\frac{1}{4}e^{-2\lambda}T^{2}\mathcal{J}_{\lambda}(f)\leq\langle \tilde{\mathcal{L}}^{\lambda,T}f,f\rangle\leq\frac{1}{4}(1-e^{-\lambda})^{-4}e ^{-2\lambda}T^{2}\mathcal{J}_{\lambda}(f), \tag{3.10}\] where we define \[\mathcal{J}_{\lambda}(f)=\int|v-v_{*}|\mu\mu_{*}{\rm S}^{2}(N_{ \lambda}^{-1}f){\rm d}V. \tag{3.11}\] We now study \(\mathcal{J}_{\lambda}(f)\) using (3.6). We now relate \(\mathcal{J}_{\lambda}(\cdot)\) to \(\mathcal{H}(\cdot)\). For a function \(f\), we define \[w_{f}:=N_{\lambda}\mu^{-\frac{1}{2}}\mathbb{P}_{0}(N_{\lambda}^{ -1}\mu^{\frac{1}{2}}f),\quad\Phi_{f}:=(f-w_{f})N_{\lambda}^{-1}\mu^{1/2}, \tag{3.12}\] where \(\mathbb{P}_{0}\) is the projection on \(\ker\mathcal{L}\). 
It is straightforward to check for \(f\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}\) that \[w_{f}\in\ker\tilde{\mathcal{L}}^{\lambda,T},\quad\Phi_{f}=N_{ \lambda}^{-1}\mu^{\frac{1}{2}}f-\mathbb{P}_{0}(N_{\lambda}^{-1}\mu^{\frac{1}{ 2}}f)\in(\ker\mathcal{L})^{\perp}. \tag{3.13}\] By the above construction, for \(f\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}\), it is easy to see \[\mathcal{J}_{\lambda}(f)=\mathcal{J}_{\lambda}(f-w_{f})=\mathcal{ J}_{\lambda}(N_{\lambda}\mu^{-1/2}\Phi_{f})=\mathcal{H}(\Phi_{f}). \tag{3.14}\] By (3.14), (3.13), using (3.6), we have \[\mathcal{J}_{\lambda}(f)\geq C_{0}|\Phi_{f}|^{2}_{L^{2}_{1/2}} \geq(1-e^{-\lambda})^{2}e^{\lambda}C_{0}|\Phi_{f}N_{\lambda}\mu^{-1/2}|^{2}_{L ^{2}_{1/2}}=(1-e^{-\lambda})^{2}e^{\lambda}C_{0}|f-w_{f}|^{2}_{L^{2}_{1/2}}. \tag{3.15}\] Note that \(f\perp w_{f}\), so we have \[|f-w_{f}|^{2}_{L^{2}_{1/2}}\geq|f-w_{f}|^{2}_{L^{2}}=|f|^{2}_{L^{ 2}}+|w_{f}|^{2}_{L^{2}}\geq|f|^{2}_{L^{2}}. \tag{3.16}\] We also have \[|f-w_{f}|^{2}_{L^{2}_{1/2}}\geq\frac{1}{2}|f|^{2}_{L^{2}_{1/2}}-| w_{f}|^{2}_{L^{2}_{1/2}}\] Note that \(w_{f}=(a_{f}+b_{f}\cdot v+c_{f}|v|^{2})N_{\lambda}\) where \(a_{f},b_{f},c_{f}\) are the constants given by \[a_{f}=\frac{5}{2}\langle f,N_{\lambda}^{-1}\mu\rangle-\frac{1}{ 2}\langle f,|v|^{2}N_{\lambda}^{-1}\mu\rangle,\quad b_{f}=\langle f,vN_{ \lambda}^{-1}\mu\rangle,\quad c_{f}=\frac{1}{6}\langle f,|v|^{2}N_{\lambda}^{ -1}\mu\rangle-\frac{1}{2}\langle f,N_{\lambda}^{-1}\mu\rangle.\] It is easy to see \[|a_{f}|+|b_{f}|+|c_{f}|\lesssim e^{\lambda/2}|f|_{L^{2}}. \tag{3.17}\] By Lemma 2.1 and the estimate (3.17), \[|w_{f}|_{L^{2}_{1/2}}\leq(|a_{f}|+|b_{f}|+|c_{f}|)(|(1+|v|^{2})N _{\lambda}|_{L^{2}_{1/2}})\lesssim(1-e^{-\lambda})^{-1/4}|f|_{L^{2}}.\] Therefore for some universal constant \(C_{2}\geq 1\), we have \[|f-w_{f}|^{2}_{L^{2}_{1/2}}\geq\frac{1}{2}|f|^{2}_{L^{2}_{1/2}}- C_{2}(1-e^{-\lambda})^{-1/2}|f|^{2}_{L^{2}}. \tag{3.18}\] Making a suitable combination between (3.16) and (3.18), we get \[|f-w_{f}|^{2}_{L^{2}_{1/2}}\geq\frac{1}{2(1+C_{2}(1-e^{-\lambda} )^{-1/2})}|f|^{2}_{L^{2}_{1/2}}\geq\frac{1}{4C_{2}(1-e^{-\lambda})^{-1/2}}|f|^{2 }_{L^{2}_{1/2}}.\] Recalling (3.10) and (3.15), we arrive at \[\langle\tilde{\mathcal{L}}^{\lambda,T}f,f\rangle\geq T^{2}\frac{ C_{0}e^{-\lambda}(1-e^{-\lambda})^{5/2}}{16C_{2}}|f|^{2}_{L^{2}_{1/2}}\] By (3.14), (3.13), using the upper bound in (3.6), by noting \(N_{\lambda}^{-1}\mu^{1/2}\leq e^{\lambda/2}\), we can easily get \[\mathcal{J}_{\lambda}(f)\leq C_{1}|\Phi_{f}|^{2}_{L^{2}_{1/2}} \lesssim C_{1}|N_{\lambda}^{-1}\mu^{\frac{1}{2}}f|^{2}_{L^{2}_{1/2}}\lesssim C_{ 1}e^{\lambda}|f|^{2}_{L^{2}_{1/2}}. \tag{3.19}\] Recalling (3.10) and (3.15), we get the upper bound in (3.7). By relabeling the constants, we finish the proof. ### Some preliminary formulas In this subsection, we recall some useful formulas for the computation of Boltzmann type integrals involving \(B(v-v_{*},\sigma)=B(|v-v_{*}|,\frac{v-v_{*}}{|v-v_{*}|}\cdot\sigma)\). The change of variable \((v,v_{*},\sigma)\to(v_{*},v,-\sigma)\) has unit Jacobian and thus \[\int B(v-v_{*},\sigma)f(v,v_{*},v^{\prime},v^{\prime}_{*})\mathrm{d}V=\int B(v- v_{*},\sigma)f(v_{*},v,v^{\prime}_{*},v^{\prime})\mathrm{d}V, \tag{3.20}\] where \(f\) is a general function such that the integral exists. 
Thanks to the symmetry of elastic collision formula (1.5), the change of variable \((v,v_{*},\sigma)\to(v^{\prime},v^{\prime}_{*},\frac{v-v_{*}}{|v-v_{*}|})\) has unit Jacobian and thus \[\int B(v-v_{*},\sigma)f(v,v_{*},v^{\prime},v^{\prime}_{*})\mathrm{d}V=\int B(v -v_{*},\sigma)f(v^{\prime},v^{\prime}_{*},v,v_{*})\mathrm{d}V. \tag{3.21}\] From now on, we will frequently use the notation (1.6). By (1.6) and the shorthand \(f=f(v),f^{\prime}=f(v^{\prime}),f_{*}=f(v_{*}),f^{\prime}_{*}=f(v^{\prime}_{*})\), it is easy to see \[\mathrm{D}(f)=-\mathrm{D}(f^{\prime}),\quad\mathrm{D}(f_{*})=-\mathrm{D}(f^{ \prime}_{*}),\quad\mathrm{D}^{2}(f)=\mathrm{D}^{2}(f^{\prime}),\quad\mathrm{D }^{2}(f_{*})=\mathrm{D}^{2}(f^{\prime}_{*}).\] Similarly to (1.6), we introduce \[\mathrm{A}(f(v,v_{*},v^{\prime},v^{\prime}_{*})):=f(v,v_{*},v^{\prime},v^{ \prime}_{*})+f(v^{\prime},v^{\prime}_{*},v,v_{*}). \tag{3.22}\] The term \(\mathrm{A}\) is interpreted as "addition" before and after collision. Thanks to the symmetry of elastic collision formula (1.5), we have **Lemma 3.2**.: _Let \(v(\kappa):=v+\kappa(v^{\prime}-v),v_{*}(\iota):=v_{*}+\iota(v^{\prime}_{*}-v_ {*})\) for \(0\leq\kappa,\iota\leq 1\). It holds that_ \[\frac{1}{4}(|v|^{2}+|v_{*}|^{2})\leq|v(\kappa)|^{2}+|v_{*}(\iota)|^{2}\leq 2 (|v|^{2}+|v_{*}|^{2}).\] _As a result, recalling \(\mu(v)=e^{-\frac{1}{2}|v|^{2}}\), for any \(0\leq\kappa,\iota\leq 1\), it holds that_ \[\mu^{2}(v)\mu^{2}(v_{*})\leq\mu(v(\kappa))\mu(v_{*}(\iota))\leq\mu^{\frac{1}{ 4}}(v)\mu^{\frac{1}{4}}(v_{*}). \tag{3.23}\] We omit the proof of Lemma 3.2 as it is elementary. We will frequently use (3.23) to retain the good negative exponential (\(\mu\)-type) weight. We now give a straightforward computation involving the regular change of variable \(v\to v^{\prime}\). **Lemma 3.3**.: _It holds that_ \[\int 1_{\cos\theta\geq 0}|v-v_{*}||g_{*}|f^{2}\mathrm{d}V+\int 1_{\cos \theta\geq 0}|v-v_{*}||g_{*}|(f^{2})^{\prime}\mathrm{d}V\lesssim|g|_{L^{1}_{1}}|f |^{2}_{L^{2}_{1/2}}. \tag{3.24}\] Proof.: For the former integral, as \(|v-v_{*}|\leq(1+|v|)(1+|v_{*}|)\), the estimate is obvious. For the latter integral, for fixed \(\sigma,v_{*}\), using the change of variable \(v\to v^{\prime}\), denoting \(\cos\alpha=\frac{v^{\prime}-v_{*}}{|v-v_{*}|}\cdot\sigma\), we have \(\theta=2\alpha,|v-v_{*}|\cos\alpha=|v^{\prime}-v_{*}|,0\leq\alpha\leq\pi/4, \frac{1}{\sqrt{2}}\leq\cos\alpha\leq 1,|\det(\frac{\mathrm{d}v^{\prime}}{\mathrm{d}v})|= \frac{\cos^{2}\alpha}{4}\). Then \[\int 1_{\cos\theta\geq 0}|v-v_{*}||g_{*}|(f^{2})^{\prime}\mathrm{d}V=\int 1_{ \cos 2\alpha\geq 0}|v^{\prime}-v_{*}||g_{*}|(f^{2})^{\prime}\frac{4}{\cos^{3} \alpha}\mathrm{d}\sigma\mathrm{d}v_{*}\mathrm{d}v^{\prime}\lesssim|g|_{L^{1}_{1 }}|f|^{2}_{L^{2}_{1/2}},\] which ends the proof. Next, we then provide the computation involving the singular change of variable \(v_{*}\to v^{\prime}\). **Lemma 3.4**.: _For any fixed \(v\in\mathbb{R}^{3}\), it holds that_ \[\int 1_{\cos\theta\geq 0}(\sin\frac{\theta}{2})^{3/2}(f^{2})^{\prime}\mathrm{d} \sigma\mathrm{d}v_{*}=\frac{16\pi}{2^{1/4}}|f|^{2}_{L^{2}}. 
\tag{3.25}\] Proof.: For fixed \(\sigma,v\), using the change of variable \(v_{*}\to v^{\prime}\), denoting \(\cos\alpha=\frac{v^{\prime}-v}{|v^{\prime}-v|}\cdot\sigma\), we have \[\alpha=\frac{\pi}{2}-\frac{\theta}{2},\cos\alpha=\sin\frac{\theta}{2},|v-v_{*}| \cos\alpha=|v^{\prime}-v|,\] \[\pi/4\leq\alpha\leq\pi/2,0\leq\cos\alpha\leq\frac{1}{\sqrt{2}},|\det(\frac{ \mathrm{d}v^{\prime}}{\mathrm{d}v_{*}})|=\frac{\cos^{2}\alpha}{4}\] Then we get \[\int 1_{\cos\theta\geq 0}(\sin\frac{\theta}{2})^{3/2}(f^{2})^{\prime}\mathrm{d} \sigma\mathrm{d}v_{*}=\int 1_{0\leq\cos\alpha\leq\frac{1}{\sqrt{2}}}(f^{2})^{\prime}\frac{4}{ \cos^{1/2}\alpha}\mathrm{d}\sigma\mathrm{d}v^{\prime}\] Using \(\frac{v^{\prime}-v}{|v^{\prime}-v|}\) as north pole to represent \(\sigma\), \[\int 1_{0\leq\cos\alpha\leq\frac{1}{\sqrt{2}}}\frac{4}{\cos^{1/2}\alpha}{\rm d} \sigma=2\pi\int_{\pi/4}^{\pi/2}\frac{4}{\cos^{1/2}\alpha}\sin\alpha{\rm d} \alpha=8\pi\int_{0}^{\frac{1}{\sqrt{2}}}\frac{1}{t^{1/2}}{\rm d}t=\frac{16\pi} {2^{1/4}}.\] Note that \((\sin\frac{\theta}{2})^{3/2}\) is used to cancel the singularity of \(\cos^{-2}\alpha\) near \(\alpha=\pi/2\) in the change of variable \(v_{*}\to v^{\prime}\). ### Non-linear operator estimate Recalling (2.4), (2.5), (2.6), (2.7) and (2.8), we have \[\tilde{\Gamma}_{2}^{\lambda,T}(g,h)=\tilde{\Gamma}_{2,m}^{\lambda,T}(g,h)+ \tilde{\Gamma}_{2,r}^{\lambda,T}(g,h),\quad\tilde{\Gamma}_{2,r}^{\lambda,T}(g, h):=\tilde{\Gamma}_{2,r,1}^{\lambda,T}(g,h)+\tilde{\Gamma}_{2,r,2}^{\lambda,T}(g,h )+\tilde{\Gamma}_{2,r,3}^{\lambda,T}(g,h). \tag{3.26}\] \[\tilde{\Gamma}_{2,m}^{\lambda,T}(g,h):=T^{2}N_{\lambda}^{-1}\int|v-v_{*}|{\rm D }\big{(}(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)^{\prime}\big{)}{\rm d} \sigma{\rm d}v_{*}. \tag{3.27}\] \[\tilde{\Gamma}_{2,r,1}^{\lambda,T}(g,h):=T^{2}N_{\lambda}^{-1}\int|v-v_{*}|{ \rm D}\big{(}(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)^{\prime}(M_{\lambda}+( M_{\lambda})_{*})\big{)}{\rm d}\sigma{\rm d}v_{*}. \tag{3.28}\] \[\tilde{\Gamma}_{2,r,2}^{\lambda,T}(g,h):=T^{2}N_{\lambda}^{-1}\int|v-v_{*}|{ \rm D}\big{(}(N_{\lambda}g)_{*}(N_{\lambda}h)^{\prime}((M_{\lambda})^{\prime}_ {*}-M_{\lambda})\big{)}{\rm d}\sigma{\rm d}v_{*}. \tag{3.29}\] \[\tilde{\Gamma}_{2,r,3}^{\lambda,T}(g,h):=T^{2}N_{\lambda}^{-1}\int|v-v_{*}| \big{(}(N_{\lambda}g)^{\prime}(N_{\lambda}h){\rm D}((M_{\lambda})^{\prime}_{*} )+(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)_{*}{\rm D}(M_{\lambda}^{\prime}) \big{)}{\rm d}\sigma{\rm d}v_{*}. \tag{3.30}\] In the following Proposition, we derive upper bound estimates for the above bilinear operators. **Proposition 3.1**.: _For the bilinear terms (3.27), (3.28), (3.29) and (3.30), we have_ \[|\langle\tilde{\Gamma}_{2,m}^{\lambda,T}(g,h),f\rangle|\lesssim e^{-\lambda/2 }(1-e^{-\lambda})^{-2}T^{2}|g|_{L^{2}}|h|_{L^{2}_{1/2}}|f|_{L^{2}_{1/2}}. \tag{3.31}\] \[|\langle\tilde{\Gamma}_{2,r,1}^{\lambda,T}(g,h),f\rangle|\lesssim e^{-3 \lambda/2}(1-e^{-\lambda})^{-3}T^{2}|g|_{L^{2}}|h|_{L^{2}_{1/2}}|f|_{L^{2}_{1/2 }}. \tag{3.32}\] \[|\langle\tilde{\Gamma}_{2,r,2}^{\lambda,T}(g,h),f\rangle|\lesssim e^{-3 \lambda/2}(1-e^{-\lambda})^{-3}T^{2}|\mu^{\frac{1}{32}}g|_{L^{2}}|\mu^{\frac{1 }{64}}h|_{L^{2}}|\mu^{\frac{1}{64}}f|_{L^{2}}. \tag{3.34}\] _As a result, recalling (3.26),_ \[|\langle\tilde{\Gamma}_{2}^{\lambda,T}(g,h),f\rangle|\lesssim C_{2,\lambda,T} |g|_{L^{2}}|h|_{L^{2}_{1/2}}|f|_{L^{2}_{1/2}}, \tag{3.35}\] _where_ \[C_{2,\lambda,T}:=e^{-\lambda/2}(1-e^{-\lambda})^{-3}T^{2}. 
\tag{3.36}\] Proof.: Note that \[\langle\tilde{\Gamma}_{2,m}^{\lambda,T}(g,h),f\rangle=T^{2}\int|v-v_{*}|{\rm D }\big{(}(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)^{\prime}\big{)}N_{\lambda}^ {-1}f{\rm d}V.\] Recalling (3.3) and (3.1), we have \[N_{\lambda}^{-1}|{\rm D}\big{(}(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)^{ \prime}\big{)}|\lesssim(1-e^{-\lambda})^{-1}(N_{\lambda})_{*}{\rm A}(|g_{*}h|) \lesssim e^{-\lambda/2}(1-e^{-\lambda})^{-2}\mu^{\frac{1}{2}}_{*}{\rm A}(|g_{*} h|). \tag{3.37}\] Then \[|\langle\tilde{\Gamma}_{2,m}^{\lambda,T}(g,h),f\rangle|\lesssim T^{2}e^{- \lambda/2}(1-e^{-\lambda})^{-2}\int|v-v_{*}|\mu^{\frac{1}{2}}_{*}{\rm A}(|g_{*} h|)|f|{\rm d}V.\] Note that the integral equals to \[\int|v-v_{*}|\mu^{\frac{1}{2}}_{*}|g_{*}hf|{\rm d}V+\int|v-v_{*}|(\mu^{\frac{1 }{2}})^{\prime}_{*}|g_{*}hf^{\prime}|{\rm d}V.\] As \(\int 1_{\cos\theta\geq 0}{\rm d}\sigma=2\pi\), it is easy to derive \[\int|v-v_{*}|\mu^{\frac{1}{2}}_{*}|g_{*}hf|{\rm d}V\lesssim|\mu^{\frac{1}{2}}g |_{L^{2}}|h|_{L^{2}_{1/2}}|f|_{L^{2}_{1/2}}.\] Since \(|v-v_{*}|=|v^{\prime}-v^{\prime}_{*}|\leq\sqrt{2}|v-v^{\prime}_{*}|\), we get \[\int|v-v_{*}|(\mu^{\frac{1}{2}})^{\prime}_{*}|g_{*}hf^{\prime}|{ \rm d}V \leq \big{(}\int|v-v_{*}|(\mu^{\frac{1}{2}})^{\prime}_{*}g_{*}^{2}h^{2}{ \rm d}V\big{)}^{\frac{1}{2}}\Big{(}\int|v-v_{*}|(\mu^{\frac{1}{2}})^{\prime}_{*}( f^{2})^{\prime}{\rm d}V\big{)}^{\frac{1}{2}},\] \[\lesssim \big{(}\int|v)g_{*}^{2}h^{2}{\rm d}V\big{)}^{\frac{1}{2}}\big{(} \int|v-v_{*}|\mu^{\frac{1}{2}}_{*}f^{2}{\rm d}V\big{)}^{\frac{1}{2}},\lesssim|g |_{L^{2}}|h|_{L^{2}_{1/2}}|f|_{L^{2}_{1/2}}.\] Patching the above formulas together, we get (3.31). Note that \[\langle\tilde{\Gamma}^{\lambda,T}_{2,r,1}(g,h),f\rangle=T^{2}\int|v-v_{*}|\mathrm{ D}\big{(}(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)^{\prime}(M_{\lambda}+(M_{ \lambda})_{*})\big{)}N_{\lambda}^{-1}f\mathrm{d}V.\] By (3.1), \(M_{\lambda}\lesssim e^{-\lambda}(1-e^{-\lambda})^{-1}\), from which together with the above argument, we obtain (3.32). By exactly the same derivation, we also get (3.33). Note that \[\langle\tilde{\Gamma}^{\lambda,T}_{2,r,3}(g,h),f\rangle=T^{2}\int|v-v_{*}| \big{(}(N_{\lambda}g)^{\prime}(N_{\lambda}h)\mathrm{D}((M_{\lambda})^{\prime}_ {*})+(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)_{*}\mathrm{D}(M^{\prime}_{ \lambda})\big{)}N_{\lambda}^{-1}f\mathrm{d}V.\] Recalling (3.1) and using (3.23), we get \[N_{\lambda}^{-1}|(N_{\lambda})^{\prime}(N_{\lambda})\mathrm{D}((M_{\lambda})^ {\prime}_{*})|\lesssim e^{-3\lambda/2}(1-e^{-\lambda})^{-2}(\mu^{\frac{1}{2}}) ^{\prime}(\mu_{*}+\mu^{\prime}_{*})\lesssim e^{-3\lambda/2}(1-e^{-\lambda})^ {-2}\mu^{\frac{1}{8}}\mu^{\frac{1}{8}}.\] Recalling (3.3) and (3.1), using (3.23), we can similarly get \[N_{\lambda}^{-1}|(N_{\lambda})^{\prime}_{*}(N_{\lambda})_{*}\mathrm{D}(M^{ \prime}_{\lambda})|\lesssim e^{-3\lambda/2}(1-e^{-\lambda})^{-3}\mu^{\frac{1}{8}} \mu^{\frac{1}{8}}_{*}.\] Therefore \[|\langle\tilde{\Gamma}^{\lambda,T}_{2,r,3}(g,h),f\rangle| \lesssim e^{-3\lambda/2}(1-e^{-\lambda})^{-3}T^{2}\int|v-v_{*}|\mu^{ \frac{1}{8}}\mu^{\frac{1}{8}}_{*}\big{(}|g^{\prime}hf|+|g^{\prime}_{*}h_{*}f| \big{)}\mathrm{d}V\] \[\lesssim e^{-3\lambda/2}(1-e^{-\lambda})^{-3}T^{2}\int\mu^{\frac{1}{16} }\mu^{\frac{1}{8}}_{*}\big{(}|g^{\prime}hf|+|g^{\prime}_{*}h_{*}f|\big{)} \mathrm{d}V.\] Note that we retain the good weight. Let us see \(\int\mu^{\frac{1}{16}}\mu^{\frac{1}{8}}_{*}|g^{\prime}hf|\mathrm{d}V\). By the C-S inequality for the integral w.r.t. 
\(\mathrm{d}\sigma\mathrm{d}v_{*}\), by (3.25), we have \[\int\mu^{\frac{1}{16}}\mu^{\frac{1}{16}}|g^{\prime}hf|\mathrm{d}V \leq \int\mu^{\frac{1}{16}}|hf|\left(\big{(}\int 1_{\cos\theta\geq 0}( \sin\frac{\theta}{2})^{3/2}(\mu^{\frac{1}{16}}g^{2})^{\prime}\mathrm{d}\sigma \mathrm{d}v_{*}\big{)}^{\frac{1}{8}}\big{(}\int 1_{\cos\theta\geq 0}(\sin \frac{\theta}{2})^{-3/2}\mu^{\frac{1}{16}}\mathrm{d}\sigma\mathrm{d}v_{*} \big{)}^{\frac{1}{8}}\right)\mathrm{d}v\] \[\lesssim |\mu^{\frac{1}{8}}g|_{L^{2}}|^{\mu^{\frac{1}{16}}}h|_{L^{2}}|\mu^ {\frac{1}{16}}f|_{L^{2}}.\] Here we use \[\int 1_{\cos\theta\geq 0}(\sin\frac{\theta}{2})^{-3/2}\mathrm{d}\sigma=8\pi \int_{0}^{\pi/2}\sin^{-\frac{1}{2}}\frac{\theta}{2}\mathrm{d}\sin\frac{\theta }{2}=8\pi\int_{0}^{\frac{1}{\sqrt{2}}}\frac{1}{t^{1/2}}\mathrm{d}t=\frac{16\pi} {2^{1/4}}.\] By using (3.24), it is easy to see \(\int\mu^{\frac{1}{16}}\mu^{\frac{1}{16}}_{*}|g^{\prime}_{*}h_{*}f|\mathrm{d}V= \int\mu^{\frac{1}{16}}\mu^{\frac{1}{16}}_{*}|g^{\prime}hf_{*}|\mathrm{d}V \lesssim|\mu^{\frac{1}{32}}g|_{L^{2}}|\mu^{\frac{1}{6}}h|_{L^{2}}|\mu^{\frac{ 1}{6}}f|_{L^{2}}\). In the following Proposition, we derive upper bound estimate for the trilinear operator \(\tilde{\Gamma}^{\lambda,T}_{3}(\cdot,\cdot,\cdot)\). **Proposition 3.2**.: \[|\langle\tilde{\Gamma}^{\lambda,T}_{3}(g,h,\varrho),f\rangle|\lesssim C_{3, \lambda,T}|g|_{L^{2}}|h|_{L^{2}_{1/2}}|\mu^{\frac{1}{16}}\varrho|_{L^{2}}|f|_{ L^{2}_{1/2}},\] _where_ \[C_{3,\lambda,T}:=e^{-\lambda}(1-e^{-\lambda})^{-3}T^{2}. \tag{3.38}\] Proof.: Recalling (2.9), we have \[\langle\tilde{\Gamma}^{\lambda,T}_{3}(g,h,\varrho),f\rangle=T^{2}\int|v-v_{*}| \mathrm{D}\big{(}(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)^{\prime}((N_{ \lambda}\varrho)_{*}+N_{\lambda}\varrho)\big{)}N_{\lambda}^{-1}f\mathrm{d}V.\] Similarly to (3.37), we get \[N_{\lambda}^{-1}|\mathrm{D}\big{(}(N_{\lambda}g)^{\prime}_{*}(N_{\lambda}h)^{ \prime}((N_{\lambda}\varrho)_{*}+N_{\lambda}\varrho)\big{)}|\lesssim e^{- \lambda}(1-e^{-\lambda})^{-3}\mu^{\frac{1}{2}}_{*}\mathrm{A}(|g_{*}h\varrho^{ \prime}_{*}|(\mu^{\frac{1}{2}})^{\prime}_{*}+|g_{*}h\varrho^{\prime}|(\mu^{ \frac{1}{2}})^{\prime}).\] From this together with \(\mu\mu_{*}=\mu^{\prime}\mu^{\prime}_{*}\), we have \[|\langle\tilde{\Gamma}^{\lambda,T}_{3}(g,h,\varrho),f\rangle|\lesssim T^{2}e^{- \lambda}(1-e^{-\lambda})^{-3}(\mathcal{I}_{1}+\mathcal{I}_{2}+\mathcal{I}_{3}+ \mathcal{I}_{4}),\] where \[\mathcal{I}_{1}:=\int|v-v_{*}|\mu^{\frac{1}{2}}_{*}(\mu^{\frac{1}{2}})^{ \prime}_{*}|g_{*}h\varrho^{\prime}_{*}f|\mathrm{d}V, \quad\mathcal{I}_{2}:=\int|v-v_{*}|\mu^{\prime}_{*}|g_{*}h\varrho^{\prime}_{*}f ^{\prime}|\mathrm{d}V,\] \[\mathcal{I}_{3}:=\int|v-v_{*}|\mu^{\frac{1}{2}}_{*}(\mu^{\frac{1}{ 2}})^{\prime}|g_{*}h\varrho^{\prime}f|\mathrm{d}V,\quad\mathcal{I}_{4}:=\int|v- v_{*}|\mu^{\frac{1}{2}}_{*}\mu^{\frac{1}{2}}_{*}|g_{*}h\varrho^{\prime}f^{\prime}| \mathrm{d}V.\] By the C-S inequality, (3.21) and (3.24), we have \[|\mathcal{I}_{1}|\leq\big{(}\int|v-v_{*}|\mu_{*}|g_{*}h|^{2}\mathrm{d}V\big{)}^{ \frac{1}{2}}\big{(}\int|v-v_{*}|\mu_{*}|\varrho_{*}f^{\prime}|^{2}\mathrm{d}V \big{)}^{\frac{1}{2}}\lesssim|\mu^{\frac{1}{4}}g|_{L^{2}}|h|_{L^{2}_{1/2}}|\mu^ {\frac{1}{4}}\varrho|_{L^{2}}|f|_{L^{2}_{1/2}}.\] By the C-S inequality, (3.21), the fact \(|v-v_{*}|\mu^{\prime}_{*}\lesssim\langle v\rangle\), and the estimate (3.24), we have \[|\mathcal{I}_{2}|\leq\big{(}\int|v-v_{*}|\mu^{\prime}_{*}|g_{*}h|^{2}\mathrm{d }V\big{)}^{\frac{1}{2}}\big{(}\int|v-v_{*}|\mu_{*}|\varrho_{*}f|^{2}\mathrm{d 
}V\big{)}^{\frac{1}{2}}\lesssim|g|_{L^{2}}|h|_{L^{2}_{1/2}}|\mu^{\frac{1}{4}} \varrho|_{L^{2}}|f|_{L^{2}_{1/2}}.\] By using (3.23), we have \(\mu^{\frac{1}{4}}(\mu^{\frac{1}{2}})^{\prime}\leq\mu^{\frac{1}{8}}\mu^{\frac{ 1}{8}}\) and thus \[|\mathcal{I}_{3}|\leq\int|v-v_{*}|\mu^{\frac{1}{8}}_{*}\mu^{\frac{1}{8}}|g_{*} h\varrho^{\prime}f|\mathrm{d}V\lesssim\int\mu^{\frac{1}{8}}_{*}\mu^{\frac{1}{8}}|g_{*} h\varrho^{\prime}f|\mathrm{d}V\] Using \(\mu\mu_{*}=\mu^{\prime}\mu^{\prime}_{*}\), we get \(\mu^{\frac{1}{8}}_{*}\mu^{\frac{1}{8}}=\mu^{\frac{1}{82}}\mu^{\frac{1}{82}}( \mu^{\frac{1}{82}})^{\prime}_{*}(\mu^{\frac{1}{82}})^{\prime}\) and so \[|\mathcal{I}_{3}|\lesssim\int|(\mu^{\frac{1}{84}}g)_{*}(\mu^{\frac{1}{84}}h)( \mu^{\frac{1}{84}}\varrho)^{\prime}(\mu^{\frac{1}{84}}f)|\mathrm{d}V\] For any fixed \(v\), we bound the integral over \(\mathrm{d}\sigma\mathrm{d}v_{*}\) by the C-S inequality and (3.25) to get \[\int|(\mu^{\frac{1}{84}}g)_{*}(\mu^{\frac{1}{84}}\varrho)^{\prime}|\mathrm{d} \sigma\mathrm{d}v_{*}\leq\big{(}\int(\mu^{\frac{1}{82}}g^{2})_{*}(\sin\frac{ \theta}{2})^{-3/2}\mathrm{d}\sigma\mathrm{d}v_{*}\big{)}^{\frac{1}{2}}\big{(} \int(\mu^{\frac{1}{82}}\varrho^{2})^{\prime}(\sin\frac{\theta}{2})^{3/2} \mathrm{d}\sigma\mathrm{d}v_{*}\big{)}^{\frac{1}{2}}\lesssim|\mu^{\frac{1}{84} }g|_{L^{2}}|\mu^{\frac{1}{84}}\varrho|_{L^{2}}.\] Then \[|\mathcal{I}_{3}|\lesssim|\mu^{\frac{1}{84}}g|_{L^{2}}|\mu^{\frac{1}{84}}h|_{ L^{2}}|\mu^{\frac{1}{84}}\varrho|_{L^{2}}|\mu^{\frac{1}{84}}f|_{L^{2}}.\] Note that \[\mathcal{I}_{4}=\int|v-v_{*}|\mu^{\frac{1}{84}}_{*}\mu^{\frac{1}{84}}|g^{ \prime}_{*}h^{\prime}\varrho f|\mathrm{d}V\lesssim\int|(\mu^{\frac{1}{84}}g)^ {\prime}_{*}(\mu^{\frac{1}{84}}h)^{\prime}(\mu^{\frac{1}{84}}\varrho)(\mu^{ \frac{1}{84}}f)|\mathrm{d}V\] For any fixed \(v\), by the C-S inequality, using the change of variable \(v_{*}\to v^{\prime}_{*}\)(in which the Jacobian \(\frac{4}{\cos^{2}(\theta/2)}\) is bounded) and the estimate (3.25), we have \[\int|(\mu^{\frac{1}{84}}g)^{\prime}_{*}(\mu^{\frac{1}{84}}h)^{\prime}|\mathrm{ d}\sigma\mathrm{d}v_{*}\leq\big{(}\int(\mu^{\frac{1}{84}}g^{2})^{\prime}_{*}(\sin \frac{\theta}{2})^{-3/2}\mathrm{d}\sigma\mathrm{d}v_{*}\big{)}^{\frac{1}{2}} \big{(}\int(\mu^{\frac{1}{84}}h^{2})^{\prime}(\sin\frac{\theta}{2})^{3/2} \mathrm{d}\sigma\mathrm{d}v_{*}\big{)}^{\frac{1}{2}}\lesssim|\mu^{\frac{1}{16 }}g|_{L^{2}}|\mu^{\frac{1}{16}}h|_{L^{2}},\] which gives \[|\mathcal{I}_{4}|\lesssim|\mu^{\frac{1}{84}}g|_{L^{2}}|\mu^{\frac{1}{16}}h|_{ L^{2}}|\mu^{\frac{1}{16}}\varrho|_{L^{2}}|\mu^{\frac{1}{16}}f|_{L^{2}}.\] Patching together the above estimates, we finish the proof. ### Energy estimates For simplicity, from now on \(\mathcal{A}=(a,b,c)\). By (2.14) and (2.15), we have \[|\partial^{\alpha}f_{1}|_{L^{2}_{k}}\lesssim|\partial^{\alpha}f_{1}|_{L^{2}} \leq|\partial^{\alpha}f|_{L^{2}}. \tag{3.39}\] By (3.39) and recalling (2.30), we will frequently use \[0\leq j\leq N,\quad 0\leq|\alpha|\leq N-j\quad\Rightarrow\quad\| \partial^{\alpha}f_{1}\|_{H^{1}_{x}L^{2}_{1/2}}\lesssim\|f\|_{H^{N}_{x}L^{2}}. \tag{3.40}\] \[0\leq j\leq N,\quad 0\leq|\alpha|\leq N-j\quad\Rightarrow\quad\| \partial^{\alpha}f\|_{H^{1}_{x}L^{2}}\leq\|f\|_{H^{N}_{x}L^{2}}.\] (3.41) \[0\leq k\leq N-1,\quad 1\leq|\alpha|\leq N-k\quad\Rightarrow\quad\| \partial^{\alpha}f\|_{H^{1}_{x}L^{2}_{1/2}}^{2}\lesssim\mathcal{D}_{N}(f). \tag{3.42}\] Note that the dissipation \(\mathcal{D}_{N}(f)\) lacks \(\|f_{1}\|_{L^{2}_{x}L^{2}}\lesssim e^{-\lambda/2}(1-e^{-\lambda})^{-1/4}| \mathcal{A}|_{L^{2}_{x}}\). 
We will use the following embedding in dimension \(3\), \[|f|_{L^{6}}\lesssim|\nabla f|_{L^{2}}. \tag{3.43}\] In the \(3\)-dimensional space \(\mathbb{R}^{3}_{x}\), by embedding \(L^{\infty}_{x}\hookrightarrow H^{2}_{x}\) or \(L^{p}_{x}\hookrightarrow H^{s}_{x}\) with \(\frac{s}{3}=\frac{1}{2}-\frac{1}{p}\). From these basic embedding results, by (3.35), estimates of the bi-linear term \(\tilde{\Gamma}^{\lambda,T}_{2}\) in the full space \((x,v)\) are given as \[|(\tilde{\Gamma}^{\lambda,T}_{2}(g,h),f)|\lesssim C_{2,\lambda,T}\|g\|_{H^{m}_{x}L^ {2}}\|h\|_{H^{m}_{x}L^{2}_{1/2}}\|f\|_{L^{2}_{x}L^{2}_{1/2}}, \tag{3.44}\] \[|(\tilde{\Gamma}^{\lambda,T}_{2}(g,h),f)|\lesssim C_{2,\lambda,T}\|g \|_{H^{1/2}_{x}L^{2}}\|\nabla_{x}h\|_{L^{2}_{x}L^{2}_{1/2}}\|f\|_{L^{2}_{x}L^{2}_{ 1/2}}, \tag{3.45}\] where \(m,n\in\mathbb{N},m+n=2\). Based on (3.44) and (3.45), by making a suitable choice of parameters \(m,n\) to deal with different distribution of derivative order, we get the following estimate. **Theorem 3.2**.: _Let \(N\geq 2\). It holds that_ \[|\sum_{|\alpha|\leq N}(\partial^{\alpha}\tilde{\Gamma}_{2}^{\lambda,T}(f,f), \partial^{\alpha}f)|\lesssim C_{2,\lambda,T}\left(\|f\|_{H_{x}^{2}L^{2}}\mathcal{ D}_{N}^{\frac{1}{2}}(f)+1_{N\geq 3}C_{N}\|f\|_{H_{x}^{N}L^{2}}\mathcal{D}_{N-1}^{ \frac{1}{2}}(f)\right)\|f_{2}\|_{H_{x}^{N}L_{1/2}^{2}}. \tag{3.46}\] Proof.: By binomial formula, we need to consider \((\tilde{\Gamma}_{2}^{\lambda,T}(\partial^{\alpha_{1}}f,\partial^{\alpha_{2}} f),\partial^{\alpha}f)\) for all combinations of \(\alpha_{1}+\alpha_{2}=\alpha\) with \(|\alpha|\leq N\). We first derive \(\tilde{\Gamma}_{2}^{\lambda,T}(g,h)+\tilde{\Gamma}_{2}^{\lambda,T}(h,g)\in( \tilde{\mathcal{L}}^{\lambda,T})^{\perp}\). Recalling (2.4), we have \[\langle\tilde{\Gamma}_{2}^{\lambda,T}(g,h)+\tilde{\Gamma}_{2}^{\lambda,T}(h,g ),f\rangle=\int B(\tilde{\Pi}_{2}(g,h)+\tilde{\Pi}_{2}(h,g))N^{-1}f\mathrm{d}V.\] Recall (2.5) for the definition of \(\tilde{\Pi}_{2}\). By (3.20) and (3.21), we have \[\langle\tilde{\Gamma}_{2}^{\lambda,T}(g,h)+\tilde{\Gamma}_{2}^{\lambda,T}(h,g ),f\rangle=\frac{1}{4}\int B(\tilde{\Pi}_{2}(g,h)+\tilde{\Pi}_{2}(h,g)) \mathcal{S}(N^{-1}f)\mathrm{d}V.\] Therefore, for any \(f\in\tilde{\mathcal{L}}^{\lambda,T}\), we have \[\langle\tilde{\Gamma}_{2}^{\lambda,T}(g,h)+\tilde{\Gamma}_{2}^{\lambda,T}(h,g ),f\rangle=0. \tag{3.47}\] Note that \[\partial^{\alpha}\tilde{\Gamma}_{2}^{\lambda,T}(f,f)=\sum_{\alpha_{1}+\alpha_ {2}=\alpha}C_{\alpha}^{\alpha_{1}}\tilde{\Gamma}_{2}^{\lambda,T}(\partial^{ \alpha_{1}}f,\partial^{\alpha_{2}}f)=\frac{1}{2}\sum_{\alpha_{1}+\alpha_{2}= \alpha}C_{\alpha}^{\alpha_{1}}(\tilde{\Gamma}_{2}^{\lambda,T}(\partial^{ \alpha_{1}}f,\partial^{\alpha_{2}}f)+\tilde{\Gamma}_{2}^{\lambda,T}(\partial^{ \alpha_{2}}f,\partial^{\alpha_{1}}f).\] From this together with (3.47), for any \(\varphi\in\tilde{\mathcal{L}}^{\lambda,T}\), we get \[\langle\partial^{\alpha}\tilde{\Gamma}_{2}^{\lambda,T}(f,f),\varphi\rangle=0. 
\tag{3.48}\] By (3.48), we have \[(\partial^{\alpha}\tilde{\Gamma}_{2}^{\lambda,T}(f,f),\partial^{\alpha}f)=( \partial^{\alpha}\tilde{\Gamma}_{2}^{\lambda,T}(f,f),\partial^{\alpha}f_{2}).\] We will prove that for all combinations of \(\alpha_{1}+\alpha_{2}=\alpha\) with \(0\leq|\alpha|\leq N\), the following inequality holds \[|(\tilde{\Gamma}_{2}^{\lambda,T}(\partial^{\alpha_{1}}f,\partial^{\alpha_{2} }f),\partial^{\alpha}f_{2})|\lesssim C_{2,\lambda,T}\left(\|f\|_{H_{x}^{2}L^{2 }}\mathcal{D}_{N}^{\frac{1}{2}}(f)+1_{N\geq 3}\|f\|_{H_{x}^{N}L^{2}} \mathcal{D}_{N-1}^{\frac{1}{2}}(f)\right)\|f_{2}\|_{H_{x}^{N}L_{1/2}^{2}}.\] We first deal with the case \(|\alpha|=0\). By (3.45) and (3.42), we get \[|(\tilde{\Gamma}_{2}^{\lambda,T}(f,f),f_{2})|\lesssim C_{2,\lambda,T}\|f\|_{H _{x}^{1}L^{2}}\|\nabla_{x}f\|_{L_{1/2}^{2}L_{1/2}^{2}}\|f_{2}\|_{L_{2}^{2}L_{ 1/2}^{2}}\lesssim C_{2,\lambda,T}\|f\|_{H_{x}^{1}L^{2}}\mathcal{D}_{1}^{\frac{1 }{2}}(f)\|f_{2}\|_{L_{x}^{2}L_{1/2}^{2}}. \tag{3.49}\] Now it remains to consider \(1\leq|\alpha|\leq N\). By (3.44), it suffices to prove that for all combinations of \(\alpha_{1}+\alpha_{2}=\alpha\) with \(1\leq|\alpha|\leq N\), the following inequality \[\|\partial^{\alpha_{1}}f\|_{H_{x}^{-}L^{2}}\|\partial^{\alpha_{2}}f\|_{H_{x}^ {1}L_{1/2}^{2}}\|\partial^{\alpha}f_{2}\|_{L_{2}^{2}L_{1/2}^{2}}\lesssim\|f\|_{ H_{x}^{2}L^{2}}\mathcal{D}_{N}^{\frac{1}{2}}(f)\|f_{2}\|_{H_{x}^{N}L_{1/2}^{2}}+1_{N \geq 3}\|f\|_{H_{x}^{N}L^{2}}\mathcal{D}_{N-1}^{\frac{1}{2}}(f)\|f_{2}\|_{H_{x}^{N} L_{1/2}^{2}}.\] holds for some \(m,n\) verifying \(m,n\in\mathbb{N},m+n=2\). If \(\alpha_{2}=\alpha\), Taking \((m,n)=(2,0)\) and using (3.42), we have \[\|f\|_{H_{x}^{2}L^{2}}\|\partial^{\alpha}f\|_{H_{x}^{0}L_{1/2}^{2}}\|\partial^{ \alpha}f_{2}\|_{L_{2}^{2}L_{1/2}^{2}}\lesssim\|f\|_{H_{x}^{2}L^{2}}\mathcal{D}_{ N}^{\frac{1}{2}}(f)\|f_{2}\|_{H_{x}^{N}L_{1/2}^{2}}\] Therefore in the following we only need to consider all combinations of \(\alpha_{1}+\alpha_{2}=\alpha\) with \(1\leq|\alpha|\leq N,|\alpha_{2}|\leq|\alpha|-1\). The following is divided into three cases: \(|\alpha|=1;|\alpha|=2;3\leq|\alpha|\leq N\). _Case 1:_\(|\alpha|=1\). In this case, there remains only one case: \(\alpha_{2}=0,\alpha_{1}=\alpha\). Taking \((m,n)=(0,2)\), using (3.40), (3.41) and (3.42), we have \[\|\partial^{\alpha}f\|_{H_{x}^{0}L^{2}}\|f\|_{H_{x}^{2}L_{1/2}^{2}} \|\partial^{\alpha}f_{2}\|_{L_{x}^{2}L_{1/2}^{2}}\] \[\lesssim \|\partial^{\alpha}f\|_{H_{x}^{0}L^{2}}\big{(}\|f\|_{H_{x}^{1}L_{1/2 }^{2}}+\|f\|_{H_{x}^{1}L_{1/2}^{2}}\big{)}\|\partial^{\alpha}f_{2}\|_{L_{x}^{2}L_ {1/2}^{2}}\] \[\lesssim \mathcal{D}_{1}^{\frac{1}{2}}(f)\|f\|_{H_{x}^{2}L^{2}}\|f\|_{H_{ x}^{1}L_{1/2}^{2}}+\|f\|_{H_{x}^{1}L^{2}}\mathcal{D}_{2}^{\frac{1}{2}}(f)\|f_{2}\|_{H_{x}^{1}L_{1/2}^{ 2}}\lesssim\|f\|_{H_{x}^{2}L^{2}}\mathcal{D}_{2}^{\frac{1}{2}}(f)\|f_{2}\|_{H_{x}^ {1}L_{1/2}^{2}}\] _Case 2:_\(|\alpha|=2\). In this case, it remains to consider two subcases: \(|\alpha_{2}|=0;|\alpha_{2}|=1\). In the first subcase \(|\alpha_{2}|=0,\alpha_{1}=\alpha\). Taking \((m,n)=(0,2)\), similarly to (3.50), we get \[\|\partial^{\alpha}f\|_{H_{x}^{0}L^{2}}\|f\|_{H_{x}^{1}L_{1/2}^{2}}\|\partial^{ \alpha}f_{2}\|_{L_{x}^{2}L_{1/2}^{2}}\lesssim\|f\|_{H_{x}^{2}L^{2}}\mathcal{D}_{ 2}^{\frac{1}{2}}(f)\|f\|_{H_{x}^{2}L_{1/2}^{2}} \tag{3.51}\] In the second subcase \(|\alpha_{2}|=|\alpha_{1}|=1\). 
Taking \((m,n)=(1,1)\) and using (3.42), we have \[\|\partial^{\alpha_{1}}f\|_{H_{x}^{ _Case 3: \(3\leq|\alpha|\leq N\)._ In this case, it remains to consider three subcases: \(|\alpha_{2}|=0;|\alpha_{2}|=1;2\leq|\alpha_{2}|\leq|\alpha|-1\). In the first subcase \(|\alpha_{2}|=0,\alpha_{1}=\alpha\). Taking \((m,n)=(0,2)\), similarly to (3.50), we get \[\|\partial^{\alpha}f\|_{H_{x}^{0}L^{2}}\|f\|_{H_{x}^{2}L_{1/2}^{2}}\|\partial^ {\alpha}f_{2}\|_{L_{x}^{2}L_{1/2}^{2}}\lesssim\|f\|_{H_{x}^{2}L^{2}}\mathcal{D }_{N}^{\frac{1}{2}}(f)\|f_{2}\|_{H_{x}^{N}L_{1/2}^{2}}+\|f\|_{H_{x}^{N}L^{2}} \mathcal{D}_{2}^{\frac{1}{2}}(f)\|f_{2}\|_{H_{x}^{N}L_{1/2}^{2}}. \tag{3.52}\] In the second subcase \(|\alpha_{2}|=1,|\alpha_{1}|=N-1\). Taking \((m,n)=(1,1)\) and using (3.42), we have \[\|\partial^{\alpha_{1}}f\|_{H_{x}^{1}L^{2}}\|\partial^{\alpha_{2}}f\|_{H_{x}^ {1}L_{1/2}^{2}}\|\partial^{\alpha}f_{2}\|_{L_{x}^{2}L_{1/2}^{2}}\lesssim\|f\| _{H_{x}^{N}L^{2}}\mathcal{D}_{2}^{\frac{1}{2}}(f)\|f\|_{H_{x}^{N}L_{1/2}^{2}}.\] In the third subcase \(2\leq|\alpha_{2}|\leq|\alpha|-1\leq N-1,|\alpha_{1}|\leq N-2\). Taking \((m,n)=(2,0)\) and using (3.42), we have \[\|\partial^{\alpha_{1}}f\|_{H_{x}^{2}L^{2}}\|\partial^{\alpha_{2}}f\|_{H_{x}^ {0}L_{1/2}^{2}}\|\partial^{\alpha}f_{2}\|_{L_{x}^{2}L_{1/2}^{2}}\lesssim\|f\| _{H_{x}^{N}L^{2}}\mathcal{D}_{N-1}^{\frac{1}{2}}(f)\|f\|_{H_{x}^{N}L_{1/2}^{2}}.\] Patching together all the above estimates, observing that _Case 3_ happens only if \(N\geq 3\), we obtain (3.46). From Proposition 3.2, by Holder's inequality and Sobolev embedding inequalities, estimates of the trilinear term \(\tilde{\Gamma}_{3}^{\lambda,T}\) in the full space \((x,v)\) are given as \[|(\tilde{\Gamma}_{3}^{\lambda,T}(g,h,\varrho),f)|\lesssim C_{3, \lambda,T}\|g\|_{H_{x}^{1}L^{2}}\|h\|_{H_{x}^{\tau_{2}}L_{1/2}^{2}}\|\varrho\| _{H_{x}^{\tau_{3}}L^{2}}\|f\|_{L_{x}^{2}L_{1/2}^{2}}, \tag{3.53}\] \[|(\tilde{\Gamma}_{2}^{\lambda,T}(g,h),f)|\lesssim C_{3,\lambda,T} \|\nabla_{x}g\|_{L_{x}^{2}L^{2}}\|\nabla_{x}h\|_{L_{x}^{2}L_{1/2}^{2}}\| \nabla_{x}\varrho\|_{L_{x}^{2}L^{2}}\|f\|_{L_{x}^{2}L_{1/2}^{2}}, \tag{3.54}\] where \(r_{1},r_{2},r_{3}\in\mathbb{N},0\leq r_{1},r_{2},r_{3}\leq 2,r_{1}+r_{2}+r_{3}=4\). Based on (3.53) and (3.54), by making a suitable choice of parameters \(r_{1},r_{2},r_{3}\) to deal with different distribution of derivative order, we get the following estimate. **Theorem 3.3**.: _Let \(N\geq 2\). It holds that_ \[|\sum_{|\alpha|\leq N}(\partial^{\alpha}\tilde{\Gamma}_{3}^{\lambda,T}(f,f,f),\partial^{\alpha}f)|\lesssim C_{3,\lambda,T}\left(\|f\|_{H_{x}^{2}L^{2}}^{2} \mathcal{D}_{N}(f)+1_{N\geq 3}C_{N}\|f\|_{H_{x}^{N-1}L^{2}}\|f\|_{H_{x}^{N}L^{2}} \mathcal{D}_{\tilde{N}-1}^{\frac{1}{2}}(f)\mathcal{D}_{\tilde{N}}^{\frac{1}{2} }(f)\right) \tag{3.55}\] Proof.: By binomial formula, we need to consider \((\tilde{\Gamma}_{3}^{\lambda,T}(\partial^{\alpha_{1}}f,\partial^{\alpha_{2}}f,\partial^{\alpha_{3}}f),\partial^{\alpha}f)\) for all combinations of \(\alpha_{1}+\alpha_{2}+\alpha_{3}=\alpha\) with \(|\alpha|\leq N\). For \(|\alpha|=0\), it is easy to check \[(\tilde{\Gamma}_{3}^{\lambda,T}(f,f,f),f)=(\tilde{\Gamma}_{3}^{\lambda,T}(f,f, f),f_{2}).\] By taking (3.54) and (3.42), we get \[|(\tilde{\Gamma}_{3}^{\lambda,T}(f,f,f),f_{2})|\lesssim C_{3, \lambda,T}\|\nabla_{x}f\|_{L_{x}^{2}L^{2}}^{2}\|\nabla_{x}f\|_{L_{x}^{2}L_{1/2} ^{2}}\|f_{2}\|_{L_{x}^{2}L_{1/2}^{2}}\lesssim C_{3,\lambda,T}\|f\|_{H_{x}^{1} L^{2}}^{2}\mathcal{D}_{1}(f).\] Now it remains to consider \(1\leq|\alpha|\leq N\). 
By (3.53), it suffices to prove that for all combinations of \(\alpha_{1}+\alpha_{2}+\alpha_{3}=\alpha\) with \(1\leq|\alpha|\leq N\), the inequality \[\|\partial^{\alpha_{1}}f\|_{H_{x}^{r_{1}}L^{2}}\|\partial^{\alpha_{2}}f\|_{H_{x}^{r_{2}}L_{1/2}^{2}}\|\partial^{\alpha_{3}}f\|_{H_{x}^{r_{3}}L^{2}}\|\partial^{\alpha}f\|_{L_{x}^{2}L_{1/2}^{2}}\lesssim\|f\|_{H_{x}^{2}L^{2}}^{2}\mathcal{D}_{N}(f)+1_{N\geq 3}C_{N}\|f\|_{H_{x}^{N-1}L^{2}}\|f\|_{H_{x}^{N}L^{2}}\mathcal{D}_{N-1}^{\frac{1}{2}}(f)\mathcal{D}_{N}^{\frac{1}{2}}(f)\] holds for some \(r_{1},r_{2},r_{3}\in\mathbb{N}\) verifying \(0\leq r_{1},r_{2},r_{3}\leq 2,r_{1}+r_{2}+r_{3}=4\). If \(\alpha_{2}=\alpha\), taking \((r_{1},r_{2},r_{3})=(2,0,2)\) and using (3.42), we have \[\|f\|_{H_{x}^{2}L^{2}}\|\partial^{\alpha}f\|_{H_{x}^{0}L_{1/2}^{2}}\|f\|_{H_{x}^{2}L^{2}}\|\partial^{\alpha}f\|_{L_{x}^{2}L_{1/2}^{2}}\lesssim\|f\|_{H_{x}^{2}L^{2}}^{2}\mathcal{D}_{N}(f).\] Therefore in the following we only need to consider all combinations of \(\alpha_{1}+\alpha_{2}+\alpha_{3}=\alpha\) with \(1\leq|\alpha|\leq N,|\alpha_{2}|\leq|\alpha|-1\). Note that \(\alpha_{1}\) and \(\alpha_{3}\) play the same role, so we can always assume \(|\alpha_{1}|\leq|\alpha_{3}|\). The discussion is divided into three cases: \(|\alpha|=1\); \(|\alpha|=2\); \(3\leq|\alpha|\leq N\). _Case 1: \(|\alpha|=1\)._ In this case, there remains only one subcase: \(\alpha_{1}=\alpha_{2}=0,\alpha_{3}=\alpha\). Taking \((r_{1},r_{2},r_{3})=(2,2,0)\), using (3.40), (3.41) and (3.42), we have \[\|f\|_{H_{x}^{2}L^{2}}\|f\|_{H_{x}^{2}L_{1/2}^{2}}\|\partial^{\alpha}f\|_{H_{x}^{0}L^{2}}\|\partial^{\alpha}f\|_{L_{x}^{2}L_{1/2}^{2}}\lesssim\|f\|_{H_{x}^{2}L^{2}}\big(\|f_{1}\|_{H_{x}^{2}L_{1/2}^{2}}+\|f_{2}\|_{H_{x}^{2}L_{1/2}^{2}}\big)\|\partial^{\alpha}f\|_{H_{x}^{0}L^{2}}\|\partial^{\alpha}f\|_{L_{x}^{2}L_{1/2}^{2}}\lesssim\|f\|_{H_{x}^{2}L^{2}}^{2}\mathcal{D}_{2}(f). \tag{3.56}\] _Case 2: \(|\alpha|=2\)._ In this case, it remains to consider two subcases: \(|\alpha_{2}|=0,|\alpha_{1}|\leq 1\); \(|\alpha_{2}|=1,|\alpha_{1}|=0\). In the first subcase \(|\alpha_{2}|=0,|\alpha_{1}|\leq 1\leq|\alpha_{3}|,|\alpha_{1}|+|\alpha_{3}|\leq 2\). Taking \((r_{1},r_{2},r_{3})=(2-|\alpha_{1}|,2,|\alpha_{1}|)\) and using the same argument as in (3.56), we get \[\|\partial^{\alpha_{1}}f\|_{H^{2-|\alpha_{1}|}_{x}L^{2}}\|f\|_{H^{2}_{x}L^{2}_{1/2}}\|\partial^{\alpha_{3}}f\|_{H^{|\alpha_{1}|}_{x}L^{2}}\|\partial^{\alpha}f\|_{L^{2}_{x}L^{2}_{1/2}}\lesssim\|f\|_{H^{2}_{x}L^{2}}^{2}\mathcal{D}_{2}(f).\] In the second subcase \(|\alpha_{2}|=|\alpha_{3}|=1,|\alpha_{1}|=0\). Taking \((r_{1},r_{2},r_{3})=(2,1,1)\) and using (3.42), we have \[\|f\|_{H^{2}_{x}L^{2}}\|\partial^{\alpha_{2}}f\|_{H^{1}_{x}L^{2}_{1/2}}\|\partial^{\alpha_{3}}f\|_{H^{1}_{x}L^{2}}\|\partial^{\alpha}f\|_{L^{2}_{x}L^{2}_{1/2}}\lesssim\|f\|_{H^{2}_{x}L^{2}}^{2}\mathcal{D}_{2}(f).\] _Case 3: \(3\leq|\alpha|\leq N\)._ In this case, it remains to consider five subcases: \(|\alpha_{2}|=|\alpha|-1\); \(|\alpha_{2}|=|\alpha|-2\); \(1\leq|\alpha_{2}|\leq|\alpha|-3\); \(|\alpha_{2}|=0,|\alpha_{1}|\leq 2\); \(|\alpha_{2}|=0,|\alpha_{1}|\geq 3\). In the first subcase, \(|\alpha_{2}|=|\alpha|-1\geq 2\), then \(|\alpha_{1}|+|\alpha_{3}|=1\); by symmetry we may take \(|\alpha_{1}|=1,|\alpha_{3}|=0\).
Taking \((r_{1},r_{2},r_{3})=(1,1,2)\) and using (3.42), we get \[\|\partial^{\alpha_{1}}f\|_{H^{1}_{x}L^{2}}\|\partial^{\alpha_{2}}f\|_{H^{1}_{x}L^{2}_{1/2}}\|f\|_{H^{2}_{x}L^{2}}\|\partial^{\alpha}f\|_{L^{2}_{x}L^{2}_{1/2}}\lesssim\|f\|_{H^{2}_{x}L^{2}}^{2}\mathcal{D}_{N}(f).\] In the second subcase, \(|\alpha_{2}|=|\alpha|-2\geq 1\), then \(|\alpha_{1}|+|\alpha_{3}|\leq 2\). Taking \((r_{1},r_{2},r_{3})=(2-|\alpha_{1}|,2,|\alpha_{1}|)\) and using (3.42), we get \[\|\partial^{\alpha_{1}}f\|_{H^{2-|\alpha_{1}|}_{x}L^{2}}\|\partial^{\alpha_{2}}f\|_{H^{2}_{x}L^{2}_{1/2}}\|\partial^{\alpha_{3}}f\|_{H^{|\alpha_{1}|}_{x}L^{2}}\|\partial^{\alpha}f\|_{L^{2}_{x}L^{2}_{1/2}}\lesssim\|f\|_{H^{2}_{x}L^{2}}^{2}\mathcal{D}_{N}(f).\] In the third subcase, \(1\leq|\alpha_{2}|\leq|\alpha|-3\), then \(N\geq 4,|\alpha_{1}|\leq\frac{N}{2}\leq|\alpha_{3}|\leq N-1\). Note that \(\frac{N}{2}+2\leq N\). Taking \((r_{1},r_{2},r_{3})=(2,2,0)\) and using (3.42), we get \[\|\partial^{\alpha_{1}}f\|_{H^{2}_{x}L^{2}}\|\partial^{\alpha_{2}}f\|_{H^{2}_{x}L^{2}_{1/2}}\|\partial^{\alpha_{3}}f\|_{H^{0}_{x}L^{2}}\|\partial^{\alpha}f\|_{L^{2}_{x}L^{2}_{1/2}}\lesssim\|f\|_{H^{N}_{x}L^{2}}\mathcal{D}_{N-1}^{\frac{1}{2}}(f)\|f\|_{H^{N-1}_{x}L^{2}}\mathcal{D}_{N}^{\frac{1}{2}}(f).\] In the fourth subcase \(|\alpha_{2}|=0,|\alpha_{1}|\leq 2,|\alpha_{3}|=|\alpha|-|\alpha_{1}|\geq 1\). Taking \((r_{1},r_{2},r_{3})=(2-|\alpha_{1}|,2,|\alpha_{1}|)\), similarly to (3.56), we get \[\|\partial^{\alpha_{1}}f\|_{H^{2-|\alpha_{1}|}_{x}L^{2}}\|f\|_{H^{2}_{x}L^{2}_{1/2}}\|\partial^{\alpha_{3}}f\|_{H^{|\alpha_{1}|}_{x}L^{2}}\|\partial^{\alpha}f\|_{L^{2}_{x}L^{2}_{1/2}}\lesssim\|f\|_{H^{2}_{x}L^{2}}^{2}\mathcal{D}_{N}(f)+\|f\|_{H^{2}_{x}L^{2}}\|f\|_{H^{N}_{x}L^{2}}\mathcal{D}_{2}^{\frac{1}{2}}(f)\mathcal{D}_{N}^{\frac{1}{2}}(f).\] In the fifth subcase \(|\alpha_{2}|=0,|\alpha_{1}|\geq 3\). Note that \(N\geq 6,3\leq|\alpha_{1}|\leq\frac{N}{2}\leq|\alpha_{3}|\leq N-3\). Taking \((r_{1},r_{2},r_{3})=(0,2,2)\), similarly to (3.56), we get \[\|\partial^{\alpha_{1}}f\|_{H^{0}_{x}L^{2}}\|f\|_{H^{2}_{x}L^{2}_{1/2}}\|\partial^{\alpha_{3}}f\|_{H^{2}_{x}L^{2}}\|\partial^{\alpha}f\|_{L^{2}_{x}L^{2}_{1/2}}\lesssim\|f\|_{H^{N-1}_{x}L^{2}}\|f\|_{H^{N}_{x}L^{2}}\mathcal{D}_{N-1}^{\frac{1}{2}}(f)\mathcal{D}_{N}^{\frac{1}{2}}(f).\] Patching together all the above estimates, and observing that _Case 3_ happens only if \(N\geq 3\), we obtain (3.55). By revisiting the above proofs of Theorem 3.2 and Theorem 3.3, we can similarly derive the following two results. Let \(N\geq 2\) and let \(\psi\) be a function of the variable \(v\) satisfying \(|\psi|_{L^{2}_{1/2}}\lesssim 1\); then it holds that \[\sum_{|\alpha|\leq N}|\langle\partial^{\alpha}\tilde{\Gamma}_{2}^{\lambda,T}(f,f),\psi\rangle|_{L^{2}_{x}}^{2}\lesssim C_{2,\lambda,T}^{2}\left(\|f\|_{H^{2}_{x}L^{2}}^{2}\mathcal{D}_{N}(f)+1_{N\geq 3}C_{N}\|f\|_{H^{N}_{x}L^{2}}^{2}\mathcal{D}_{N-1}(f)\right), \tag{3.57}\] \[\sum_{|\alpha|\leq N}|\langle\partial^{\alpha}\tilde{\Gamma}_{3}^{\lambda,T}(f,f,f),\psi\rangle|_{L^{2}_{x}}^{2}\lesssim C_{3,\lambda,T}^{2}\left(\|f\|_{H^{2}_{x}L^{2}}^{4}\mathcal{D}_{N}(f)+1_{N\geq 3}C_{N}\|f\|_{H^{N-1}_{x}L^{2}}^{2}\|f\|_{H^{N}_{x}L^{2}}^{2}\mathcal{D}_{N-1}(f)\right). \tag{3.58}\]

## 4. A priori estimate and global well-posedness

This section is devoted to the proof of Theorem 2.1. For fixed \(\lambda,T>0,0<\epsilon<1\), it is not difficult to derive local existence for the problem (2.2), so we focus on the uniform-in-\(\epsilon\) _a priori_ estimate for the equation (2.2).
This estimate is given as Theorem 4.1. Then by a standard continuity argument the global well-posedness result in Theorem 2.1 can be established.

### A priori estimate of a general equation

This subsection is devoted to some uniform-in-\(\epsilon\) _a priori_ estimates of the following equation \[\partial_{t}f+\frac{1}{\epsilon}T^{1/2}v\cdot\nabla_{x}f+\frac{1}{\epsilon^{2}}\tilde{\mathcal{L}}^{\lambda,T}f=g,\quad t>0,x\in\mathbb{R}^{3},v\in\mathbb{R}^{3}, \tag{4.1}\] where \(g\) is a given function. Let \(f\) be a solution to (4.1). Recalling the formula (2.25) of the projection operator \(\tilde{\mathbb{P}}_{\lambda}\), we denote \[(\tilde{\mathbb{P}}_{\lambda}f)(t,x,v)=(a(t,x)+b(t,x)\cdot v+c(t,x)|v|^{2})N, \tag{4.2}\] where \[(a,b,c):=(a_{\lambda}^{f(t,x,\cdot)},b_{\lambda}^{f(t,x,\cdot)},c_{\lambda}^{f(t,x,\cdot)}). \tag{4.3}\] Here we recall (2.27) for the definition of \((a_{\lambda}^{f(t,x,\cdot)},b_{\lambda}^{f(t,x,\cdot)},c_{\lambda}^{f(t,x,\cdot)})\) for fixed \(t,x\). Note that in (4.3) we omit \(f,\lambda,T\) for brevity. However, we should always keep in mind that \((a,b,c)\) are functions of \((t,x)\) originating from the solution \(f\) to (4.1) for fixed \(\lambda,T\). We set \(f_{1}:=\tilde{\mathbb{P}}_{\lambda}f\) and \(f_{2}:=f-\tilde{\mathbb{P}}_{\lambda}f\). We first recall some basics of the macro-micro decomposition. Plugging the macro-micro decomposition \(f=f_{1}+f_{2}\) into (4.1) and using the fact \(\tilde{\mathcal{L}}^{\lambda,T}f_{1}=0\), we get \[\epsilon\partial_{t}f_{1}+T^{1/2}v\cdot\nabla_{x}f_{1}=-\epsilon\partial_{t}f_{2}-T^{1/2}v\cdot\nabla_{x}f_{2}-\frac{1}{\epsilon}\tilde{\mathcal{L}}^{\lambda,T}f_{2}+\epsilon g. \tag{4.4}\] Recalling (4.2), the left-hand side of (4.4) reads \[(\epsilon\partial_{t}a+\sum_{i=1}^{3}\epsilon\partial_{t}b_{i}v_{i}+\epsilon\partial_{t}c|v|^{2})N+T^{1/2}(\sum_{i=1}^{3}\partial_{i}av_{i}+\sum_{i,j=1}^{3}\partial_{i}b_{j}v_{i}v_{j}+\sum_{i=1}^{3}\partial_{i}cv_{i}|v|^{2})N. \tag{4.5}\] Here \(\partial_{i}=\partial_{x_{i}}\) for \(i=1,2,3\), \(b=(b_{1},b_{2},b_{3})\) and \(v=(v_{1},v_{2},v_{3})\). We order the \(13\) functions of \(v\) in (4.5) as \[e_{1}=N,\quad e_{2}=v_{1}N,\quad e_{3}=v_{2}N,\quad e_{4}=v_{3}N,\quad e_{5}=v_{1}^{2}N,\quad e_{6}=v_{2}^{2}N,\quad e_{7}=v_{3}^{2}N,\] \[e_{8}=v_{1}v_{2}N,\quad e_{9}=v_{2}v_{3}N,\quad e_{10}=v_{3}v_{1}N,\quad e_{11}=|v|^{2}v_{1}N,\quad e_{12}=|v|^{2}v_{2}N,\quad e_{13}=|v|^{2}v_{3}N.\] We emphasize that \(e_{i}\) depends on \(\lambda\) through \(N=N_{\lambda}\). We also order the \(13\) functions of \((t,x)\) in (4.5) as \[x_{1}:=\epsilon\partial_{t}a,\quad x_{2}:=\epsilon\partial_{t}b_{1}+T^{1/2}\partial_{1}a,\quad x_{3}:=\epsilon\partial_{t}b_{2}+T^{1/2}\partial_{2}a,\quad x_{4}:=\epsilon\partial_{t}b_{3}+T^{1/2}\partial_{3}a,\] \[x_{5}:=\epsilon\partial_{t}c+T^{1/2}\partial_{1}b_{1},\quad x_{6}:=\epsilon\partial_{t}c+T^{1/2}\partial_{2}b_{2},\quad x_{7}:=\epsilon\partial_{t}c+T^{1/2}\partial_{3}b_{3},\] \[x_{8}:=T^{1/2}(\partial_{1}b_{2}+\partial_{2}b_{1}),\quad x_{9}:=T^{1/2}(\partial_{2}b_{3}+\partial_{3}b_{2}),\quad x_{10}:=T^{1/2}(\partial_{3}b_{1}+\partial_{1}b_{3}),\] \[x_{11}:=T^{1/2}\partial_{1}c,\quad x_{12}:=T^{1/2}\partial_{2}c,\quad x_{13}:=T^{1/2}\partial_{3}c.\] We use \(\mathsf{T}\) to denote the vector transpose.
For simplicity, we define two column vectors \[E:=(e_{1},\cdots,e_{13})^{\mathsf{T}},\quad X:=(x_{1},\cdots,x_{13})^{\mathsf{T}}.\] With these two column vectors, (4.5) is \(\epsilon\partial_{t}f_{1}+T^{1/2}v\cdot\nabla_{x}f_{1}=E^{\mathsf{T}}X\) and thus (4.4) is written as \[E^{\mathsf{T}}X=-\epsilon\partial_{t}f_{2}-T^{1/2}v\cdot\nabla_{x}f_{2}-\frac{1}{\epsilon}\tilde{\mathcal{L}}^{\lambda,T}f_{2}+\epsilon g.\] Taking the inner product with \(E\) in the space \(L^{2}(\mathbb{R}^{3})\), since \(X\) depends on \((t,x)\) but not on \(v\), we get \[\langle E,E^{\mathsf{T}}\rangle X=\langle E,-\epsilon\partial_{t}f_{2}-T^{1/2}v\cdot\nabla_{x}f_{2}-\frac{1}{\epsilon}\tilde{\mathcal{L}}^{\lambda,T}f_{2}+\epsilon g\rangle.\] We will see soon that the \(13\times 13\) matrix \(\langle E,E^{\mathsf{T}}\rangle=(\langle e_{i},e_{j}\rangle)_{1\leq i,j\leq 13}\) is invertible and so \[X=(\langle E,E^{\mathsf{T}}\rangle)^{-1}\langle E,-\epsilon\partial_{t}f_{2}-T^{1/2}v\cdot\nabla_{x}f_{2}-\frac{1}{\epsilon}\tilde{\mathcal{L}}^{\lambda,T}f_{2}+\epsilon g\rangle. \tag{4.6}\] We now prove that \(\langle E,E^{\mathsf{T}}\rangle\) is invertible and give some estimate on its inverse. **Lemma 4.1**.: _Let \(\lambda>0\). The matrix \(\langle E,E^{\mathsf{T}}\rangle\) is invertible and_ \[|(\langle E,E^{\mathsf{T}}\rangle)^{-1}E|_{L_{2}^{2}}\lesssim e^{\lambda/2}. \tag{4.7}\] Proof.: Recalling (2.13), by directly computing \(\langle e_{i},e_{j}\rangle\) for \(1\leq i,j\leq 13\), we obtain \[\langle E,E^{\mathsf{T}}\rangle=\begin{pmatrix}m_{0}&0_{1\times 3}&\frac{m_{2}}{3}1_{1\times 3}&0_{1\times 3}&0_{1\times 3}\\ 0_{3\times 1}&\frac{m_{2}}{3}I_{3\times 3}&0_{3\times 3}&0_{3\times 3}&\frac{m_{4}}{3}I_{3\times 3}\\ \frac{m_{2}}{3}1_{3\times 1}&0_{3\times 3}&\frac{m_{4}}{15}A&0_{3\times 3}&0_{3\times 3}\\ 0_{3\times 1}&0_{3\times 3}&0_{3\times 3}&\frac{m_{4}}{15}I_{3\times 3}&0_{3\times 3}\\ 0_{3\times 1}&\frac{m_{4}}{3}I_{3\times 3}&0_{3\times 3}&0_{3\times 3}&\frac{m_{6}}{3}I_{3\times 3}\end{pmatrix},\] where \[A=\begin{pmatrix}3&1&1\\ 1&3&1\\ 1&1&3\end{pmatrix}.\] Note that \(\langle E,E^{\mathsf{T}}\rangle\) is represented as a \(5\times 5\) block matrix. We then calculate the determinant of \(\langle E,E^{\mathsf{T}}\rangle\) and find \[\det(\langle E,E^{\mathsf{T}}\rangle)=\frac{4m_{4}^{5}(m_{0}m_{4}-m_{2}^{2})(m_{2}m_{6}-m_{4}^{2})^{3}}{1660753125}.\] By the Cauchy-Schwarz inequality, \(\det(\langle E,E^{\mathsf{T}}\rangle)>0\) and so \(\langle E,E^{\mathsf{T}}\rangle\) is invertible. Then we calculate the inverse and find \[(\langle E,E^{\mathsf{T}}\rangle)^{-1}=\begin{pmatrix}\frac{m_{4}}{m_{0}m_{4}-m_{2}^{2}}&0_{1\times 3}&-\frac{m_{2}}{m_{0}m_{4}-m_{2}^{2}}1_{1\times 3}&0_{1\times 3}&0_{1\times 3}\\ 0_{3\times 1}&\frac{3m_{6}}{m_{2}m_{6}-m_{4}^{2}}I_{3\times 3}&0_{3\times 3}&0_{3\times 3}&-\frac{3m_{4}}{m_{2}m_{6}-m_{4}^{2}}I_{3\times 3}\\ -\frac{m_{2}}{m_{0}m_{4}-m_{2}^{2}}1_{3\times 1}&0_{3\times 3}&A&0_{3\times 3}&0_{3\times 3}\\ 0_{3\times 1}&0_{3\times 3}&0_{3\times 3}&\frac{15}{m_{4}}I_{3\times 3}&0_{3\times 3}\\ 0_{3\times 1}&-\frac{3m_{4}}{m_{2}m_{6}-m_{4}^{2}}I_{3\times 3}&0_{3\times 3}&0_{3\times 3}&\frac{3m_{2}}{m_{2}m_{6}-m_{4}^{2}}I_{3\times 3}\end{pmatrix},\] where \[A=\begin{pmatrix}a&b&b\\ b&a&b\\ b&b&a\end{pmatrix},\quad a=\frac{6m_{0}m_{4}-5m_{2}^{2}}{m_{4}(m_{0}m_{4}-m_{2}^{2})},\quad b=-\frac{3m_{0}m_{4}-5m_{2}^{2}}{2m_{4}(m_{0}m_{4}-m_{2}^{2})}.\] By Lemma 2.1, \[m_{0}\sim e^{-\lambda}(1-e^{-\lambda})^{-1/2},\quad m_{2},m_{4},m_{6}\sim e^{-\lambda}.
\tag{4.8}\] Then it is easy to derive \[m_{0}m_{4}-m_{2}^{2}\sim e^{-2\lambda}(1-e^{-\lambda})^{-1/2},\quad m_{2}m_{6}-m_{4}^{2}\sim e^{-2\lambda}. \tag{4.9}\] Let \((\langle E,E^{\mathsf{T}}\rangle)^{-1}=(a_{ij})_{1\leq i,j\leq 13}\). By (4.8) and (4.9), for the elements in the first column or row, we have \[|a_{1j}|,|a_{j1}|\lesssim e^{\lambda}(1-e^{-\lambda})^{1/2}\text{ for }1\leq j\leq 13. \tag{4.10}\] For the elements outside the first column and row, we have \[|a_{ij}|\lesssim e^{\lambda}\text{ for }2\leq i,j\leq 13. \tag{4.11}\] Note that \((\langle E,E^{\mathsf{T}}\rangle)^{-1}E=(\sum_{j=1}^{13}a_{ij}e_{j})_{1\leq i\leq 13}\), and so for \(1\leq i\leq 13\), \[|\sum_{j=1}^{13}a_{ij}e_{j}|_{L_{2}^{2}}\lesssim\sum_{j=1}^{13}|a_{ij}||e_{j}|_{L_{2}^{2}}=|a_{i1}||e_{1}|_{L_{2}^{2}}+\sum_{j=2}^{13}|a_{ij}||e_{j}|_{L_{2}^{2}}.\] By (2.14) and (4.10), we have \[|a_{i1}||e_{1}|_{L_{2}^{2}}\lesssim e^{\lambda}(1-e^{-\lambda})^{1/2}\times e^{-\lambda/2}(1-e^{-\lambda})^{-1/4}=e^{\lambda/2}(1-e^{-\lambda})^{1/4}.\] By (2.15) and (4.11), we have \[\sum_{j=2}^{13}|a_{ij}||e_{j}|_{L_{2}^{2}}\lesssim e^{\lambda}\times e^{-\lambda/2}=e^{\lambda/2}.\] Patching together the previous two estimates, we finish the proof. For simplicity, we denote the terms on the right-hand side of (4.6) by \[\mathcal{U}=(\mathcal{U}^{(0)},\{\mathcal{U}^{(1)}_{i}\}_{1\leq i\leq 3},\{\mathcal{U}^{(2)}_{i}\}_{1\leq i\leq 3},\{\mathcal{U}^{(2)}_{ij}\}_{1\leq i<j\leq 3},\{\mathcal{U}^{(3)}_{i}\}_{1\leq i\leq 3})^{\mathsf{T}}:=(\langle E,E^{\mathsf{T}}\rangle)^{-1}\langle E,f_{2}\rangle\] \[\mathcal{V}=(\mathcal{V}^{(0)},\{\mathcal{V}^{(1)}_{i}\}_{1\leq i\leq 3},\{\mathcal{V}^{(2)}_{i}\}_{1\leq i\leq 3},\{\mathcal{V}^{(2)}_{ij}\}_{1\leq i<j\leq 3},\{\mathcal{V}^{(3)}_{i}\}_{1\leq i\leq 3})^{\mathsf{T}}:=(\langle E,E^{\mathsf{T}}\rangle)^{-1}\langle E,-T^{1/2}v\cdot\nabla_{x}f_{2}\rangle\] \[\mathcal{W}=(\mathcal{W}^{(0)},\{\mathcal{W}^{(1)}_{i}\}_{1\leq i\leq 3},\{\mathcal{W}^{(2)}_{i}\}_{1\leq i\leq 3},\{\mathcal{W}^{(2)}_{ij}\}_{1\leq i<j\leq 3},\{\mathcal{W}^{(3)}_{i}\}_{1\leq i\leq 3})^{\mathsf{T}}:=(\langle E,E^{\mathsf{T}}\rangle)^{-1}\langle E,-\frac{1}{\epsilon}\tilde{\mathcal{L}}^{\lambda,T}f_{2}\rangle\] \[\mathcal{X}=(\mathcal{X}^{(0)},\{\mathcal{X}^{(1)}_{i}\}_{1\leq i\leq 3},\{\mathcal{X}^{(2)}_{i}\}_{1\leq i\leq 3},\{\mathcal{X}^{(2)}_{ij}\}_{1\leq i<j\leq 3},\{\mathcal{X}^{(3)}_{i}\}_{1\leq i\leq 3})^{\mathsf{T}}:=(\langle E,E^{\mathsf{T}}\rangle)^{-1}\langle E,\epsilon g\rangle\] \[\mathcal{T}=(\mathcal{T}^{(0)},\{\mathcal{T}^{(1)}_{i}\}_{1\leq i\leq 3},\{\mathcal{T}^{(2)}_{i}\}_{1\leq i\leq 3},\{\mathcal{T}^{(2)}_{ij}\}_{1\leq i<j\leq 3},\{\mathcal{T}^{(3)}_{i}\}_{1\leq i\leq 3})^{\mathsf{T}}:=-\epsilon\partial_{t}\mathcal{U}+\mathcal{V}+\mathcal{W}+\mathcal{X}.\] Now (4.6) is written as \[X=\mathcal{T}=-\epsilon\partial_{t}\mathcal{U}+\mathcal{V}+\mathcal{W}+\mathcal{X}. \tag{4.12}\] The estimates of \(\mathcal{U},\mathcal{V},\mathcal{W},\mathcal{X}\) are given in Lemma 4.2 below.
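Before turning to these estimates, we record a computational remark on Lemma 4.1: since \(\langle E,E^{\mathsf{T}}\rangle\) is an explicit block matrix in the moments \(m_{0},m_{2},m_{4},m_{6}\), the determinant formula can be double-checked symbolically. The following is a minimal SymPy sketch of this check (our own verification aid, with the block structure copied from the proof above and the moments treated as free positive symbols):

```python
import sympy as sp

m0, m2, m4, m6 = sp.symbols('m0 m2 m4 m6', positive=True)
G = sp.zeros(13, 13)
G[0, 0] = m0                                      # <e1, e1>
for i in range(3):
    G[0, 4 + i] = G[4 + i, 0] = m2 / 3            # <e1, e_{5..7}>
    G[1 + i, 1 + i] = m2 / 3                      # <e_{2..4}, e_{2..4}>
    G[1 + i, 10 + i] = G[10 + i, 1 + i] = m4 / 3  # <e_{2..4}, e_{11..13}>
    G[7 + i, 7 + i] = m4 / 15                     # <e_{8..10}, e_{8..10}>
    G[10 + i, 10 + i] = m6 / 3                    # <e_{11..13}, e_{11..13}>
for i in range(3):
    for j in range(3):
        G[4 + i, 4 + j] = m4 / 15 * (3 if i == j else 1)  # the block (m4/15) A
claimed = 4 * m4**5 * (m0*m4 - m2**2) * (m2*m6 - m4**2)**3 / sp.Integer(1660753125)
assert sp.simplify(G.det() - claimed) == 0
```

The factored form also makes the positivity transparent: by the Cauchy-Schwarz inequality both \(m_{0}m_{4}-m_{2}^{2}>0\) and \(m_{2}m_{6}-m_{4}^{2}>0\), so the determinant is strictly positive.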
**Lemma 4.2**.: _It holds that_ \[\sum_{|\alpha|\leq N}|\partial^{\alpha}\mathcal{U}|_{L_{x}^{2}}^{2}\lesssim e^{\lambda}\|f_{2}\|_{H_{x}^{N}L^{2}}^{2},\quad\sum_{|\alpha|\leq N-1}|\partial^{\alpha}\mathcal{V}|_{L_{x}^{2}}^{2}\lesssim e^{\lambda}T\|f_{2}\|_{H_{x}^{N}L^{2}}^{2},\] \[\sum_{|\alpha|\leq N-1}|\partial^{\alpha}\mathcal{W}|_{L_{x}^{2}}^{2}\lesssim e^{\lambda}\frac{1}{\epsilon^{2}}\tilde{C}_{1,\lambda,T}^{2}\|f_{2}\|_{H_{x}^{N-1}L_{1/2}^{2}}^{2},\quad\sum_{|\alpha|\leq N-1}|\partial^{\alpha}\mathcal{X}|_{L_{x}^{2}}^{2}\lesssim\epsilon^{2}\mathcal{C}_{\lambda,N-1}(g),\] _where_ \[\mathcal{C}_{\lambda,n}(g):=\sum_{|\alpha|\leq n}|\langle(\langle E,E^{\mathsf{T}}\rangle)^{-1}E,\partial^{\alpha}g\rangle|_{L^{2}_{x}}^{2}.\] Proof.: Note that \(\partial^{\alpha}\mathcal{U}=(\langle E,E^{\mathsf{T}}\rangle)^{-1}\langle E,\partial^{\alpha}f_{2}\rangle\). By (4.7), \[|\partial^{\alpha}\mathcal{U}|\lesssim e^{\lambda/2}|\partial^{\alpha}f_{2}|_{L^{2}}, \tag{4.13}\] which gives the first inequality on \(\mathcal{U}\). Note that \(\partial^{\alpha}\mathcal{V}=(\langle E,E^{\mathsf{T}}\rangle)^{-1}\langle E,-T^{1/2}v\cdot\nabla_{x}\partial^{\alpha}f_{2}\rangle\). Then a similar argument yields the second inequality on \(\mathcal{V}\). By (3.7), we have \[\langle\tilde{\mathcal{L}}^{\lambda,T}g,h\rangle=\langle\tilde{\mathcal{L}}^{\lambda,T}g_{2},h_{2}\rangle\leq(\langle\tilde{\mathcal{L}}^{\lambda,T}g_{2},g_{2}\rangle)^{1/2}(\langle\tilde{\mathcal{L}}^{\lambda,T}h_{2},h_{2}\rangle)^{1/2}\leq\tilde{C}_{1,\lambda,T}|g_{2}|_{L^{2}_{1/2}}|h_{2}|_{L^{2}_{1/2}}.\] For any \(g\) and \(k\geq 0\), one can derive that \(|g_{2}|_{L^{2}_{k}}\lesssim|g|_{L^{2}_{k}}\) and \(|g_{1}|_{L^{2}_{k}}\lesssim|g|_{L^{2}}\). From this together with (4.7), we have \[|\partial^{\alpha}\mathcal{W}|=|(\langle E,E^{\mathsf{T}}\rangle)^{-1}\langle E,-\frac{1}{\epsilon}\tilde{\mathcal{L}}^{\lambda,T}\partial^{\alpha}f_{2}\rangle|\lesssim\frac{1}{\epsilon}e^{\lambda/2}\tilde{C}_{1,\lambda,T}|\partial^{\alpha}f_{2}|_{L^{2}_{1/2}},\] which gives the third inequality on \(\mathcal{W}\). The last estimate on \(\mathcal{X}\) is trivial. We now estimate the dynamics of \((a,b,c)\) in the following lemma. **Lemma 4.3**.: _Recall (2.26) and (2.27)._
_It holds that_ \[\epsilon^{2}\sum_{|\alpha|\leq N-1}|\partial^{\alpha}\partial_{t}a|_{L^{2}_{x}}^{2}\lesssim Te^{\lambda}\|f_{2}\|_{H^{N}_{x}L^{2}}^{2}+\epsilon^{2}\mathcal{P}_{\lambda,N-1}(g), \tag{4.14}\] \[\epsilon^{2}\sum_{|\alpha|\leq N-1}|\partial^{\alpha}\partial_{t}(b,c)|_{L^{2}_{x}}^{2}\lesssim T|\nabla_{x}(a,b,c)|_{H^{N-1}_{x}}^{2}+Te^{\lambda}\|f_{2}\|_{H^{N}_{x}L^{2}}^{2}+\epsilon^{2}\mathcal{P}_{\lambda,N-1}(g), \tag{4.15}\] _where_ \[\mathcal{P}_{\lambda,n}(g):=\sum_{|\alpha|\leq n}(|\langle l_{1}N-l_{2}N|v|^{2},\partial^{\alpha}g\rangle|_{L^{2}_{x}}^{2}+|\langle l_{3}Nv,\partial^{\alpha}g\rangle|_{L^{2}_{x}}^{2}+|\langle l_{4}N|v|^{2}-l_{2}N,\partial^{\alpha}g\rangle|_{L^{2}_{x}}^{2}).\] Proof.: Taking inner products between equation (4.1) and the functions \(l_{1}N-l_{2}N|v|^{2},l_{3}Nv,l_{4}N|v|^{2}-l_{2}N\), using \(\langle\tilde{\mathcal{L}}^{\lambda,T}f,N\rangle=\langle\tilde{\mathcal{L}}^{\lambda,T}f,Nv\rangle=\langle\tilde{\mathcal{L}}^{\lambda,T}f,N|v|^{2}\rangle=0\), we get \[\epsilon\partial_{t}a+\langle T^{1/2}v\cdot\nabla_{x}f,l_{1}N-l_{2}N|v|^{2}\rangle=\langle\epsilon g,l_{1}N-l_{2}N|v|^{2}\rangle,\] \[\epsilon\partial_{t}b+\langle T^{1/2}v\cdot\nabla_{x}f,l_{3}Nv\rangle=\langle\epsilon g,l_{3}Nv\rangle,\] \[\epsilon\partial_{t}c+\langle T^{1/2}v\cdot\nabla_{x}f,l_{4}N|v|^{2}-l_{2}N\rangle=\langle\epsilon g,l_{4}N|v|^{2}-l_{2}N\rangle.\] Note that \(\langle v_{i}N,N\rangle=\langle v_{i}v_{j}N,v_{k}N\rangle=\langle v_{i}|v|^{2}N,|v|^{2}N\rangle=0\) for \(i,j,k\in\{1,2,3\}\). Recalling the definition of \(l_{1},l_{2}\) in (2.26), it is straightforward to see \[\langle v\cdot\nabla_{x}f_{1},l_{1}N-l_{2}N|v|^{2}\rangle=\frac{1}{3}\langle N|v|^{2},l_{1}N-l_{2}N|v|^{2}\rangle\nabla_{x}\cdot b=\frac{1}{3}(m_{2}l_{1}-m_{4}l_{2})\nabla_{x}\cdot b=0,\] \[\langle v\cdot\nabla_{x}f_{1},l_{3}Nv\rangle=\langle v\cdot\nabla_{x}(aN+cN|v|^{2}),l_{3}Nv\rangle=\langle Nv_{i},l_{3}Nv_{i}\rangle\nabla_{x}a+\langle Nv_{i},l_{3}N|v|^{2}v_{i}\rangle\nabla_{x}c\] \[=\frac{m_{2}}{3}l_{3}\nabla_{x}a+\frac{m_{4}}{3}l_{3}\nabla_{x}c=\nabla_{x}a+\frac{m_{4}}{m_{2}}\nabla_{x}c,\] \[\langle v\cdot\nabla_{x}f_{1},l_{4}N|v|^{2}-l_{2}N\rangle=\langle v\cdot\nabla_{x}(b\cdot vN),l_{4}N|v|^{2}-l_{2}N\rangle\] \[=\langle Nv_{i}^{2},l_{4}N|v|^{2}-l_{2}N\rangle\nabla_{x}\cdot b=\frac{1}{3}(m_{4}l_{4}-m_{2}l_{2})\nabla_{x}\cdot b=\frac{1}{3}\nabla_{x}\cdot b,\] which gives the local conservation laws \[\begin{cases}\epsilon\partial_{t}a=\langle\epsilon g-T^{1/2}v\cdot\nabla_{x}f_{2},l_{1}N-l_{2}N|v|^{2}\rangle,\\ \epsilon\partial_{t}b+T^{1/2}\nabla_{x}a+T^{1/2}\frac{m_{4}}{m_{2}}\nabla_{x}c=\langle\epsilon g-T^{1/2}v\cdot\nabla_{x}f_{2},l_{3}Nv\rangle,\\ \epsilon\partial_{t}c+T^{1/2}\frac{1}{3}\nabla_{x}\cdot b=\langle\epsilon g-T^{1/2}v\cdot\nabla_{x}f_{2},l_{4}N|v|^{2}-l_{2}N\rangle.\end{cases} \tag{4.16}\] Note that \(\frac{m_{4}}{m_{2}}\leq 5\). By (4.8) and (4.9), we have \[l_{1}\sim e^{\lambda}(1-e^{-\lambda})^{1/2},\quad l_{2}\sim e^{\lambda}(1-e^{-\lambda})^{1/2},\quad l_{3}\sim e^{\lambda},\quad l_{4}\sim e^{\lambda}.
\tag{4.17}\] From this together with (2.14) and (2.15), \[|l_{1}N-l_{2}N|v|^{2}|_{L^{2}_{2}}\lesssim e^{\lambda/2},\quad|l_{3}Nv|_{L^{2}_{2}}\lesssim e^{\lambda/2},\quad|l_{4}N|v|^{2}-l_{2}N|_{L^{2}_{2}}\lesssim e^{\lambda/2}, \tag{4.18}\] which gives \[|\langle v\cdot\nabla_{x}f_{2},l_{1}N-l_{2}N|v|^{2}\rangle|+|\langle v\cdot\nabla_{x}f_{2},l_{3}Nv\rangle|+|\langle v\cdot\nabla_{x}f_{2},l_{4}N|v|^{2}-l_{2}N\rangle|\lesssim e^{\lambda/2}|\nabla_{x}f_{2}|_{L^{2}}.\] Patching together the above results, we get (4.14) and (4.15). By (4.7) and (4.18), \(\mathcal{C}_{\lambda,n}(g)\) and \(\mathcal{P}_{\lambda,n}(g)\) are bounded by some linear combination of the following quantities, \[|\langle\psi(v),\partial^{\alpha}g\rangle|^{2}_{L^{2}_{x}}\text{ with }|\psi|_{L^{2}_{2}}\lesssim e^{\lambda/2},|\alpha|\leq n. \tag{4.19}\] With Lemma 4.2 and Lemma 4.3, based on the macroscopic system (4.12), using integration by parts to balance derivatives, we derive the macroscopic dissipation in the following lemma. **Lemma 4.4**.: _Let \(N\geq 1,T_{*}>0\). Let \(f\in L^{\infty}([0,T_{*}];H^{N}_{x}L^{2})\) be a solution to (4.1); then there exists a universal constant \(C>0\) such that_ \[\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}_{N}(f)+\frac{1}{2}T^{1/2}|\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}}\leq CT^{1/2}e^{\lambda}\|f_{2}\|^{2}_{H^{N}_{x}L^{2}}+CT^{-1/2}e^{\lambda}\epsilon^{-2}\tilde{C}^{2}_{1,\lambda,T}\|f_{2}\|^{2}_{H^{N}_{x}L^{2}_{1/2}}+CT^{-1/2}\epsilon^{2}\mathcal{Q}_{\lambda,N-1}(g), \tag{4.20}\] _where \(\mathcal{I}_{N}(f)\) is defined in (4.37) and satisfies_ \[|\mathcal{I}_{N}(f)|\leq Ce^{\lambda}\|f\|^{2}_{H^{N}_{x}L^{2}}. \tag{4.21}\] _Here \(\mathcal{Q}_{\lambda,n}(g)\) is bounded by some linear combination of the quantities in (4.19)._ Proof.: Note that (4.12) is equivalent to \[\epsilon\partial_{t}a = \mathcal{T}^{(0)}=-\epsilon\partial_{t}\mathcal{U}^{(0)}+\mathcal{V}^{(0)}+\mathcal{W}^{(0)}+\mathcal{X}^{(0)}, \tag{4.22}\] \[\epsilon\partial_{t}b_{i}+T^{1/2}\partial_{i}a = \mathcal{T}^{(1)}_{i}=-\epsilon\partial_{t}\mathcal{U}^{(1)}_{i}+\mathcal{V}^{(1)}_{i}+\mathcal{W}^{(1)}_{i}+\mathcal{X}^{(1)}_{i},\ 1\leq i\leq 3,\] (4.23) \[\epsilon\partial_{t}c+T^{1/2}\partial_{i}b_{i} = \mathcal{T}^{(2)}_{i}=-\epsilon\partial_{t}\mathcal{U}^{(2)}_{i}+\mathcal{V}^{(2)}_{i}+\mathcal{W}^{(2)}_{i}+\mathcal{X}^{(2)}_{i},\ 1\leq i\leq 3,\] (4.24) \[T^{1/2}(\partial_{i}b_{j}+\partial_{j}b_{i}) = \mathcal{T}^{(2)}_{ij}=-\epsilon\partial_{t}\mathcal{U}^{(2)}_{ij}+\mathcal{V}^{(2)}_{ij}+\mathcal{W}^{(2)}_{ij}+\mathcal{X}^{(2)}_{ij},\ 1\leq i<j\leq 3,\] (4.25) \[T^{1/2}\partial_{i}c = \mathcal{T}^{(3)}_{i}=-\epsilon\partial_{t}\mathcal{U}^{(3)}_{i}+\mathcal{V}^{(3)}_{i}+\mathcal{W}^{(3)}_{i}+\mathcal{X}^{(3)}_{i},\ 1\leq i\leq 3. \tag{4.26}\] The derivation of \(|\nabla_{x}c|^{2}_{H^{N-1}_{x}}\) is the easiest one and so we begin with \(c\). By (4.26), we have \[-T^{1/2}\Delta_{x}c=-\sum_{j}\partial_{j}\mathcal{T}^{(3)}_{j}. \tag{4.27}\] Applying \(\partial^{\alpha}\) with \(|\alpha|\leq N-1\) to equation (4.27) and taking the inner product with \(\partial^{\alpha}c\), one has \[T^{1/2}|\nabla_{x}\partial^{\alpha}c|^{2}_{L^{2}_{x}}=\langle-\sum_{j}\partial_{j}\partial^{\alpha}\mathcal{T}^{(3)}_{j},\partial^{\alpha}c\rangle_{x}.\] Recalling (4.12), \(\mathcal{T}=-\epsilon\partial_{t}\mathcal{U}+\mathcal{V}+\mathcal{W}+\mathcal{X}\).
Then \[\langle-\sum_{j}\partial_{j}\partial^{\alpha}\mathcal{T}^{(3)}_{j},\partial^{\alpha}c\rangle_{x}=\langle\epsilon\partial_{t}\sum_{j}\partial_{j}\partial^{\alpha}\mathcal{U}^{(3)}_{j},\partial^{\alpha}c\rangle_{x}-\langle\sum_{j}\partial_{j}\partial^{\alpha}(\mathcal{V}^{(3)}_{j}+\mathcal{W}^{(3)}_{j}+\mathcal{X}^{(3)}_{j}),\partial^{\alpha}c\rangle_{x}.\] For the term involving \(\mathcal{U}^{(3)}_{j}\), interchanging the \(t\)-derivative and the \(L^{2}_{x}\) inner product, we have \[\langle\epsilon\partial_{t}\sum_{j}\partial_{j}\partial^{\alpha}\mathcal{U}^{(3)}_{j},\partial^{\alpha}c\rangle_{x}=-\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}^{c}_{\alpha}(f)+\mathfrak{U}^{c}_{\alpha},\] where \[\mathcal{I}^{c}_{\alpha}(f):=-\sum_{j}\langle\partial_{j}\partial^{\alpha}\mathcal{U}^{(3)}_{j},\partial^{\alpha}c\rangle_{x},\quad\mathfrak{U}^{c}_{\alpha}:=-\sum_{j}\langle\partial_{j}\partial^{\alpha}\mathcal{U}^{(3)}_{j},\epsilon\partial_{t}\partial^{\alpha}c\rangle_{x}. \tag{4.28}\] For the term involving \(\mathcal{V}^{(3)}_{j}+\mathcal{W}^{(3)}_{j}+\mathcal{X}^{(3)}_{j}\), via integration by parts we have \[-\langle\sum_{j}\partial_{j}\partial^{\alpha}(\mathcal{V}^{(3)}_{j}+\mathcal{W}^{(3)}_{j}+\mathcal{X}^{(3)}_{j}),\partial^{\alpha}c\rangle_{x}=\sum_{j}\langle\partial^{\alpha}(\mathcal{V}^{(3)}_{j}+\mathcal{W}^{(3)}_{j}+\mathcal{X}^{(3)}_{j}),\partial_{j}\partial^{\alpha}c\rangle_{x}:=\mathfrak{D}^{c}_{\alpha}.\] Patching together the above formulas, we get \[T^{1/2}|\nabla_{x}\partial^{\alpha}c|^{2}_{L^{2}_{x}}+\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}^{c}_{\alpha}(f)=\mathfrak{U}^{c}_{\alpha}+\mathfrak{D}^{c}_{\alpha}.\] By the Cauchy-Schwarz inequality, for any \(0<\eta<1\), one has \[|\mathfrak{U}^{c}_{\alpha}|\leq\frac{\eta}{T^{1/2}}\epsilon^{2}|\partial^{\alpha}\partial_{t}c|^{2}_{L^{2}_{x}}+\frac{CT^{1/2}}{\eta}|\nabla_{x}\partial^{\alpha}\mathcal{U}|^{2}_{L^{2}_{x}},\] \[|\mathfrak{D}^{c}_{\alpha}|\leq\eta T^{1/2}|\nabla_{x}\partial^{\alpha}c|^{2}_{L^{2}_{x}}+\frac{C}{\eta T^{1/2}}(|\partial^{\alpha}\mathcal{V}|^{2}_{L^{2}_{x}}+|\partial^{\alpha}\mathcal{W}|^{2}_{L^{2}_{x}}+|\partial^{\alpha}\mathcal{X}|^{2}_{L^{2}_{x}}).\] Taking the sum over \(|\alpha|\leq N-1\), by Lemma 4.2 and (4.15), we get \[T^{1/2}|\nabla_{x}c|^{2}_{H^{N-1}_{x}}+\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\sum_{|\alpha|\leq N-1}\mathcal{I}^{c}_{\alpha}(f)\leq\mathfrak{H}_{\eta,C}, \tag{4.29}\] where for simplicity, \[\mathfrak{H}_{\eta,C}:=\eta T^{1/2}|\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}}+\eta^{-1}CT^{1/2}e^{\lambda}\|f_{2}\|^{2}_{H^{N}_{x}L^{2}}+\eta^{-1}CT^{-1/2}e^{\lambda}\epsilon^{-2}\tilde{C}^{2}_{1,\lambda,T}\|f_{2}\|^{2}_{H^{N}_{x}L^{2}_{1/2}}+\eta^{-1}CT^{-1/2}\epsilon^{2}\mathcal{Q}_{\lambda,N-1}(g). \tag{4.30}\] Here and in the rest of this proof, \(0<\eta<1\) is an arbitrary constant and \(C\) is a universal constant that could change across different lines. We now derive \(|\nabla_{x}b|^{2}_{H^{N-1}_{x}}\).
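The starting point is an elliptic identity for \(b_{j}\), (4.31) below. Since (4.31) is a pure differentiation identity (the \(\epsilon\partial_{t}c\) contributions cancel between the first and third groups), it can be checked mechanically. The following is a minimal SymPy sketch of that check for \(j=1\) (our own verification aid; the common factor \(T^{1/2}\) and the cancelling \(\epsilon\partial_{t}c\) terms are dropped):

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')
b = [sp.Function(f'b{k}')(*x) for k in (1, 2, 3)]
j = 0  # check the j = 1 component; the others follow by symmetry
lhs = -sum(sp.diff(b[j], xi, 2) for xi in x) - sp.diff(b[j], x[j], 2)
rhs = (sum(sp.diff(b[i], x[i], x[j]) for i in range(3) if i != j)
       - sum(sp.diff(sp.diff(b[j], x[i]) + sp.diff(b[i], x[j]), x[i]) for i in range(3) if i != j)
       - 2 * sp.diff(b[j], x[j], 2))
assert sp.simplify(lhs - rhs) == 0
```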
Based on equations (4.24) and (4.25), we have \[-T^{1/2}\Delta_{x}b_{j}-T^{1/2}\partial_{j}^{2}b_{j}=\sum_{i\neq j}\partial_{j}(\epsilon\partial_{t}c+T^{1/2}\partial_{i}b_{i})-\sum_{i\neq j}\partial_{i}(T^{1/2}\partial_{i}b_{j}+T^{1/2}\partial_{j}b_{i})-2\partial_{j}(\epsilon\partial_{t}c+T^{1/2}\partial_{j}b_{j})=\sum_{i\neq j}\partial_{j}\mathcal{T}^{(2)}_{i}-\sum_{i\neq j}\partial_{i}\mathcal{T}^{(2)}_{ij}-2\partial_{j}\mathcal{T}^{(2)}_{j}. \tag{4.31}\] For \(|\alpha|\leq N-1\), applying \(\partial^{\alpha}\) to equation (4.31) for \(b_{j}\) and then taking the inner product with \(\partial^{\alpha}b_{j}\), one has \[T^{1/2}|\nabla_{x}\partial^{\alpha}b_{j}|^{2}_{L^{2}_{x}}+T^{1/2}|\partial_{j}\partial^{\alpha}b_{j}|^{2}_{L^{2}_{x}}=\langle\sum_{i\neq j}\partial_{j}\partial^{\alpha}\mathcal{T}^{(2)}_{i}-\sum_{i\neq j}\partial_{i}\partial^{\alpha}\mathcal{T}^{(2)}_{ij}-2\partial_{j}\partial^{\alpha}\mathcal{T}^{(2)}_{j},\partial^{\alpha}b_{j}\rangle_{x}.\] Recalling (4.12), \(\mathcal{T}=-\epsilon\partial_{t}\mathcal{U}+\mathcal{V}+\mathcal{W}+\mathcal{X}\). For the term \(-\epsilon\partial_{t}\mathcal{U}\), interchanging the \(t\)-derivative and the \(L^{2}_{x}\) inner product, we have \[-\langle\epsilon\partial_{t}(\sum_{i\neq j}\partial_{j}\partial^{\alpha}\mathcal{U}^{(2)}_{i}-\sum_{i\neq j}\partial_{i}\partial^{\alpha}\mathcal{U}^{(2)}_{ij}-2\partial_{j}\partial^{\alpha}\mathcal{U}^{(2)}_{j}),\partial^{\alpha}b_{j}\rangle_{x}=-\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}^{b_{j}}_{\alpha}(f)+\mathfrak{U}^{b_{j}}_{\alpha},\] where \[\mathcal{I}^{b_{j}}_{\alpha}(f):=\langle\sum_{i\neq j}\partial_{j}\partial^{\alpha}\mathcal{U}^{(2)}_{i}-\sum_{i\neq j}\partial_{i}\partial^{\alpha}\mathcal{U}^{(2)}_{ij}-2\partial_{j}\partial^{\alpha}\mathcal{U}^{(2)}_{j},\partial^{\alpha}b_{j}\rangle_{x}, \tag{4.32}\] \[\mathfrak{U}^{b_{j}}_{\alpha}:=\langle\sum_{i\neq j}\partial_{j}\partial^{\alpha}\mathcal{U}^{(2)}_{i}-\sum_{i\neq j}\partial_{i}\partial^{\alpha}\mathcal{U}^{(2)}_{ij}-2\partial_{j}\partial^{\alpha}\mathcal{U}^{(2)}_{j},\epsilon\partial_{t}\partial^{\alpha}b_{j}\rangle_{x}.\] For the term \(\mathcal{V}+\mathcal{W}+\mathcal{X}\), via integration by parts, we have \[\langle\sum_{i\neq j}\partial_{j}\partial^{\alpha}(\mathcal{V}^{(2)}_{i}+\mathcal{W}^{(2)}_{i}+\mathcal{X}^{(2)}_{i})-\sum_{i\neq j}\partial_{i}\partial^{\alpha}(\mathcal{V}^{(2)}_{ij}+\mathcal{W}^{(2)}_{ij}+\mathcal{X}^{(2)}_{ij})-2\partial_{j}\partial^{\alpha}(\mathcal{V}^{(2)}_{j}+\mathcal{W}^{(2)}_{j}+\mathcal{X}^{(2)}_{j}),\partial^{\alpha}b_{j}\rangle_{x}\] \[=-\langle\sum_{i\neq j}\partial^{\alpha}(\mathcal{V}^{(2)}_{i}+\mathcal{W}^{(2)}_{i}+\mathcal{X}^{(2)}_{i}),\partial_{j}\partial^{\alpha}b_{j}\rangle_{x}+\sum_{i\neq j}\langle\partial^{\alpha}(\mathcal{V}^{(2)}_{ij}+\mathcal{W}^{(2)}_{ij}+\mathcal{X}^{(2)}_{ij}),\partial_{i}\partial^{\alpha}b_{j}\rangle_{x}+2\langle\partial^{\alpha}(\mathcal{V}^{(2)}_{j}+\mathcal{W}^{(2)}_{j}+\mathcal{X}^{(2)}_{j}),\partial_{j}\partial^{\alpha}b_{j}\rangle_{x}:=\mathfrak{D}^{b_{j}}_{\alpha}.\] Patching together the above formulas, we get \[T^{1/2}|\nabla_{x}\partial^{\alpha}b_{j}|^{2}_{L^{2}_{x}}+T^{1/2}|\partial_{j}\partial^{\alpha}b_{j}|^{2}_{L^{2}_{x}}+\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}^{b_{j}}_{\alpha}(f)=\mathfrak{U}^{b_{j}}_{\alpha}+\mathfrak{D}^{b_{j}}_{\alpha}.\] By the Cauchy-Schwarz inequality, one has \[|\mathfrak{U}^{b_{j}}_{\alpha}|\leq\frac{\eta}{T^{1/2}}\epsilon^{2}|\partial^{\alpha}\partial_{t}b_{j}|^{2}_{L^{2}_{x}}+\frac{CT^{1/2}}{\eta}|\nabla_{x}\partial^{\alpha}\mathcal{U}|^{2}_{L^{2}_{x}},\]
\[|\mathfrak{D}^{b_{j}}_{\alpha}|\leq\eta T^{1/2}|\nabla_{x}\partial^{\alpha}b_{j}|^{2}_{L^{2}_{x}}+\frac{C}{\eta T^{1/2}}(|\partial^{\alpha}\mathcal{V}|^{2}_{L^{2}_{x}}+|\partial^{\alpha}\mathcal{W}|^{2}_{L^{2}_{x}}+|\partial^{\alpha}\mathcal{X}|^{2}_{L^{2}_{x}}).\] Taking the sum over \(1\leq j\leq 3\) and over \(|\alpha|\leq N-1\), by Lemma 4.2 and (4.15), we get \[T^{1/2}|\nabla_{x}b|^{2}_{H^{N-1}_{x}}+\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\sum_{|\alpha|\leq N-1}\sum_{j=1}^{3}\mathcal{I}^{b_{j}}_{\alpha}(f)\leq\mathfrak{H}_{\eta,C}. \tag{4.33}\] We now derive \(|\nabla_{x}a|^{2}_{H^{N-1}_{x}}\). By (4.23), we have \[-T^{1/2}\Delta_{x}a=\sum_{j}\partial_{j}\epsilon\partial_{t}b_{j}-\sum_{j}\partial_{j}\mathcal{T}^{(1)}_{j}. \tag{4.34}\] Applying \(\partial^{\alpha}\) to equation (4.34) and taking the inner product with \(\partial^{\alpha}a\), one has \[T^{1/2}|\nabla_{x}\partial^{\alpha}a|^{2}_{L^{2}_{x}}=\langle\sum_{j}\partial_{j}\partial^{\alpha}\epsilon\partial_{t}b_{j}-\sum_{j}\partial_{j}\partial^{\alpha}\mathcal{T}^{(1)}_{j},\partial^{\alpha}a\rangle_{x}=\langle\sum_{j}\partial_{j}\partial^{\alpha}\epsilon\partial_{t}(b_{j}+\mathcal{U}^{(1)}_{j})-\sum_{j}\partial_{j}\partial^{\alpha}(\mathcal{V}^{(1)}_{j}+\mathcal{W}^{(1)}_{j}+\mathcal{X}^{(1)}_{j}),\partial^{\alpha}a\rangle_{x},\] where we recall \(\mathcal{T}=-\epsilon\partial_{t}\mathcal{U}+\mathcal{V}+\mathcal{W}+\mathcal{X}\) from (4.12). For the term involving \(\epsilon\partial_{t}(b_{j}+\mathcal{U}^{(1)}_{j})\), interchanging the \(t\)-derivative and the \(L^{2}_{x}\) inner product, we have \[\langle\epsilon\partial_{t}\sum_{j}\partial_{j}\partial^{\alpha}(b_{j}+\mathcal{U}^{(1)}_{j}),\partial^{\alpha}a\rangle_{x}=-\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}^{a}_{\alpha}(f)+\mathfrak{U}^{a}_{\alpha},\] where \[\mathcal{I}^{a}_{\alpha}(f):=-\sum_{j}\langle\partial_{j}\partial^{\alpha}b_{j},\partial^{\alpha}a\rangle_{x}-\sum_{j}\langle\partial_{j}\partial^{\alpha}\mathcal{U}^{(1)}_{j},\partial^{\alpha}a\rangle_{x}, \tag{4.35}\] \[\mathfrak{U}^{a}_{\alpha}:=-\langle\sum_{j}\partial_{j}\partial^{\alpha}b_{j},\epsilon\partial_{t}\partial^{\alpha}a\rangle_{x}-\langle\sum_{j}\partial_{j}\partial^{\alpha}\mathcal{U}^{(1)}_{j},\epsilon\partial_{t}\partial^{\alpha}a\rangle_{x}.\] For the term involving \(\mathcal{V}+\mathcal{W}+\mathcal{X}\), via integration by parts, we have \[-\langle\sum_{j}\partial_{j}\partial^{\alpha}(\mathcal{V}^{(1)}_{j}+\mathcal{W}^{(1)}_{j}+\mathcal{X}^{(1)}_{j}),\partial^{\alpha}a\rangle_{x}=\sum_{j}\langle\partial^{\alpha}(\mathcal{V}^{(1)}_{j}+\mathcal{W}^{(1)}_{j}+\mathcal{X}^{(1)}_{j}),\partial_{j}\partial^{\alpha}a\rangle_{x}:=\mathfrak{D}^{a}_{\alpha}.\] Patching together the above formulas, we get \[T^{1/2}|\nabla_{x}\partial^{\alpha}a|^{2}_{L^{2}_{x}}+\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}^{a}_{\alpha}(f)=\mathfrak{U}^{a}_{\alpha}+\mathfrak{D}^{a}_{\alpha}.\] By the Cauchy-Schwarz inequality, one has \[|\mathfrak{U}^{a}_{\alpha}|\leq\eta T^{1/2}|\nabla_{x}\partial^{\alpha}b|^{2}_{L^{2}_{x}}+\frac{C}{\eta T^{1/2}}|\epsilon\partial_{t}\partial^{\alpha}a|^{2}_{L^{2}_{x}}+\frac{CT^{1/2}}{\eta}|\nabla_{x}\partial^{\alpha}\mathcal{U}|^{2}_{L^{2}_{x}},\] \[|\mathfrak{D}^{a}_{\alpha}|\leq\eta T^{1/2}|\nabla_{x}\partial^{\alpha}a|^{2}_{L^{2}_{x}}+\frac{C}{\eta T^{1/2}}(|\partial^{\alpha}\mathcal{V}|^{2}_{L^{2}_{x}}+|\partial^{\alpha}\mathcal{W}|^{2}_{L^{2}_{x}}+|\partial^{\alpha}\mathcal{X}|^{2}_{L^{2}_{x}}).\]
Taking the sum over \(|\alpha|\leq N-1\), by Lemma 4.2, (4.14) and (4.15), we get \[T^{1/2}|\nabla_{x}a|^{2}_{H^{N-1}_{x}}+\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\sum_{|\alpha|\leq N-1}\mathcal{I}^{a}_{\alpha}(f)\leq\mathfrak{H}_{\eta,C}. \tag{4.36}\] For brevity, let us define the temporal energy functional \(\mathcal{I}_{N}(f)\) as in [17] by \[\mathcal{I}_{N}(f):=\sum_{|\alpha|\leq N-1}(\mathcal{I}^{a}_{\alpha}(f)+\sum_{j=1}^{3}\mathcal{I}^{b_{j}}_{\alpha}(f)+\mathcal{I}^{c}_{\alpha}(f)), \tag{4.37}\] where \(\mathcal{I}^{a}_{\alpha}(f),\mathcal{I}^{b_{j}}_{\alpha}(f),\mathcal{I}^{c}_{\alpha}(f)\) are defined in (4.35), (4.32), (4.28). Patching together (4.36), (4.33) and (4.29), recalling (4.37) and (4.30), we have \[\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}_{N}(f)+T^{1/2}|\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}}\leq\mathfrak{H}_{\eta,C}.\] Taking \(\eta=\frac{1}{2}\), we arrive at (4.20). We emphasize that \(C\) is a universal constant; in particular, it is even independent of the integer \(N\) by the above derivation. Recall \(f=f_{1}+f_{2}\); then \(\partial^{\alpha}f=(\partial^{\alpha}f)_{1}+(\partial^{\alpha}f)_{2}=\partial^{\alpha}f_{1}+\partial^{\alpha}f_{2}\). That is, the projection operator \(\tilde{\mathbb{P}}_{\lambda}\) and the spatial derivative \(\partial^{\alpha}\) commute. So we have \[|\partial^{\alpha}f|_{L^{2}_{x}L^{2}}^{2}=|\partial^{\alpha}f_{1}|_{L^{2}_{x}L^{2}}^{2}+|\partial^{\alpha}f_{2}|_{L^{2}_{x}L^{2}}^{2}.\] Recalling the definition of \(\mathcal{I}_{N}(f)\) in (4.37), going back to (4.35), (4.32) and (4.28), by the Cauchy-Schwarz inequality, recalling (2.27), and using (4.18) and (4.13), we get \[|\mathcal{I}_{N}(f)|\lesssim|(a,b,c)|_{H^{N}_{x}}^{2}+|\mathcal{U}|_{H^{N}_{x}}^{2}\lesssim e^{\lambda}\|f_{1}\|_{H^{N}_{x}L^{2}}^{2}+e^{\lambda}\|f_{2}\|_{H^{N}_{x}L^{2}}^{2}=e^{\lambda}\|f\|_{H^{N}_{x}L^{2}}^{2}.\] That is, (4.21) holds for some universal constant \(C\). We derive the following _a priori_ estimate for equation (4.1). **Proposition 4.1**.: _Let \(N\geq 1\), \(\lambda,T>0\), \(T_{*}>0\). Let \(f\in L^{\infty}([0,T_{*}];H^{N}_{x}L^{2})\) be a solution to (4.1); then for any \(0\leq t\leq T_{*}\), it holds that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\Xi_{N}^{\lambda,T}(f)+\frac{1}{2}\mathcal{D}_{N}(f)+\mathrm{C}_{1}(\lambda,T)\frac{1}{\epsilon^{2}}\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}^{2}\lesssim\mathrm{C}_{2}(\lambda,T)\sum_{|\alpha|\leq N}|(\partial^{\alpha}g,\partial^{\alpha}f)|+\mathrm{C}_{3}(\lambda,T)\epsilon^{2}\mathcal{Q}_{\lambda,N-1}(g), \tag{4.38}\] _where_ \[\Xi_{N}^{\lambda,T}(f):=\epsilon T^{-1/2}e^{-\lambda}(1-e^{-\lambda})^{-\frac{1}{2}}C_{0}\mathcal{I}_{N}(f)+K(\lambda,T)\|f\|_{H^{N}_{x}L^{2}}^{2}, \tag{4.39}\] \[K(\lambda,T):=C_{*}e^{2\lambda}(1-e^{-\lambda})^{-\frac{21}{2}}\max\{T,T^{-2}\}, \tag{4.40}\] \[\mathrm{C}_{1}(\lambda,T):=e^{\lambda}(1-e^{-\lambda})^{-\frac{21}{2}}\max\{T,T^{-2}\}T^{2}, \tag{4.41}\] \[\mathrm{C}_{2}(\lambda,T):=e^{-\lambda}(1-e^{-\lambda})^{-\frac{21}{2}}\max\{T,T^{-2}\}, \tag{4.42}\] \[\mathrm{C}_{3}(\lambda,T):=e^{-\lambda}(1-e^{-\lambda})^{-\frac{1}{2}}T^{-1}, \tag{4.43}\] _for some large universal constant \(C_{*}>0\). Moreover, it holds that_ \[\frac{1}{2}K(\lambda,T)\|f\|_{H^{N}_{x}L^{2}}^{2}\leq\Xi_{N}^{\lambda,T}(f)\leq\frac{3}{2}K(\lambda,T)\|f\|_{H^{N}_{x}L^{2}}^{2}. \tag{4.44}\] Proof.: Note that \(\Xi_{N}^{\lambda,T}(f)\) is a combination of \(\mathcal{I}_{N}(f)\) and \(\|f\|_{H^{N}_{x}L^{2}}^{2}\).
We already have the estimate on \(\mathcal{I}_{N}(f)\) from Lemma 4.4; that is, the solution \(f\) verifies (4.20). By (4.20) and recalling the constant \(\tilde{C}_{1,\lambda,T}\) from (3.8), \[T^{-1/2}\epsilon\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}_{N}(f)+\frac{1}{2}|\nabla_{x}(a,b,c)|_{H^{N-1}_{x}}^{2}\leq CT^{3}e^{\lambda}(1-e^{-\lambda})^{-8}\epsilon^{-2}\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}^{2}+CT^{-1}\epsilon^{2}\mathcal{Q}_{\lambda,N-1}(g), \tag{4.45}\] where \(C\) is a universal constant. Applying \(\partial^{\alpha}\) to equation (4.1), taking the inner product with \(\partial^{\alpha}f\), recalling \(f_{2}=(\mathbb{I}-\tilde{\mathbb{P}}_{\lambda})f\) and \((\partial^{\alpha}f)_{2}=\partial^{\alpha}f_{2}\), by the lower bound in (3.7), and taking the sum over \(|\alpha|\leq N\), we have \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|f\|_{H^{N}_{x}L^{2}}^{2}+C_{1,\lambda,T}\frac{1}{\epsilon^{2}}\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}^{2}\leq\sum_{|\alpha|\leq N}|(\partial^{\alpha}g,\partial^{\alpha}f)|. \tag{4.46}\] We now use the term \(\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}^{2}\) in (4.46) to control the right-hand side of (4.45). The combination \((4.46)\times 2K(\lambda,T)+(4.45)\times C_{0}e^{-\lambda}(1-e^{-\lambda})^{-\frac{1}{2}}\) gives \[\frac{\mathrm{d}}{\mathrm{d}t}(\epsilon T^{-1/2}C_{0}e^{-\lambda}(1-e^{-\lambda})^{-\frac{1}{2}}\mathcal{I}_{N}(f)+K(\lambda,T)\|f\|_{H^{N}_{x}L^{2}}^{2})\] \[+(\frac{1}{2}C_{0}e^{-\lambda}(1-e^{-\lambda})^{-\frac{1}{2}}|\nabla_{x}(a,b,c)|^{2}_{H^{N-1}_{x}}+K(\lambda,T)C_{1,\lambda,T}\frac{1}{\epsilon^{2}}\|f_{2}\|^{2}_{H^{N}_{x}L^{2}_{1/2}})\] \[\leq Ce^{-\lambda}(1-e^{-\lambda})^{-\frac{1}{2}}T^{-1}\epsilon^{2}\mathcal{Q}_{\lambda,N-1}(g)+2K(\lambda,T)\sum_{|\alpha|\leq N}|(\partial^{\alpha}g,\partial^{\alpha}f)|,\] by taking \(C_{*}\) large enough in (4.40) such that \[K(\lambda,T)C_{1,\lambda,T}\geq\frac{1}{4}CT^{3}e^{\lambda}(1-e^{-\lambda})^{-8}.\] Thanks to (4.21), we can also take \(C_{*}\) large enough in (4.40) such that (4.44) holds. Recalling from (2.29) that \(\mathcal{D}_{N}(f)=C_{0}e^{-\lambda}(1-e^{-\lambda})^{-\frac{1}{2}}|\nabla_{x}(a,b,c)|_{H^{N-1}_{x}}^{2}+\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}^{2}\), by taking \(C_{*}\) large enough in (4.40) such that \(K(\lambda,T)C_{1,\lambda,T}\geq\frac{1}{8}\), we get the dissipation \(\frac{1}{2}\mathcal{D}_{N}(f)\) and finish the proof.

### A priori estimate of the quantum Boltzmann equation

In this subsection, we derive the following uniform-in-\(\epsilon\) _a priori_ estimate for the Cauchy problem (2.2). **Theorem 4.1**.: _Let \(\lambda,T>0\). Recall the constants \(\tilde{C}_{*}(\lambda,T),K(\lambda,T),\mathrm{C}_{1}(\lambda,T),\mathrm{C}_{2}(\lambda,T),\mathrm{C}_{3}(\lambda,T)\) defined in (2.31), (4.40), (4.41), (4.42), (4.43). There exists a constant \(\delta_{2}>0\), independent of \(\epsilon,\lambda\) and \(T\), such that if a solution \(f\) to the Cauchy problem (2.2) satisfies_ \[\sup_{t}\|f(t)\|_{H^{2}_{x}L^{2}}^{2}\leq\delta_{2}\tilde{C}_{*}(\lambda,T), \tag{4.47}\] _then \(f\) verifies, for any \(N\geq 2\),_ \[\sup_{t}\|f(t)\|_{H^{N}_{x}L^{2}}^{2}+\frac{1}{K(\lambda,T)}\int_{0}^{\infty}\mathcal{D}_{N}(f)\mathrm{d}\tau+\frac{\mathrm{C}_{1}(\lambda,T)}{K(\lambda,T)}\frac{1}{\epsilon^{2}}\int_{0}^{\infty}\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}^{2}\mathrm{d}\tau\leq P_{N}(f_{0})\|f_{0}\|_{H^{N}_{x}L^{2}}^{2}.
\tag{4.48}\] _where_ \[P_{2}(f_{0})\equiv 6,\quad P_{N}(f_{0}):=12\exp\left(Q_{3}(\lambda,T,N,f_{0})P_{N-1}(f_{0})\|f_{0}\|_{H^{N-1}_{x}L^{2}}^{2}\right)\ \text{for}\ N\geq 3, \tag{4.49}\] \[Q_{3}(\lambda,T,N,f_{0}):=2(Q_{1}(\lambda,T,N)+Q_{2}(\lambda,T,N)P_{N-1}(f_{0})\|f_{0}\|_{H^{N-1}_{x}L^{2}}^{2}), \tag{4.50}\] _where the constants \(Q_{1}(\lambda,T,N)\) and \(Q_{2}(\lambda,T,N)\) are defined in (4.64) and (4.65) respectively._ Proof.: Observe that \(f\) solves (4.1) with \(g=\frac{1}{\epsilon}\tilde{\Gamma}_{2}^{\lambda,T}(f,f)+\tilde{\Gamma}_{3}^{\lambda,T}(f,f,f)\). By Proposition 4.1, recalling that \(\mathcal{Q}_{\lambda,n}(g)\) is bounded by some linear combination of the quantities in (4.19), we have \[\frac{\mathrm{d}}{\mathrm{d}t}\Xi_{N}^{\lambda,T}(f)+\frac{1}{2}\mathcal{D}_{N}(f)+\mathrm{C}_{1}(\lambda,T)\frac{1}{\epsilon^{2}}\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}^{2}\lesssim\mathrm{C}_{2}(\lambda,T)\mathcal{I}_{1}(f)+\mathrm{C}_{3}(\lambda,T)\epsilon^{2}\mathcal{I}_{2}(f). \tag{4.51}\] Here the two terms on the right-hand side of (4.51) are \[\mathcal{I}_{1}(f):=\sum_{|\alpha|\leq N}|(\frac{1}{\epsilon}\partial^{\alpha}\tilde{\Gamma}_{2}^{\lambda,T}(f,f)+\partial^{\alpha}\tilde{\Gamma}_{3}^{\lambda,T}(f,f,f),\partial^{\alpha}f)|,\] \[\mathcal{I}_{2}(f):=\sum_{|\alpha|\leq N-1}|\langle\frac{1}{\epsilon}\partial^{\alpha}\tilde{\Gamma}_{2}^{\lambda,T}(f,f)+\partial^{\alpha}\tilde{\Gamma}_{3}^{\lambda,T}(f,f,f),\psi\rangle|_{L^{2}_{x}}^{2}.\] By (3.46) and (3.55), we have \[\mathcal{I}_{1}(f)\lesssim\frac{1}{\epsilon}C_{2,\lambda,T}\left(\|f\|_{H^{2}_{x}L^{2}}\mathcal{D}_{N}^{\frac{1}{2}}(f)+1_{N\geq 3}C_{N}\|f\|_{H^{N}_{x}L^{2}}\mathcal{D}_{N-1}^{\frac{1}{2}}(f)\right)\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}\] \[+C_{3,\lambda,T}\left(\|f\|_{H^{2}_{x}L^{2}}^{2}\mathcal{D}_{N}(f)+1_{N\geq 3}C_{N}\|f\|_{H^{N-1}_{x}L^{2}}\|f\|_{H^{N}_{x}L^{2}}\mathcal{D}_{N-1}^{\frac{1}{2}}(f)\mathcal{D}_{N}^{\frac{1}{2}}(f)\right).\] By (3.57) and (3.58), we have \[\mathcal{I}_{2}(f)\lesssim\frac{1}{\epsilon^{2}}C_{2,\lambda,T}^{2}\left(\|f\|_{H^{2}_{x}L^{2}}^{2}\mathcal{D}_{N}(f)+1_{N\geq 3}C_{N}\|f\|_{H^{N}_{x}L^{2}}^{2}\mathcal{D}_{N-1}(f)\right)+C_{3,\lambda,T}^{2}\left(\|f\|_{H^{2}_{x}L^{2}}^{4}\mathcal{D}_{N}(f)+1_{N\geq 3}C_{N}\|f\|_{H^{N-1}_{x}L^{2}}^{2}\|f\|_{H^{N}_{x}L^{2}}^{2}\mathcal{D}_{N-1}(f)\right).\] Plugging the previous two inequalities into (4.51), we have \[\frac{\mathrm{d}}{\mathrm{d}t}\Xi_{N}^{\lambda,T}(f)+\frac{1}{2}\mathcal{D}_{N}(f)+\mathrm{C}_{1}(\lambda,T)\frac{1}{\epsilon^{2}}\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}^{2} \tag{4.52}\] \[\leq C\mathrm{C}_{2}(\lambda,T)\frac{1}{\epsilon}C_{2,\lambda,T}\left(\|f\|_{H^{2}_{x}L^{2}}\mathcal{D}_{N}^{\frac{1}{2}}(f)+1_{N\geq 3}C_{N}\|f\|_{H^{N}_{x}L^{2}}\mathcal{D}_{N-1}^{\frac{1}{2}}(f)\right)\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}} \tag{4.53}\] \[+C\mathrm{C}_{2}(\lambda,T)C_{3,\lambda,T}\left(\|f\|_{H^{2}_{x}L^{2}}^{2}\mathcal{D}_{N}(f)+1_{N\geq 3}C_{N}\|f\|_{H^{N-1}_{x}L^{2}}\|f\|_{H^{N}_{x}L^{2}}\mathcal{D}_{N-1}^{\frac{1}{2}}(f)\mathcal{D}_{N}^{\frac{1}{2}}(f)\right) \tag{4.54}\] \[+C\mathrm{C}_{3}(\lambda,T)C_{2,\lambda,T}^{2}\left(\|f\|_{H^{2}_{x}L^{2}}^{2}\mathcal{D}_{N}(f)+1_{N\geq 3}C_{N}\|f\|_{H^{N}_{x}L^{2}}^{2}\mathcal{D}_{N-1}(f)\right) \tag{4.55}\] \[+C\mathrm{C}_{3}(\lambda,T)\epsilon^{2}C_{3,\lambda,T}^{2}\left(\|f\|_{H^{2}_{x}L^{2}}^{4}\mathcal{D}_{N}(f)+1_{N\geq 3}C_{N}\|f\|_{H^{N-1}_{x}L^{2}}^{2}\|f\|_{H^{N}_{x}L^{2}}^{2}\mathcal{D}_{N-1}(f)\right). \tag{4.56}\] Here \(C\geq 2\) is a universal constant. Recall (3.36) for the definition of \(C_{2,\lambda,T}\) and (3.38) for the definition of \(C_{3,\lambda,T}\).
Note that the line (4.53) is bounded by \[C\mathrm{C}_{2}(\lambda,T)\frac{1}{\epsilon}C_{2,\lambda,T}\|f\|_{H^{2}_{x}L^{2}}\mathcal{D}_{N}^{\frac{1}{2}}(f)\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}\leq\frac{1}{16}\mathcal{D}_{N}(f)+4C^{2}\mathrm{C}_{2}^{2}(\lambda,T)C_{2,\lambda,T}^{2}\|f\|_{H^{2}_{x}L^{2}}^{2}\frac{1}{\epsilon^{2}}\|f_{2}\|_{H^{N}_{x}L^{2}_{1/2}}^{2}.\] Under the following conditions \[4C^{2}\mathrm{C}_{2}^{2}(\lambda,T)C_{2,\lambda,T}^{2}\|f\|_{H_{x}^{2}L^{2}}^{2}\leq\frac{1}{2}\mathrm{C}_{1}(\lambda,T), \tag{4.57}\] \[C\mathrm{C}_{2}(\lambda,T)C_{3,\lambda,T}\|f\|_{H_{x}^{2}L^{2}}^{2}\leq\frac{1}{16}, \tag{4.58}\] \[C\mathrm{C}_{3}(\lambda,T)C_{2,\lambda,T}^{2}\|f\|_{H_{x}^{2}L^{2}}^{2}\leq\frac{1}{16}, \tag{4.59}\] \[C\mathrm{C}_{3}(\lambda,T)C_{3,\lambda,T}^{2}\|f\|_{H_{x}^{2}L^{2}}^{4}\leq\frac{1}{16}, \tag{4.60}\] we get \[\text{for }N=2:\quad\frac{\mathrm{d}}{\mathrm{d}t}\Xi_{N}^{\lambda,T}(f)+\frac{1}{4}\mathcal{D}_{N}(f)+\frac{1}{2}\mathrm{C}_{1}(\lambda,T)\frac{1}{\epsilon^{2}}\|f_{2}\|_{H_{x}^{N}L_{1/2}^{2}}^{2}\leq 0, \tag{4.61}\] \[\text{for }N\geq 3:\quad\frac{\mathrm{d}}{\mathrm{d}t}\Xi_{N}^{\lambda,T}(f)+\frac{1}{4}\mathcal{D}_{N}(f)+\frac{1}{2}\mathrm{C}_{1}(\lambda,T)\frac{1}{\epsilon^{2}}\|f_{2}\|_{H_{x}^{N}L_{1/2}^{2}}^{2}\] \[\leq C_{N}\mathrm{C}_{2}(\lambda,T)\frac{1}{\epsilon}C_{2,\lambda,T}\|f\|_{H_{x}^{N}L^{2}}\mathcal{D}_{N-1}^{\frac{1}{2}}(f)\|f_{2}\|_{H_{x}^{N}L_{1/2}^{2}}\] \[+C_{N}\mathrm{C}_{2}(\lambda,T)C_{3,\lambda,T}\|f\|_{H_{x}^{N-1}L^{2}}\|f\|_{H_{x}^{N}L^{2}}\mathcal{D}_{N-1}^{\frac{1}{2}}(f)\mathcal{D}_{N}^{\frac{1}{2}}(f)\] \[+C_{N}\mathrm{C}_{3}(\lambda,T)C_{2,\lambda,T}^{2}\|f\|_{H_{x}^{N}L^{2}}^{2}\mathcal{D}_{N-1}(f)\] \[+C_{N}\mathrm{C}_{3}(\lambda,T)\epsilon^{2}C_{3,\lambda,T}^{2}\|f\|_{H_{x}^{N-1}L^{2}}^{2}\|f\|_{H_{x}^{N}L^{2}}^{2}\mathcal{D}_{N-1}(f). \tag{4.62}\] Here \(C_{N}\) is a large constant that depends only on \(N\) and could change from line to line. Note that (4.57) is stronger than (4.58), (4.59), (4.60) since \(C\geq 2\). By taking \(\delta_{2}=\frac{1}{8C^{2}}\), (4.47) yields the condition (4.57). We now prove (4.48) for \(N\geq 2\). If \(N=2\), integrating (4.61) with respect to time, we get \[\Xi_{N}^{\lambda,T}(f(t))+\frac{1}{4}\int_{0}^{t}\mathcal{D}_{N}(f)\mathrm{d}\tau+\frac{1}{2}\mathrm{C}_{1}(\lambda,T)\frac{1}{\epsilon^{2}}\int_{0}^{t}\|f_{2}\|_{H_{x}^{N}L_{1/2}^{2}}^{2}\mathrm{d}\tau\leq\Xi_{N}^{\lambda,T}(f_{0}).\] Recalling (4.44), we get \[\sup_{t}\|f(t)\|_{H_{x}^{2}L^{2}}^{2}+\frac{1}{K(\lambda,T)}\int_{0}^{\infty}\mathcal{D}_{2}(f)\mathrm{d}\tau+\frac{\mathrm{C}_{1}(\lambda,T)}{K(\lambda,T)}\frac{1}{\epsilon^{2}}\int_{0}^{\infty}\|f_{2}\|_{H_{x}^{2}L_{1/2}^{2}}^{2}\mathrm{d}\tau\leq 6\|f_{0}\|_{H_{x}^{2}L^{2}}^{2}. \tag{4.63}\] Now we use mathematical induction to establish (4.48) for \(N\geq 2\). Suppose (4.48) is valid for \(N-1\geq 2\); we now prove that it is also valid for \(N\).
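The induction step below combines the differential inequality with Grönwall's lemma. For the reader's convenience we recall the standard form that is used (reproduced here as an aid): if \(y,d,\beta\geq 0\) satisfy \(y'(t)+d(t)\leq\beta(t)y(t)\), then \[y(t)+\int_{0}^{t}d(\tau)\,\mathrm{d}\tau\leq y(0)\exp\Big(\int_{0}^{t}\beta(\tau)\,\mathrm{d}\tau\Big),\] which follows by integrating \((e^{-\int_{0}^{t}\beta}y)'\leq-e^{-\int_{0}^{t}\beta}d\) and using \(e^{\int_{\tau}^{t}\beta}\geq 1\).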
By (4.62), using \(ab\leq\eta a^{2}+\frac{1}{4\eta}b^{2}\), we get \[\frac{\mathrm{d}}{\mathrm{d}t}\Xi_{N}^{\lambda,T}(f)+\frac{1}{8}\mathcal{D}_{N}(f)+\frac{1}{4}\mathrm{C}_{1}(\lambda,T)\frac{1}{\epsilon^{2}}\|f_{2}\|_{H_{x}^{N}L_{1/2}^{2}}^{2}\] \[\leq\mathrm{C}_{1}^{-1}(\lambda,T)C_{N}^{2}\mathrm{C}_{2}^{2}(\lambda,T)C_{2,\lambda,T}^{2}\|f\|_{H_{x}^{N}L^{2}}^{2}\mathcal{D}_{N-1}(f)\] \[+2C_{N}^{2}\mathrm{C}_{2}^{2}(\lambda,T)C_{3,\lambda,T}^{2}\|f\|_{H_{x}^{N-1}L^{2}}^{2}\|f\|_{H_{x}^{N}L^{2}}^{2}\mathcal{D}_{N-1}(f)\] \[+C_{N}\mathrm{C}_{3}(\lambda,T)C_{2,\lambda,T}^{2}\|f\|_{H_{x}^{N}L^{2}}^{2}\mathcal{D}_{N-1}(f)\] \[+C_{N}\mathrm{C}_{3}(\lambda,T)\epsilon^{2}C_{3,\lambda,T}^{2}\|f\|_{H_{x}^{N-1}L^{2}}^{2}\|f\|_{H_{x}^{N}L^{2}}^{2}\mathcal{D}_{N-1}(f)\] \[\leq Q_{1}(\lambda,T,N)\|f\|_{H_{x}^{N}L^{2}}^{2}\mathcal{D}_{N-1}(f)+Q_{2}(\lambda,T,N)\|f\|_{H_{x}^{N-1}L^{2}}^{2}\|f\|_{H_{x}^{N}L^{2}}^{2}\mathcal{D}_{N-1}(f),\] where we define for simplicity \[Q_{1}(\lambda,T,N):=\mathrm{C}_{1}^{-1}(\lambda,T)C_{N}^{2}\mathrm{C}_{2}^{2}(\lambda,T)C_{2,\lambda,T}^{2}+C_{N}\mathrm{C}_{3}(\lambda,T)C_{2,\lambda,T}^{2}, \tag{4.64}\] \[Q_{2}(\lambda,T,N):=2C_{N}^{2}\mathrm{C}_{2}^{2}(\lambda,T)C_{3,\lambda,T}^{2}+C_{N}\mathrm{C}_{3}(\lambda,T)C_{3,\lambda,T}^{2}. \tag{4.65}\] By the induction assumption, \[\sup_{t}\|f(t)\|_{H_{x}^{N-1}L^{2}}^{2}+\frac{1}{K(\lambda,T)}\int_{0}^{\infty}\mathcal{D}_{N-1}(f)\mathrm{d}\tau+\frac{\mathrm{C}_{1}(\lambda,T)}{K(\lambda,T)}\frac{1}{\epsilon^{2}}\int_{0}^{\infty}\|f_{2}\|_{H_{x}^{N-1}L_{1/2}^{2}}^{2}\mathrm{d}\tau\leq P_{N-1}(f_{0})\|f_{0}\|_{H_{x}^{N-1}L^{2}}^{2}.\] Then the energy inequality becomes \[\frac{\mathrm{d}}{\mathrm{d}t}\Xi_{N}^{\lambda,T}(f)+\frac{1}{8}\mathcal{D}_{N}(f)+\frac{1}{4}\mathrm{C}_{1}(\lambda,T)\frac{1}{\epsilon^{2}}\|f_{2}\|_{H^{N}_{x}L_{1/2}^{2}}^{2}\] \[\leq(Q_{1}(\lambda,T,N)+Q_{2}(\lambda,T,N)P_{N-1}(f_{0})\|f_{0}\|_{H^{N-1}_{x}L^{2}}^{2})\|f\|_{H^{N}_{x}L^{2}}^{2}\mathcal{D}_{N-1}(f)\leq Q_{3}(\lambda,T,N,f_{0})\Xi_{N}^{\lambda,T}(f)\frac{\mathcal{D}_{N-1}(f)}{K(\lambda,T)},\] where we recall (4.44) and (4.50). Then by Grönwall's inequality and the induction assumption, we get \[\Xi_{N}^{\lambda,T}(f(t))+\int_{0}^{t}\big(\frac{1}{8}\mathcal{D}_{N}(f)+\frac{1}{4}\mathrm{C}_{1}(\lambda,T)\frac{1}{\epsilon^{2}}\|f_{2}\|_{H^{N}_{x}L_{1/2}^{2}}^{2}\big)\mathrm{d}\tau\] \[\leq\exp\left(Q_{3}(\lambda,T,N,f_{0})\int_{0}^{\infty}\frac{\mathcal{D}_{N-1}(f(t))}{K(\lambda,T)}\mathrm{d}t\right)\Xi_{N}^{\lambda,T}(f_{0})\leq\exp\left(Q_{3}(\lambda,T,N,f_{0})P_{N-1}(f_{0})\|f_{0}\|_{H^{N-1}_{x}L^{2}}^{2}\right)\Xi_{N}^{\lambda,T}(f_{0}).\] Recalling (4.44), we get \[\sup_{t}\|f(t)\|_{H^{N}_{x}L^{2}}^{2}+\frac{1}{K(\lambda,T)}\int_{0}^{\infty}\big(\mathcal{D}_{N}(f)+\mathrm{C}_{1}(\lambda,T)\frac{1}{\epsilon^{2}}\|f_{2}\|_{H^{N}_{x}L_{1/2}^{2}}^{2}\big)\mathrm{d}\tau\leq 12\exp\left(Q_{3}(\lambda,T,N,f_{0})P_{N-1}(f_{0})\|f_{0}\|_{H^{N-1}_{x}L^{2}}^{2}\right)\|f_{0}\|_{H^{N}_{x}L^{2}}^{2}.\] By recalling (4.49), we finish the proof of (4.48).

## 5. Hydrodynamic limits

This whole section is devoted to proving Theorem 2.2. We first derive some basic formulas involving the Bose-Einstein distribution. Recall from (2.1) that \(M_{\lambda}\) is a radial function. Sometimes we also write, for \(r\geq 0\), \[M_{\lambda}(r):=\frac{1}{\exp(\frac{r^{2}}{2}+\lambda)-1}.
\tag{5.1}\] It is easy to check that \[\frac{\mathrm{d}M_{\lambda}}{\mathrm{d}r}=-rM_{\lambda}(1+M_{\lambda}),\quad\frac{\mathrm{d}[M_{\lambda}(1+M_{\lambda})]}{\mathrm{d}r}=-rM_{\lambda}(1+M_{\lambda})(1+2M_{\lambda}). \tag{5.2}\] Then by (5.2), polar coordinates and the integration by parts formula, it is easy to derive \[\int_{\mathbb{R}^{3}}f(|v|)|v|^{k}M_{\lambda}(v)(1+M_{\lambda}(v))\mathrm{d}v=(k+1)\int_{\mathbb{R}^{3}}f(|v|)|v|^{k-2}M_{\lambda}(v)\mathrm{d}v, \tag{5.3}\] \[\int_{\mathbb{R}^{3}}f(|v|)|v|^{k}M_{\lambda}(v)(1+M_{\lambda}(v))(1+2M_{\lambda}(v))\mathrm{d}v=(k+1)\int_{\mathbb{R}^{3}}f(|v|)|v|^{k-2}M_{\lambda}(v)(1+M_{\lambda}(v))\mathrm{d}v. \tag{5.4}\] Recall (2.13) for \(m_{k}\). Let \[K_{A}=\frac{m_{4}}{2m_{2}},\quad K_{\lambda}=K_{A}-1=\frac{m_{4}}{2m_{2}}-1,\quad C_{A}=\frac{m_{4}}{4m_{2}^{2}}(m_{4}m_{0}-m_{2}^{2}). \tag{5.5}\] Then it is elementary to check \[\int_{\mathbb{R}^{3}}(\frac{|v|^{2}}{2}-K_{A})|v|^{2}M_{\lambda}(v)(1+M_{\lambda}(v))\mathrm{d}v=\frac{1}{2}m_{4}-K_{A}m_{2}=0, \tag{5.6}\] \[\frac{1}{3}\int_{\mathbb{R}^{3}}(\frac{|v|^{2}}{2}-K_{A})^{2}|v|^{2}M_{\lambda}(v)(1+M_{\lambda}(v))(1+2M_{\lambda}(v))\mathrm{d}v=C_{A}. \tag{5.7}\] Let us introduce \[A(v)=N_{\lambda}(v)(\frac{|v|^{2}}{2}-K_{A})v,\quad B(v)=N_{\lambda}(v)(v\otimes v-\frac{|v|^{2}}{3}I_{3}). \tag{5.8}\] Note that \(A\) is a vector and \(B\) is a symmetric matrix. With the help of (5.6), \(A_{i},B_{ij}\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}\). More precisely, for any \(f\in\ker\tilde{\mathcal{L}}^{\lambda,T}\), it holds that \[\langle A_{i},f\rangle=\langle B_{ij},f\rangle=0. \tag{5.9}\] By rotational invariance, it is standard to derive **Theorem 5.1**.: _There exist unique radial functions \(\alpha_{\lambda,T}(|v|),\beta_{\lambda,T}(|v|)\) such that_ \[\tilde{\mathcal{L}}^{\lambda,T}\left(\alpha_{\lambda,T}A\right)=A,\quad\tilde{\mathcal{L}}^{\lambda,T}\left(\beta_{\lambda,T}B\right)=B. \tag{5.10}\] _For later reference, let us denote_ \[\hat{A}(v):=\alpha_{\lambda,T}(|v|)A(v),\quad\hat{B}(v):=\beta_{\lambda,T}(|v|)B(v). \tag{5.11}\] See Theorem 6.1 for why \(\hat{A},\hat{B}\) must be of the form in (5.11) in order to satisfy \(\tilde{\mathcal{L}}^{\lambda,T}(\hat{A})=A,\tilde{\mathcal{L}}^{\lambda,T}(\hat{B})=B\). Proof of Theorem 2.2.: By the well-posedness theory in Theorem 2.1, the family \(\{f^{\epsilon}\}_{0<\epsilon<1}\) verifies \[M_{\infty}:=\sup_{0<\epsilon<1}\sup_{t\geq 0}\|f^{\epsilon}(t)\|_{H_{x}^{N}L^{2}}\leq C(M_{0}), \tag{5.12}\] \[M_{2}:=\sup_{0<\epsilon<1}\left(\frac{1}{\epsilon^{2}}\int_{0}^{\infty}\|f^{\epsilon}(t)-\mathbb{P}f^{\epsilon}\|_{H_{x}^{N}L^{2}}^{2}\mathrm{d}t\right)\leq C(M_{0}). \tag{5.13}\] Here \(C(M_{0})\) is a constant depending on the constant \(M_{0}\) given in (2.36). Note that for brevity we drop the dependence on \(\lambda,T,N\). By (5.12), there is a subsequence of \(\{f^{\epsilon}\}\), still denoted by \(\{f^{\epsilon}\}\), such that \[f^{\epsilon}\to f^{0}\text{ as }\epsilon\to 0,\text{ weakly-* in }L^{\infty}(\mathbb{R}_{+};H_{x}^{N}L^{2}), \tag{5.14}\] for some \(f^{0}\in L^{\infty}(\mathbb{R}_{+};H_{x}^{N}L^{2})\). By (5.13), \(\{f^{\epsilon}-\mathbb{P}f^{\epsilon}\}_{0<\epsilon<1}\) converges to \(0\) strongly: \[f^{\epsilon}-\mathbb{P}f^{\epsilon}\to 0\text{ as }\epsilon\to 0,\text{ strongly in }L^{2}(\mathbb{R}_{+};H_{x}^{N}L^{2}). \tag{5.15}\] By (5.14) and (5.15), we have \(f^{0}\in\ker\tilde{\mathcal{L}}^{\lambda,T}\).
Then there exists \((\rho,u,\theta)\in L^{\infty}(\mathbb{R}_{+};H_{x}^{N})\) such that \[f^{0}(t,x,v)=(\rho(t,x)+u(t,x)\cdot v+\theta(t,x)(\frac{|v|^{2}}{2}-K_{\lambda}))N_{\lambda}(v). \tag{5.16}\] Note that we have already obtained (2.38). We now prove that \((\rho,u,\theta)\in C(\mathbb{R}_{+};H_{x}^{N-1})\) satisfies the system (1.38) together with the moment convergence (2.39) and (2.40). This is done by looking at \((\rho^{\epsilon},u^{\epsilon},\theta^{\epsilon})\), the macroscopic components of \(f^{\epsilon}\). Recalling (2.25), (2.26) and (2.27), we have \[\mathbb{P}_{\lambda}f^{\epsilon}=\left(\rho^{\epsilon}+u^{\epsilon}\cdot v+\theta^{\epsilon}(\frac{|v|^{2}}{2}-K_{\lambda})\right)N_{\lambda},\] where \[\rho^{\epsilon}=\frac{K_{A}}{C_{A}}\langle f^{\epsilon},N_{\lambda}\rangle+\frac{K_{A}}{C_{A}}(\frac{K_{\lambda}m_{0}}{m_{2}}-\frac{1}{2})\langle f^{\epsilon},|v|^{2}N_{\lambda}\rangle,\quad u^{\epsilon}=\frac{3}{m_{2}}\langle f^{\epsilon},vN_{\lambda}\rangle,\] \[\theta^{\epsilon}=\frac{K_{A}}{C_{A}}\frac{m_{0}}{m_{2}}\langle f^{\epsilon},|v|^{2}N_{\lambda}\rangle-\frac{K_{A}}{C_{A}}\langle f^{\epsilon},N_{\lambda}\rangle.\] By (5.14), (5.15) and (5.16), we have \[(\rho^{\epsilon},u^{\epsilon},\theta^{\epsilon})\rightarrow(\rho,u,\theta)\text{ as }\epsilon\to 0,\text{ weakly-* in }L^{\infty}(\mathbb{R}_{+};H_{x}^{N}). \tag{5.17}\] Taking inner products between (2.2) and the functions \[\frac{K_{A}}{C_{A}}N_{\lambda}(1+(\frac{K_{\lambda}m_{0}}{m_{2}}-\frac{1}{2})|v|^{2}),\quad\frac{3}{m_{2}}N_{\lambda}v,\quad\frac{K_{A}}{C_{A}}\frac{m_{0}}{m_{2}}N_{\lambda}(|v|^{2}-\frac{m_{2}}{m_{0}}),\] we get \[\partial_{t}\rho^{\epsilon}+\frac{T^{\frac{1}{2}}}{\epsilon}\langle v\cdot\nabla_{x}f^{\epsilon},\frac{K_{A}}{C_{A}}N_{\lambda}(1+(\frac{K_{\lambda}m_{0}}{m_{2}}-\frac{1}{2})|v|^{2})\rangle=0, \tag{5.18}\] \[\partial_{t}u^{\epsilon}+\frac{T^{\frac{1}{2}}}{\epsilon}\langle v\cdot\nabla_{x}f^{\epsilon},\frac{3}{m_{2}}N_{\lambda}v\rangle=0, \tag{5.19}\] \[\partial_{t}\theta^{\epsilon}+\frac{T^{\frac{1}{2}}}{\epsilon}\langle v\cdot\nabla_{x}f^{\epsilon},\frac{K_{A}}{C_{A}}\frac{m_{0}}{m_{2}}N_{\lambda}(|v|^{2}-\frac{m_{2}}{m_{0}})\rangle=0. \tag{5.20}\] Now we need to compute the inner products in (5.18), (5.19) and (5.20). Recall the macro-micro decomposition \(f^{\epsilon}=f_{1}^{\epsilon}+f_{2}^{\epsilon}\), where \(f_{1}^{\epsilon}=\mathbb{P}_{\lambda}f^{\epsilon}\) and \(f_{2}^{\epsilon}=f^{\epsilon}-\mathbb{P}_{\lambda}f^{\epsilon}\). The computation rests on the elementary odd/even moment identities recorded at the start of the next display; a quick numerical sanity check is sketched below.
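The identities are immediate from oddness and isotropy; nevertheless, they can be verified numerically for a concrete radial weight. In the following SymPy sketch (our own verification aid, not part of the proof), a Gaussian stands in for the radial factor \(\phi(|v|)\), which keeps all integrals explicit:

```python
import sympy as sp

v1, v2, v3 = sp.symbols('v1 v2 v3', real=True)
phi = sp.exp(-(v1**2 + v2**2 + v3**2) / 2)  # Gaussian stand-in for a radial weight
dom = ((v1, -sp.oo, sp.oo), (v2, -sp.oo, sp.oo), (v3, -sp.oo, sp.oo))
I = lambda expr: sp.integrate(expr, *dom)
assert I(v1 * phi) == 0                       # odd first moment vanishes
assert I(v1 * v2 * phi) == 0                  # off-diagonal second moment vanishes
assert sp.simplify(I(v1**2 * phi) - I((v1**2 + v2**2 + v3**2) / 3 * phi)) == 0  # isotropy
assert I(v1 * v2 * v3 * phi) == 0             # odd third moment vanishes
```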
Using the fact that, for any radial function \(\phi\), \[\int v_{i}\phi(|v|)\mathrm{d}v=0,\quad\int v_{i}v_{j}\phi(|v|)\mathrm{d}v=\delta_{ij}\int\frac{|v|^{2}}{3}\phi(|v|)\mathrm{d}v,\quad\int v_{i}v_{j}v_{k}\phi(|v|)\mathrm{d}v=0,\] it is easy to see \[\langle v\cdot\nabla_{x}f_{1}^{\epsilon},N_{\lambda}\rangle=\langle v\cdot\nabla_{x}(\rho^{\epsilon}+u^{\epsilon}\cdot v+\theta^{\epsilon}(\frac{|v|^{2}}{2}-K_{\lambda}))N_{\lambda},N_{\lambda}\rangle=\langle v\cdot\nabla_{x}(u^{\epsilon}\cdot v),N_{\lambda}^{2}\rangle=\frac{m_{2}}{3}\nabla_{x}\cdot u^{\epsilon},\] \[\langle v\cdot\nabla_{x}f_{1}^{\epsilon},|v|^{2}N_{\lambda}\rangle=\langle v\cdot\nabla_{x}(\rho^{\epsilon}+u^{\epsilon}\cdot v+\theta^{\epsilon}(\frac{|v|^{2}}{2}-K_{\lambda}))N_{\lambda},|v|^{2}N_{\lambda}\rangle=\langle v\cdot\nabla_{x}(u^{\epsilon}\cdot v),|v|^{2}N_{\lambda}^{2}\rangle=\frac{m_{4}}{3}\nabla_{x}\cdot u^{\epsilon}.\] As a result, by recalling (5.5), we have \[\langle v\cdot\nabla_{x}f_{1}^{\epsilon},\frac{K_{A}}{C_{A}}N_{\lambda}(1+(\frac{K_{\lambda}m_{0}}{m_{2}}-\frac{1}{2})|v|^{2})\rangle=\frac{K_{A}}{C_{A}}(\frac{m_{2}}{3}+(\frac{K_{\lambda}m_{0}}{m_{2}}-\frac{1}{2})\frac{m_{4}}{3})\nabla_{x}\cdot u^{\epsilon}=\frac{2}{3}K_{\lambda}\nabla_{x}\cdot u^{\epsilon}, \tag{5.21}\] \[\langle v\cdot\nabla_{x}f_{1}^{\epsilon},\frac{K_{A}}{C_{A}}\frac{m_{0}}{m_{2}}N_{\lambda}(|v|^{2}-\frac{m_{2}}{m_{0}})\rangle=\frac{K_{A}}{C_{A}}\frac{m_{0}}{m_{2}}(\frac{m_{4}}{3}-\frac{m_{2}^{2}}{3m_{0}})\nabla_{x}\cdot u^{\epsilon}=\frac{2}{3}\nabla_{x}\cdot u^{\epsilon}. \tag{5.22}\] Note that \[\langle v\cdot\nabla_{x}f_{1}^{\epsilon},vN_{\lambda}\rangle = \langle v\cdot\nabla_{x}(\rho^{\epsilon}+u^{\epsilon}\cdot v+\theta^{\epsilon}(\frac{|v|^{2}}{2}-K_{\lambda}))N_{\lambda},vN_{\lambda}\rangle\] \[= \langle v\cdot\nabla_{x}(\rho^{\epsilon}+\theta^{\epsilon}(\frac{|v|^{2}}{2}-K_{\lambda}))N_{\lambda},vN_{\lambda}\rangle\] \[= \langle v\cdot\nabla_{x}(\rho^{\epsilon}+\theta^{\epsilon}),vN_{\lambda}^{2}\rangle=\frac{m_{2}}{3}\nabla_{x}(\rho^{\epsilon}+\theta^{\epsilon}),\] where we recall \(K_{\lambda}=K_{A}-1\) and use (5.9). As a result, \[\langle v\cdot\nabla_{x}f_{1}^{\epsilon},\frac{3}{m_{2}}N_{\lambda}v\rangle=\nabla_{x}(\rho^{\epsilon}+\theta^{\epsilon}).
\tag{5.23}\] Since \(f_{2}^{\epsilon}\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}\), we have \[\langle v\cdot\nabla_{x}f_{2}^{\epsilon},N_{\lambda}\rangle=\nabla_{x}\cdot\langle f_{2}^{\epsilon},vN_{\lambda}\rangle=0, \tag{5.24}\] \[\langle v\cdot\nabla_{x}f_{2}^{\epsilon},vN_{\lambda}\rangle=\nabla_{x}\cdot\langle f_{2}^{\epsilon},v\otimes vN_{\lambda}\rangle=\nabla_{x}\cdot\langle f_{2}^{\epsilon},(v\otimes v-\frac{1}{3}|v|^{2}I_{3})N_{\lambda}\rangle,\] \[\langle v\cdot\nabla_{x}f_{2}^{\epsilon},\frac{|v|^{2}}{2}N_{\lambda}\rangle=\nabla_{x}\cdot\langle f_{2}^{\epsilon},\frac{|v|^{2}}{2}vN_{\lambda}\rangle=\nabla_{x}\cdot\langle f_{2}^{\epsilon},(\frac{|v|^{2}}{2}-K_{A})vN_{\lambda}\rangle.\] Recalling (5.8) and (5.10), since \(\tilde{\mathcal{L}}^{\lambda,T}\) is self-adjoint and \(f_{2}^{\epsilon}\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}\), we have \[\langle v\cdot\nabla_{x}f_{2}^{\epsilon},vN_{\lambda}\rangle=\nabla_{x}\cdot\langle f_{2}^{\epsilon},B\rangle=\nabla_{x}\cdot\langle f_{2}^{\epsilon},\tilde{\mathcal{L}}^{\lambda,T}\hat{B}\rangle=\nabla_{x}\cdot\langle\tilde{\mathcal{L}}^{\lambda,T}f_{2}^{\epsilon},\hat{B}\rangle=\nabla_{x}\cdot\langle\tilde{\mathcal{L}}^{\lambda,T}f^{\epsilon},\hat{B}\rangle, \tag{5.25}\] \[\langle v\cdot\nabla_{x}f_{2}^{\epsilon},\frac{|v|^{2}}{2}N_{\lambda}\rangle=\nabla_{x}\cdot\langle f_{2}^{\epsilon},A\rangle=\nabla_{x}\cdot\langle f_{2}^{\epsilon},\tilde{\mathcal{L}}^{\lambda,T}\hat{A}\rangle=\nabla_{x}\cdot\langle\tilde{\mathcal{L}}^{\lambda,T}f_{2}^{\epsilon},\hat{A}\rangle=\nabla_{x}\cdot\langle\tilde{\mathcal{L}}^{\lambda,T}f^{\epsilon},\hat{A}\rangle. \tag{5.26}\] Plugging (5.21)-(5.26) into (5.18)-(5.20), we get \[\partial_{t}\rho^{\epsilon}+\frac{T^{\frac{1}{2}}}{\epsilon}\frac{2}{3}K_{\lambda}\nabla_{x}\cdot u^{\epsilon}+\frac{T^{\frac{1}{2}}}{\epsilon}\frac{K_{A}}{C_{A}}\frac{2K_{\lambda}m_{0}-m_{2}}{m_{2}}\nabla_{x}\cdot\langle\hat{A},\tilde{\mathcal{L}}^{\lambda,T}f^{\epsilon}\rangle = 0, \tag{5.27}\] \[\partial_{t}u^{\epsilon}+\frac{T^{\frac{1}{2}}}{\epsilon}\nabla_{x}(\rho^{\epsilon}+\theta^{\epsilon})+\frac{T^{\frac{1}{2}}}{\epsilon}\frac{3}{m_{2}}\nabla_{x}\cdot\langle\hat{B},\tilde{\mathcal{L}}^{\lambda,T}f^{\epsilon}\rangle = 0, \tag{5.28}\] \[\partial_{t}\theta^{\epsilon}+\frac{T^{\frac{1}{2}}}{\epsilon}\frac{2}{3}\nabla_{x}\cdot u^{\epsilon}+\frac{T^{\frac{1}{2}}}{\epsilon}\frac{2K_{A}}{C_{A}}\frac{m_{0}}{m_{2}}\nabla_{x}\cdot\langle\hat{A},\tilde{\mathcal{L}}^{\lambda,T}f^{\epsilon}\rangle = 0. \tag{5.29}\] By (5.27) and recalling (5.26), by (5.12) and (5.13), in the distributional sense, \[T^{\frac{1}{2}}\frac{2}{3}K_{\lambda}\nabla_{x}\cdot u^{\epsilon}=-\epsilon\partial_{t}\rho^{\epsilon}-T^{\frac{1}{2}}\frac{K_{A}}{C_{A}}\frac{2K_{\lambda}m_{0}-m_{2}}{m_{2}}\nabla_{x}\cdot\langle f_{2}^{\epsilon},A\rangle\to 0. \tag{5.30}\] By (5.30) and recalling (5.17), we get \[\nabla_{x}\cdot u=0. \tag{5.31}\] By (5.28) and recalling (5.25), by (5.12) and (5.13), in the distributional sense, \[T^{\frac{1}{2}}\nabla_{x}(\rho^{\epsilon}+\theta^{\epsilon})=-\epsilon\partial_{t}u^{\epsilon}-T^{\frac{1}{2}}\frac{3}{m_{2}}\nabla_{x}\cdot\langle f_{2}^{\epsilon},B\rangle\to 0. \tag{5.32}\] By (5.32) and recalling (5.17), we get \[\nabla_{x}(\rho+\theta)=0,\quad\Rightarrow\quad\rho+\theta=0. \tag{5.33}\] _Convergence of \(\frac{K_{\lambda}\theta^{\epsilon}-\rho^{\epsilon}}{K_{\lambda}+1}\) and regularity of \(\theta\)_.
Making a suitable combination of (5.27) and (5.29), we get \[\partial_{t}(\frac{K_{\lambda}\theta^{\epsilon}-\rho^{\epsilon}}{K_{\lambda}+1})+\frac{T^{\frac{1}{2}}}{\epsilon}\frac{K_{A}}{C_{A}(K_{\lambda}+1)}\nabla_{x}\cdot\langle\hat{A},\tilde{\mathcal{L}}^{\lambda,T}f^{\epsilon}\rangle = 0. \tag{5.34}\] Then by (5.12) and (5.13), \[\partial_{t}(\frac{K_{\lambda}\theta^{\epsilon}-\rho^{\epsilon}}{K_{\lambda}+1})\in L^{2}(\mathbb{R}_{+};H_{x}^{N-1}),\quad\frac{K_{\lambda}\theta^{\epsilon}-\rho^{\epsilon}}{K_{\lambda}+1}\in L^{\infty}(\mathbb{R}_{+};H_{x}^{N}). \tag{5.35}\] Then by the Aubin-Lions-Simon theorem, (5.17) and (5.33), we get \(\rho,\theta\in L^{\infty}(\mathbb{R}_{+};H_{x}^{N})\cap C(\mathbb{R}_{+};H_{x}^{N-1})\) and \[\frac{K_{\lambda}\theta^{\epsilon}-\rho^{\epsilon}}{K_{\lambda}+1}\to\theta\text{ strongly in }C(\mathbb{R}_{+};H_{x}^{N-1}). \tag{5.36}\] Recall that the Leray projection is defined by \(\mathcal{P}:=\mathcal{I}-\nabla_{x}\Delta_{x}^{-1}\nabla_{x}\cdot\). Note that \[\mathcal{P}u=u\quad\Leftrightarrow\quad\nabla_{x}\cdot u=0, \tag{5.37}\] \[\mathcal{P}\nabla_{x}\phi=0. \tag{5.38}\] _Convergence of \(\mathcal{P}u^{\epsilon}\) and regularity of \(u\)_. Applying \(\mathcal{P}\) to (5.28), we get \[\partial_{t}\mathcal{P}u^{\epsilon}+\frac{T^{\frac{1}{2}}}{\epsilon}\frac{3}{m_{2}}\mathcal{P}\nabla_{x}\cdot\langle\hat{B},\tilde{\mathcal{L}}^{\lambda,T}f^{\epsilon}\rangle=0. \tag{5.39}\] Then by (5.12) and (5.13), \[\partial_{t}\mathcal{P}u^{\epsilon}\in L^{2}(\mathbb{R}_{+};H_{x}^{N-1}),\quad\mathcal{P}u^{\epsilon}\in L^{\infty}(\mathbb{R}_{+};H_{x}^{N}). \tag{5.40}\] Then by the Aubin-Lions-Simon theorem, (5.17) and (5.31), we get \(u\in L^{\infty}(\mathbb{R}_{+};H_{x}^{N})\cap C(\mathbb{R}_{+};H_{x}^{N-1})\) and \[\mathcal{P}u^{\epsilon}\to u=\mathcal{P}u\text{ strongly in }C(\mathbb{R}_{+};H_{x}^{N-1}). \tag{5.41}\] By (5.17) and (5.41), in the distributional sense, \(\mathcal{P}^{\perp}u^{\epsilon}\to 0\). By (5.12), \(\|\mathcal{P}^{\perp}u^{\epsilon}\|_{L^{\infty}(\mathbb{R}_{+};H_{x}^{N})}\lesssim C(M_{0})\). As a result, \[\mathcal{P}^{\perp}u^{\epsilon}\to 0\text{ weakly-* in }L^{\infty}(\mathbb{R}_{+};H_{x}^{N}). \tag{5.42}\] Note that we already proved \((\rho,u,\theta)\in C(\mathbb{R}_{+};H_{x}^{N-1})\) and the moment convergence (2.39) (given by (5.41)) and (2.40) (given by (5.36)). We also proved that \((\rho,u,\theta)\) satisfies \((1.38)_{1}\). Now it remains to show that \((u,\theta)\) satisfies \((1.38)_{2},(1.38)_{3}\) with the initial conditions in \((1.38)_{4}\). To this end, we will come back to (2.2) to evaluate \(\langle\hat{B},\tilde{\mathcal{L}}^{\lambda,T}f^{\epsilon}\rangle\) and \(\langle\hat{A},\tilde{\mathcal{L}}^{\lambda,T}f^{\epsilon}\rangle\). By (2.2), we have \[\frac{1}{\epsilon}\tilde{\mathcal{L}}^{\lambda,T}f^{\epsilon}=-T^{1/2}v\cdot\nabla_{x}f_{1}^{\epsilon}+\tilde{\Gamma}_{2}^{\lambda,T}(f_{1}^{\epsilon},f_{1}^{\epsilon})+R_{\epsilon}, \tag{5.43}\] where \[R_{\epsilon}:=-\epsilon\partial_{t}f^{\epsilon}+\tilde{\Gamma}_{3}^{\lambda,T}(f^{\epsilon},f^{\epsilon},f^{\epsilon})-T^{1/2}v\cdot\nabla_{x}f_{2}^{\epsilon}+\tilde{\Gamma}_{2}^{\lambda,T}(f_{1}^{\epsilon},f_{2}^{\epsilon})+\tilde{\Gamma}_{2}^{\lambda,T}(f_{2}^{\epsilon},f_{1}^{\epsilon})+\tilde{\Gamma}_{2}^{\lambda,T}(f_{2}^{\epsilon},f_{2}^{\epsilon}).\] Note that \(R_{\epsilon}\) is bounded by \(O(\epsilon)+O(f_{2}^{\epsilon})\). Then it is easy to check \[R_{\epsilon}\to 0\text{ weakly-* in }L^{\infty}(\mathbb{R}_{+};H_{x}^{N-1}L^{2}).
\tag{5.44}\] To derive more equations, we need to consider the following four quantities: \[\langle\hat{A},v\cdot\nabla_{x}f_{1}^{\epsilon}\rangle,\quad\langle\hat{B},v\cdot\nabla_{x}f_{1}^{\epsilon}\rangle,\quad\langle\hat{A},\tilde{\Gamma}_{2}^{\lambda,T}(f_{1}^{\epsilon},f_{1}^{\epsilon})\rangle,\quad\langle\hat{B},\tilde{\Gamma}_{2}^{\lambda,T}(f_{1}^{\epsilon},f_{1}^{\epsilon})\rangle.\] We give three lemmas to derive them. **Lemma 5.1**.: _Let \(g=(\rho+u\cdot v+\theta(\frac{|v|^{2}}{2}-K_{\lambda}))N_{\lambda}\). Then_ \[\langle\hat{A},v\cdot\nabla_{x}g\rangle=\kappa_{1,\lambda,T}\nabla_{x}\theta+\kappa_{2,\lambda,T}\nabla_{x}(\rho+\theta), \tag{5.45}\] \[\langle\hat{B},v\cdot\nabla_{x}g\rangle=\nu_{\lambda,T}\mathcal{T}(u), \tag{5.46}\] _where_ \[\kappa_{1,\lambda,T}:=\frac{1}{3}\int\alpha_{\lambda,T}(|v|)(\frac{|v|^{2}}{2}-K_{A})^{2}|v|^{2}N_{\lambda}^{2}\mathrm{d}v,\quad\kappa_{2,\lambda,T}:=\frac{1}{3}\int\alpha_{\lambda,T}(|v|)(\frac{|v|^{2}}{2}-K_{A})|v|^{2}N_{\lambda}^{2}\mathrm{d}v,\] \[\nu_{\lambda,T}:=\frac{1}{15}\int\beta_{\lambda,T}(|v|)|v|^{4}N_{\lambda}^{2}\mathrm{d}v,\quad\mathcal{T}(u):=\nabla_{x}u+(\nabla_{x}u)^{\mathrm{T}}-\frac{2}{3}(\nabla_{x}\cdot u)I_{3}.\] Proof.: Note that \[\langle\hat{A},v\cdot\nabla_{x}g\rangle=\nabla_{x}\cdot\langle\hat{A}\otimes v,g\rangle=\nabla_{x}\cdot\langle\hat{A}\otimes v,(\rho+\theta+\theta(\frac{|v|^{2}}{2}-K_{A}))N_{\lambda}\rangle=\nabla_{x}\cdot\langle\hat{A}\otimes A,\theta\rangle+\nabla_{x}\cdot\langle\hat{A}\otimes N_{\lambda}v,\rho+\theta\rangle.\] We then get (5.45) by observing \[\int\hat{A}\otimes A\,\mathrm{d}v=\int\alpha_{\lambda,T}(|v|)A\otimes A\,\mathrm{d}v=\frac{1}{3}\int\alpha_{\lambda,T}(|v|)(\frac{|v|^{2}}{2}-K_{A})^{2}|v|^{2}N_{\lambda}^{2}\mathrm{d}v\,I_{3}=\kappa_{1,\lambda,T}I_{3},\] \[\int\hat{A}\otimes N_{\lambda}v\,\mathrm{d}v=\int\alpha_{\lambda,T}(|v|)A\otimes N_{\lambda}v\,\mathrm{d}v=\frac{1}{3}\int\alpha_{\lambda,T}(|v|)(\frac{|v|^{2}}{2}-K_{A})|v|^{2}N_{\lambda}^{2}\mathrm{d}v\,I_{3}=\kappa_{2,\lambda,T}I_{3}.\] Note that \[\langle\hat{B},v\cdot\nabla_{x}g\rangle=\langle\hat{B},N_{\lambda}(v\cdot\nabla_{x})(u\cdot v)\rangle=\langle\hat{B},N_{\lambda}\sum_{i,j}\partial_{i}u_{j}v_{i}v_{j}\rangle.\] For the \((1,1)\) element, we have \[\langle\beta_{\lambda,T}(|v|)N_{\lambda}^{2},(v_{1}^{2}-\frac{|v|^{2}}{3})\sum_{i,j}\partial_{i}u_{j}v_{i}v_{j}\rangle\] \[= \langle\beta_{\lambda,T}(|v|)N_{\lambda}^{2},(v_{1}^{2}-\frac{|v|^{2}}{3})v_{1}^{2}\rangle\partial_{1}u_{1}+\langle\beta_{\lambda,T}(|v|)N_{\lambda}^{2},(v_{1}^{2}-\frac{|v|^{2}}{3})v_{2}^{2}\rangle\partial_{2}u_{2}+\langle\beta_{\lambda,T}(|v|)N_{\lambda}^{2},(v_{1}^{2}-\frac{|v|^{2}}{3})v_{3}^{2}\rangle\partial_{3}u_{3}\] \[= \langle\beta_{\lambda,T}(|v|)N_{\lambda}^{2},(v_{1}^{4}-v_{1}^{2}v_{2}^{2})\rangle\partial_{1}u_{1}+\langle\beta_{\lambda,T}(|v|)N_{\lambda}^{2},(v_{1}^{2}-\frac{|v|^{2}}{3})v_{2}^{2}\rangle\nabla_{x}\cdot u:=\lambda_{11}\partial_{1}u_{1}+\lambda_{*}\nabla_{x}\cdot u.\] For the \((1,2)\) element, we have \[\langle\beta_{\lambda,T}(|v|)N_{\lambda}^{2},v_{1}v_{2}\sum_{i,j}\partial_{i}u_{j}v_{i}v_{j}\rangle=\langle\beta_{\lambda,T}(|v|)N_{\lambda}^{2},v_{1}^{2}v_{2}^{2}\rangle(\partial_{1}u_{2}+\partial_{2}u_{1}):=\lambda_{12}(\partial_{1}u_{2}+\partial_{2}u_{1}).\] Observe that for a general radial function \(f(v)=f(|v|)\) and any even \(n\), \[\int f(|v|)|v|^{n}\mathrm{d}v=(n+1)\int f(|v|)v_{1}^{n}\mathrm{d}v.\] Since \(\int\beta(|v|)N_{\lambda}^{2}|v|^{4}\mathrm{d}v=5\int\beta(|v|)N_{\lambda}^{2}v_{1}^{4}\mathrm{d}v\), then
\(\lambda_{11}=2\lambda_{12},\lambda_{*}=-\frac{2}{3}\lambda_{12}\). As a result, we get \[\langle\hat{B},v\cdot\nabla_{x}g\rangle=\lambda_{12}(\nabla_{x}u+(\nabla_{x}u)^{\mathrm{T}})+\lambda_{*}(\nabla_{x}\cdot u)I_{3}=\lambda_{12}(\nabla_{x}u+(\nabla_{x}u)^{\mathrm{T}}-\frac{2}{3}(\nabla_{x}\cdot u)I_{3}).\] It is easy to check that \(\lambda_{12}=\langle\beta_{\lambda,T}(|v|)N_{\lambda}^{2},v_{1}^{2}v_{2}^{2}\rangle=\frac{1}{15}\langle\beta_{\lambda,T}(|v|)N_{\lambda}^{2},|v|^{4}\rangle=\nu_{\lambda,T}\). **Lemma 5.2**.: _For \(g\in\ker\tilde{\mathcal{L}}^{\lambda,T}\), we have_ \[\tilde{\Gamma}_{2}^{\lambda,T}(g,g)=\frac{1}{2}\tilde{\mathcal{L}}^{\lambda,T}((1+2M_{\lambda})N_{\lambda}^{-1}g^{2}). \tag{5.47}\] Proof.: Recall from (2.3) and (2.4) the definitions of \(\tilde{\mathcal{L}}^{\lambda,T}\) and \(\tilde{\Gamma}_{2}^{\lambda,T}\). Let \(K_{\lambda}:=N_{\lambda}(N_{\lambda})_{*}N_{\lambda}^{\prime}(N_{\lambda})_{*}^{\prime}\) (by a slight abuse of notation; this kernel is not to be confused with the constant \(K_{\lambda}\) of (5.5)). Then \[(\tilde{\mathcal{L}}^{\lambda,T}f)(v):=N_{\lambda}^{-1}\int B_{T}K_{\lambda}\mathrm{S}(N_{\lambda}^{-1}f)\mathrm{d}\sigma\mathrm{d}v_{*},\] \[\tilde{\Gamma}_{2}^{\lambda,T}(g,h):=N_{\lambda}^{-1}\int B_{T}K_{\lambda}\Theta_{2}(g,h)\mathrm{d}\sigma\mathrm{d}v_{*},\] where \[\Theta_{2}(g,h) := \mathrm{D}\big{(}(N_{\lambda}^{-1}g)_{*}^{\prime}(N_{\lambda}^{-1}h)^{\prime}\big{)}\] \[+\mathrm{D}\big{(}(N_{\lambda}^{-1}g)_{*}^{\prime}(N_{\lambda}^{-1}h)^{\prime}(M_{\lambda}^{\prime}+(M_{\lambda})_{*}^{\prime})\big{)}\] \[+\mathrm{D}\big{(}(N_{\lambda}^{-1}g)_{*}^{\prime}(N_{\lambda}^{-1}h)(M_{\lambda}-(M_{\lambda})_{*}^{\prime})\big{)}\] \[+\mathrm{D}((N_{\lambda}^{-1}g)_{*}^{\prime}(N_{\lambda}^{-1}g)_{*}(M_{\lambda})_{*})+\mathrm{D}((N_{\lambda}^{-1}h)^{\prime}(N_{\lambda}^{-1}h)M_{\lambda}).\] Note that \[\Theta_{2}(g,g) = \mathrm{D}\big{(}(N_{\lambda}^{-1}g)_{*}^{\prime}(N_{\lambda}^{-1}g)^{\prime}\big{)}\] \[+\mathrm{D}\left((M_{\lambda}N_{\lambda}^{-1}g)_{*}^{\prime}\big{(}(N_{\lambda}^{-1}g)^{\prime}-(N_{\lambda}^{-1}g)-(N_{\lambda}^{-1}g)_{*}\big{)}\right)\] \[+\mathrm{D}\left((M_{\lambda}N_{\lambda}^{-1}g)^{\prime}\big{(}(N_{\lambda}^{-1}g)_{*}^{\prime}-(N_{\lambda}^{-1}g)-(N_{\lambda}^{-1}g)_{*}\big{)}\right).\] For \(g\in\ker\tilde{\mathcal{L}}^{\lambda,T}\), write \(g=N_{\lambda}h\); then \(h\) is a collision invariant. Then \[\Theta_{2}(g,g) = \mathrm{D}\big{(}h^{\prime}_{*}h^{\prime}\big{)}\] \[+\mathrm{D}\left((M_{\lambda}h)^{\prime}_{*}\big{(}h^{\prime}-h-h_{*}\big{)}\right)\] \[+\mathrm{D}\left((M_{\lambda}h)^{\prime}\big{(}h^{\prime}_{*}-h-h_{*}\big{)}\right)\] \[= -\mathrm{D}(h_{*}h)-\mathrm{D}\left((M_{\lambda}h)_{*}\big{(}h-h^{\prime}-h^{\prime}_{*}\big{)}\right)-\mathrm{D}\left((M_{\lambda}h)\big{(}h_{*}-h^{\prime}-h^{\prime}_{*}\big{)}\right).\] Note that \[\frac{1}{2}\mathrm{S}(N_{\lambda}^{-1}(1+2M_{\lambda})N_{\lambda}^{-1}g^{2}) = \frac{1}{2}\mathrm{S}((1+2M_{\lambda})h^{2})=\frac{1}{2}\mathrm{S}(h^{2})+\mathrm{S}(M_{\lambda}h^{2})\] \[= \frac{1}{2}\mathrm{D}(h^{2}+h_{*}^{2})+\mathrm{D}(M_{\lambda}h^{2}+(M_{\lambda}h^{2})_{*}).\] Recalling that \(h\) is a collision invariant, we get \[\frac{1}{2}\mathrm{S}(N_{\lambda}^{-1}(1+2M_{\lambda})N_{\lambda}^{-1}g^{2})-\Theta_{2}(g,g)=\frac{1}{2}\mathrm{D}((h+h_{*})^{2})+\mathrm{D}\left(M_{\lambda}h\mathrm{S}h+(M_{\lambda}h)_{*}\mathrm{S}h\right)=0,\] which gives (5.47). **Lemma 5.3**.: _Let \(g=(\rho+u\cdot v+\theta(\frac{|v|^{2}}{2}-K_{\lambda}))N_{\lambda}\)._
Then_ \[\langle\hat{A},\tilde{\Gamma}_{2}^{\lambda,T}(g,g)\rangle=\langle\hat{A},\frac{1}{2}\tilde{\mathcal{L}}^{\lambda,T}((1+2M_{\lambda})N_{\lambda}^{-1}g^{2})\rangle=C_{A}\theta u+C_{*}(\rho+\theta)u, \tag{5.48}\] \[\langle\hat{B},\tilde{\Gamma}_{2}^{\lambda,T}(g,g)\rangle=\langle\hat{B},\frac{1}{2}\tilde{\mathcal{L}}^{\lambda,T}((1+2M_{\lambda})N_{\lambda}^{-1}g^{2})\rangle=\frac{m_{2}}{3}\mathcal{B}(u), \tag{5.49}\] _where_ \[C_{*}=\frac{5m_{2}^{2}-3m_{0}m_{4}}{6m_{2}},\quad\mathcal{B}(u):=u\otimes u-\frac{|u|^{2}}{3}I_{3}.\] Proof.: The formula (5.47) significantly simplifies the computation of \(\langle\hat{A},\tilde{\Gamma}_{2}^{\lambda,T}(g,g)\rangle,\langle\hat{B},\tilde{\Gamma}_{2}^{\lambda,T}(g,g)\rangle\) for \(g\in\ker\tilde{\mathcal{L}}^{\lambda,T}\). For brevity, let \(\tilde{\mu}:=M_{\lambda}(1+M_{\lambda})(1+2M_{\lambda})\). By (5.47), since \(\tilde{\mathcal{L}}^{\lambda,T}\) is self-adjoint, we get \[\langle\hat{A},\tilde{\Gamma}_{2}^{\lambda,T}(g,g)\rangle = \langle\hat{A},\frac{1}{2}\tilde{\mathcal{L}}^{\lambda,T}((1+2M_{\lambda})N_{\lambda}^{-1}g^{2})\rangle\] \[= \frac{1}{2}\langle A,(1+2M_{\lambda})N_{\lambda}^{-1}g^{2}\rangle\] \[= \frac{1}{2}\langle N_{\lambda}(\frac{|v|^{2}}{2}-K_{A})v,(1+2M_{\lambda})N_{\lambda}(\rho+u\cdot v+\theta(\frac{|v|^{2}}{2}-K_{\lambda}))^{2}\rangle\] \[= \langle N_{\lambda}(\frac{|v|^{2}}{2}-K_{A})v,(1+2M_{\lambda})N_{\lambda}(\rho+\theta(\frac{|v|^{2}}{2}-K_{\lambda}))u\cdot v\rangle\] \[= \langle M_{\lambda}(1+M_{\lambda})(1+2M_{\lambda})(\frac{|v|^{2}}{2}-K_{A})v,(\rho+\theta+\theta(\frac{|v|^{2}}{2}-K_{A}))u\cdot v\rangle\] \[= \frac{1}{3}\langle\tilde{\mu},(\frac{|v|^{2}}{2}-K_{A})^{2}|v|^{2}\rangle\theta u+\frac{1}{3}\langle\tilde{\mu},(\frac{|v|^{2}}{2}-K_{A})|v|^{2}\rangle(\rho+\theta)u\] \[= C_{A}\theta u+C_{*}(\rho+\theta)u,\] where we use (5.4) with \(f\equiv 1\) (which gives \(\langle\tilde{\mu},|v|^{2}\rangle=3m_{0}\) and \(\langle\tilde{\mu},|v|^{4}\rangle=5m_{2}\)) to get \[\frac{1}{3}\langle\tilde{\mu},(\frac{|v|^{2}}{2}-K_{A})|v|^{2}\rangle=\frac{1}{3}\Big{(}\frac{5m_{2}}{2}-3K_{A}m_{0}\Big{)}=\frac{5m_{2}}{6}-\frac{m_{0}m_{4}}{2m_{2}}=\frac{5m_{2}^{2}-3m_{0}m_{4}}{6m_{2}}=C_{*}.\] Similarly, \[\langle\hat{B},\tilde{\Gamma}_{2}^{\lambda,T}(g,g)\rangle = \langle\hat{B},\frac{1}{2}\tilde{\mathcal{L}}^{\lambda,T}((1+2M_{\lambda})N_{\lambda}^{-1}g^{2})\rangle\] \[= \frac{1}{2}\langle B,(1+2M_{\lambda})N_{\lambda}^{-1}g^{2}\rangle\] \[= \frac{1}{2}\langle N_{\lambda}(v\otimes v-\frac{|v|^{2}}{3}I_{3}),(1+2M_{\lambda})N_{\lambda}(\rho+u\cdot v+\theta(\frac{|v|^{2}}{2}-K_{\lambda}))^{2}\rangle\] \[= \frac{1}{2}\langle\tilde{\mu}(v\otimes v-\frac{|v|^{2}}{3}I_{3}),(u\cdot v)^{2}\rangle\] \[= \frac{1}{15}\langle\tilde{\mu},|v|^{4}\rangle(u\otimes u-\frac{|u|^{2}}{3}I_{3}).\] For the \((1,1)\) element, \[\langle\tilde{\mu},(v_{1}^{2}-\frac{|v|^{2}}{3})\sum_{i,j}u_{i}u_{j}v_{i}v_{j}\rangle = \langle\tilde{\mu},(v_{1}^{2}-\frac{|v|^{2}}{3})v_{1}^{2}\rangle u_{1}^{2}+\langle\tilde{\mu},(v_{1}^{2}-\frac{|v|^{2}}{3})v_{2}^{2}\rangle u_{2}^{2}+\langle\tilde{\mu},(v_{1}^{2}-\frac{|v|^{2}}{3})v_{3}^{2}\rangle u_{3}^{2}\] \[= \langle\tilde{\mu},v_{1}^{2}v_{2}^{2}\rangle(\frac{4}{3}u_{1}^{2}-\frac{2}{3}u_{2}^{2}-\frac{2}{3}u_{3}^{2})=\frac{2}{15}\langle\tilde{\mu},|v|^{4}\rangle(u_{1}^{2}-\frac{|u|^{2}}{3}).\] For the \((1,2)\) element, \[\langle\tilde{\mu},v_{1}v_{2}\sum_{i,j}u_{i}u_{j}v_{i}v_{j}\rangle=2\langle\tilde{\mu},v_{1}^{2}v_{2}^{2}\rangle u_{1}u_{2}=\frac{2}{15}\langle\tilde{\mu},|v|^{4}\rangle u_{1}u_{2}.\] As a result, we obtain (5.49) by observing \(\langle\tilde{\mu},|v|^{4}\rangle=5m_{2}\).
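For completeness, this last identity is an instance of (5.4) with \(f\equiv 1\) and \(k=4\): \[\langle\tilde{\mu},|v|^{4}\rangle=\int_{\mathbb{R}^{3}}|v|^{4}M_{\lambda}(1+M_{\lambda})(1+2M_{\lambda})\mathrm{d}v=5\int_{\mathbb{R}^{3}}|v|^{2}M_{\lambda}(1+M_{\lambda})\mathrm{d}v=5m_{2}.\]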
Plugging (5.43) into (5.39), using (5.46) and (5.49), we get \[\partial_{t}\mathcal{P}u^{\epsilon}-T\frac{3\nu_{\lambda,T}}{m_{2}}\mathcal{P }\nabla_{x}\cdot\mathcal{T}(u^{\epsilon})+T^{\frac{1}{2}}\mathcal{P}\nabla_{ x}\cdot\mathcal{B}(u^{\epsilon})+T^{\frac{1}{2}}\frac{3}{m_{2}}\mathcal{P}\nabla_{x} \cdot\langle\hat{B},R_{\epsilon}\rangle=0.\] Note that \[\nabla_{x}\cdot\mathcal{T}(u^{\epsilon})=\Delta_{x}u^{\epsilon}+\frac{1}{3} \nabla_{x}(\nabla_{x}\cdot u^{\epsilon}),\quad\nabla_{x}\cdot\mathcal{B}(u^ {\epsilon})=\nabla_{x}\cdot(u^{\epsilon}\otimes u^{\epsilon})-\frac{1}{3} \nabla_{x}(|u^{\epsilon}|^{2}),\] then \[\mathcal{P}\nabla_{x}\cdot\mathcal{T}(u^{\epsilon})=\mathcal{P}\Delta_{x}u^{ \epsilon}=\Delta_{x}\mathcal{P}u^{\epsilon},\quad\mathcal{P}\nabla_{x}\cdot \mathcal{B}(u^{\epsilon})=\mathcal{P}\nabla_{x}\cdot(u^{\epsilon}\otimes u^{ \epsilon}).\] Then we have \[\partial_{t}\mathcal{P}u^{\epsilon}-T\frac{3\nu_{\lambda,T}}{m_{2}}\Delta_{x} \mathcal{P}u^{\epsilon}+T^{\frac{1}{2}}\mathcal{P}\nabla_{x}\cdot(u^{\epsilon }\otimes u^{\epsilon})+T^{\frac{1}{2}}\frac{3}{m_{2}}\mathcal{P}\nabla_{x} \cdot\langle\hat{B},R_{\epsilon}\rangle=0.\] With the decomposition \(u^{\epsilon}=\mathcal{P}u^{\epsilon}+\mathcal{P}^{\perp}u^{\epsilon}\), defining \(\mu_{\lambda,T}:=T\frac{3\nu_{\lambda,T}}{m_{2}}\), we get \[\partial_{t}\mathcal{P}u^{\epsilon}-\mu_{\lambda,T}\Delta_{x}\mathcal{P}u^{ \epsilon}+T^{\frac{1}{2}}\mathcal{P}\nabla_{x}\cdot(\mathcal{P}u^{\epsilon} \otimes\mathcal{P}u^{\epsilon})=R_{\epsilon,u},\] where \[R_{\epsilon,u}:=-T^{\frac{1}{2}}\mathcal{P}\nabla_{x}\cdot(\mathcal{P}^{\perp }u^{\epsilon}\otimes\mathcal{P}u^{\epsilon})-T^{\frac{1}{2}}\mathcal{P}\nabla _{x}\cdot(\mathcal{P}u^{\epsilon}\otimes\mathcal{P}^{\perp}u^{\epsilon})-T^{ \frac{1}{2}}\mathcal{P}\nabla_{x}\cdot(\mathcal{P}^{\perp}u^{\epsilon}\otimes \mathcal{P}^{\perp}u^{\epsilon})-T^{\frac{1}{2}}\frac{3}{m_{2}}\mathcal{P} \nabla_{x}\cdot\langle\hat{B},R_{\epsilon}\rangle.\] For any \(T_{*}>0\), and any test function \(\psi(t,x)\in C^{1}(0,T_{*};C_{c}^{\infty}(\mathbb{R}^{3}))\) with \(\nabla_{x}\cdot\psi=0,\psi(T_{*},x)=0\), we first have \[\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}\partial_{t}\mathcal{P}u^{\epsilon}\cdot \psi\mathrm{d}t\mathrm{d}x=-\int_{\mathbb{R}^{3}}\mathcal{P}u^{\epsilon}(0,x) \cdot\psi(0,x)\mathrm{d}x-\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}\mathcal{P}u^{ \epsilon}\cdot\partial_{t}\psi\mathrm{d}t\mathrm{d}x.\] By convergence of the initial data (2.37), \[\int_{\mathbb{R}^{3}}\mathcal{P}u^{\epsilon}(0,x)\cdot\psi(0,x)\mathrm{d}x \rightarrow\int_{\mathbb{R}^{3}}\mathcal{P}u_{0}\cdot\psi(0,x)\mathrm{d}x.\] By the convergence (5.41), \[\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}\mathcal{P}u^{\epsilon}\cdot\partial_{t} \psi\mathrm{d}t\mathrm{d}x\rightarrow\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}u \cdot\partial_{t}\psi\mathrm{d}t\mathrm{d}x.\] As a result, we get \[\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}\partial_{t}\mathcal{P}u^{\epsilon}\cdot \psi\mathrm{d}t\mathrm{d}x\to-\int_{\mathbb{R}^{3}}\mathcal{P}u_{0}\cdot\psi(0,x )\mathrm{d}x-\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}u\cdot\partial_{t}\psi \mathrm{d}t\mathrm{d}x. \tag{5.50}\] Recalling (5.17), (5.41), and (5.42), \[u^{\epsilon}\to u\text{ and }\mathcal{P}^{\perp}u^{\epsilon}\to 0\text{ weakly-* in }L^{\infty}(\mathbb{R}_{+};H_{x}^{2}),\quad\mathcal{P}u^{\epsilon}\to u \text{ strongly in }C(\mathbb{R}_{+};H_{x}^{1}). 
\tag{5.51}\] From this, we first have \[\Delta_{x}\mathcal{P}u^{\epsilon}\to\Delta_{x}u\text{ in the sense of distributions.} \tag{5.52}\] It is easy to see \[\|\mathcal{P}\nabla_{x}\cdot(\mathcal{P}u^{\epsilon}\otimes\mathcal{P}u^{\epsilon})-\mathcal{P}\nabla_{x}\cdot(u\otimes u)\|_{L_{x}^{2}}\lesssim\|\mathcal{P}u^{\epsilon}-u\|_{H_{x}^{1}}\|\mathcal{P}u^{\epsilon}\|_{H_{x}^{2}}+\|u\|_{H_{x}^{2}}\|\mathcal{P}u^{\epsilon}-u\|_{H_{x}^{1}},\] which gives \[\|\mathcal{P}\nabla_{x}\cdot(\mathcal{P}u^{\epsilon}\otimes\mathcal{P}u^{\epsilon})-\mathcal{P}\nabla_{x}\cdot(u\otimes u)\|_{C(\mathbb{R}_{+};L_{x}^{2})}\leq\|\mathcal{P}u^{\epsilon}-u\|_{C(\mathbb{R}_{+};H_{x}^{1})}(\|u^{\epsilon}\|_{L^{\infty}(\mathbb{R}_{+};H_{x}^{2})}+\|u\|_{L^{\infty}(\mathbb{R}_{+};H_{x}^{2})}).\] Then we have \[\mathcal{P}\nabla_{x}\cdot(\mathcal{P}u^{\epsilon}\otimes\mathcal{P}u^{\epsilon})\to\mathcal{P}\nabla_{x}\cdot(u\otimes u)\text{ strongly in }C(\mathbb{R}_{+};L_{x}^{2}). \tag{5.53}\] By (5.51), it is standard to derive \[\mathcal{P}\nabla_{x}\cdot(\mathcal{P}^{\perp}u^{\epsilon}\otimes\mathcal{P}u^{\epsilon}+\mathcal{P}u^{\epsilon}\otimes\mathcal{P}^{\perp}u^{\epsilon})\to 0\text{ in the distributional sense.} \tag{5.54}\] First note that \[\|\mathcal{P}\nabla_{x}\cdot(\mathcal{P}^{\perp}u^{\epsilon}\otimes\mathcal{P}u^{\epsilon}+\mathcal{P}u^{\epsilon}\otimes\mathcal{P}^{\perp}u^{\epsilon})\|_{L_{x}^{2}}\leq\|\mathcal{P}^{\perp}u^{\epsilon}\otimes\mathcal{P}u^{\epsilon}+\mathcal{P}u^{\epsilon}\otimes\mathcal{P}^{\perp}u^{\epsilon}\|_{H_{x}^{1}}\lesssim\|u^{\epsilon}\|_{H_{x}^{1}}\|u^{\epsilon}\|_{H_{x}^{2}}.\] Then \(\mathcal{P}\nabla_{x}\cdot(\mathcal{P}^{\perp}u^{\epsilon}\otimes\mathcal{P}u^{\epsilon}+\mathcal{P}u^{\epsilon}\otimes\mathcal{P}^{\perp}u^{\epsilon})\in L^{\infty}(\mathbb{R}_{+};L_{x}^{2})\). Note that \[\|\mathcal{P}^{\perp}u^{\epsilon}\otimes(\mathcal{P}u^{\epsilon}-u)\|_{C(\mathbb{R}_{+};H_{x}^{1})}\lesssim\|\mathcal{P}^{\perp}u^{\epsilon}\|_{L^{\infty}(\mathbb{R}_{+};H_{x}^{2})}\|\mathcal{P}u^{\epsilon}-u\|_{C(\mathbb{R}_{+};H_{x}^{1})}.\] From this together with (5.51), we have \[\mathcal{P}^{\perp}u^{\epsilon}\otimes(\mathcal{P}u^{\epsilon}-u)\to 0\text{ strongly in }C(\mathbb{R}_{+};H_{x}^{1}). \tag{5.55}\] From \(u\in L^{\infty}(\mathbb{R}_{+};H_{x}^{2})\) together with (5.51), we have \[\mathcal{P}^{\perp}u^{\epsilon}\otimes u\to 0\text{ weakly-* in }L^{\infty}(\mathbb{R}_{+};H_{x}^{1}). \tag{5.56}\] By (5.55) and (5.56), we have \[\mathcal{P}^{\perp}u^{\epsilon}\otimes\mathcal{P}u^{\epsilon}\to 0\text{ weakly-* in }L^{\infty}(\mathbb{R}_{+};H_{x}^{1}).\] Similarly, we can derive \[\mathcal{P}u^{\epsilon}\otimes\mathcal{P}^{\perp}u^{\epsilon}\to 0\text{ weakly-* in }L^{\infty}(\mathbb{R}_{+};H_{x}^{1}).\] The previous two results give (5.54).
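The remaining term is handled with the observation that \(\mathcal{P}^{\perp}u^{\epsilon}=\nabla_{x}\Delta_{x}^{-1}\nabla_{x}\cdot u^{\epsilon}\) is a gradient field: for any \(a=\nabla_{x}\varphi\), one has \((a\cdot\nabla_{x})a=\frac{1}{2}\nabla_{x}(|a|^{2})\), and hence \[\nabla_{x}\cdot(a\otimes a)=(a\cdot\nabla_{x})a+(\nabla_{x}\cdot a)a=\frac{1}{2}\nabla_{x}(|a|^{2})+(\nabla_{x}\cdot a)a.\]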
We now derive \[\mathcal{P}\nabla_{x}\cdot(\mathcal{P}^{\perp}u^{\epsilon}\otimes\mathcal{P}^{\perp}u^{\epsilon})\to 0\text{ in the distributional sense.} \tag{5.57}\] Applying \(\mathcal{P}^{\perp}\) to (5.28), we get \[\partial_{t}\mathcal{P}^{\perp}u^{\epsilon}+\frac{T^{\frac{1}{2}}}{\epsilon}\nabla_{x}(\rho^{\epsilon}+\theta^{\epsilon})+\frac{T^{\frac{1}{2}}}{\epsilon}\frac{3}{m_{2}}\mathcal{P}^{\perp}\nabla_{x}\cdot\langle B,f_{2}^{\epsilon}\rangle=0.\] Adding (5.27) and (5.29), we get \[\partial_{t}(\rho^{\epsilon}+\theta^{\epsilon})+\frac{T^{\frac{1}{2}}}{\epsilon}\frac{2}{3}K_{A}\nabla_{x}\cdot\mathcal{P}^{\perp}u^{\epsilon}+\frac{T^{\frac{1}{2}}}{\epsilon}\frac{K_{A}}{C_{A}}\frac{2K_{A}m_{0}-m_{2}}{m_{2}}\nabla_{x}\cdot\langle A,f_{2}^{\epsilon}\rangle=0.\] For simplicity, let \[\mathcal{F}_{\epsilon}:=T^{\frac{1}{2}}\frac{K_{A}}{C_{A}}\frac{2K_{A}m_{0}-m_{2}}{m_{2}}\nabla_{x}\cdot\langle f_{2}^{\epsilon},A\rangle,\quad\mathcal{G}_{\epsilon}:=T^{\frac{1}{2}}\frac{3}{m_{2}}\mathcal{P}^{\perp}\nabla_{x}\cdot\langle B,f_{2}^{\epsilon}\rangle.\] Then we get \[\epsilon\partial_{t}[(\rho^{\epsilon}+\theta^{\epsilon})\mathcal{P}^{\perp}u^{\epsilon}]=-T^{\frac{1}{2}}\frac{2}{3}K_{A}(\nabla_{x}\cdot\mathcal{P}^{\perp}u^{\epsilon})\mathcal{P}^{\perp}u^{\epsilon}-\mathcal{F}_{\epsilon}\mathcal{P}^{\perp}u^{\epsilon}-\frac{1}{2}T^{\frac{1}{2}}\nabla_{x}(\rho^{\epsilon}+\theta^{\epsilon})^{2}-(\rho^{\epsilon}+\theta^{\epsilon})\mathcal{G}_{\epsilon}.\] Using \[\nabla_{x}\cdot(\mathcal{P}^{\perp}u^{\epsilon}\otimes\mathcal{P}^{\perp}u^{\epsilon})=\frac{1}{2}\nabla_{x}(|\mathcal{P}^{\perp}u^{\epsilon}|^{2})+(\nabla_{x}\cdot\mathcal{P}^{\perp}u^{\epsilon})\mathcal{P}^{\perp}u^{\epsilon},\] and taking the projection \(\mathcal{P}\) (which annihilates the gradient terms), we get \[T^{\frac{1}{2}}\frac{2}{3}K_{A}\mathcal{P}\nabla_{x}\cdot(\mathcal{P}^{\perp}u^{\epsilon}\otimes\mathcal{P}^{\perp}u^{\epsilon})=-\epsilon\partial_{t}\mathcal{P}[(\rho^{\epsilon}+\theta^{\epsilon})\mathcal{P}^{\perp}u^{\epsilon}]-\mathcal{P}[\mathcal{F}_{\epsilon}\mathcal{P}^{\perp}u^{\epsilon}+(\rho^{\epsilon}+\theta^{\epsilon})\mathcal{G}_{\epsilon}].\] By (5.13) and (5.12), \[\mathcal{F}_{\epsilon},\mathcal{G}_{\epsilon}\to 0\text{ strongly in }L^{2}(\mathbb{R}_{+};H_{x}^{1}),\quad(\rho^{\epsilon},u^{\epsilon},\theta^{\epsilon})\in L^{\infty}(\mathbb{R}_{+};H_{x}^{2}).\] Then we arrive at (5.57). From (5.54), (5.57), and (5.44), we obtain \[R_{\epsilon,u}\to 0\text{ in the distributional sense.} \tag{5.58}\] By (5.50), (5.52), (5.53) and (5.58), it holds that \[-\int_{\mathbb{R}^{3}}\mathcal{P}u_{0}\cdot\psi(0,x)\mathrm{d}x-\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}u\cdot\partial_{t}\psi\mathrm{d}t\mathrm{d}x=\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}(-T^{\frac{1}{2}}\mathcal{P}(u\cdot\nabla_{x}u)+\mu_{\lambda,T}\Delta_{x}u)\cdot\psi\mathrm{d}t\mathrm{d}x.
\tag{5.59}\] That is, \(u\) is a weak solution of \((1.38)_{2}\). An analogous argument based on (5.34), now using (5.45) and (5.48) to evaluate \(\langle\hat{A},\tilde{\mathcal{L}}^{\lambda,T}f^{\epsilon}\rangle\), shows that \(\theta\) satisfies \((1.38)_{3}\); the initial conditions \((1.38)_{4}\) follow from the convergence of the initial data (2.37). This completes the proof of Theorem 2.2. In the remainder of the paper, for \(R\in O_{3}\) we write \((T_{R}f)(v):=f(Rv)\). The starting point is the rotational invariance of the linearized operator. **Lemma 6.1**.: _For any \(R\in O_{3}\), \(\tilde{\mathcal{L}}^{\lambda,T}T_{R}=T_{R}\tilde{\mathcal{L}}^{\lambda,T}\)._ As a result of Lemma 6.1, for any \(R\in O_{3}\), \[f\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}\quad\Leftrightarrow\quad T_{R}f\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}. \tag{6.1}\] For any \(g\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}\), the following problem has at most one solution: \[\tilde{\mathcal{L}}^{\lambda,T}f=g,\quad f\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}. \tag{6.2}\] We recall two elementary results in linear algebra. **Lemma 6.2**.: _Let \(f:\mathbb{R}^{3}\to\mathbb{R}^{3}\). Suppose that for any \(R\in O_{3}\),_ \[f\circ R=R\circ f.\] _Then there exists a radial function \(\alpha=\alpha(|v|):\mathbb{R}^{3}\to\mathbb{R}\) such that_ \[f(v)=\alpha(|v|)v.\] **Lemma 6.3**.: _Let \(f:\mathbb{R}^{3}\to\mathbb{R}^{3\times 3}\). Suppose that for any \(R\in O_{3}\),_ \[f\circ R=RfR^{-1}\] _as functions \(\mathbb{R}^{3}\to\mathbb{R}^{3\times 3}\). Here the right-hand side is interpreted as matrix multiplication. Moreover, for any \(v\in\mathbb{R}^{3}\),_ \[\text{the matrix $f(v)$ is symmetric and traceless}. \tag{6.3}\] _Then there exists a radial function \(\beta=\beta(|v|):\mathbb{R}^{3}\to\mathbb{R}\) such that_ \[f(v)=\beta(|v|)(v\otimes v-\frac{|v|^{2}}{3}I_{3}).\] Now we are ready to establish the following result. **Theorem 6.1**.: _Recall (5.8) for \(A,B\). Let \(\hat{A},\hat{B}\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}\) be solutions to_ \[\tilde{\mathcal{L}}^{\lambda,T}(\hat{A})=A,\quad\tilde{\mathcal{L}}^{\lambda,T}(\hat{B})=B.
\tag{6.4}\] _Then \(\hat{A},\hat{B}\) must be of the following form:_ \[\hat{A}=\alpha_{\lambda,T}(|v|)A,\quad\hat{B}=\beta_{\lambda,T}(|v|)B, \tag{6.5}\] _for some radial functions \(\alpha_{\lambda,T}(|v|),\beta_{\lambda,T}(|v|)\) depending on the operator \(\tilde{\mathcal{L}}^{\lambda,T}\)._ Proof.: We claim that for any \(R\in O_{3}\), \[\hat{A}\circ R=R\circ\hat{A}, \tag{6.6}\] as functions \(\mathbb{R}^{3}\to\mathbb{R}^{3}\). Then by Lemma 6.2, we conclude the existence of \(\alpha_{\lambda,T}(|v|)\). For any function \(f:\mathbb{R}^{3}\to\mathbb{R}^{3}\), \(f\circ R=T_{R}f\). Here \(R\circ f\) is just a linear combination of the components of \(f\). Note that \[\tilde{\mathcal{L}}^{\lambda,T}(\hat{A}\circ R)=\tilde{\mathcal{L}}^{\lambda,T}T_{R}\hat{A}=T_{R}\tilde{\mathcal{L}}^{\lambda,T}\hat{A}=T_{R}A=A\circ R,\] \[\tilde{\mathcal{L}}^{\lambda,T}(R\circ\hat{A})=R\circ\tilde{\mathcal{L}}^{\lambda,T}\hat{A}=R\circ A.\] The first line uses Lemma 6.1, while the second uses the linearity of \(\tilde{\mathcal{L}}^{\lambda,T}\). It is easy to check \[R\circ A=A\circ R,\quad\hat{A}\circ R\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp},\quad R\circ\hat{A}\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}.\] The first one is obvious. The second one is given by (6.1). For the third one, note that each element of \(R\circ\hat{A}\) is just a linear combination of the \(\hat{A}_{i}\). By uniqueness of problem (6.2), we get (6.6). We claim that for any \(R\in O_{3}\), \[\hat{B}\circ R=R\hat{B}R^{-1} \tag{6.7}\] as functions \(\mathbb{R}^{3}\to\mathbb{R}^{3\times 3}\). Moreover, for any \(v\in\mathbb{R}^{3}\), \[\hat{B}(v)\text{ is symmetric and traceless}. \tag{6.8}\] Then by Lemma 6.3, we conclude the existence of \(\beta_{\lambda,T}(|v|)\). For any function \(f:\mathbb{R}^{3}\to\mathbb{R}^{3\times 3}\), \(f\circ R=T_{R}f\). Here each entry of \(RfR^{-1}\) is just a linear combination of the entries of \(f\). Note that \[\tilde{\mathcal{L}}^{\lambda,T}(\hat{B}\circ R)=\tilde{\mathcal{L}}^{\lambda,T}T_{R}\hat{B}=T_{R}\tilde{\mathcal{L}}^{\lambda,T}\hat{B}=T_{R}B=B\circ R,\] \[\tilde{\mathcal{L}}^{\lambda,T}(R\hat{B}R^{-1})=R(\tilde{\mathcal{L}}^{\lambda,T}\hat{B})R^{-1}=RBR^{-1}.\] It is easy to check \[B\circ R=RBR^{-1},\quad\hat{B}\circ R\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp},\quad R\hat{B}R^{-1}\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}.\] For the first one, note that \[(B\circ R)(v)=B(Rv)=Rv\otimes Rv-\frac{|v|^{2}}{3}I_{3}=Rv\otimes vR^{-1}-\frac{|v|^{2}}{3}RI_{3}R^{-1}=RB(v)R^{-1}.\] The second one is given by (6.1). For the third one, each element of \(R\hat{B}R^{-1}\) is just a linear combination of the \(\hat{B}_{ij}\). By uniqueness of problem (6.2), we get (6.7). As \(B(v)\) is symmetric and traceless, we have \[\tilde{\mathcal{L}}^{\lambda,T}(\hat{B}-\hat{B}^{\mathrm{T}})=B-B^{\mathrm{T}}=0,\quad\hat{B}-\hat{B}^{\mathrm{T}}\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp},\] \[\tilde{\mathcal{L}}^{\lambda,T}(\mathrm{Tr}(\hat{B}))=\mathrm{Tr}(B)=0,\quad\mathrm{Tr}(\hat{B})\in(\ker\tilde{\mathcal{L}}^{\lambda,T})^{\perp}.\] Uniqueness of problem (6.2) gives \[\hat{B}-\hat{B}^{\mathrm{T}}=0,\quad\mathrm{Tr}(\hat{B})=0.\] That is, we get (6.8). **Acknowledgments.** This work was partially supported by National Key Research and Development Program of China under the grant 2021YFA1002100. Ling-Bing He was supported by NSF of China under the grant 12141102.
Ning Jiang was supported by NSF of China under the grants 11971360 and 11731008, and also supported by the Strategic Priority Research Program of Chinese Academy of Sciences under the grant XDA25010404. Yu-Long Zhou was partially supported by NSF of China under the grant 12001552, Science and Technology Projects in Guangzhou under the grant 202201011144, and Youth Talent Support Program of Guangdong Provincial Association for Science and Technology under the grant SKXRC202311.
2306.07945
Stability analysis of an extended quadrature method of moments for kinetic equations
This paper performs a stability analysis of a class of moment closure systems derived with an extended quadrature method of moments (EQMOM) for the one-dimensional BGK equation. The class is characterized with a kernel function. A sufficient condition on the kernel is identified for the EQMOM-derived moment systems to be strictly hyperbolic. We also investigate the realizability of the moment method. Moreover, sufficient and necessary conditions are established for the two-node systems to be well-defined and strictly hyperbolic, and to preserve the dissipation property of the kinetic equation.
Ruixi Zhang, Qian Huang, Wen-An Yong
2023-06-13T17:41:11Z
http://arxiv.org/abs/2306.07945v1
# Stability analysis of an extended quadrature method of moments for kinetic equations ###### Abstract. This paper performs a stability analysis of a class of moment closure systems derived with an extended quadrature method of moments (EQMOM) for the one-dimensional BGK equation. The class is characterized with a kernel function. A sufficient condition on the kernel is identified for the EQMOM-derived moment systems to be strictly hyperbolic. We also investigate the realizability of the moment method. Moreover, sufficient and necessary conditions are established for the two-node systems to be well-defined and strictly hyperbolic, and to preserve the dissipation property of the kinetic equation. Key words and phrases: kinetic equation, extended quadrature method of moments, BGK model, hyperbolicity, structural stability condition * Corresponding author
Define the \(j\)th velocity moments of \(f\) as
\[M_{j}=M_{j}(t,x)=\int_{\mathbb{R}}\xi^{j}fd\xi\]
for \(j\in\mathbb{N}\). The evolution equation for \(M_{j}\) can be immediately derived from (2.1) as
\[\partial_{t}M_{j}+\partial_{x}M_{j+1}=\frac{1}{\tau}\left(\rho\Delta_{j}^{eq}(U,\sqrt{\theta})-M_{j}\right) \tag{2.2}\]
with
\[\Delta_{j}^{eq}(u,\sigma)=\int_{\mathbb{R}}\xi^{j}\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(\xi-u)^{2}}{2\sigma^{2}}\right)d\xi.\]
Notice that the first \(n\) equations, for \(M_{0},\ldots,M_{n-1}\), contain the term \(\partial_{x}M_{n}\). Therefore, any finite truncation of the above equations leads to an unclosed system, and a closure procedure is required. In this paper, we are concerned with an extended quadrature method of moments (EQMOM) [4, 27].

### Extended quadrature method of moments (EQMOM)

Let \(\mathcal{K}=\mathcal{K}(\xi)\geq 0\) satisfy
\[\mathfrak{m}_{j}:=\int_{\mathbb{R}}\xi^{j}\mathcal{K}(\xi)d\xi<\infty,\ \forall j\in\mathbb{N},\quad\text{and}\ \mathfrak{m}_{0}=1.
\tag{2.3}\]
In EQMOM, the distribution \(f\) is approximated with the following ansatz
\[f(\xi)=\sum_{i=1}^{n}\frac{w_{i}}{\sigma}\mathcal{K}\left(\frac{\xi-u_{i}}{\sigma}\right) \tag{2.4}\]
with the weights \(w_{i}\), nodes \(u_{i}\) and 'width' \(\sigma>0\) to be determined. To do this, the first \((2n+1)\) lower-order moments are employed:
\[M_{j}=\sum_{i=1}^{n}w_{i}\Delta_{j}(u_{i},\sigma),\quad\text{with}\ \Delta_{j}(u,\sigma):=\int_{\mathbb{R}}\xi^{j}\frac{1}{\sigma}\mathcal{K}\left(\frac{\xi-u}{\sigma}\right)d\xi \tag{2.5}\]
for \(j=0,\ldots,2n\). This defines a map \(M:=(M_{0},...,M_{2n})^{T}=\mathcal{M}(W)\) for \(W=(w_{1},u_{1},...,w_{n},u_{n},\sigma)^{T}\in\mathbb{R}^{2n+1}\) with \(\sigma>0\). Here the superscript '\(T\)' denotes the transpose of a vector or matrix. Suppose \(\mathcal{M}\) is injective on a certain domain \(W\in\Omega\subset\mathbb{R}^{2n+1}\). Then for any \(M\in\mathbb{G}:=\mathcal{M}(\Omega)\), there exists a unique \(W=\mathcal{M}^{-1}(M)\) solving (2.5). In this way, the EQMOM is _well-defined_ and the next moment \(M_{2n+1}\) can be evaluated as a function of \(M\):
\[M_{2n+1}=\mathcal{M}_{2n+1}(M):=\sum_{i=1}^{n}w_{i}\Delta_{2n+1}(u_{i},\sigma). \tag{2.6}\]
Consequently, the first \((2n+1)\) equations in (2.2) are closed as a system of first-order PDEs:
\[\partial_{t}M+\partial_{x}(M_{1},...,M_{2n+1})^{T}=\frac{1}{\tau}\left(\rho\Delta^{eq}(U,\sqrt{\theta})-M\right). \tag{2.7}\]
Here \(\Delta^{eq}(U,\sqrt{\theta})=\left(\Delta_{0}^{eq}(U,\sqrt{\theta}),\ldots,\Delta_{2n}^{eq}(U,\sqrt{\theta})\right)^{T}\in\mathbb{R}^{2n+1}\) with \(\rho=M_{0}\), \(\rho U=M_{1}\) and \(\rho(\theta+U^{2})=M_{2}\). The main goal of this paper is to investigate the injectivity of the map \(\mathcal{M}\) in (2.5) for the general kernel \(\mathcal{K}(\xi)\), where the injectivity is closely related to the realizability of moments. Such analyses are useful to design efficient and robust algorithms to solve \(W\) from (2.5). Moreover, we analyze hyperbolicity of the moment closure system (2.7) and its dissipation property inherited from the \(H\)-theorem of the kinetic equation.

**Remark 2.1**.: For the sake of simplicity, throughout this paper we assume that the kernel function \(\mathcal{K}(\xi)\) is _normalized_, in the sense that \(\mathfrak{m}_{0}=1\), \(\mathfrak{m}_{1}=0\) and \(\mathfrak{m}_{2}=1\). In fact, a general \(\mathcal{K}(\xi)\) can be normalized as \(\mathcal{K}^{+}(\xi)=h\mathcal{K}(h\xi+\xi_{0})\) with \(\xi_{0}=\mathfrak{m}_{1}\) and \(h=\sqrt{\mathfrak{m}_{2}-\mathfrak{m}_{1}^{2}}\) (note that \(\mathfrak{m}_{2}\geq\mathfrak{m}_{1}^{2}\) due to the Cauchy-Schwarz inequality). Moreover, the map \(\mathcal{M}_{2n+1}(M)\) in (2.6) derived from \(\mathcal{K}^{+}(\xi)\) is the same as that derived from \(\mathcal{K}(\xi)\), so the moment closure system (2.7) is unchanged by the normalization.

### Structural stability condition

For smooth solutions, the balance laws (2.7) can be written as
\[\partial_{t}M+A(M)\partial_{x}M=S(M):=\frac{1}{\tau}\left(\rho\Delta^{eq}(U,\sqrt{\theta})-M\right) \tag{2.8}\]
with coefficient matrix
\[A(M)=\begin{bmatrix}0&1\\ &0&1\\ &&\ddots&\ddots\\ &&&0&1\\ a_{0}&a_{1}&\cdots&a_{2n-1}&a_{2n}\end{bmatrix}, \tag{2.9}\]
where \(a_{j}(M)=\partial\mathcal{M}_{2n+1}/\partial M_{j}\) for \(j=0,\ldots,2n\). It is called _hyperbolic_ if \(A(M)\in\mathbb{R}^{(2n+1)\times(2n+1)}\) has \((2n+1)\) linearly-independent real eigenvectors [22]. If \(A(M)\) has \((2n+1)\) distinct real eigenvalues, it is called _strictly hyperbolic_.
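Whether a given closure is strictly hyperbolic can be probed numerically: \(A(M)\) in (2.9) is a companion-type matrix determined entirely by the coefficients \(a_{0},\ldots,a_{2n}\), so it suffices to inspect its eigenvalues. A minimal Python sketch (the coefficient values at the end are placeholders chosen for illustration only, not taken from any particular closure):

```python
import numpy as np

def companion_matrix(a):
    """Assemble the (2n+1)x(2n+1) matrix A(M) of (2.9) from a = (a_0, ..., a_2n)."""
    N = len(a)                                   # N = 2n + 1
    A = np.zeros((N, N))
    A[np.arange(N - 1), np.arange(1, N)] = 1.0   # superdiagonal of ones
    A[-1, :] = a                                 # last row carries the closure
    return A

def is_strictly_hyperbolic(a, tol=1e-10):
    """True iff A(M) has 2n+1 distinct real eigenvalues."""
    lam = np.linalg.eigvals(companion_matrix(a))
    if np.max(np.abs(lam.imag)) > tol:           # a complex pair: not hyperbolic
        return False
    lam = np.sort(lam.real)
    return bool(np.min(np.diff(lam)) > tol)      # all eigenvalues distinct

# placeholder coefficients for n = 2, whose characteristic polynomial is
# u^5 - 10u^3 + 15u; its five real roots 0, +-sqrt(5 +- sqrt(10)) are distinct
print(is_strictly_hyperbolic([0.0, -15.0, 0.0, 10.0, 0.0]))  # True
```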
Obviously, strict hyperbolicity implies hyperbolicity. The dissipativeness of the moment system can be characterized with the structural stability condition proposed in [25] for hyperbolic relaxation systems. Assume that the equilibrium manifold \(\mathcal{E}=\{M\in\mathbb{G}\mid S(M)=0\}\) is not empty. Denote by \(S_{M}(M)\) the Jacobian matrix of \(S(M)\). The structural stability condition reads as

(I) For any \(M\in\mathcal{E}\), there exist invertible matrices \(P=P(M)\in\mathbb{R}^{(2n+1)\times(2n+1)}\) and \(\hat{T}=\hat{T}(M)\in\mathbb{R}^{r\times r}\) \((0<r\leq 2n+1)\) such that
\[PS_{M}(M)P^{-1}=\operatorname{diag}(\mathbf{0}_{(2n+1-r)\times(2n+1-r)},\hat{T}).\]

(II) For any \(M\in\mathbb{G}\), there exists a positive definite symmetric matrix \(A_{0}=A_{0}(M)\) such that \(A_{0}A(M)=A^{T}(M)A_{0}\).

(III) For any \(M\in\mathcal{E}\), the coefficient matrix and the source are coupled as
\[A_{0}S_{M}(M)+S_{M}^{T}(M)A_{0}\leq-P^{T}\begin{bmatrix}0&0\\ 0&I_{r}\end{bmatrix}P.\]

Here \(I_{r}\) is the unit matrix of order \(r\).

**Remark 2.2**.: Recently, it has been demonstrated that several moment models from the kinetic equations respect the structural stability condition, including the Gaussian-EQMOM [13] and the hyperbolic regularization models [8, 28]. For the 1-D system (2.8), Condition (II) is satisfied if and only if the system is hyperbolic [13]. Condition (III) can be regarded as a proper manifestation of the dissipation property inherited from the kinetic model. See detailed discussions in [26].

### Main results

Our main results are collected in this subsection. As mentioned in Remark 2.1, we assume that the kernel \(\mathcal{K}(\xi)\) is normalized, that is, \(\mathfrak{m}_{0}=\mathfrak{m}_{2}=1\) and \(\mathfrak{m}_{1}=0\). To state the results, we recursively define a sequence of numbers associated with \(\{\mathfrak{m}_{j}\}\):
\[b_{0}=1,\quad b_{j}=-\sum_{k=1}^{j}\frac{\mathfrak{m}_{k}}{k!}b_{j-k}\quad\text{for }j=1,2,\ldots \tag{2.10}\]
and a number of auxiliary moments associated with \(M=(M_{0},...,M_{2n})^{T}\in\mathbb{R}^{2n+1}\):
\[M^{*}=\{M_{j}^{*}\}_{j=0}^{2n},\quad M_{j}^{*}=M_{j}^{*}(\sigma)=\sum_{k=0}^{j}b_{k}\sigma^{k}\frac{j!}{(j-k)!}M_{j-k}. \tag{2.11}\]
Moreover, for \(M=(M_{0},...,M_{2n})^{T}\) we introduce the Hankel matrix [21] as
\[H_{k}(M)=\begin{bmatrix}M_{0}&M_{1}&\cdots&M_{k}\\ M_{1}&M_{2}&\cdots&M_{k+1}\\ \vdots&\vdots&\ddots&\vdots\\ M_{k}&M_{k+1}&\cdots&M_{2k}\end{bmatrix}\in\mathbb{R}^{(k+1)\times(k+1)}\quad\text{for }k\leq n. \tag{2.12}\]
This is a real symmetric matrix. Our first result is

**Theorem 2.3**.: _Given \(M=(M_{0},\ldots,M_{2n})^{T}\in\mathbb{R}^{2n+1}\), set_
\[P_{n}(\sigma;M)=\det H_{n}(M^{*}(\sigma)). \tag{2.13}\]
_The following statements are equivalent._

_(i). There exists a unique \(W\in\Omega=\Omega^{\prime}\times\{\sigma>0\}\) with_
\[\Omega^{\prime}=\{(w_{1},u_{1},...,w_{n},u_{n})\in\mathbb{R}^{2n}\mid\ w_{i}>0,\ \forall i;\ u_{1}<\cdots<u_{n}\}, \tag{2.14}\]
_such that \(\mathcal{M}(W)=M\)._

_(ii). \(P_{n}(\sigma;M)=0\) has a unique positive root \(\sigma_{0}\) such that the Hankel matrix \(H_{n-1}(M^{*}(\sigma_{0}))\) is positive definite._

**Remark 2.4**.: Clearly, a similar conclusion can be formulated when some of the weights \(w_{i}\) are zero or the centers \(u_{i}\) coincide. In that case, one can find a unique index \(k\leq n\) such that \(w_{i}>0\) for any \(i=1,\ldots,k\) and all the \(u_{i}\)'s are distinct.

**Remark 2.5**.: Statement (ii) serves as an implicit realizable condition for \(M\); a rough numerical rendering of this test is sketched below.
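As an illustration only (not part of the original algorithmic development), statement (ii) can be probed numerically: build \(M^{*}(\sigma)\) from (2.10)-(2.11), locate sign changes of \(P_{n}(\sigma;M)=\det H_{n}(M^{*}(\sigma))\) on a \(\sigma\)-grid, and test positive definiteness of \(H_{n-1}\). A hedged Python sketch, assuming the kernel moments \(\mathfrak{m}_{k}\) are given (a crude grid scan stands in for a proper root finder):

```python
import math
import numpy as np

def b_coeffs(m, N):
    """Recursion (2.10): b_0 = 1, b_j = -sum_{k=1}^{j} (m_k / k!) b_{j-k}."""
    b = [1.0]
    for j in range(1, N + 1):
        b.append(-sum(m[k] / math.factorial(k) * b[j - k] for k in range(1, j + 1)))
    return b

def M_star(M, b, sigma):
    """Auxiliary moments (2.11): M*_j = sum_k b_k sigma^k j!/(j-k)! M_{j-k}."""
    return [sum(b[k] * sigma**k * math.perm(j, k) * M[j - k] for k in range(j + 1))
            for j in range(len(M))]

def hankel(Ms, k):
    """Hankel matrix (2.12) built from the sequence Ms."""
    return np.array([[Ms[i + j] for j in range(k + 1)] for i in range(k + 1)])

def realizable_roots(M, m, sigmas):
    """Scan a sigma grid for roots of P_n at which H_{n-1} is positive definite.
    Theorem 2.3(ii) asks for exactly one such positive root.
    Example call: realizable_roots(M, m, np.linspace(1e-3, 5.0, 2000))."""
    n = (len(M) - 1) // 2
    b = b_coeffs(m, 2 * n)
    P = [np.linalg.det(hankel(M_star(M, b, s), n)) for s in sigmas]
    roots = []
    for s0, s1, p0, p1 in zip(sigmas, sigmas[1:], P, P[1:]):
        if p0 * p1 < 0:                              # sign change brackets a root
            s = 0.5 * (s0 + s1)                      # crude midpoint estimate
            if np.all(np.linalg.eigvalsh(hankel(M_star(M, b, s), n - 1)) > 0):
                roots.append(s)
    return roots
```

Here `M` holds \((M_{0},\ldots,M_{2n})\) and `m` the kernel moments \((\mathfrak{m}_{0},\ldots,\mathfrak{m}_{2n})\); once a valid \(\sigma\) is found, the nodes and weights follow from (3.4) below.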
For \(n=2\), this condition is exactly that in [6] for the Gaussian kernel. Moreover, Theorem 2.3 suggests a key step in inverting the map \(M=\mathcal{M}(W)\), namely, finding \(\sigma\) as a root of the polynomial \(P_{n}(\sigma)=\det H_{n}(M^{*}(\sigma))\) of degree \(n(n+1)\). For \(n=2\), the polynomial for even and normalized kernels is \[P_{2}(\sigma)=(5-\mathfrak{m}_{4})\sigma_{1}^{3}+2\theta(3-\mathfrak{m}_{4}) \sigma_{1}^{2}+(M_{4}^{\prime}-\mathfrak{m}_{4}\theta^{2})\sigma_{1}+M_{3}^{ \prime 2}\] with \(\sigma_{1}=\sigma^{2}-\theta\), \[M_{3}^{\prime}=\frac{M_{3}}{M_{0}}-3U\theta-U^{3},\quad M_{4}^{\prime}=\frac{ M_{4}}{M_{0}}-4UM_{3}^{\prime}-6U^{2}\theta-U^{4},\] \(U=M_{1}/M_{0}\) and \(\theta=M_{2}/M_{0}-U^{2}\). Once \(\sigma\) is found, other components of \(W\) can be determined by the existing algorithms [16] (see Remark 3.1). As a corollary of this theorem, we have **Corollary 2.6** (Injectivity).: _For \(n=2\), the map \(M=\mathcal{M}(W)\) in (2.5) is injective for \(W\in\Omega\) if and only if the inequality \(\mathfrak{m}_{4}\geq 3+\frac{9}{8}\mathfrak{m}_{3}^{2}\) holds for \(\mathcal{K}(\xi)\)._ **Remark 2.7**.: For \(n=2\), it is not difficult to see that \(M_{5}\) can be expressed in terms of \(M_{0},...,M_{4}\) when \(u_{1}=u_{2}\). As a consequence, the condition of Corollary 2.6 ensures that the EQMOM is well defined on \(\mathcal{M}(\Omega^{tot})\) with \[\Omega^{tot}=\{W\in\mathbb{R}^{5}|w_{1}>0,\ w_{2}>0,\ u_{1}\leq u_{2},\ \sigma>0\}.\] Suppose the map \(M=\mathcal{M}(W)\) is injective on \(\Omega\) for general \(n\). Our second result is **Theorem 2.8** (Hyperbolicity).: _If the \(b\)-polynomial_ \[p(t):=\sum_{j=0}^{2n+1}b_{2n+1-j}t^{j}\] _has \((2n+1)\) real roots (counting multiplicity) and at least two roots are nonzero, then the \(n\)-node EQMOM moment system (2.7) is strictly hyperbolic on \(\mathcal{M}(\Omega)\)._ Furthermore, for even kernels we have **Theorem 2.9** (Hyperbolicity).: _Let the kernel \(\mathcal{K}(\xi)\) be an even function. The two-node EQMOM moment system (2.7) is strictly hyperbolic for \(M\in\mathcal{M}(\Omega^{tot})\) if and only if \(3\leq\mathfrak{m}_{4}<6\)._ **Theorem 2.10** (Dissipativeness).: _Let the kernel \(\mathcal{K}(\xi)\) be an even function and \(3\leq\mathfrak{m}_{4}<6\). Then the two-node EQMOM moment system (2.7) satisfies the structural stability condition if and only if \(3\leq\mathfrak{m}_{4}<5\)._ In the next section, Theorem 2.3 and Corollary 2.6 will be proved. Section 4.1 is devoted to a proof of Theorem 2.8, while Theorems 2.9 & 2.10 are proved in Sections 4.2 and 5, respectively. ## 3. Injectivity In this section, we prove Theorem 2.3 and Corollary 2.6. To start with, we recall the definition of the map \(M=\mathcal{M}(W)\) in (2.5): \[M_{j}=\sum_{i=1}^{n}w_{i}\Delta_{j}(u_{i},\sigma),\quad\text{with }\Delta_{j}(u, \sigma):=\int_{\mathbb{R}}\xi^{j}\frac{1}{\sigma}\mathcal{K}\left(\frac{\xi-u} {\sigma}\right)d\xi,\] for \(j=0,\dots,2n\). By performing the change of variables \(\frac{\xi-u}{\sigma}\mapsto\xi\), we can easily see that \[\Delta_{j}(u,\sigma)=\sum_{k=0}^{j}\binom{j}{k}\mathfrak{m}_{k}\sigma^{k}u^{j- k}, \tag{3.1}\] indicating that \(\Delta_{j}(u,\sigma)\) is a homogeneous bivariate polynomial of \(u\) and \(\sigma\). It is not difficult to verify \[\partial_{u}\Delta_{j}(u,\sigma) =j\Delta_{j-1}(u,\sigma), \tag{3.2a}\] \[\partial_{\sigma}\Delta_{j}(u,\sigma) =j\sum_{k=0}^{j-1}\binom{j-1}{k}\mathfrak{m}_{k+1}\sigma^{k}u^{j- 1-k}. 
\tag{3.2b}\] Notice that \(M=(M_{0},...,M_{2n})^{T}\) can be conversely expressed in terms of the auxiliary moments \(M_{j}^{*}=M_{j}^{*}(\sigma)\) defined in (2.11) as \[M_{j}=\sum_{k=0}^{j}\binom{j}{k}\mathfrak{m}_{k}\sigma^{k}M_{j-k}^{*}\quad \text{for }j=0,\dots,2n. \tag{3.3}\] Indeed, a straightforward calculation of the right-hand side, incorporating (2.11), yields \[\text{r.h.s.} =\sum_{k=0}^{j}\binom{j}{k}\mathfrak{m}_{k}\sigma^{k}\sum_{l=0}^{ j-k}b_{l}\sigma^{l}\frac{(j-k)!}{(j-k-l)!}M_{j-k-l}\] \[=\sum_{s=0}^{j}\frac{j!}{(j-s)!}\sigma^{s}M_{j-s}\sum_{k=0}^{s} \frac{\mathfrak{m}_{k}}{k!}b_{s-k}.\] The second equality is derived after the change of variables \(s=k+l\). Rewriting (2.10) as \[\sum_{k=0}^{s}\frac{\mathfrak{m}_{k}}{k!}b_{s-k}=0,\quad s\geq 1,\] we obtain \(\text{r.h.s}=M_{j}\). Similarly, with (3.1) involved, a direct calculation of the right-hand side of (2.11) results in \[M_{j}^{*}(\sigma)=\sum_{i=1}^{n}w_{i}u_{i}^{j},\quad j=0,...,2n. \tag{3.4}\] **Remark 3.1**.: Together with Remark 2.5, the last formula suggests a practical method to solve \(W\in\mathbb{R}^{2n+1}\) from the EQMOM map \(M=\mathcal{M}(W)\). Once \(\sigma\) has been determined as shown in Remark 2.5, the \((w_{i},u_{i})\)'s can be determined by solving the first \(2n\) equations in (3.4) with existing algorithms [16]. About the Hankel matrix, we quote the following lemma. **Lemma 3.2** ([21], Theorem 9.7).: _Given \(M^{\prime}=(M_{0},\dots,M_{2n-1})\in\mathbb{R}^{2n}\), if the nonlinear equations_ \[\sum_{i=1}^{n}w_{i}u_{i}^{j}=M_{j}\quad\text{for }j=0,\dots,2n-1\] _have a solution in \(\Omega^{\prime}\) defined in (2.14), then the Hankel matrix \(H_{n-1}(M^{\prime})\) is positive definite. Conversely, if \(H_{n-1}(M^{\prime})\) is positive definite, then the last equations have a unique solution in \(\Omega^{\prime}\)._ To prove Theorem 2.3, we first notice the following fact: **Proposition 3.3**.: _If \(M_{j}^{*}=\sum_{i=1}^{n}w_{i}u_{i}^{j}\) for \(j=0,\dots,2n\), the Hankel matrix \(H_{n}(\{M_{j}^{*}\})\) is singular._ Proof.: A direct calculation gives \[\det\begin{bmatrix}M_{0}^{*}&\cdots&M_{n}^{*}\\ \vdots&&\vdots\\ M_{n}^{*}&\cdots&M_{2n}^{*}\end{bmatrix}=\sum_{1\leq i_{0},\dots,i_{n}\leq n} \det\begin{bmatrix}w_{i_{0}}&\cdots&w_{i_{n}}u_{i_{n}}^{n}\\ \vdots&&\vdots\\ w_{i_{0}}u_{i_{0}}^{n}&\cdots&w_{i_{n}}u_{i_{n}}^{2n}\end{bmatrix}.\] For each determinant in the summation, at least two of the \((n+1)\) indices \(1\leq i_{0},\dots,i_{n}\leq n\) are identical, therefore the determinant is zero. Hence the Hankel matrix is singular. Proof of Theorem 2.3.: (i) \(\Rightarrow\) (ii). In this case, it has been shown in (3.4) that \(M_{j}^{*}(\sigma)\) can be expressed as \(\sum_{i=1}^{n}w_{i}u_{i}^{j}\) for \(j=0,\dots,2n\) with \(w_{i}>0\) and all the \(u_{i}\)'s distinct. Then it follows from Proposition 3.3 that \(P_{n}(\sigma;M)=\det H_{n}(\{M_{j}^{*}(\sigma)\})=0\). Because all \(w_{i}>0\) and the \(u_{i}\)'s are distinct, we deduce from Lemma 3.2 that \(H_{n-1}(\{M_{j}^{*}(\sigma)\})\) is positive definite. For the uniqueness, suppose that \(P_{n}(\sigma;M)\) has another root \(\sigma_{1}>0\) such that \(H_{n-1}(\{M_{j}^{*}(\sigma_{1})\})\) is positive definite. It follows from Lemma 3.2 that there exists a \(2n\)-tuple \(\{q_{i}>0,v_{i}\}_{1\leq i\leq n}\) such that \(v_{1}<\cdots<v_{n}\) and \(M_{j}^{*}(\sigma_{1})=\sum_{i=1}^{n}q_{i}v_{i}^{j}\) for \(j=0,...,2n-1\). Set \(S_{j}=\sum_{i=1}^{n}q_{i}v_{i}^{j}\). 
From \(P_{n}(\sigma_{1};M)=0\) and Proposition 3.3 we see that \(\det H_{n}(\{M_{j}^{*}(\sigma_{1})\})=P_{n}(\sigma_{1};M)=\det H_{n}(\{S_{j}\} )=0\). On the other hand, we observe that \[\det H_{n}(\{M_{j}^{*}(\sigma_{1})\})=aM_{2n}^{*}(\sigma_{1})+b,\quad\det H_{n }(\{S_{j}\})=aS_{2n}+b\] with \[a=\det H_{n-1}(\{M_{j}^{*}\})=\det H_{n-1}(\{S_{j}\})>0\] and \(b\) depending only on \(M_{j}^{*}(\sigma_{1})=S_{j}\) with \(j\leq 2n-1\). Thus, we have \(M_{2n}^{*}(\sigma_{1})=S_{2n}\) and thereby get another solution to the equations \(\mathcal{M}(W^{\prime})=M\), violating the uniqueness in (i). This proves (ii). (ii) \(\Rightarrow\) (i). Assume that \(P_{n}(\sigma_{0};M)=0\) and \(H_{n-1}(\{M_{j}^{*}(\sigma_{0})\})\) is positive definite. The reasoning above shows that there exists a unique \(2n\)-tuple \(\{w_{i}>0,u_{i}\}\) such that \(u_{1}<\cdots<u_{n}\) and \(W=(w_{i},u_{i},\sigma_{0})\) solves \(\mathcal{M}(W)=M\). If \(W_{1}=(q_{i},v_{i},\sigma_{1})\neq W\) is another solution, then \(\sigma_{1}\neq\sigma_{0}\) and the reasoning in (i) shows that \(\sigma_{1}\) is another root of \(P_{n}(\sigma;M)=0\) which contradicts (ii). This completes the proof. Now we are in a position to prove Corollary 2.6. Proof of Corollary 2.6.: Assume \(\mathfrak{m}_{4}\geq 3+\frac{9}{8}\mathfrak{m}_{3}^{2}\) for \(\mathcal{K}(\xi)\). It suffices to show that the Jacobian \(\frac{\partial\mathcal{M}}{\partial W}\) is invertible for \(W\in\Omega\). Recall the explicit expressions of \(\Delta_{j}(u,\sigma)\) in (3.1) and its derivatives in (3.2a) & (3.2b). Using \[M_{j}=w_{1}\Delta_{j}(u_{1},\sigma)+w_{2}\Delta_{j}(u_{2},\sigma),\quad j=0,1, \dots,4,\] we compute the \((j+1)\)th row of the Jacobian \(\frac{\partial\mathcal{M}}{\partial W}\) as \[\left(\Delta_{j}(u_{1},\sigma),jw_{1}\Delta_{j-1}(u_{1},\sigma),\Delta_{j}(u_{ 2},\sigma),jw_{2}\Delta_{j-1}(u_{2},\sigma),\sum_{i=1}^{2}w_{i}\partial_{ \sigma}\Delta_{j}(u_{i},\sigma)\right)\] and, by resorting to MATLAB, \[\det\frac{\partial\mathcal{M}}{\partial W}=w_{1}w_{2}\sigma^{3}(u_{1}-u_{2})^ {4}\left[w_{1}q\left(\frac{u_{1}-u_{2}}{\sigma}\right)+w_{2}q\left(\frac{u_{2} -u_{1}}{\sigma}\right)\right]\] with \(q(x)=x^{2}+3\mathfrak{m}_{3}x+2(\mathfrak{m}_{4}-3)\). Obviously, we have \(q(x)\geq 0\) for any real \(x\) due to \(\mathfrak{m}_{4}\geq 3+\frac{9}{8}\mathfrak{m}_{3}^{2}\), and \(q(x)\) does not have two distinct zeros. Therefore, we have \(\det\frac{\partial\mathcal{M}}{\partial W}>0\) for \(W\in\Omega\). Conversely, we assume \(\mathfrak{m}_{4}<3+\frac{9}{8}\mathfrak{m}_{3}^{2}\) for \(\mathcal{K}(\xi)\) and show that \(\mathcal{M}\) is not injective on \(\Omega\). To do this, we set \(M_{b}=\mathcal{M}(\frac{1}{2},0,\frac{1}{2},0,1)=\{\mathfrak{m}_{j}\}_{j=0}^{4}\) and compute \[M_{0}^{*}=1,\quad M_{1}^{*}=0,\quad M_{2}^{*}(\sigma)=1-\sigma^{2},\] \[M_{3}^{*}(\sigma)=\mathfrak{m}_{3}(1-\sigma^{3}),\quad M_{4}^{*}( \sigma)=\mathfrak{m}_{4}(1-\sigma^{4})-6\sigma^{2}(1-\sigma^{2})\] and \[\frac{P_{2}(\sigma;M_{b})}{(1-\sigma)^{2}}= (\mathfrak{m}_{4}-\mathfrak{m}_{3}^{2}-5)\sigma^{4}+2(\mathfrak{m} _{4}-\mathfrak{m}_{3}^{2}-5)\sigma^{3}+(2\mathfrak{m}_{4}-3\mathfrak{m}_{3}^{2} -6)\sigma^{2}\] \[+2(\mathfrak{m}_{4}-\mathfrak{m}_{3}^{2}-1)\sigma+(\mathfrak{m}_ {4}-\mathfrak{m}_{3}^{2}-1).\] The corresponding \(2\times 2\) Hankel matrix, which we rewrite as \(H_{1}(\sigma;M_{b})\), is positive definite if and only if the polynomial \[\det H_{1}(\sigma;M_{b})=M_{0}^{*}(\sigma)M_{2}^{*}(\sigma)-M_{1}^{*2}(\sigma) =1-\sigma^{2}>0,\] namely, \(0<\sigma<1\). 
On the other hand, we denote \(\tilde{P}_{2}(\sigma;M)=\frac{P_{2}(\sigma;M)}{(1-\sigma)^{2}}\) and notice
\[\tilde{P}_{2}(1;M_{b})=8\mathfrak{m}_{4}-9\mathfrak{m}_{3}^{2}-24<0\quad\text{and}\quad\tilde{P}_{2}(0;M_{b})=\mathfrak{m}_{4}-\mathfrak{m}_{3}^{2}-1=\det H_{2}(M_{b})>0\]
where the second inequality follows from the positivity of \(H_{2}(M_{b})\). By continuity, we may choose \(\epsilon>0\) and \(M\in\mathcal{M}(\Omega^{\prime}\times\{\sigma=1\})\) such that
\[P_{2}(\epsilon;M)>0>P_{2}(1-\epsilon;M)\quad\text{and}\quad\det H_{1}(\sigma;M)>0\]
for \(\sigma\in[\epsilon,1-\epsilon]\). Therefore, \(P_{2}(\sigma;M)\) has a root \(\sigma_{0}\) in \((\epsilon,1-\epsilon)\). Clearly, \(\sigma=1\) is another root of \(P_{2}(\sigma;M)\) such that \(H_{1}(1;M)>0\). By Theorem 2.3, \(\mathcal{M}\) is not injective and hence the proof is complete.

## 4. Hyperbolicity

In this section we prove Theorems 2.8 & 2.9. To this end, it suffices to show that the characteristic polynomial
\[c=c(u;M)=u^{2n+1}-\sum_{j=0}^{2n}a_{j}u^{j} \tag{4.1}\]
of the coefficient matrix \(A(M)\) in (2.9) has distinct real roots.

### A proof of Theorem 2.8

First of all, we follow [13] and introduce an auxiliary polynomial associated with the characteristic polynomial:
\[g=g(u;W)=\Delta_{2n+1}(u,\sigma)-\sum_{j=0}^{2n}a_{j}\Delta_{j}(u,\sigma). \tag{4.2}\]
With \(g\) thus defined, we claim that
\[c(u;M)=\sum_{k=0}^{2n+1}b_{k}\sigma^{k}\partial_{u}^{k}g(u;W). \tag{4.3}\]
Indeed, we recall from (2.11) & (3.4) that
\[\sum_{i=1}^{n}w_{i}u_{i}^{j}=M_{j}^{*}=\sum_{k=0}^{j}b_{k}\sigma^{k}\frac{j!}{(j-k)!}\sum_{i=1}^{n}w_{i}\Delta_{j-k}(u_{i},\sigma).\]
In this identity, we set \(w_{1}=1,w_{2}=\cdots=w_{n}=0\) and derive from (3.2a) that
\[u_{1}^{j}=\sum_{k=0}^{j}b_{k}\sigma^{k}\frac{j!}{(j-k)!}\Delta_{j-k}(u_{1},\sigma)=\sum_{k=0}^{j}b_{k}\sigma^{k}\partial_{u}^{k}\Delta_{j}(u_{1},\sigma).\]
Thus the claim becomes clear. Furthermore, \(g(u;W)\) has the following elegant property for general kernel \(\mathcal{K}(\xi)\).

**Proposition 4.1**.:
\[g(u;W)=(u-u_{1})^{2}\cdots(u-u_{n})^{2}(u-\tilde{u}(W)).\]

Proof.: With the expressions for \(M_{j}\) in (2.5) and \(\Delta_{j}(u,\sigma)\) in (3.1) (\(j=0,\ldots,2n\)), we calculate the \((j+1)\)th row of the Jacobian matrix \(\frac{\partial\mathcal{M}}{\partial W}\in\mathbb{R}^{(2n+1)\times(2n+1)}\) as
\[(\Delta_{j}(u_{1},\sigma),w_{1}\partial_{u}\Delta_{j}(u_{1},\sigma),\ldots,\Delta_{j}(u_{n},\sigma),w_{n}\partial_{u}\Delta_{j}(u_{n},\sigma),\partial_{\sigma}M_{j})\,.\]
Moreover, from (2.6) we compute \(\frac{\partial\mathcal{M}_{2n+1}}{\partial W}\) as
\[(\Delta_{2n+1}(u_{1},\sigma),w_{1}\partial_{u}\Delta_{2n+1}(u_{1},\sigma),\ldots,\Delta_{2n+1}(u_{n},\sigma),w_{n}\partial_{u}\Delta_{2n+1}(u_{n},\sigma),\partial_{\sigma}M_{2n+1})\,.\]
Then we use the relation
\[(a_{0},\ldots,a_{2n})\frac{\partial\mathcal{M}}{\partial W}=\frac{\partial\mathcal{M}_{2n+1}}{\partial M}\frac{\partial\mathcal{M}}{\partial W}=\frac{\partial\mathcal{M}_{2n+1}}{\partial W} \tag{4.4}\]
to obtain, for \(i=1,\ldots,n\),
\[a_{0}\Delta_{0}(u_{i},\sigma)+\cdots+a_{2n}\Delta_{2n}(u_{i},\sigma)=\Delta_{2n+1}(u_{i},\sigma), \tag{4.5a}\]
\[a_{0}\partial_{u}\Delta_{0}(u_{i},\sigma)+\cdots+a_{2n}\partial_{u}\Delta_{2n}(u_{i},\sigma)=\partial_{u}\Delta_{2n+1}(u_{i},\sigma), \tag{4.5b}\]
implying \(g(u_{i})=\partial_{u}g(u_{i})=0\) for \(i=1,\ldots,n\). This immediately leads to the expected expression.

We also need the following elementary fact.
**Proposition 4.2**.: _If a polynomial \(f(u)\) of degree \(N\) has \(N\) real roots (counting multiplicity) and the maximum multiplicity of the roots is \(m(f)\), then_ \[g_{s}(u)=f(u)+sf^{\prime}(u)\] _also has \(N\) real roots (counting multiplicity) and \(m(g_{s})=\max\{m(f)-1,1\}\) if \(s\neq 0\)._ Proof.: According to the condition, we can write \(f(u)=C(u-u_{1})^{m_{1}}\cdots(u-u_{r})^{m_{r}}\) with \(\sum_{i=1}^{r}m_{i}=N\). Denote by \(u_{i}^{\prime}\in(u_{i},u_{i+1})\) the root of \(f^{\prime}(u)\) for \(i=1,\ldots,r-1\). Set \(u_{0}^{\prime}=-\infty\) and \(u_{r}^{\prime}=\infty\). It suffices to show that, if \(s\neq 0\), \[g_{s}(u)=C\prod_{i=1}^{r}(u-u_{i})^{m_{i}-1}\cdot\prod_{i=1}^{r}(u-v_{i})\] with \(v_{i}\in(u_{i-1}^{\prime},u_{i}^{\prime})\) and \(v_{i}\neq u_{i}\) for \(i=1,\ldots,r\). For this purpose, we first notice that \(u_{i}\) is a root of \(g_{s}(u)\) with multiplicity \((m_{i}-1)\). Then we look into the interval \((u_{i-1}^{\prime},u_{i}^{\prime})\) which contains \(u_{i}\) for \(i=1,\ldots,r\). Note \(g_{s}(u_{i}^{\prime})=f(u_{i}^{\prime})\) for \(i=0,...,r\) (this equality holds for \(i=0,r\) where \(f(u_{i}^{\prime})=\pm\infty\)). If \(m_{i}\) is even, we have \(f(u_{i-1}^{\prime})f(u_{i}^{\prime})>0\) while \(g_{s}(u_{i}-\epsilon)g_{s}(u_{i}+\epsilon)<0\) because the multiplicity of \(u_{i}\) is odd (\(=m_{i}-1\)) for \(g_{s}(u)\). Therefore, there exists one root of \(g_{s}(u)\), denoted by \(v_{i}\), in either \((u_{i-1}^{\prime},u_{i}-\epsilon)\) or \((u_{i}+\epsilon,u_{i}^{\prime})\). It is hence distinct from \(u_{i}\). Similarly, such a \(v_{i}\) also exists for odd \(m_{i}\). In this way, we get \(r\) additional roots \(v_{i}\) (\(i=1,\ldots,r\)) of \(g_{s}(u)\). These roots are all simple because the degree of \(g_{s}(u)\) is \(\sum_{i=1}^{r}(m_{i}-1)+r=N\). Hence, \(g_{s}(u)\) can be factorized as above. Now we are in a position to prove Theorem 2.8. Proof of Theorem 2.8.: Referring to the condition of Theorem 2.8, we can write the \(b\)-polynomial as \(p(t)=(t+d_{1})\cdots(t+d_{2n+1})\) with at least two of the \(d_{i}\)'s being nonzero. It is straightforward to verify that \[\sum_{j=0}^{2n+1}b_{j}t^{j}=t^{2n+1}p\left(\frac{1}{t}\right)=(1+d_{1}t)\cdots( 1+d_{2n+1}t).\] Substituting \(t\) with \(\sigma\partial_{u}\), we obtain \[\sum_{j=0}^{2n+1}b_{j}\sigma^{j}\partial_{u}^{j}g(u;W)=(1+d_{1}\sigma\partial_ {u})\cdots(1+d_{2n+1}\sigma\partial_{u})g(u;W),\] which is the characteristic polynomial \(c(u;M)\) according to (4.3). As shown in Proposition 4.1, \(g(u;W)\) has \((2n+1)\) real roots (of \(u\)) and the maximum multiplicity \(m(g)\leq 3\) if \(W\in\Omega\). Then by repeatedly using Proposition 4.2 with \(s=\sigma d_{i}\) (\(i=2n+1,2n,\ldots,1\)), we see that the characteristic polynomial \(c(u;M)\) has \((2n+1)\) real roots. Moreover, since at least two of the \(d_{i}\)'s are nonzero, the maximum multiplicity of each root is reduced to 1. Hence, all the roots are distinct. ### A proof of Theorem 2.9 First of all, we recall (4.3) for \(n=2\) that the characteristic polynomial is \[c(u;M)=\sum_{k=0}^{5}b_{k}\sigma^{k}\partial_{u}^{k}g(u;W)\] and Proposition 4.1 reads as \[g(u;W)=(u-u_{1})^{2}(u-u_{2})^{2}(u-\tilde{u}(W)).\] Moreover, for even kernels we have \(\mathfrak{m}_{1}=\mathfrak{m}_{3}=\mathfrak{m}_{5}=0\). Then it follows from the definition (2.10) that \[b_{2}=-\frac{1}{2},\quad b_{4}=\frac{1}{4}-\frac{1}{24}\mathfrak{m}_{4},\quad b _{1}=b_{3}=b_{5}=0. \tag{4.6}\] Notice that \(\tilde{u}=a_{2n}-2\sum_{i=1}^{n}u_{i}\). 
We can obtain \(a_{4}\) by solving (4.4) via Cramer's rule and thereby
\[\tilde{u}(W)=\frac{w_{1}u_{1}+w_{2}u_{2}}{w_{1}+w_{2}}+\frac{4(\mathfrak{m}_{4}-3)\sigma^{2}(w_{1}-w_{2})(u_{1}-u_{2})}{(w_{1}+w_{2})[(u_{1}-u_{2})^{2}+2(\mathfrak{m}_{4}-3)\sigma^{2}]}. \tag{4.7}\]
Clearly, \(\tilde{u}(W)\) and thereby \(g(u;W)\) are well-defined for \(u_{1}=u_{2}\). With these preparations, we turn to the proof of Theorem 2.9.

Proof of Theorem 2.9.: By Corollary 2.6 and Remark 2.7, the condition \(\mathfrak{m}_{4}\geq 3\) ensures that the EQMOM is well defined. Assume that the two-node EQMOM is strictly hyperbolic on \(\mathcal{M}(\Omega^{tot})\). Taking \(W=(\frac{1}{2},0,\frac{1}{2},0,1)\in\Omega^{tot}\), we have \(g(u;W)=u^{5}\) and \(c(u;W)=u(u^{4}-10u^{2}+120b_{4})\). Strict hyperbolicity means that \(c(u;W)\) has 5 distinct real roots, which implies \(b_{4}>0\) and hence \(\mathfrak{m}_{4}<6\) due to (4.6). Conversely, if \(3\leq\mathfrak{m}_{4}<6\), we see from (4.6) that \(b_{2}=-\frac{1}{2}<0<b_{4}\leq\frac{1}{8}<\frac{5}{6}b_{2}^{2}\). Thus, \(c(u;W)=g(u;W)+b_{2}\sigma^{2}\partial_{u}^{2}g(u;W)+b_{4}\sigma^{4}\partial_{u}^{4}g(u;W)\) has 5 distinct real roots due to Proposition 4.3 to be proved below.

**Proposition 4.3**.: _Set \(g(u)=g(u;W)\). For any \(s>0\), the polynomial_
\[h(u;s)=g(u)-c_{1}sg^{(2)}(u)+c_{2}s^{2}g^{(4)}(u)\]
_has 5 distinct real roots if the constants satisfy \(c_{1}>0\) and \(0<c_{2}<\frac{5}{6}c_{1}^{2}\)._

Proof.: Without loss of generality, we assume \(u_{1}\leq u_{2}\) and consider four cases: (I) \(u_{1}=u_{2}=\tilde{u}\), (II) \(u_{1}<\tilde{u}<u_{2}\), (III) \(u_{1}<u_{2}\leq\tilde{u}\) and (IV) \(\tilde{u}\leq u_{1}<u_{2}\).

**Case I**. In this case, we have \(g(u)=(u-u_{1})^{5}\) and
\[h(u;s)=(u-u_{1})^{5}-20c_{1}s(u-u_{1})^{3}+120c_{2}s^{2}(u-u_{1})=(u-u_{1})(t^{2}-20c_{1}st+120c_{2}s^{2})\]
with \(t=(u-u_{1})^{2}\). Thanks to \(c_{1}>0\) and \(0<c_{2}<\frac{5}{6}c_{1}^{2}\), the quadratic function in \(t\) has two distinct positive roots. Therefore \(h(u;s)\) has 5 distinct real roots.

For other cases, notice that \(h(u;s)\) is a monic polynomial of order 5. Thus, it suffices to find \(\hat{u}_{1}<\cdots<\hat{u}_{4}\) such that
\[h(\hat{u}_{1};s)>0\quad\text{and}\quad h(\hat{u}_{i};s)h(\hat{u}_{i+1};s)<0,\quad i=1,2,3.\]
For this purpose, we set \(P_{u}^{+}:=\{s>0|h(u;s)>0\}\) and \(P_{u}^{-}:=\{s>0|h(u;s)<0\}\) for each \(u\). Then our main task is to show
\[\bigcup_{u\in I}P_{u}^{+}=(0,\infty)=\bigcup_{u\in I^{\prime}}P_{u}^{-} \tag{4.8}\]
for some intervals \(I,I^{\prime}\). To do this, the key idea is to view \(h(u;s)\) as a quadratic function of \(s\) and check its symmetric axis and discriminant
\[A(u)=\frac{c_{1}g^{(2)}(u)}{2c_{2}g^{(4)}(u)},\quad d(u)=g^{(2)}(u)^{2}-\beta g^{(4)}(u)g(u)\]
with \(\beta=4c_{2}/c_{1}^{2}\in(0,\frac{10}{3})\). If the leading coefficient \(c_{2}g^{(4)}(u)\) is negative (or positive) and the discriminant \(d(u)\) is positive in \(I\) (resp. \(I^{\prime}\)), then \(P_{u}^{+}\) (resp. \(P_{u}^{-}\)) is an open interval centered at \(s=A(u)\). In addition, we recall the form of \(g(u)\) given in Proposition 4.1 and denote by \(u_{1}^{(j)}\leq\cdots\leq u_{5-j}^{(j)}\) the \((5-j)\) roots of \(g^{(j)}(u)\) (counting multiplicity). It is not difficult to see that \(u_{i}^{(j)}\leq u_{i}^{(j+1)}\leq u_{i+1}^{(j)}\) for \(j=1,2,3\) and \(i=1,\ldots,4-j\). The equalities occur only when \(j=1\) and \(\tilde{u}=u_{1}\) or \(u_{2}\).

**Case II**.
In this case, we have \(u_{1}=u_{1}^{(1)}<u_{1}^{(2)}<u_{2}^{(1)}<\tilde{u}\) and \(u_{1}^{(2)}<u_{1}^{(3)}<u_{1}^{(4)}\), resulting in \(g(u_{1}^{(2)})<0\) and \(g^{(4)}(u_{1}^{(2)})<0\). Thus, we have \(h(u_{1}^{(2)};s)<0\) for any \(s>0\). A similar argument shows \(h(u_{3}^{(2)};s)>0\). As a consequence, it suffices to prove that there exist \(\hat{u}_{1}\leq u_{1}\) and \(\hat{u}_{4}\geq u_{2}\) such that \(h(\hat{u}_{1};s)>0\) and \(h(\hat{u}_{4};s)<0\); in other words, we need to verify (4.8) for \(I=(-\infty,u_{1}]\) and \(I^{\prime}=[u_{2},\infty)\). To this end, we notice \(A(u)\to+\infty\) as \(u\to-\infty\) and \(P_{u_{1}}^{+}=(0,2A(u_{1}))\). Owing to the continuity of \(A(u)\), it suffices to show that \(d(u)>0\) for \(u\leq u_{1}\). Notice that \(d(u)\) is a polynomial of order \(6\). A straightforward calculation yields (the \(u\)-dependence is omitted for clarity)
\[\begin{split}&d^{(1)}=2g^{(2)}g^{(3)}-\beta(g^{(5)}g+g^{(4)}g^{(1)}),\\ &d^{(2)}=2g^{(3)2}+(2-\beta)g^{(2)}g^{(4)}-2\beta g^{(1)}g^{(5)},\\ &d^{(3)}=(6-\beta)g^{(3)}g^{(4)}+(2-3\beta)g^{(2)}g^{(5)},\\ &d^{(4)}=(6-\beta)g^{(4)2}+(8-4\beta)g^{(3)}g^{(5)},\\ &d^{(5)}=(20-6\beta)g^{(4)}g^{(5)},\\ &d^{(6)}=(20-6\beta)g^{(5)2}.\end{split} \tag{4.9}\]
Obviously, we have \(d(u_{1})>0\), \(d^{(1)}(u_{1})<0\), \(d^{(5)}(u_{1})<0\) and \(d^{(6)}(u_{1})>0\). Setting \(a=u_{2}-u_{1}>0\) and \(t=\frac{\tilde{u}-u_{1}}{u_{2}-u_{1}}\in(0,1)\), we obtain
\[\begin{split}&d^{(2)}(u_{1})=24a^{4}[(16-2\beta)t^{2}+(20-4\beta)t+3],\\ &d^{(3)}(u_{1})=-48a^{3}[6(6-\beta)t^{2}+5(20-6\beta)t+6(6-\beta)],\\ &d^{(4)}(u_{1})=576a^{2}[(6-\beta)t^{2}+(44-14\beta)t+(34-9\beta)].\end{split} \tag{4.10}\]
For \(0<\beta<\frac{10}{3}\), the above \(t\)-parabolae are positive on \([0,\infty)\). Thus, we have \(d^{(2)}(u_{1})>0\), \(d^{(3)}(u_{1})<0\) and \(d^{(4)}(u_{1})>0\). The Taylor expansion of \(d(u)\) at \(u_{1}\) then verifies the positivity of \(d(u)\) for \(u\leq u_{1}\). A similar argument can be used to show \(\bigcup_{u\geq u_{2}}P_{u}^{-}=(0,\infty)\).

**Case III**. In this case we take \(\hat{u}_{2}=u_{1}^{(2)}\) and if \(u_{2}\geq u_{1}^{(4)}\), we take \(\hat{u}_{3}=u_{2}\). It is then easy to see that \(h(\hat{u}_{2};s)<0<h(\hat{u}_{3};s)\) for any \(s>0\). For \(u_{2}<u_{1}^{(4)}\), we can show the existence of \(\hat{u}_{3}\in(u_{2},u_{1}^{(4)})\) such that \(h(\hat{u}_{3},s)>0\) by verifying the first equality in (4.8) with \(I=(u_{2},u_{1}^{(4)})\). Clearly, for \(u\in I\) we have \(c_{2}g^{(4)}(u)<0\), \(P_{u_{2}}^{+}=(0,2A(u_{2}))\) and \(A(u)\to\infty\) as \(u\to u_{1}^{(4)}\). Then it suffices to show \(d(u)>0\) for \(u\in(u_{2},u_{1}^{(4)})\). Note that \(g^{(j)}(u)<0\) for \(u\in(u_{2},u_{1}^{(4)})\) because \(u_{2}=u_{3}^{(1)}>u_{2}^{(2)}>u_{1}^{(3)}\) and \(u_{1}^{(4)}<u_{2}^{(3)}<u_{4}^{(1)}<\tilde{u}\). Then we see that \(d(u_{2})>0\) and \(d^{(1)}(u_{2})>0\). Moreover, if \(\beta\leq 2\), we see from (4.9) that \(d^{(2)}(u)>0\) throughout \((u_{2},u_{1}^{(4)})\). Consequently, we have \(d(u)>0\) for \(u\in(u_{2},u_{1}^{(4)})\). On the other hand, if \(\beta>2\), it follows from (4.9) that \(d^{(3)}(u)>0\) for \(u\in(u_{2},u_{1}^{(4)})\). As for \(d^{(2)}(u_{2})\), it has the same form as in (4.10) except that \(u_{1}\) and \(u_{2}\) are permuted. Clearly, we have \(d^{(2)}(u_{2})>0\). In conclusion, we have shown that \(d(u)>0\) for \(u\in(u_{2},u_{1}^{(4)})\) in both cases. To show the existence of \(\hat{u}_{4}\), we verify the second equality in (4.8) with \(I^{\prime}=(\tilde{u},\infty)\).
As above, it suffices to show \(d(u)>0\) for \(u>\tilde{u}\). From (4.9), we see that \(d(\tilde{u})>0\), \(d^{(5)}(\tilde{u})>0\) and \(d^{(6)}(\tilde{u})>0\). Furthermore, we set \(a=\tilde{u}-u_{1}>0\) and \(t=\frac{\tilde{u}-u_{2}}{\tilde{u}-u_{1}}\in[0,1)\). A straightforward calculation gives
\[g^{(1)}(\tilde{u})=a^{4}p_{1}(t),\ g^{(2)}(\tilde{u})=4a^{3}p_{2}(t),\ g^{(3)}(\tilde{u})=6a^{2}p_{3}(t),\ g^{(4)}(\tilde{u})=48ap_{4}(t)\]
with \(p_{1}(t)=t^{2}\), \(p_{2}(t)=t^{2}+t\), \(p_{3}(t)=t^{2}+4t+1\) and \(p_{4}(t)=t+1\). Owing to the following inequalities
\[p_{2}(t)p_{3}(t)=(t^{2}+t)(t^{2}+4t+1)\geq(t^{2}+t)\cdot 6t=6p_{1}(t)p_{4}(t),\]
\[p_{3}(t)^{2}=((t+1)^{2}+2t)^{2}\geq 8t(t+1)^{2}=8p_{2}(t)p_{4}(t),\]
\[p_{3}(t)^{2}=(t^{2}+4t+1)^{2}\geq 36t^{2}=36p_{1}(t),\]
\[p_{3}(t)p_{4}(t)=(t^{2}+4t+1)(t+1)\geq 6t(t+1)=6p_{2}(t),\]
\[3p_{4}(t)^{2}=3(t+1)^{2}\geq 4t+2(t+1)^{2}=2p_{3}(t),\]
we derive, by a direct calculation, that
\[d^{(1)}(\tilde{u})=48a^{5}\left(p_{2}(t)p_{3}(t)-\beta p_{4}(t)p_{1}(t)\right)\geq 48a^{5}(6-\beta)p_{1}(t)p_{4}(t)>0,\]
\[d^{(2)}(\tilde{u})=72a^{4}p_{3}(t)^{2}-192(\beta-2)a^{4}p_{2}(t)p_{4}(t)-240\beta a^{4}p_{1}(t)\geq(256-192(\beta-2))a^{4}p_{2}(t)p_{4}(t)+240(6-\beta)a^{4}p_{1}(t)>0,\]
\[d^{(3)}(\tilde{u})=288(6-\beta)a^{3}p_{3}(t)p_{4}(t)-480(3\beta-2)a^{3}p_{2}(t)\geq 96(118-33\beta)a^{3}p_{2}(t)>0,\]
\[d^{(4)}(\tilde{u})=48^{2}(6-\beta)a^{2}p_{4}(t)^{2}-720(4\beta-8)a^{2}p_{3}(t)\geq 192(78-23\beta)a^{2}p_{3}(t)>0.\]
These ensure that \(d(u)>0\) and thereby the existence of \(\hat{u}_{4}\). A similar argument as in Case (II) can be used to prove the existence of \(\hat{u}_{1}\leq u_{1}\) such that \(h(\hat{u}_{1};s)>0\) for any \(s>0\).

**Case IV**. This case can be converted to Case (III) with \(-u_{2}<-u_{1}\leq-\tilde{u}\) by introducing \(\tilde{g}(u)=-g(-u)=(u+u_{1})^{2}(u+u_{2})^{2}(u+\tilde{u})\). We see that \(-h(-u;s)=\tilde{g}(u)-c_{1}s\tilde{g}^{(2)}(u)+c_{2}s^{2}\tilde{g}^{(4)}(u)\) has 5 distinct roots, and so does \(h(u;s)\). Hence, the proof is complete.

## 5. Structural stability

In this section we prove Theorem 2.10, namely, we check the structural stability conditions (I)-(III) in Subsection 2.2. For Condition (I), we calculate the Jacobian of \(S(M)=\frac{1}{\tau}\left(\rho\Delta^{eq}(U,\sqrt{\theta})-M\right)\) as
\[S_{M}(M)=\frac{1}{\tau}\begin{bmatrix}0&&&&\\ &0&&&\\ &&0&&\\ s_{1}&s_{2}&s_{3}&-1&\\ s_{4}&s_{5}&s_{6}&&-1\end{bmatrix} \tag{5.1}\]
with
\[s_{1}=U^{3}-3U\theta,\ \ s_{2}=3(\theta-U^{2}),\ \ s_{3}=3U,\]
\[s_{4}=3(U^{4}-2U^{2}\theta-\theta^{2}),\ \ s_{5}=-8U^{3},\ \ s_{6}=6(U^{2}+\theta).\]
Take
\[P(M)=\begin{bmatrix}1&&&&\\ &1&&&\\ &&1&&\\ -s_{1}&-s_{2}&-s_{3}&1&\\ -s_{4}&-s_{5}&-s_{6}&&1\end{bmatrix}. \tag{5.2}\]
It is obvious that \(P(M)S_{M}(M)=\tau^{-1}\mathrm{diag}(0,0,0,-1,-1)P(M)\). This justifies Condition (I). Note that the choice of \(P\) is unique up to a block-diagonal matrix. As to Condition (II), we know from Theorem 2.9 that the two-node moment system (2.7) with even kernels is strictly hyperbolic if \(3\leq\mathfrak{m}_{4}<6\). Namely, the coefficient matrix \(A(M)\) has 5 distinct real eigenvalues \(\lambda_{i}=\lambda_{i}(M)\) \((i=1,...,5)\).
Corresponding to these eigenvalues, the left eigenvectors form the following matrix
\[L=L(M)=\begin{bmatrix}\lambda_{1}^{4}&\lambda_{1}^{3}&\lambda_{1}^{2}&\lambda_{1}&1\\ \lambda_{2}^{4}&\lambda_{2}^{3}&\lambda_{2}^{2}&\lambda_{2}&1\\ \lambda_{3}^{4}&\lambda_{3}^{3}&\lambda_{3}^{2}&\lambda_{3}&1\\ \lambda_{4}^{4}&\lambda_{4}^{3}&\lambda_{4}^{2}&\lambda_{4}&1\\ \lambda_{5}^{4}&\lambda_{5}^{3}&\lambda_{5}^{2}&\lambda_{5}&1\end{bmatrix}\begin{bmatrix}-1&&&&\\ a_{4}&-1&&&\\ a_{3}&a_{4}&-1&&\\ a_{2}&a_{3}&a_{4}&-1&\\ a_{1}&a_{2}&a_{3}&a_{4}&-1\end{bmatrix}, \tag{5.3}\]
which can be easily verified. With this matrix, the symmetrizer \(A_{0}=A_{0}(M)\) in Condition (II) must be chosen as \(A_{0}=L^{T}\Lambda L\) with \(\Lambda\) an arbitrary positive definite diagonal matrix to be determined [25]. The rest of this section is devoted to choosing the diagonal matrix \(\Lambda=\Lambda(M)\) such that Condition (III) is satisfied for \(M\) in the equilibrium manifold. Since \(P(M)S_{M}(M)=\tau^{-1}\text{diag}(\mathbf{0}_{3},-I_{2})P(M)\), it is equivalent to find \(\Lambda\) such that the matrix \(P^{-T}A_{0}P^{-1}=P^{-T}L^{T}\Lambda LP^{-1}\) is block diagonal with the same partition as \(\text{diag}(\mathbf{0}_{3},-I_{2})\), meaning that the first three columns of \(\sqrt{\Lambda}LP^{-1}\) are orthogonal to the last two columns. Note that the existence of such a \(\Lambda\) is independent of the choice of \(P\). Denote by \(r_{i}\in\mathbb{R}^{5}\) the \(i\)th column of \(\sqrt{\Lambda}L\). Since
\[P^{-1}=\begin{bmatrix}1&&&&\\ &1&&&\\ &&1&&\\ s_{1}&s_{2}&s_{3}&1&\\ s_{4}&s_{5}&s_{6}&&1\end{bmatrix},\]
the orthogonality gives six equations
\[(r_{i},r_{j})+s_{i}(r_{4},r_{j})+s_{i+3}(r_{5},r_{j})=0\quad\text{for }i=1,2,3\text{ and }j=4,5. \tag{5.4}\]
Here \((\cdot,\cdot)\) represents the dot product of vectors. To show that (5.4) can be used to determine \(\Lambda\), we write \(\sqrt{\Lambda}=\text{diag}(x_{i})_{i=1}^{5}\). Then it follows from (5.3) that
\[r_{i}=\left(x_{1}\sum_{j=i}^{5}a_{j}\lambda_{1}^{j-i},\ \ldots,\ x_{5}\sum_{j=i}^{5}a_{j}\lambda_{5}^{j-i}\right)^{T} \tag{5.5}\]
with \(a_{5}=-1\). With this, the dot products can be written as
\[(r_{k},r_{5})=-\sum_{i=1}^{5}x_{i}^{2}\sum_{j=k}^{5}a_{j}\lambda_{i}^{j-k}.\]
Moreover, we introduce
\[r_{4}^{\prime}=r_{4}+a_{4}r_{5}=-(x_{1}\lambda_{1},\ldots,x_{5}\lambda_{5})^{T}.\]
It is clear that the equations in (5.4) with \(j=4\) can be replaced with
\[(r_{i},r_{4}^{\prime})+s_{i}(r_{4},r_{4}^{\prime})+s_{i+3}(r_{5},r_{4}^{\prime})=0,\quad i=1,2,3,\]
and
\[(r_{k},r_{4}^{\prime})=\sum_{i=1}^{5}x_{i}^{2}\left[\lambda_{i}\left(\lambda_{i}^{5-k}-a_{4}\lambda_{i}^{4-k}-\cdots-a_{k}\right)-a_{k-1}+a_{k-1}\right]=(r_{k-1},r_{5})+a_{k-1}\sum_{i=1}^{5}x_{i}^{2}. \tag{5.6}\]
These indicate that (5.4) is a system of six linear equations for the five unknowns \(x_{i}^{2}\). Note that the coefficients of this system all depend on \(M\).
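Before restricting to equilibrium, it may help to see (5.4) operationally. The following is a rough numerical sketch (an illustration only, not the paper's procedure): given the eigenvalues \(\lambda_{i}\), the closure coefficients \(a_{j}\) and the entries \(s_{1},\ldots,s_{6}\) of (5.1), it assembles the six equations for \(y_{m}=x_{m}^{2}\) and looks for a positive null vector.

```python
import numpy as np

def g_poly(a, k, lam):
    """g_k(lam) = sum_{j=k}^{5} a_j lam^{j-k} with a_5 = -1 appended,
    so that the m-th entry of r_k is x_m * g_k(lam_m), cf. (5.5)."""
    coeffs = list(a) + [-1.0]
    return sum(coeffs[j] * lam ** (j - k) for j in range(k, 6))

def positive_null_vector(a, lam, s, atol=1e-8):
    """Assemble the 6x5 system (5.4) for y_m = x_m^2 and test for a
    positive solution; lam: 5 eigenvalues, s: (s_1, ..., s_6) from (5.1)."""
    rows = []
    for i in (1, 2, 3):
        for j in (4, 5):
            rows.append([
                (g_poly(a, i, l) + s[i - 1] * g_poly(a, 4, l)
                 + s[i + 2] * g_poly(a, 5, l)) * g_poly(a, j, l)
                for l in lam
            ])
    A = np.array(rows)
    y = np.linalg.svd(A)[2][-1]          # candidate null direction
    y = y / y[np.argmax(np.abs(y))]      # normalize the dominant entry to +1
    ok = np.all(y > 0) and np.allclose(A @ y, 0.0, atol=atol)
    return y if ok else None
```

Whether such a positive \(y\) exists at equilibrium is precisely what the remainder of this section settles analytically.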
Since Condition (III) is posed only for \(M\) in the equilibrium, we only need to calculate the coefficients for \(M\) in the equilibrium manifold for the moment system (2.7):
\[\mathcal{E}=\{M\in\mathcal{M}(\Omega^{tot})|M_{j}=\rho\Delta_{j}^{eq}(U,\sqrt{\theta})\ \text{for }j=0,\ldots,4\}\]
with \(\Delta_{j}^{eq}(U,\sqrt{\theta})\) defined in (2.2) and
\[\rho=M_{0},\quad U=M_{1}/M_{0}\quad\text{and}\quad\theta=(M_{0}M_{2}-M_{1}^{2})/M_{0}^{2}.\]
About this \(\mathcal{E}\), we have

**Proposition 5.1**.: _For any \(\rho,\theta>0\) and \(U\in\mathbb{R}\), consider equations_
\[M_{j}:=\sum_{i=1}^{2}w_{i}\Delta_{j}(u_{i},\sigma)=\rho\Delta_{j}^{eq}(U,\sqrt{\theta}),\quad j=0,1,...,4,\]
_for \(W=(w_{1},u_{1},w_{2},u_{2},\sigma)\in\Omega^{tot}\). When \(\mathfrak{m}_{4}<3\), the equations have no solution; when \(\mathfrak{m}_{4}=3\), the solutions satisfy \(u_{1}=u_{2}=U,\sigma=\sqrt{\theta}\) and \(w_{1}+w_{2}=\rho\); and when \(\mathfrak{m}_{4}>3\), there is a unique solution given as_
\[w_{1,2}=\frac{\rho}{2},\quad u_{1,2}=U\mp\nu\sigma,\quad\sigma=\left(\frac{\theta}{\nu^{2}+1}\right)^{\frac{1}{2}},\quad\nu=\left(\frac{\mathfrak{m}_{4}-3}{2}\right)^{\frac{1}{4}}.\]
_In particular, the equilibrium manifold is nonempty if and only if \(\mathfrak{m}_{4}\geq 3\)._

Proof.: Notice that
\[\sum_{i=1}^{2}w_{i}\Delta_{j}\left(\frac{u_{i}-U}{\sigma},1\right)=\int\xi^{j}\sum_{i=1}^{2}w_{i}\mathcal{K}\left(\xi-\frac{u_{i}-U}{\sigma}\right)d\xi=\int\left(\frac{\eta-U}{\sigma}\right)^{j}\sum_{i=1}^{2}\frac{w_{i}}{\sigma}\mathcal{K}\left(\frac{\eta-u_{i}}{\sigma}\right)d\eta=\sum_{k=0}^{j}\binom{j}{k}(-U)^{k}\frac{M_{j-k}}{\sigma^{j}}\]
and similarly,
\[\rho\Delta_{j}^{eq}\left(0,\frac{\sqrt{\theta}}{\sigma}\right)=\sum_{k=0}^{j}\binom{j}{k}(-U)^{k}\frac{\rho\Delta_{j-k}^{eq}(U,\sqrt{\theta})}{\sigma^{j}}.\]
Then the given equations \(M_{j}=\rho\Delta_{j}^{eq}(U,\sqrt{\theta})\) are equivalent to
\[\sum_{i=1}^{2}\frac{w_{i}}{\rho}\Delta_{j}\left(\frac{u_{i}-U}{\sigma},1\right)=\Delta_{j}^{eq}\left(0,\frac{\sqrt{\theta}}{\sigma}\right),\quad j=0,...,4. \tag{5.7}\]
Denote \(w_{i}^{\prime}=\frac{w_{i}}{\rho}\), \(u_{i}^{\prime}=\frac{u_{i}-U}{\sigma}\) and \(\theta^{\prime}=\frac{\theta}{\sigma^{2}}\). Recall from (3.1) that
\[\Delta_{j}(u,\sigma)=\sum_{k=0}^{j}\binom{j}{k}\mathfrak{m}_{k}\sigma^{k}u^{j-k}\]
with \(\mathfrak{m}_{3}=0\) for even kernels. Then equations (5.7) can be rewritten as
\[w_{1}^{\prime}+w_{2}^{\prime}=1, \tag{5.8a}\]
\[w_{1}^{\prime}u_{1}^{\prime}+w_{2}^{\prime}u_{2}^{\prime}=w_{1}^{\prime}u_{1}^{\prime 3}+w_{2}^{\prime}u_{2}^{\prime 3}=0, \tag{5.8b}\]
\[w_{1}^{\prime}u_{1}^{\prime 2}+w_{2}^{\prime}u_{2}^{\prime 2}=\theta^{\prime}-1, \tag{5.8c}\]
\[w_{1}^{\prime}u_{1}^{\prime 4}+w_{2}^{\prime}u_{2}^{\prime 4}=3\theta^{\prime 2}-6(\theta^{\prime}-1)-\mathfrak{m}_{4}. \tag{5.8d}\]
Multiplying both sides of (5.8c) with \((u_{1}^{\prime}+u_{2}^{\prime})\) and using (5.8b), we obtain
\[(\theta^{\prime}-1)(u_{1}^{\prime}+u_{2}^{\prime})=0.\]
This gives \(u_{1}^{\prime}=-u_{2}^{\prime}:=\nu\neq 0\) if \(\theta^{\prime}\neq 1\). Then we see from (5.8a) and (5.8b) that \(w_{1}^{\prime}=w_{2}^{\prime}=\frac{1}{2}\). Thus, (5.8d) becomes \(2\nu^{4}=\mathfrak{m}_{4}-3\) with \(\theta^{\prime}=1+\nu^{2}\). When \(\theta^{\prime}=1\), it follows from (5.8c) that \(u_{1}^{\prime}=u_{2}^{\prime}=0\) due to \(w_{i}^{\prime}>0\) and \(\mathfrak{m}_{4}=3\) from (5.8d).
Hence, the given equations have a solution if and only if \(\mathfrak{m}_{4}\geq 3\).

At equilibrium, it is seen from Proposition 5.1 and (4.7) that \(\tilde{u}=U\) and
\[g(u;W)=(u-U-\sigma\nu)^{2}(u-U+\sigma\nu)^{2}(u-U).\]
We then rewrite \(g(u;W)\) and
\[c(u;W)=g(u;W)+b_{2}\sigma^{2}g^{(2)}(u;W)+b_{4}\sigma^{4}g^{(4)}(u;W),\]
with \(b_{j}\) in (4.6) as \(g(u;U,\sigma)\) and \(c(u;U,\sigma)\) to manifest the dependence on \(U\) and \(\sigma\). Clearly, we have \(g(u;U,\sigma)=\sigma^{5}g\left(\frac{u-U}{\sigma};0,1\right)\) and \(c(u;U,\sigma)=\sigma^{5}c\left(\frac{u-U}{\sigma};0,1\right)\). A direct calculation yields \(c(u;0,1)=u^{5}-B_{1}u^{3}+B_{2}u\) with
\[B_{1}=2\nu^{2}+10\quad\text{and}\quad B_{2}=-9\nu^{4}+6\nu^{2}+15. \tag{5.9}\]
Recalling \(c(u;W)=u^{5}-a_{4}u^{4}-\cdots-a_{0}\), we derive
\[a_{0}=U^{5}-B_{1}U^{3}\sigma^{2}+B_{2}U\sigma^{4},\quad a_{1}=-5U^{4}+3B_{1}U^{2}\sigma^{2}-B_{2}\sigma^{4},\]
\[a_{2}=10U^{3}-3B_{1}U\sigma^{2},\quad a_{3}=-10U^{2}+B_{1}\sigma^{2},\quad a_{4}=5U. \tag{5.10}\]
This determines the matrix \(L=L(M)\) in (5.3) on the equilibrium manifold. Denote by \(\pm\mu_{1}\), \(\pm\mu_{2}\) and \(0\) the five distinct roots of the polynomial \(c(u;0,1)\). Then the roots \(\lambda_{i}\) of \(c(u;U,\sigma)\) can be written as
\[\lambda_{1,2}=U\pm\mu_{1}\sigma,\quad\lambda_{3,4}=U\pm\mu_{2}\sigma,\quad\lambda_{5}=U. \tag{5.11}\]
By introducing \(V=U/\sigma\), it is seen that \(r_{i}\) is a homogeneous polynomial of \(\sigma\) (of degree \(5-i\)), so the relations in (5.4) are all homogeneous with \(\sigma\). Thus, we may set \(\sigma=1\) for the following calculations. Set
\[Y_{0}=\sum_{i=1}^{5}x_{i}^{2},\quad Y_{j}=\left(x_{1}^{2}+(-1)^{j}x_{2}^{2}\right)\mu_{1}^{j}+\left(x_{3}^{2}+(-1)^{j}x_{4}^{2}\right)\mu_{2}^{j},\quad\text{for }j=1,\ldots,4.\]
Having (5.10) & (5.11), the dot products \((r_{i},r_{5})\) can be calculated as
\[(r_{1},r_{5})=(U^{4}-B_{1}U^{2}+B_{2})Y_{0}-(U^{3}-B_{1}U)Y_{1}+(U^{2}-B_{1})Y_{2}-UY_{3}+Y_{4},\]
\[(r_{2},r_{5})=(-4U^{3}+2B_{1}U)Y_{0}+(3U^{2}-B_{1})Y_{1}-2UY_{2}+Y_{3},\]
\[(r_{3},r_{5})=(6U^{2}-B_{1})Y_{0}-3UY_{1}+Y_{2},\]
\[(r_{4},r_{5})=-4UY_{0}+Y_{1},\]
\[(r_{5},r_{5})=Y_{0}.\]
Furthermore, we use (5.6) to get
\[(r_{1},r_{4}^{\prime})=a_{0}Y_{0}=(U^{5}-B_{1}U^{3}+B_{2}U)Y_{0},\]
\[(r_{2},r_{4}^{\prime})=(2B_{1}U^{2}-4U^{4})Y_{0}-(U^{3}-B_{1}U)Y_{1}+(U^{2}-B_{1})Y_{2}-UY_{3}+Y_{4},\]
\[(r_{3},r_{4}^{\prime})=(6U^{3}-B_{1}U)Y_{0}+(3U^{2}-B_{1})Y_{1}-2UY_{2}+Y_{3},\]
\[(r_{4},r_{4}^{\prime})=-4U^{2}Y_{0}-3UY_{1}+Y_{2},\]
\[(r_{5},r_{4}^{\prime})=UY_{0}+Y_{1}.\]
With these and those in (5.1), the equations in (5.4) can be written as
\[0=\left[(6\theta-B_{1})U^{2}+B_{2}-3\theta^{2}\right]Y_{0}+(B_{1}-3\theta)UY_{1}+(U^{2}-B_{1})Y_{2}-UY_{3}+Y_{4}, \tag{5.12a}\]
\[0=(2B_{1}-12\theta)UY_{0}+(3\theta-B_{1})Y_{1}-2UY_{2}+Y_{3}, \tag{5.12b}\]
\[0=(6\theta-B_{1})Y_{0}+Y_{2}, \tag{5.12c}\]
\[0=\left[(6\theta-B_{1})U^{3}+(B_{2}-3\theta^{2})U\right]Y_{0}+(3U^{2}\theta-3\theta^{2})Y_{1}+(U^{3}-3U\theta)Y_{2}, \tag{5.12d}\]
\[0=(2B_{1}-12\theta)U^{2}Y_{0}+(B_{1}-9\theta)UY_{1}-(B_{1}-3\theta+2U^{2})Y_{2}-UY_{3}+Y_{4}, \tag{5.12e}\]
\[0=(6\theta-B_{1})UY_{0}+(6\theta-B_{1})Y_{1}+UY_{2}+Y_{3}. \tag{5.12f}\]
We shall show that there exists \(x_{i}^{2}\) solving these equations if and only if \(\mathfrak{m}_{4}<5\). To do this, we use (5.12c) and deduce from (5.12b) & (5.12f) that \(Y_{3}=(B_{1}-3\theta)Y_{1}=(B_{1}-6\theta)Y_{1}\) and hence \(Y_{1}=Y_{3}=0\).
By the definitions of \(Y_{1}\) and \(Y_{3}\), it follows that \(x_{1}^{2}=x_{2}^{2}\) and \(x_{3}^{2}=x_{4}^{2}\) due to \(\mu_{1}\neq\mu_{2}\). On the other hand, notice that \(\theta=\nu^{2}+1\) due to Proposition 5.1 with \(\sigma=1\). We deduce from (5.9) that
\[B_{2}-3\theta^{2}=3\theta(B_{1}-6\theta)=-12(\nu^{4}-1)=-6(\mathfrak{m}_{4}-5). \tag{5.13}\]
Thus, it is not difficult to see that all the equations in (5.12) are linear combinations of (5.12a) and (5.12c). Furthermore, since \(\mu_{1}\) and \(\mu_{2}\) are the nonzero roots of \(c(u;0,1)=u^{5}-B_{1}u^{3}+B_{2}u\), we have \(\mu_{1}^{2}+\mu_{2}^{2}=B_{1}>0\) and \(\mu_{1}^{2}\mu_{2}^{2}=B_{2}>0\). Thus, by the definitions of \(Y_{2}\) and \(Y_{4}\), we have \(B_{1}Y_{2}=Y_{4}+B_{2}(x_{1}^{2}+\cdots+x_{4}^{2})\). Consequently, (5.12a) and (5.12c) are equivalent to (using \(x_{1}^{2}=x_{2}^{2}\) and \(x_{3}^{2}=x_{4}^{2}\))
\[(B_{2}-3\theta^{2})Y_{0}=B_{2}\cdot 2(x_{1}^{2}+x_{3}^{2}), \tag{5.14a}\]
\[Y_{2}=(B_{1}-6\theta)Y_{0}. \tag{5.14b}\]
Therefore, if \(\mathfrak{m}_{4}\geq 5\), from (5.13) and (5.14b) we see that \(Y_{0}\) and \(Y_{2}\) cannot be both positive. This together with the definitions of \(Y_{0}\) and \(Y_{2}\) indicates the nonexistence of the \(x_{i}^{2}\)'s and thereby the diagonal positive matrix \(\Lambda\).

Finally, we show the existence of the \(x_{i}^{2}\)'s if \(\mathfrak{m}_{4}<5\). From (5.13) we see that \(\mathfrak{m}_{4}<5\) if and only if \(0\leq\nu<1\). Substituting (5.13) into (5.14a) and using \(Y_{0}=\sum_{i=1}^{5}x_{i}^{2}\), we easily obtain
\[x_{1}^{2}+x_{3}^{2}=2\frac{1-\nu^{2}}{\theta}x_{5}^{2}.\]
Substituting this into (5.14b) and putting them into matrix form, we arrive at
\[\begin{bmatrix}1&1\\ \mu_{1}^{2}&\mu_{2}^{2}\end{bmatrix}\begin{bmatrix}x_{1}^{2}\\ x_{3}^{2}\end{bmatrix}=\frac{2(1-\nu^{2})}{\theta}x_{5}^{2}\begin{bmatrix}1\\ 5-3\nu^{2}\end{bmatrix}.\]
Assume \(\mu_{1}<\mu_{2}\). It is not difficult to verify that the two components \(x_{1}^{2}\) and \(x_{3}^{2}\) of the solution to this system are positive if and only if \(\mu_{1}^{2}<5-3\nu^{2}<\mu_{2}^{2}\). Notice that
\[\mu_{2}^{2}>\frac{\mu_{1}^{2}+\mu_{2}^{2}}{2}=\frac{B_{1}}{2}=\nu^{2}+5\geq 5-3\nu^{2}\quad\text{and}\]
\[\mu_{1}^{2}=\frac{B_{1}}{2}-\sqrt{\frac{1}{4}B_{1}^{2}-B_{2}}=5+\nu^{2}-\sqrt{10\nu^{4}+4\nu^{2}+10}<5-3\nu^{2}\]
for \(0\leq\nu<1\). Hence, the existence of positive \(x_{1}^{2}\) and \(x_{3}^{2}\) is demonstrated and the proof is complete.

## 6. Specific kernel functions

In this section we present a number of specific kernel functions which satisfy the conditions required by our previous theoretical results, and then examine their performance with numerical tests.

### Examples of kernel functions

Here are examples of the kernel functions, which may not be normalized. To check the conditions in our main results, we notice that, for even kernels with \(\tilde{\mathfrak{m}}_{0}=1\), the fourth moment of the normalized kernel is
\[\mathfrak{m}_{4}=\int\sqrt{\tilde{\mathfrak{m}}_{2}}\mathcal{K}(\sqrt{\tilde{\mathfrak{m}}_{2}}\xi)\xi^{4}d\xi=\tilde{\mathfrak{m}}_{2}^{-2}\int\mathcal{K}(\eta)\eta^{4}d\eta=\tilde{\mathfrak{m}}_{4}/\tilde{\mathfrak{m}}_{2}^{2},\]
where \(\tilde{\mathfrak{m}}_{4},\tilde{\mathfrak{m}}_{2}\) are the \(4^{th}\) and \(2^{nd}\) moments of the unnormalized kernel, respectively.

**Example 6.1** (Gaussian distribution).: _Our first example is the most widely-used Gaussian distribution_
\[\mathcal{K}(\xi)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{\xi^{2}}{2}\right)\]
_in the EQMOM approach for the BGK equation [6].
It is even and normalized with \(\mathfrak{m}_{4}=3\). According to Theorem 2.10, the two-node Gaussian-EQMOM satisfies the structural stability condition. Indeed, for this kernel, it has even been shown in [13] that the structural stability condition is fulfilled for the \(n\)-node EQMOM._

**Example 6.2** (Piecewise polynomials).: _Our next example is the piecewise polynomials_
\[\mathcal{K}_{j}(\xi)=\frac{j+1}{2}(1-|\xi|)^{j}\mathbf{1}_{|\xi|\leq 1},\quad j=0,1,\dots.\]
_These are even functions and \(\mathcal{K}_{1}\) was used in [7]. A straightforward calculation of the unnormalized \(k\)th moment \(\tilde{\mathfrak{m}}_{j,k}\) gives_
\[\frac{\tilde{\mathfrak{m}}_{j,4}}{\tilde{\mathfrak{m}}_{j,2}^{2}}=\frac{6(j+2)(j+3)}{(j+4)(j+5)}\]
_which is in \([3,6)\) for \(j\geq 3\) and in \([3,5)\) for \(3\leq j\leq 18\). According to Theorems 2.9 & 2.10, the corresponding two-node EQMOM moment system is well-defined and strictly hyperbolic if \(j\geq 3\), and satisfies the structural stability condition if \(3\leq j\leq 18\)._

_On the other hand, the even polynomials_
\[\tilde{\mathcal{K}}_{j}(\xi)=\frac{j+1}{2j}(1-|\xi|^{j})\mathbf{1}_{|\xi|\leq 1},\quad j=1,2,\dots\]
_cannot be used as EQMOM kernels since \(\tilde{\mathfrak{m}}_{j,4}/\tilde{\mathfrak{m}}_{j,2}^{2}=\frac{9(j+3)^{2}}{5(j+1)(j+5)}<3\), violating the conditions of Corollary 2.6._

**Example 6.3** (Kappa distribution).: _For the kernel taken to be the kappa distribution [19]_
\[\mathcal{K}(\xi)=\frac{\Gamma(\kappa+1)}{\kappa\sqrt{\pi\left(\kappa-\frac{3}{2}\right)}\Gamma\left(\kappa-\frac{1}{2}\right)}\left(\frac{\xi^{2}}{\kappa-\frac{3}{2}}+1\right)^{-\kappa}\]
_with \(\kappa>3\), where \(\Gamma(z)=\int_{0}^{\infty}t^{z-1}e^{-t}dt\) is the gamma function, we claim that the two-node EQMOM map \(\mathcal{M}\) in (2.5) is injective. The resultant moment system is strictly hyperbolic if \(\kappa>\frac{7}{2}\) and satisfies the structural stability condition if \(\kappa>4\)._

_To see this, we only need to consider the rescaled kernel \(\mathcal{K}(\xi)=I_{0,\kappa}^{-1}(\xi^{2}+1)^{-\kappa}\) with \(I_{j,\kappa}=\int_{\mathbb{R}}\xi^{2j}(\xi^{2}+1)^{-\kappa}d\xi\). Obviously we have_
\[I_{j,\kappa}=\int_{\mathbb{R}}(\xi^{2j}+\xi^{2j-2}-\xi^{2j-2})(\xi^{2}+1)^{-\kappa}d\xi=I_{j-1,\kappa-1}-I_{j-1,\kappa}.\]
_Moreover, integrating by parts gives_
\[I_{j,\kappa}=\frac{-1}{2(\kappa-1)}\int_{\mathbb{R}}\xi^{2j-1}d(\xi^{2}+1)^{1-\kappa}=\frac{2j-1}{2(\kappa-1)}I_{j-1,\kappa-1}\]
_for \(\kappa>j+1/2\). Thus we obtain \(I_{j,\kappa}=\frac{2j-1}{2\kappa-2j-1}I_{j-1,\kappa}\) and_
\[\frac{\tilde{\mathfrak{m}}_{4}}{\tilde{\mathfrak{m}}_{2}^{2}}=\frac{I_{2,\kappa}/I_{0,\kappa}}{\left(I_{1,\kappa}/I_{0,\kappa}\right)^{2}}=\frac{I_{2,\kappa}/I_{1,\kappa}}{I_{1,\kappa}/I_{0,\kappa}}=\frac{3(2\kappa-3)}{2\kappa-5}>3.\]
_Therefore, by Corollary 2.6 the two-node EQMOM is well defined. Moreover, we have \(\tilde{\mathfrak{m}}_{4}/\tilde{\mathfrak{m}}_{2}^{2}<6\) and \(\tilde{\mathfrak{m}}_{4}/\tilde{\mathfrak{m}}_{2}^{2}<5\) if \(\kappa>\frac{7}{2}\) and \(\kappa>4\), respectively. Consequently, the conditions of Theorems 2.9 & 2.10 are satisfied if \(\kappa>\frac{7}{2}\) or \(\kappa>4\), respectively._

**Example 6.4**.: _Let_
\[\mathcal{K}_{j}(\xi)=\frac{1}{2\Gamma(j+1)}|\xi|^{j}e^{-|\xi|},\quad j=1,2,\dots.\]
_A straightforward calculation gives \(\tilde{\mathfrak{m}}_{4,j}/\tilde{\mathfrak{m}}_{2,j}^{2}=\frac{(j+3)(j+4)}{(j+1)(j+2)}\), which is greater than 3 if and only if \(j=1\). For \(\mathcal{K}_{1}(\xi)\), the ratio is \(\frac{10}{3}<5\).
According to Theorem 2.10, the resultant two-node EQMOM moment system satisfies the structural stability condition._

**Example 6.5**.: _Let_
\[\mathcal{K}(\xi)=\frac{1}{3}(1+\cos\xi)e^{-|\xi|}.\]
_A direct calculation leads to \(\tilde{\mathfrak{m}}_{4}/\tilde{\mathfrak{m}}_{2}^{2}=14\). According to Corollary 2.6 and Theorem 2.10, the two-node EQMOM is well-defined, but the resultant moment system is not hyperbolic._

**Example 6.6** (Uneven kernels).: _Let us give an example of uneven kernels which allows a well-defined and strictly hyperbolic two-node EQMOM moment system. This kernel function reads as_
\[\mathcal{K}(\xi)=\frac{1}{c}\left(\frac{1-\xi}{\xi^{\alpha}}\mathbf{1}_{0<\xi\leq 1}+\frac{1+\xi}{|\xi|^{\beta}}\mathbf{1}_{-1\leq\xi<0}\right)\]
_with \(\alpha=0.6060\), \(\beta=0.5340\) and \(c=\frac{1}{(1-\alpha)(2-\alpha)}+\frac{1}{(1-\beta)(2-\beta)}=3.2845\). The normalized moments \(\mathfrak{m}_{j}\) of \(\mathcal{K}(\xi)\) are \(\mathfrak{m}_{3}=-0.04006\) and \(\mathfrak{m}_{4}=4.7455\). Thus, we have \(\mathfrak{m}_{4}>3+\frac{9}{8}\mathfrak{m}_{3}^{2}\) and the corresponding two-node EQMOM is well-defined. Its strict hyperbolicity is ensured by Theorem 2.8 as the polynomial_
\[p(t)=t^{5}-0.5t^{3}+6.6766\times 10^{-3}t^{2}+0.052271t-2.7789\times 10^{-3}\]
_can be shown to have 5 distinct real roots (this can also be verified numerically)._

**Example 6.7**.: _Our last kernel function is_
\[\mathcal{K}(\xi)=\xi e^{-\xi}\mathbf{1}_{\xi>0}\]
_and its unnormalized moments are obviously \(\tilde{\mathfrak{m}}_{j}=(j+1)!\). It is not difficult to verify the conditions of Corollary 2.6 and, thereby, the two-node EQMOM is well-defined. Moreover, a direct calculation via (2.10) gives \(b_{1}=-2\), \(b_{2}=1\) and \(b_{j}=0\) for \(j\geq 3\). Thus, the polynomial in Theorem 2.8 for the two-node case is \(p(t)=t^{3}(t-1)^{2}\) and has two nonzero roots. By Theorem 2.8, the two-node EQMOM moment system is strictly hyperbolic._

### Numerical validation

In this subsection, we use a Riemann problem of the Euler equations to show that the kernel functions given in the previous subsection produce satisfactory results if they satisfy the conditions required by our theory; otherwise they lead to spurious results. The initial data of the Riemann problem are
\[\rho(0,x)=\begin{cases}3.093,&x<0,\\ 1,&x>0,\end{cases}\quad U(0,x)=0,\quad\theta(0,x)=1,\]
for the Euler equation. These data are used to determine the equilibrium distribution \(f^{eq}(\rho,U,\theta;\xi)\) in the kinetic equation (2.1) and thereby the initial moments. The computational domain \(-1\leq x\leq 1\) is discretized into \(1000\) uniform cells. The Neumann boundary condition \(\partial_{x}f=0\) is applied on the endpoints \(x=\pm 1\). The spatial fluxes are treated as in [6] and the time step is chosen so that the CFL number is less than \(0.5\). Two limiting cases of \(\tau\) in (2.7) are considered here. The continuum limit has infinitely fast collisions with \(\tau=0\), while the free-molecular limit assumes no collision (\(\tau=\infty\)). The analytical solutions for the two cases can be found in [24] and [6], respectively. We test six different kernels for both cases. Fig. 1 shows the spatial profiles of the macroscopic quantities \((\rho,U,\theta)\) at \(t=0.1\) for the free-molecular case. Both the simulated results and analytical solutions are plotted. It is seen that the kernel greatly affects the simulation results of this highly non-equilibrium flow.
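The classification of the tested kernels by the conditions \(3\leq\mathfrak{m}_{4}<6\) (hyperbolicity) and \(3\leq\mathfrak{m}_{4}<5\) (structural stability) can be reproduced with a few lines of numerics. A hedged sketch (scipy assumed; normalization constants are dropped, since the ratio \(\tilde{\mathfrak{m}}_{4}\tilde{\mathfrak{m}}_{0}/\tilde{\mathfrak{m}}_{2}^{2}\) is invariant under them and equals \(\tilde{\mathfrak{m}}_{4}/\tilde{\mathfrak{m}}_{2}^{2}\) for unit-mass kernels):

```python
import numpy as np
from scipy.integrate import quad

# even kernels from Section 6.1, up to constant factors
kernels = {
    "Gaussian":    (lambda x: np.exp(-x**2 / 2), -np.inf, np.inf),
    "kappa, k=6":  (lambda x: (x**2 / 4.5 + 1.0) ** -6, -np.inf, np.inf),
    "K_3":         (lambda x: (1 - abs(x)) ** 3, -1.0, 1.0),
    "K_25":        (lambda x: (1 - abs(x)) ** 25, -1.0, 1.0),
    "Example 6.5": (lambda x: (1 + np.cos(x)) * np.exp(-abs(x)), -np.inf, np.inf),
}

for name, (K, lo, hi) in kernels.items():
    m0, m2, m4 = (quad(lambda x, j=j: x**j * K(x), lo, hi)[0] for j in (0, 2, 4))
    r = m4 * m0 / m2**2      # the normalized fourth moment m_4
    print(f"{name:12s} m4 = {r:6.3f}  hyperbolic: {3 <= r < 6}  stable: {3 <= r < 5}")
```

This reproduces \(\mathfrak{m}_{4}=3\) (Gaussian), \(27/7\) (kappa with \(\kappa=6\)), \(45/14\) (\(\mathcal{K}_{3}\)), about \(5.21\) (\(\mathcal{K}_{25}\)) and \(14\) (Example 6.5), consistent with the comparison that follows.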
The Gaussian kernel, the kappa distribution in Example 6.3 with \(\kappa=6\) and the even polynomial \(\mathcal{K}_{3}(\xi)\) in Example 6.2 all satisfy the structural stability condition, and exhibit similar precision in the simulation. The results with the uneven kernel in Example 6.6 can still roughly reproduce the profiles but show larger errors than the former kernels. Notice that the corresponding moment system is strictly hyperbolic, while it remains unclear whether the structural stability condition (III) is respected. On the other hand, Figs. 1 (b1)-(b3) include the results from two improper kernels. The kernel in Example 6.5 yields a non-hyperbolic system, leading to huge unphysical peaks (termed '\(\delta\)-shocks') in the region where the flow quantities change drastically. This is a common phenomenon for non-hyperbolic moment systems [10]. By contrast, the kernel \(\mathcal{K}_{25}(\xi)\) in Example 6.2 is hyperbolic but violates the stability condition (III). The errors are larger than those from the kernel \(\mathcal{K}_{3}(\xi)\).

Fig. 2 gives the numerical results of the continuum case. The flow can be reasonably predicted, including a right-moving shock wave, a left-moving rarefaction wave and a discontinuity at \(x=0\). The result indicates that the continuum flow regime may be less sensitive to the choice of the kernels.

Figure 1. 1-D Riemann problem with no collision: Profiles of density \(\rho\), velocity \(U\) and temperature \(\theta\) at \(t=0.1\). Kernels: 1. Analytical solution; 2. Gaussian; 3. Kappa distribution with \(\kappa=6\); 4. \(\mathcal{K}_{3}(\xi)\) in Example 6.2; 5. \(\mathcal{K}(\xi)\) in Example 6.6; 6. \(\mathcal{K}(\xi)\) in Example 6.5; 7. \(\mathcal{K}_{25}(\xi)\) in Example 6.2.

Figure 2. 1-D Riemann problem in the continuum limit: Profiles of density \(\rho\), velocity \(U\) and temperature \(\theta\) at \(t=0.1\). Kernels as in Figure 1.

## 7. Conclusions

This paper is concerned with a class of moment closure systems derived with an extended quadrature method of moments (EQMOM) for the one-dimensional BGK equation. The class is characterized by a kernel function, and the unknown distribution is approximated with the ansatz (2.4). We investigate the realizability of the extended method of moments (see Theorem 2.3 & Corollary 2.6). A sufficient condition (Theorem 2.8) on the kernel is identified for the EQMOM-derived moment systems to be strictly hyperbolic. Furthermore, sufficient and necessary conditions are established for the two-node systems to be well-defined and strictly hyperbolic, and to preserve the dissipation property of the kinetic equation. For normalized kernels, the condition is \(\mathfrak{m}_{4}\geq 3+\frac{9}{8}\mathfrak{m}_{3}^{2}\) for the well-definedness, where \(\mathfrak{m}_{3}\) and \(\mathfrak{m}_{4}\) are the 3rd and 4th moments of the kernel function. When the kernel is even, the conditions are \(3\leq\mathfrak{m}_{4}<6\) for hyperbolicity and \(3\leq\mathfrak{m}_{4}<5\) for the dissipativeness corresponding to the \(H\)-theorem of the kinetic equation. In addition, we present a number of examples of the kernel functions and examine their performance numerically.
More precisely, we use a Riemann problem of the Euler equations to show that the kernel functions produce satisfactory results if they satisfy the conditions required by our theoretical results, whereas otherwise they lead to spurious results.

## Acknowledgments

This work is supported by the National Key Research and Development Program of China (Grant no. 2021YFA0719200) and the National Natural Science Foundation of China (Grant no. 12071246). The authors are grateful to Prof. Shuiqing Li and Mr. Yihong Chen at Tsinghua University for insightful discussions.
2310.05678
Parallel expansion of a fuel pellet plasmoid
The problem of the expansion and assimilation of a cryogenic fuel pellet injected into a hot plasma is considered. Due to the transparency of the plasmoid to ambient particles, it is found that electrons reach a `quasi-equilibrium' (QE) which is characterised by a steady-state on the fastest collisional timescale. The simplified electron kinetic equation of the quasi-equilibrium state is solved. Taking a velocity moment of the electron kinetic equation permits a fluid closure, yielding an evolution equation for the parameters describing the QE distribution function. In contrast to the Braginskii equations, the closure does not require that electrons have a short mean free path compared to the size of density perturbations and permits an anisotropic and highly non-Maxwellian distribution function. Since the QE electron distribution function accounts for both trapped and passing electrons, the self-consistent electric potential that causes the expansion can be properly described, in contrast to earlier models of pellet plasmoid expansion with an unbounded potential. The plasmoid expansion is simulated using both a Vlasov model and a cold fluid model for the ions. During the expansion plasmoid ions and electrons obtain a nearly equal amount of energy; as hot ambient electrons provide this energy in the form of collisional heating of plasmoid electrons, the expansion of a pellet plasmoid is expected to be a potent mechanism for the transfer of energy from electrons to ions on a timescale shorter than that of ion-electron thermalisation.
A. M. Arnold, P. Aleynikov, B. N. Breizman
2023-10-09T12:47:29Z
http://arxiv.org/abs/2310.05678v1
###### Abstract

The problem of the expansion and assimilation of a cryogenic fuel pellet injected into a hot plasma is considered. Due to the transparency of the plasmoid to ambient particles, it is found that electrons reach a 'quasi-equilibrium' (QE) which is characterised by a steady-state on the fastest collisional timescale. The simplified electron kinetic equation of the quasi-equilibrium state is solved. Taking a velocity moment of the electron kinetic equation permits a fluid closure, yielding an evolution equation for the parameters describing the QE distribution function. In contrast to the Braginskii equations, the closure does not require that electrons have a short mean free path compared to the size of density perturbations and permits an anisotropic and highly non-Maxwellian distribution function. Since the QE electron distribution function accounts for both trapped and passing electrons, the self-consistent electric potential that causes the expansion can be properly described, in contrast to earlier models of pellet plasmoid expansion with an unbounded potential. The plasmoid expansion is simulated using both a Vlasov model and a cold fluid model for the ions. During the expansion plasmoid ions and electrons obtain a nearly equal amount of energy; as hot ambient electrons provide this energy in the form of collisional heating of plasmoid electrons, the expansion of a pellet plasmoid is expected to be a potent mechanism for the transfer of energy from electrons to ions on a timescale shorter than that of ion-electron thermalisation.

Alistair M. Arnold\({}^{1}\)†, Pavel Aleynikov\({}^{1}\), Boris N. Breizman\({}^{2}\)

† Email address for correspondence: [email protected]

## 1 Introduction

During a recent experimental campaign of the W7-X stellarator, fuel pellet injection was found to be associated with a large transfer of energy from electrons to ions (Baldzuhn _et al._, 2019, 2020; Bozhenkov _et al._, 2020). Such an energy transfer is generally desirable, since it acts to bring the ion temperature up to the electron temperature (the ion temperature being lower than the electron temperature during normal operation), and a higher ion temperature results in a larger fusion cross-section. Subsequent investigation of the dynamics of the injection and assimilation of fuel pellets has suggested that a possible mechanism for the energy transfer is the rapid ambipolar expansion of the ionised pellet material - the pellet plasmoid - along magnetic field lines (Aleynikov _et al._, 2019; Runov _et al._, 2021; Arnold _et al._, 2021). The aim of this paper is to resolve the inconsistencies present in these models and provide a rigorous model of the parallel plasmoid expansion. What follows is a brief recapitulation of the processes by which the pellet plasmoid is formed, the reason for its parallel expansion and concomitant electron-ion energy transfer, and a summary of the approaches and pitfalls of earlier models. An outline of a new approach which does not suffer from these pitfalls is provided before being realised mathematically.

When a fuel pellet is injected into an MCF (Magnetic Confinement Fusion) device, the incoming energy flux from the ambient plasma ablates the surface of the pellet and produces a gas cloud (Parks _et al._, 1977).
The pellet and gas are composed of electrically neutral molecules, but plasma is continuously generated within the gas cloud by the collisions of the high-energy ions and electrons composing the multi keV ambient plasma with gas molecules. Subsequently, the pellet and gas cloud continue to cross magnetic field lines at the speed at which they were injected, but some of the newly-ionised plasma is left behind; that which was not collisionally 'dragged' along with the moving gas cloud. This is because the plasma constituents are charged particles and follow Larmor orbits that 'pin' the particles to the field line. The result is that a _plasmoid_, a localised excess density of plasma, is deposited on field lines that intersected the gas cloud as it traversed the device. Since the plasmoid is a localised density perturbation and electrons have a much higher thermal velocity than ions, the electric potential required to maintain quasineutrality acts to trap electrons inside the plasmoid and accelerate ions away from the plasmoid. Figure 1 shows a schematic of the plasmoid and electric potential. Since the potential acts to trap electrons, we will use the names 'well' and 'potential' interchangeably. With regard to pellet plasmoids, the density is such that the electric potential drives parallel dynamics much more quickly than the transverse dynamics occur, the latter being due to drifts. Therefore, as with previous investigations, we consider only the parallel expansion of the pellet plasmoid on one given field line. We stress that not all of the plasma produced from the pellet ablatant fits the description of the previous paragraph. Naturally, plasma that is 'dragged along' with the gas cloud exhibits quite different dynamics. However, for 'fast' pellet injection devices proportionally more of the plasmoid dynamics occur on field lines from which the gas cloud and pellet have departed (Arnold _et al._, 2021). The injection devices fitting the criteria of being 'fast' are becoming the norm in MCF experiments, so the conclusions drawn from the parallel plasmoid expansion in the absence of gas can be expected to apply reasonably well to pellet injection in future MCF devices. The dynamics of any plasmoid immersed in an ambient plasma depend greatly upon the plasma and plasmoid parameters, such as their relative temperatures, densities, the plasmoid size, and so on. Naturally, it is difficult to describe plasmoid dynamics with too wide-ranging a choice of parameters, so our attention must be restricted to plasmoids broadly corresponding to those produced by pellet injection in a state-of-the-art MCF device. We take as a reference point the W7-X stellarator, since the success of its pellet injection campaign provides motivation for studying pellet plasmoids. Further, the temperatures and densities in the core of W7-X are generally comparable to other high-performance MCF devices. For the purpose of untangling the different phenomena involved in plasmoid expansion it is helpful to provide concrete plasma parameters. We consider an ambient plasma of electron density \(n_{a}=5\times 10^{19}\,\mathrm{m}^{-3}\) at a temperature \(T_{a}=5\,\mathrm{keV}\). A typical line-integrated density along the field line of a fuel pellet plasmoid in W7-X is \(N_{p}=10^{22}\,\mathrm{m}^{-2}\) (Arnold _et al._, 2021).
The fuel pellets in W7-X contain approximately \(10^{20}\) electrons and penetrate roughly \(0.1\,\mathrm{m}\) into the plasma, resulting in the average line-density in the radial direction of \(10^{21}\,\mathrm{m}^{-1}\)(Baldzuhn _et al._, 2019). In W7-X the flux surface located at minor radius \(r=0.3\,\mathrm{m}\), given the major radius \(R=5\,\mathrm{m}\), has a flux-surface integrated density of \(3\times 10^{21}\,\mathrm{m}^{-1}\) if the density of this flux surface is \(5\times 10^{19}\,\mathrm{m}^{-3}\). Hence for such a flux surface the temperature is not strongly quenched after assimilation. For high-performance scenarios in W7-X the quenching is even weaker due to the higher plasma densities. Therefore, unlike killer pellets, which quench the temperature completely, on large flux surfaces fuelling pellets only slightly affect the temperature. We therefore neglect any change in \(T_{a}\) during the plasmoid expansion. We consider irrational flux surfaces where individual field lines have a connection length \(L_{F}\) that is, in principle, infinite. However, given that the plasmoid has a transverse size \(r_{I}\), the connection length of the flux tube containing the plasmoid is in practice \((2\pi R)(2\pi r)/r_{I}\), since this is the length after which the flux tube of diameter \(r_{I}\) self-intersects. For W7-X pellets \(r_{I}\approx 0.1\,\)m shortly after injection (Baldzuhn _et al._, 2019; Arnold _et al._, 2021), giving a maximum connection length of \(600\,\)m. Since the plasmoid is expected to reach this size after its density has dropped to practically the value of the ambient plasma for \(N_{p}=10^{22}\,\)m\({}^{-2}\)(Aleynikov _et al._, 2019), we formally take the connection length to be infinite. Arnold _et al._ (2023) in contrast treated the electron kinetics in a high-Z plasmoid on a field line of finite connection length, accounting for the quenching of the ambient plasma. Since plasmoid electrons are 'born' at energies comparable to the ionisation energy, of order tens of eV, but are immersed in an ambient plasma with a temperature on the order of several keV, the electron distribution function as a whole will consist of a cold, dense core of plasmoid electrons and a hot, sparse tail of ambient electrons. The distribution function is only close to a Maxwellian after the plasmoid electrons have been sufficiently heated by the ambient electrons, which happens after the plasmoid has significantly expanded with the plasma parameters we use here. The primary concern with previous models of the expansion is that they treated only the cold plasmoid electrons, assuming that they have a near-Maxwellian distribution function, but did not treat ambient electrons. These electrons were simply assumed to be of a constant density, hence providing collisional heating to the plasmoid electrons. Further, ambient ions were not considered at all in the fluid model for the ions. The consequence of this approach is that the electric potential decreases without bound as the plasmoid density vanishes. Clearly, one cannot use this approach as a basis for treating both trapped and passing electrons. The fact that the electric potential was unphysical also called into question the result of the electron-ion energy transfer and other aspects of the expansion. A more sophisticated approach to electrons is required to resolve these issues. 
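The geometric estimates above are elementary; for concreteness, a small Python check (ours, using the W7-X numbers quoted in the text) reproduces both the effective connection length and the flux-surface line density:

```python
import numpy as np

R, r = 5.0, 0.3     # W7-X major and minor radius [m]
r_I = 0.1           # transverse plasmoid size shortly after injection [m]
n_a = 5e19          # ambient electron density [m^-3]

# length after which a flux tube of diameter r_I self-intersects
print((2 * np.pi * R) * (2 * np.pi * r) / r_I)   # ~592 m, the quoted ~600 m

# flux-surface integrated density (radial line density) at this surface
print(4 * np.pi**2 * R * r * n_a)                # ~3e21 m^-1
```

Since the pellet deposits an average radial line density of about \(10^{21}\,\mathrm{m}^{-1}\), well below the \(3\times 10^{21}\,\mathrm{m}^{-1}\) of the flux surface, the weak quenching of \(T_{a}\) asserted above follows directly.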
We will consider electron kinetics in the variables of parallel energy \(\mathcal{E}_{\parallel}=m_{e}v_{\parallel}^{2}/2-e\phi(z)\), where \(v_{\parallel}\) is the velocity parallel to the field line, perpendicular energy \(\mathcal{E}_{\perp}=m_{e}v_{\perp}^{2}/2\), where \(v_{\perp}\) is the speed perpendicular to the field lines, \(z\), the position along the field line, and \(t\), time. In anticipation of the form of the distribution function for different energies we split the phase space into regions I, II, and III, respectively corresponding to the deeply trapped electrons, hot trapped electrons, and hot passing electrons (Fig. 2). In each region we employ the separation of timescales appropriate for the plasmoid and plasma parameters mentioned earlier in this section in order to obtain simplified kinetic equations for the electrons. We find that trapped electrons collide with the cold, dense plasmoid electrons (and the plasmoid ions) much more quickly than with the passing electrons. At the same time, owing to the high temperature of the ambient plasma, the mean free path of passing and hot trapped electrons exceeds the length of the plasmoid; the plasmoid appears essentially transparent to the ambient electrons, and hot trapped electrons bounce inside the well many times before colliding. The latter effect means that trapped electrons behave 'adiabatically' as the potential well expands. We will show that, except at the very earliest stage of expansion, the ordering of timescales leads to the electrons reaching a 'quasi-equilibrium' (QE) state which is characterised by rapid electron collisions within the plasmoid causing the electron distribution to exhibit a steady-state on the timescale on which trapped electrons collide with the plasmoid. The steady-state is established with no net flux of electrons into the trapped region of phase-space to prevent the 'charging up' of the plasmoid (and hence the violation of quasineutrality) on this timescale. The QE electron distribution function is analogous to an equilibrium distribution function, which is indeed attained if electron-electron collisions are _the_ fastest effect. In our case, however, the bounce period of hot trapped electrons is considerably shorter than the collision timescale. We note that the QE state and the equilibrium state (which has a Maxwellian energy distribution) differ conceptually; a Maxwellian distribution exhibits no collisional flux, but the QE state is characterised by a vanishing _divergence_ of collisional flux; in this sense QE is a 'dynamical' steady-state. It will be shown that the QE distribution is specified in terms of the 'deeply trapped' distribution function occupying region I, a Maxwellian with homogeneous temperature \(T\), which is uniquely defined by two parameters. These two parameters must be such that there is no net flux of electrons into the trapped region on the timescales on which QE is established. This allows us to express one parameter of the lowest-order distribution in terms of the other. Once the electron distribution is known in terms of this remaining parameter, which we choose to be the temperature, its zeroth moment may be taken to obtain an expression for electron density, which, combined with the quasineutrality condition, provides an implicit expression for the self-consistent electric potential \(\phi\) in terms of the temperature.
The velocity moment corresponding to line-integrated energy density is then taken over the electron kinetic equation to obtain an energy conservation law, which is practically used as the evolution equation for \(T\). A description of the expansion requires a model for ion motion. Two models were considered: a cold-fluid model with a single flow velocity and a collisionless kinetic (Vlasov) model. The first model is pragmatically justified by a possible application of this work being to provide a simplified model for pellet plasmoid expansion in an established fluid code. The second is justified by the long mean-free-path of hot ambient ions. These models represent opposite collisionality regimes for ions, and we therefore expect the shared qualitative properties to remain in a more sophisticated and accurate model for the ions. The qualitative property of greatest concern is the electron-to-ion energy transfer during the expansion. With each ion model the system was evolved until the plasmoid and ambient densities were similar and the plasmoid electron temperature \(T\) had reached \(T_{a}\); the plasmoid assimilated with the ambient plasma. After this point the electric field does not provide much energy to the ions, so the energy transfer from electrons to ions may be considered complete. With the given choice of plasma parameters the densities and temperatures equilibrate at approximately the same time. With a larger line-integrated density \(N_{p}\) the temperature equilibration would occur well before the densities are comparable. With a much smaller line-integrated density the densities become similar well before the temperatures have equilibrated. More discussion of how the QE formalism fits into the larger topic of plasmoid expansion is given in a later section. Late in the plasmoid expansion with the cold-fluid model for ions, a steepening of the density profile results from the fast plasmoid moving into the ambient plasma, causing a shock near the extremities of the plasmoid. This shock will cause the generation of sound waves and solitons that propagate into the ambient plasma. The wave propagation will sap the kinetic energy of the plasmoid and possibly alter the electron-ion energy balance. However, the shock and wave dynamics may only be properly accounted for with Poisson's equation for the electric potential since a deviation from quasineutrality will occur near the shock. As we neglect any deviation from quasineutrality, sound wave and soliton generation is suppressed. In the Vlasov ion model, which accounts for the ambient ion temperature, no shock is observed; the density profile smoothly decreases to the ambient density. This is because many ambient ions can now traverse the entire plasmoid or are rapidly reflected from the potential, hence do not pile up at the expanding edge of the plasmoid. It is expected that as the ion temperature decreases the system is more prone to forming a shock.

### Self-similar solution to plasmoid expansion

Aleynikov _et al._ (2019) provided the self-similar solution to plasmoid expansion given that the plasmoid is transparent to the ambient plasma. Although we seek to rectify the issues in the model therein, the density and temperature profiles obtained with the plasma parameters given earlier will be used to justify the ordering which will be used to simplify the electron kinetic problem. We require only order-of-magnitude estimates to find this ordering, so we deem the self-similar solution good enough for this purpose.
However, as we wish to model the long-term expansion of the plasmoid, well past the applicability of the self-similar model, we must modify the profiles in Aleynikov _et al._ (2019) so that they are valid (i.e. giving a correct order-of-magnitude estimate) for the long-term expansion. Firstly, we note that in the self-similar expansion the plasmoid electron temperature is given by \(T=\nu_{h}T_{a}t\) for \(t\ll\nu_{h}^{-1}\), where \[\nu_{h}=\frac{n_{a}e^{4}\ln\Lambda}{6\sqrt{2}\pi^{\frac{3}{2}}\varepsilon_{0}^{2}m_{e}^{\frac{1}{2}}T_{a}^{\frac{3}{2}}} \tag{1}\] is the inverse heating time of the cold plasmoid electrons by a hot population of density \(n_{a}\) and temperature \(T_{a}\). The expression \[T\sim T_{a}\left(1-\mathrm{e}^{-\nu_{h}t}\right) \tag{2}\] agrees with this linearly increasing temperature at early times, but exponentially approaches \(T_{a}\) as time advances, which is the characteristic behaviour of a cold Maxwellian being heated by a hotter one. Therefore we expect the above expression to be adequate in describing the plasmoid electron temperature in both the short and long-term evolution of the plasmoid. In the self-similar solution, the density becomes infinite as \(t\to 0\) and vanishes for \(t\to\infty\), neglecting the fact that the electron density approaches \(n_{a}\) as time advances. Therefore the expression \[n_{m}\sim N_{p}\nu_{h}\sqrt{\frac{3m_{i}}{8\pi(\nu_{h}t)^{3}T_{a}}}+n_{a}, \tag{3}\] for the peak plasma density, which simply adds the ambient electron density \(n_{a}\) to the self-similar solution, is a plausible expression for both long and short term evolution. Although the electric potential in Aleynikov _et al._ (2019) diverges as \(|z|\to\infty\), the Boltzmann relation \[e\phi_{m}\sim T\ln\left(\frac{n_{m}}{n_{a}}\right) \tag{4}\] provides a good estimate for the height of the potential. This estimate is also supported by the solution to the self-consistent electron kinetic problem in Arnold _et al._ (2023), which showed that the potential height is of the same order of magnitude as that suggested by the Boltzmann relation, its exact value being somewhat larger when \(T\ll T_{a}\). Equation (1.3) expresses the _peak_ plasma density, which we will subsequently use in the ordering. Of course, electrons move throughout the plasmoid, more for passing electrons and less for trapped electrons, so the most rigorous approach would be to consider 'average' quantities throughout the orbit. This would be very complicated, and relies on a detailed knowledge of the shape of the potential which we have not yet obtained. Therefore, we apply the ordering using the quantities at the peak of the plasmoid with the reasoning that the plasmoid density is very large at its peak and decreases rapidly as one moves away from it. Hence, when considering trapped electrons the orbit-average of any quantity will be heavily weighted by the value at the peak. When considering passing electrons, we will actually obtain the same expressions as those obtained by rigorous consideration.

Figure 1: Schematic of the electric potential induced by the presence of the plasmoid (Arnold _et al._, 2023). Example trapped (with turning points \(\pm z_{c}\)) and passing electron trajectories are included. The profiles are assumed to be even and monotonically decreasing in \(|z|\), with the electron density and potential reaching their maxima \(n_{m}\) and \(\phi_{m}\) at \(z=0\).
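To make the orderings of the next section concrete, a short Python sketch (ours; SI units, hydrogenic ion mass assumed, and \(\ln\Lambda=15\) as assumed later in the text) evaluates the modified self-similar estimates (1.1)-(1.4) for the reference parameters:

```python
import numpy as np
from scipy.constants import e, m_e, m_p, epsilon_0, pi

n_a = 5e19          # ambient electron density [m^-3]
T_a = 5e3 * e       # ambient temperature, 5 keV in joules
N_p = 1e22          # line-integrated plasmoid density [m^-2]
lnL = 15.0          # Coulomb logarithm
m_i = m_p           # hydrogenic plasmoid assumed

# inverse heating time, Eq. (1.1)
nu_h = n_a * e**4 * lnL / (6 * np.sqrt(2) * pi**1.5
                           * epsilon_0**2 * np.sqrt(m_e) * T_a**1.5)
print(f"heating time 1/nu_h = {1/nu_h:.2e} s")   # a fraction of a millisecond

for x in (0.1, 0.5, 1.0, 3.0):                   # x = nu_h * t
    T = T_a * (1 - np.exp(-x))                                         # (1.2)
    n_m = N_p * nu_h * np.sqrt(3 * m_i / (8 * pi * x**3 * T_a)) + n_a  # (1.3)
    print(f"x={x}: T/T_a={T/T_a:.3f}, n_m/n_a={n_m/n_a:.2f}, "
          f"e*phi_m/T_a~{(T/T_a)*np.log(n_m/n_a):.3f}")                # (1.4)
```

For these parameters the heating time is of order \(10^{-4}\,\mathrm{s}\), and the estimated \(e\phi_{m}/T_{a}\) stays at a few tenths over several heating times, consistent with the discussion of the estimate (20) in the next section.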
## 2 Electron kinetics The kinetic equation for the electron distribution function \(f\) is given by \[\frac{\partial f}{\partial t}+v_{\parallel}\frac{\partial f}{\partial z}+\frac{e }{m_{e}}\frac{\partial\phi}{\partial z}\frac{\partial f}{\partial v_{\parallel} }=C(f,f)+\sum_{k}C_{e,ik}(f) \tag{1}\] in the variables \((v_{\parallel},v_{\perp},z,t)\), where \(v_{\parallel}\) is the velocity parallel to the field line, \(v_{\perp}\) is the speed perpendicular to the field line, \(z\) is the coordinate along the field line, \(t\) is time, \(C(f,f)\) is the electron self-collision operator and \(C_{e,ik}(f)\) is the collision operator for electrons colliding against ion species \(k\). We change to the independent variables (\(\mathcal{E}_{\parallel}=m_{e}v_{\parallel}^{2}/2-e\phi,\mathcal{E}_{\perp}=m_{ e}v_{\perp}^{2}/2,z,t\)): \[\frac{\partial f}{\partial t}+v_{\parallel}\frac{\partial f}{\partial z}-e \frac{\partial\phi}{\partial t}\frac{\partial f}{\partial\mathcal{E}_{ \parallel}}=C(f,f)+\sum_{k}C_{e,ik}(f), \tag{2}\] showing that collisionless change in electron energy is associated with time-variation of the electric potential. We note that with a stationary potential both \(\mathcal{E}_{\parallel}\) and \(\mathcal{E}_{\perp}\) are constants of motion in the absence of collisions. Passing electrons have \(\mathcal{E}_{\parallel}\geqslant 0\) and trapped \(\mathcal{E}_{\parallel}<0\). The minimum parallel energy of an electron is \(\mathcal{E}_{\parallel}=-e\phi_{m}\), which corresponds to an electron with \(v_{\parallel}=0\) which remains at \(z=0\). We now solve the kinetic equations in regions I, II, and III. We split up the distribution function into its representation in each region, \(f_{\rm I}\), \(f_{\rm II}\), and \(f_{\rm III}\). That is, when we are in region I, for example, we solve the kinetic equation for \(f_{\rm I}\). No matter where we are in phase-space, collisions are experienced with every other region of phase-space via \(C(\cdot,f_{\rm I}+f_{\rm II}+f_{\rm III})\). In this sense it is understood that \(f_{\rm I}\) (or \(f_{\rm II}\) or \(f_{\rm III}\)) is the entire distribution function in region I (or II or III), but is zero outside of its own region. Figure 2: Schematic of the phase-space domain of the electron kinetic problem at \(z=0\). This is also the domain for the bounce-averaged kinetic problems. The dotted line indicates \(\mathcal{E}=\mathcal{E}_{\parallel}+\mathcal{E}_{\perp}=0\). The diagonal dashed line indicates \(\mathcal{E}=\mathcal{E}_{\rm I/II}\), which separates regions I and II. The vertical dashed line indicates \(\mathcal{E}_{\parallel}=0\), the trapped-passing separatrix. ### Solving the kinetic equation for passing electrons (region III) The kinetic equation for electrons in region III is given by \[\frac{\partial f_{\rm III}}{\partial t}+v_{\parallel}\frac{\partial f_{\rm III}} {\partial z}-e\frac{\partial\phi}{\partial t}\frac{\partial f_{\rm III}}{ \partial\mathcal{E}_{\parallel}}=C(f_{\rm III},f)+\sum_{k}C_{e,ik}(f_{\rm III}). \tag{3}\] The collision frequency of an electron inside the plasmoid with the plasmoid electrons and ions is approximately given by \[\nu_{p}(v)=\frac{n_{p}(1+Z_{\rm eff})e^{4}\ln\Lambda}{8\pi\varepsilon_{0}^{2}m _{e}^{2}v^{3}}, \tag{4}\] where \(v\) is the electron speed, \(n_{p}\) is the plasmoid electron density, and \[Z_{\rm eff}=\frac{\sum_{k}n_{ik}Z_{k}^{2}}{\sum_{k}n_{ik}Z_{k}} \tag{5}\] is the effective charge of the ions. 
Equation (4) is derived from the frequency with which an electron experiences pitch-angle scattering, assuming that quasineutrality \[n_{e}\approx n_{p}=\sum_{k}n_{ik}Z_{k}, \tag{6}\] where \(n_{e}\) is the electron density, holds inside the plasmoid. The typical velocity of a passing electron is given by the ambient electron thermal velocity \(v_{T_{a}}=\sqrt{2T_{a}/m_{e}}\). The inverse of the time taken for a passing electron to transit the plasmoid is \(\nu_{T}=v_{T_{a}}/L_{p}\) for plasmoid length \(L_{p}=N_{p}/n_{p}(z=0)\). Hence, for a hydrogenic plasma, in region III the ratio of the collision frequency with the plasmoid and the inverse transit time is given by \[\frac{\nu_{p}(v_{T_{a}})}{\nu_{T}}:=\mu=\frac{N_{p}e^{4}\ln\Lambda}{16\pi\varepsilon_{0}^{2}T_{a}^{2}}. \tag{7}\] With the plasma and plasmoid parameters given in the Introduction this ratio is much less than unity: \[\mu\ll 1, \tag{8}\] where for the purpose of calculation the Coulomb logarithm \(\ln\Lambda\) is assumed to be equal to 15. This implies that the mean free path of passing electrons is much longer than the plasmoid, so the plasmoid appears essentially transparent to ambient electrons. We therefore refer to \(\mu\) as the opacity of the plasmoid. \(\mu\) is independent of any parameters that change during the expansion, hence it is small throughout the _entire_ expansion. The terms in the kinetic equation (3) containing time derivatives correspond to collisionless changes in the energy of passing electrons, which certainly occur on a longer timescale than the transit time. Therefore the shortest timescale in region III is the transit time, which is associated with the convective term in the kinetic equation; the lowest-order kinetic equation for passing electrons is \[v_{\parallel}\frac{\partial f_{\rm III}}{\partial z}=0, \tag{9}\] which implies that \(f_{\rm III}\) is independent of \(z\). The kinetic equation for passing electrons (3) then reduces to \[\frac{\partial f_{\rm III}}{\partial t}-e\frac{\partial\phi}{\partial t}\frac{\partial f_{\rm III}}{\partial\mathcal{E}_{\parallel}}=C(f_{\rm III},f)+\sum_{k}C_{e,ik}(f_{\rm III}), \tag{10}\] the solution to which can be obtained immediately by bounce-averaging. We define the bounce integral of a function \(g(\mathcal{E}_{\parallel},\mathcal{E}_{\perp},z,t)\) to be \[\oint g\,\mathrm{d}z:=2\int_{-z_{c}}^{z_{c}}g\,\mathrm{d}z, \tag{11}\] where \(z_{c}(\mathcal{E}_{\parallel},t)>0\) is the turning point such that \[\mathcal{E}_{\parallel}+e\phi(z_{c},t)=0 \tag{12}\] (cf. Fig. 1). The bounce average of \(g\) is given by \[\langle g\rangle=\frac{1}{\tau}\oint g\,\mathrm{d}z \tag{13}\] for bounce period \[\tau=\oint\frac{\mathrm{d}z}{v_{\parallel}}. \tag{14}\] We note that on an infinitely long magnetic field line the bounce-average of any function \(g\) for \(\mathcal{E}_{\parallel}>0\) is given by \[\langle g\rangle\big{|}_{\mathcal{E}_{\parallel}>0}=\lim_{|z|\to\infty}g. \tag{15}\] Since the potential vanishes at infinity and we expect the distribution function to be constant at infinity, the bounce average of Eq. (10) is solved by the Maxwellian defining the ambient plasma: \[f_{\mathrm{III}}=f_{a}=n_{a}\left(\frac{m_{e}}{2\pi T_{a}}\right)^{\frac{3}{2}}\mathrm{e}^{-\frac{\mathcal{E}}{T_{a}}}. \tag{16}\] We note that, as mentioned earlier, we have used the expression for peak plasmoid density to deduce the ordering (8) and obtain Eq. (16), despite the fact that passing electrons spend little time inside the plasmoid.
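As a quick numerical check of the ordering (8), the following Python sketch (ours; \(\ln\Lambda=15\) and hydrogenic ions assumed) evaluates \(\mu\) for the reference parameters, together with the coefficient that enters the potential-height estimate (20) used in the next subsection:

```python
import numpy as np
from scipy.constants import e, m_e, m_p, epsilon_0, pi

N_p = 1e22       # line-integrated plasmoid density [m^-2]
T_a = 5e3 * e    # ambient temperature, 5 keV in joules
lnL = 15.0       # Coulomb logarithm assumed in the text

# plasmoid opacity, Eq. (7)
mu = N_p * e**4 * lnL / (16 * pi * epsilon_0**2 * T_a**2)
print(f"mu = {mu:.3f}")   # ~0.04 << 1: the plasmoid is transparent

# coefficient (2/(pi*sqrt(3))) * sqrt(m_i/m_e) * mu appearing in (20) below
print((2 / (pi * np.sqrt(3))) * np.sqrt(m_p / m_e) * mu)   # ~0.6, order unity
```

The printed values, \(\mu\approx 0.04\) and a coefficient of order unity, are what underpin the claims \(\mu\ll 1\) and \(e\phi_{m}\sim T_{a}\) made in this section.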
The same result can be obtained by a slightly different approach: Eq. (7) is identical to the expression for the fraction of passing electrons slowed down by collisions with the plasmoid as they traverse it (Arnold _et al._, 2021). If the fraction of passing electrons slowed by the plasmoid is small, then the passing electron distribution is modified little by collisions and is simply replenished from flux emerging from \(|z|\to\infty\), resulting in the distribution identical to that at infinity; hence Eq. (16).

### Solving the kinetic equation for deeply trapped electrons (region I)

The kinetic equation for electrons in region I is given by \[\frac{\partial f_{\mathrm{I}}}{\partial t}+v_{\parallel}\frac{\partial f_{\mathrm{I}}}{\partial z}-e\frac{\partial\phi}{\partial t}\frac{\partial f_{\mathrm{I}}}{\partial\mathcal{E}_{\parallel}}=C(f_{\mathrm{I}},f_{\mathrm{I}})+C(f_{\mathrm{I}},f_{\mathrm{II}}+f_{\mathrm{III}})+\sum_{k}C_{e,ik}(f_{\mathrm{I}}). \tag{17}\] We expect the potential well to be parabolic at its peak; deeply trapped electrons bounce inside this parabola and collide with the plasmoid when its density is near its peak. For a parabolic potential well of height \(\phi_{m}\) and width \(L_{p}\), the bounce frequency is given by \[\nu_{B}\sim\frac{\sqrt{2e\phi_{m}/m_{e}}}{L_{p}}, \tag{18}\] and is associated with the convective term, so we write \[v_{\parallel}\frac{\partial f_{\mathrm{I}}}{\partial z}\sim\nu_{B}f_{\mathrm{I}}. \tag{19}\] Substituting temperature (2) and density (3) from the modified self-similar solution into the Boltzmann relation (4) yields \[\frac{e\phi_{m}}{T_{a}}\sim(1-\mathrm{e}^{-\nu_{h}t})\ln\left(1+\frac{2}{\pi\sqrt{3}}\sqrt{\frac{m_{i}}{m_{e}}}\mu(\nu_{h}t)^{-\frac{3}{2}}\right). \tag{20}\] Given that \((2/(\pi\sqrt{3}))\sqrt{m_{i}/m_{e}}\mu\) is at least order unity, the above is order unity for several heating times \(\nu_{h}^{-1}\), with the exception of the very early stage of the expansion. This is the case for the plasma parameters given in the Introduction. We also note that while \(T<T_{a}\) the Boltzmann relation provides something of an underestimate, strengthening the argument that \(e\phi_{m}\sim T_{a}\) for smaller values of \(\mu\). Owing to the height of the potential, the bounce frequency in region I is of the same order as the transit frequency of region III: \[\nu_{B}\sim\frac{\sqrt{2T_{a}/m_{e}}}{L_{p}}. \tag{21}\] Since we expect \(f_{\mathrm{I}}\) to correspond to a dense population of cold electrons, we associate the collision terms against \(f_{\mathrm{I}}\) with the frequency of collisions with the plasmoid; we write \[C(f_{\mathrm{I}},f_{\mathrm{I}})\sim\sum_{k}C_{e,ik}(f_{\mathrm{I}})\sim\nu_{p}\left(v_{T}\right)f_{\mathrm{I}}, \tag{22}\] where \(v_{T}=\sqrt{2T/m_{e}}\) is the typical electron speed within region I. Since \(f_{\mathrm{II}}\) and \(f_{\mathrm{III}}\) represent the hot tail, we associate the collision terms against these with the heating rate. Since \(T\) approaches \(T_{a}\) exponentially as time advances, this heating rate decreases exponentially as time advances: we define the heating rate to be \[\frac{1}{T_{a}}\frac{\mathrm{d}T}{\mathrm{d}t}=\nu_{h}\mathrm{e}^{-\nu_{h}t} \tag{23}\] and write \[C(f_{\mathrm{I}},f_{\mathrm{II}}+f_{\mathrm{III}})\sim\nu_{h}\mathrm{e}^{-\nu_{h}t}f_{\mathrm{I}}. \tag{24}\]
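For orientation, evaluating the estimate (20) with the order-unity coefficient computed in the previous subsection shows how the well height evolves over several heating times (a rough sketch, ours):

```python
import numpy as np

coef = 0.6   # (2/(pi*sqrt(3))) * sqrt(m_i/m_e) * mu for the reference parameters

for x in (0.1, 0.3, 1.0, 3.0):   # x = nu_h * t
    print(x, (1 - np.exp(-x)) * np.log(1 + coef * x**-1.5))   # e*phi_m/T_a, Eq. (20)
```

This yields values of a few tenths of \(T_{a}\), consistent with the remark above that the Boltzmann relation somewhat underestimates the height when \(T\ll T_{a}\).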
Comparing the frequency of collisions with the plasmoid with the heating using the modified self-similar temperature (2) and density (3) gives \[\frac{\nu_{h}\mathrm{e}^{-\nu_{h}t}}{\nu_{p}(v_{T})}\sim\frac{4}{3\sqrt{\pi}}\mathrm{e}^{-\nu_{h}t}\left(1-\mathrm{e}^{-\nu_{h}t}\right)^{\frac{3}{2}}\left(1+\frac{2}{\pi\sqrt{3}}\sqrt{\frac{m_{i}}{m_{e}}}\mu(\nu_{h}t)^{-\frac{3}{2}}\right)^{-1}. \tag{25}\] With the plasma parameters used in the Introduction the above is much smaller than unity for all times: collisions with the plasmoid occur much more frequently than collisions that cause heating. This effect is also enhanced by the fact that in a parabolic potential well we expect the heating rate to be slightly reduced, due to the reduction in the density of passing electrons and their reduced collisionality (Arnold _et al._, 2023). Hence we can write \[\frac{\nu_{h}\mathrm{e}^{-\nu_{h}t}}{\nu_{p}(v_{T})}\ll 1. \tag{26}\] Since region I represents the cold electrons, we expect the time-dependent terms in Eq. (17) to act on a much longer timescale than the collision time. In region III we noted that the transit frequency greatly exceeds the collision frequency with the plasmoid. However, as we move into region I, which contains lower-velocity electrons, the bounce frequency (which is comparable to the transit frequency, cf. Eq. (21)) remains the same while the collision frequency increases. Therefore, collisions with the plasmoid and bounce motion are associated with the two shortest timescales in region I. Accordingly, the lowest-order kinetic equation in region I is \[v_{\parallel}\frac{\partial f_{\rm I}}{\partial z}=C(f_{\rm I},f_{\rm I})+\sum_{k}C_{e,ik}(f_{\rm I}). \tag{27}\] When \(T\ll T_{a}\), \(T\ll e\phi_{m}\), which allows us to define \(\mathcal{E}_{\rm I/II}\) such that region I extends for several times \(T\) in both the parallel and perpendicular directions. Therefore, the solution to the above, assuming that collisions with ions are well-approximated by pitch-angle scattering, is a Maxwellian in energy \[f_{\rm I}=f_{0}=\eta\left(\frac{m_{e}}{2\pi T}\right)^{\frac{3}{2}}\mathrm{e}^{-\frac{\mathcal{E}}{T}} \tag{28}\] for parameters \(\eta(t)\), \(T(t)\) (Aleynikov _et al._, 2019). We see now that in the earlier works (Aleynikov _et al._, 2019; Runov _et al._, 2021; Arnold _et al._, 2021) only electrons in region I, where a purely Maxwellian electron distribution function is exhibited, were treated, whereas in this investigation we continue the analysis in regions II and III. The remaining kinetic equation in region I, corresponding to the heating timescale, is given by \[\frac{\partial f_{\rm I}}{\partial t}-e\frac{\partial\phi}{\partial t}\frac{\partial f_{\rm I}}{\partial\mathcal{E}_{\parallel}}=C(f_{\rm I},f_{\rm II}+f_{\rm III}), \tag{29}\] which captures the collisionless change in electron energy due to the expanding well and the heating of the cold Maxwellian by the hot electrons.

### Choosing \(\mathcal{E}_{\rm I/II}\)

Owing to the ordering, when \(T\ll T_{a}\) we understand that the potential well is deep enough for a Maxwellian to reside in region I, provided \(\mathcal{E}_{\rm I/II}\) is chosen close enough to zero. We now decide upon an explicit definition for \(\mathcal{E}_{\rm I/II}\) which is consistent with the distribution in region I: we require that collisions with the cut-off Maxwellian in region I are well-approximated by collisions with the full Maxwellian that extends to arbitrarily large energies.
This will allow the linearisation of the kinetic problem in region II in terms of collisions with the Maxwellian. However, we must not artificially extend region I past the point where the distribution function would be Maxwellian; this would result in an incorrect distribution function. From the Boltzmann relation (4) and the density of the full Maxwellian distribution (28) being \(n_{p}=\eta\exp(e\phi/T)\) we find that \[n_{p}\sim n_{a}\mathrm{e}^{\frac{e\phi}{T}}. \tag{30}\] Consider an electron with parallel energy \(\mathcal{E}_{\rm I/II}\): it has a turning point \(z_{c}\) such that \(e\phi(z_{c})+\mathcal{E}_{\rm I/II}=0\). Therefore, at this turning point, \[n_{p}(z_{c})\sim n_{a}\mathrm{e}^{-\frac{\mathcal{E}_{\rm I/II}}{T}}. \tag{31}\] During its orbit, this electron collides with a plasmoid density that is strictly larger than the above. Therefore, if we choose \(\mathcal{E}_{\rm I/II}\) such that \(n_{p}(z_{c})>an_{a}\) for \(a\gg 1\), then the collisions the electron experiences are completely dominated by collisions with the cold Maxwellian _throughout its entire orbit_. Above this energy, the electron collides considerably with the ambient electrons in the extremities of the plasmoid _as well as with the plasmoid electrons in the core_, so the distribution function at these higher energies is not necessarily Maxwellian. Hence, the upper bound for \(\mathcal{E}_{\rm I/II}\) is expressed as \[\mathcal{E}_{\rm I/II}<-T\ln a. \tag{32}\] The lower bound is fixed by collisions with the cut-off Maxwellian in region I being well-approximated by collisions with the full Maxwellian. The simplest way to guarantee this is to have \[\frac{f_{0}(\mathcal{E}_{\rm I/II})}{f_{0}(-e\phi_{m})}<\frac{1}{a} \tag{33}\] for \(a\gg 1\). Then, the lower bound for \(\mathcal{E}_{\rm I/II}\) is given by \[\mathcal{E}_{\rm I/II}>-e\phi_{m}+T\ln a. \tag{34}\] (The two bounds are compatible provided \(e\phi_{m}>2T\ln a\), which holds when \(T\ll T_{a}\).)

### Deriving the kinetic equation for hot trapped electrons (region II)

The kinetic equation in region II is given by \[\frac{\partial f_{\rm II}}{\partial t}+v_{\parallel}\frac{\partial f_{\rm II}}{\partial z}-e\frac{\partial\phi}{\partial t}\frac{\partial f_{\rm II}}{\partial\mathcal{E}_{\parallel}}=C(f_{\rm II},f_{\rm I})+C(f_{\rm II},f_{\rm II}+f_{\rm III})+\sum_{k}C_{e,ik}(f_{\rm II}). \tag{35}\] Since region II is the intermediate region, the ordering is most complex for the hot trapped electrons. More care must also be taken when considering terms with time derivatives as the collision frequency is lower in region II than in region I. The typical velocity in region II is of order \(\sqrt{2e\phi_{m}/m_{e}}\sim v_{T_{a}}\), so the collision frequency with the plasmoid in region II is of the same order as in region III. In region II the bounce frequency is of the same order as in region I (Eq. (21)). Hence we write \[C(f_{\rm II},f_{\rm I})\sim\sum_{k}C_{e,ik}(f_{\rm II})\sim\nu_{p}(v_{T_{a}})f_{\rm II}, \tag{36}\] \[C(f_{\rm II},f_{\rm II}+f_{\rm III})\sim\nu_{h}{\rm e}^{-\nu_{h}t}f_{\rm II}, \tag{37}\] \[v_{\parallel}\frac{\partial f_{\rm II}}{\partial z}\sim\nu_{B}f_{\rm II}, \tag{38}\] noting that both the collision frequency with the plasmoid and the heating rate are much smaller than the bounce frequency: \[\frac{\nu_{p}(v_{T_{a}})}{\nu_{B}}=\mu\ll 1, \tag{39}\] \[\frac{\nu_{h}{\rm e}^{-\nu_{h}t}}{\nu_{B}}=\frac{4}{3\sqrt{\pi}}\frac{n_{a}L_{p}{\rm e}^{-\nu_{h}t}}{N_{p}}\mu\ll 1. \tag{40}\] As in region I, we assume that the terms containing time derivatives correspond to a timescale much longer than the bounce period.
Then, the shortest timescale in the system is the bounce period, which leads to the lowest-order equation \[v_{\parallel}\frac{\partial f_{\rm II}}{\partial z}=0, \tag{41}\] meaning that \(f_{\rm II}\) (and hence \(f\) as a whole) is independent of \(z\). The higher-order kinetic equation can then be bounce-averaged, yielding \[\frac{\partial f_{\rm II}}{\partial t}-\frac{1}{\tau}\frac{\partial J}{\partial t}\frac{\partial f_{\rm II}}{\partial\mathcal{E}_{\parallel}}=\left\langle C(f_{\rm II},f_{\rm I})+\sum_{k}C_{e,ik}(f_{\rm II})\right\rangle+\left\langle C(f_{\rm II},f_{\rm II}+f_{\rm III})\right\rangle, \tag{42}\] where \[J(\mathcal{E}_{\parallel},t)=\oint m_{e}v_{\parallel}\,{\rm d}z=\sqrt{2m_{e}}\oint\sqrt{\mathcal{E}_{\parallel}+e\phi}\,{\rm d}z \tag{43}\] is the second adiabatic invariant for an electron bouncing in the well. Now we analyse the timescales on which the time-dependent terms act and compare them to other timescales. Since \(f_{\rm II}\) is equal to \(f_{0}\) at \(\mathcal{E}=\mathcal{E}_{\rm I/II}\) and equal to \(f_{a}\) at \(\mathcal{E}_{\parallel}=0\) it serves to perform this analysis at each boundary. At the trapped-passing separatrix, \(\mathcal{E}_{\parallel}=0\), \(f_{\rm II}\) must be equal to \(f_{a}\), which is constant in time; the first term on the left hand side of Eq. (42) vanishes. The second term represents the adiabatic change in electron energy as the well expands, which in Aleynikov _et al._ (2019) was shown to occur on the heating timescale: \[-\frac{1}{\tau}\frac{\partial J}{\partial t}\frac{\partial f_{\rm II}}{\partial\mathcal{E}_{\parallel}}\sim\nu_{h}\mathrm{e}^{-\nu_{h}t}f_{\rm II}. \tag{44}\] At \(\mathcal{E}=\mathcal{E}_{\rm I/II}\) Eq. (44) also holds. However, since the Maxwellian in region I has a temperature that changes in time, the time derivative of \(f_{\rm II}\) is not zero here: \[\left.\frac{\partial f_{\rm II}}{\partial t}\right|_{\mathcal{E}=\mathcal{E}_{\rm I/II}}=\left[\frac{1}{\eta}\frac{\mathrm{d}\eta}{\mathrm{d}t}+\left(\frac{\mathcal{E}_{\rm I/II}}{T}-\frac{3}{2}\right)\frac{1}{T}\frac{\mathrm{d}T}{\mathrm{d}t}\right]f_{\rm II}\Big|_{\mathcal{E}=\mathcal{E}_{\rm I/II}}. \tag{45}\] We can choose \(|\mathcal{E}_{\rm I/II}|\sim T\), and we see that \((\partial\eta/\partial t)/\eta\sim(\partial T/\partial t)/T\), so Eq. (45) can be approximated by \[\left.\frac{\partial f_{\rm II}}{\partial t}\right|_{\mathcal{E}=\mathcal{E}_{\rm I/II}}\sim\frac{1}{T}\frac{\mathrm{d}T}{\mathrm{d}t}f_{\rm II}\Big|_{\mathcal{E}=\mathcal{E}_{\rm I/II}}. \tag{46}\] That is, the timescale on which the term acts, which we define via the frequency \[\nu_{t}=\frac{1}{T}\frac{\mathrm{d}T}{\mathrm{d}t}=\frac{T_{a}}{T}\nu_{h}\mathrm{e}^{-\nu_{h}t}, \tag{47}\] is the time taken for \(T\) to increase by a factor of \(\mathrm{e}\). When \(T\) is small this can be a very short time, so \(\nu_{t}\) cannot simply be assumed to be small compared to the collision time. The profiles from the modified self-similar solution give \[\frac{\nu_{t}}{\nu_{p}(v_{T_{a}})}\sim\frac{4}{3\sqrt{\pi}}\mathrm{e}^{-\nu_{h}t}(1-\mathrm{e}^{-\nu_{h}t})^{\frac{1}{2}}\left(1+\frac{2}{\pi\sqrt{3}}\sqrt{\frac{m_{i}}{m_{e}}}\mu(\nu_{h}t)^{-\frac{3}{2}}\right)^{-1}, \tag{48}\] which is always much less than unity for the plasma parameters given in the Introduction; we write \[\frac{\nu_{t}}{\nu_{p}(v_{T_{a}})}\ll 1. \tag{49}\] So, the collision term with the plasmoid corresponds to the shortest timescale in Eq.
(42); the lowest-order kinetic equation is therefore \[\left\langle C(f_{\rm II},f_{\rm I})+\sum_{k}C_{e,ik}(f_{\rm II})\right\rangle=0. \tag{50}\] The distribution function must be continuous in collisional kinetic problems, hence Eq. (50) must be solved with boundary conditions ensuring continuity: \[f_{\rm II}(\mathcal{E}=\mathcal{E}_{\rm I/II})=f_{0}(\mathcal{E}=\mathcal{E}_{\rm I/II}), \tag{51}\] \[f_{\rm II}(\mathcal{E}_{\parallel}=0)=f_{a}(\mathcal{E}_{\parallel}=0). \tag{52}\] The higher-order equation in region II is \[\frac{\partial f_{\rm II}}{\partial t}-\frac{1}{\tau}\frac{\partial J}{\partial t}\frac{\partial f_{\rm II}}{\partial\mathcal{E}_{\parallel}}=\left\langle C(f_{\rm II},f_{\rm II}+f_{\rm III})\right\rangle, \tag{53}\] which describes the heating and expansion timescales.

### The quasi-equilibrium problem

The distribution function in region II is obtained by solving Eq. (2.50), which must be supplemented by boundary conditions (2.51),(2.52) enforcing continuity of the distribution function into regions I and III. Conceptually, the kinetic problem in region II describes a _quasi-equilibrium_ (QE); hot trapped electrons experience rapid collisions against a Maxwellian (and are isotropised by collisions with ions), but the tail of the distribution is forced to meet a Maxwellian of a different temperature at the trapped-passing separatrix. When \(T\ll T_{a}\) collisions with the distribution in region I are well-approximated by collisions with the full Maxwellian: \(C(\cdot,f_{\rm I})\approx C(\cdot,f_{0})\). Since it only remains to solve the kinetic problem in region II, it is unnecessary to have a subscript, so we write \(f=f_{\rm II}\) in region II. Hence the QE kinetic equation can be written as \[\langle C_{\rm QE}(f)\rangle=0 \tag{2.54}\] for \[C_{\rm QE}(f)=C(f,f_{0})+\sum_{k}C_{e,ik}(f). \tag{2.55}\] Further, owing to the fact that collisions are linearised in terms of collisions against a full Maxwellian, the lower boundary condition Eq. (2.51) can actually be applied at \({\cal E}=-e\phi_{m}\) rather than \({\cal E}_{\rm I/II}\).

### Range of validity of the ordering

The ordering developed in this section is based upon the self-similar expansion in Aleynikov _et al._ (2019), modified to provide plausible profiles at the later stages of the expansion, given a line-integrated plasmoid density \(N_{p}=10^{22}\,{\rm m}^{-2}\) in an ambient plasma of electron density \(n_{a}=5\times 10^{19}\,{\rm m}^{-3}\) at a temperature of \(5\,{\rm keV}\). The requirements of the ordering are that during most of the expansion the potential height is of order the ambient temperature: \[e\phi_{m}\sim T_{a}, \tag{2.56}\] that the plasmoid is transparent to passing and hot trapped electrons: \[\mu=\frac{\nu_{p}(v_{T_{a}})}{\nu_{T}}\sim\frac{\nu_{p}(v_{T_{a}})}{\nu_{B}}\ll 1, \tag{2.57}\] that the heating rate is much lower than the collision frequency with the plasmoid: \[\frac{\nu_{h}{\rm e}^{-\nu_{h}t}}{\nu_{p}(v_{T_{a}})}\ll 1, \tag{2.58}\] \[\frac{\nu_{h}{\rm e}^{-\nu_{h}t}}{\nu_{p}(v_{T})}\ll 1, \tag{2.59}\] and that the time taken for the plasmoid electron temperature to increase by a factor of \({\rm e}\) is much larger than the collision time: \[\frac{1}{T}\frac{{\rm d}T}{{\rm d}t}\ll\nu_{p}(v_{T_{a}}), \tag{2.60}\] \[\frac{1}{T}\frac{{\rm d}T}{{\rm d}t}\ll\nu_{p}(v_{T}). \tag{2.61}\]
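These conditions are easy to check along the modified self-similar history; a rough Python sketch (ours, with the opacity \(\mu\approx 0.04\) and hydrogen mass assumed, evaluating the estimates (25) and (48) as functions of \(\nu_{h}t\)):

```python
import numpy as np
from scipy.constants import m_e, m_p

mu = 0.039                                            # plasmoid opacity, Eq. (7)
c = (2 / (np.pi * np.sqrt(3))) * np.sqrt(m_p / m_e) * mu   # coefficient from (20)

pref = 4 / (3 * np.sqrt(np.pi))
for x in (0.1, 0.5, 1.0, 3.0):                        # x = nu_h * t
    denom = 1 + c * x**-1.5
    r25 = pref * np.exp(-x) * (1 - np.exp(-x))**1.5 / denom   # heating vs nu_p, (25)
    r48 = pref * np.exp(-x) * (1 - np.exp(-x))**0.5 / denom   # nu_t vs nu_p, (48)
    print(f"x={x}: (25)={r25:.3f}, (48)={r48:.3f}")
```

Under these assumptions both ratios stay well below unity at all times shown, as required by (2.58)-(2.61).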
The transparency of the plasmoid is dependent upon the line-integrated plasmoid density not being too large, and \(e\phi_{m}\) being of order \(T_{a}\) is dependent upon the line-integrated plasmoid density not being too small; satisfying both conditions does formally somewhat constrain the values of the line-integrated density. However, the relative simplicity of the resulting kinetic problem provides strong motivation for using the formalism: we immediately obtain the distribution function in two out of three regions, and in the remaining region we must solve a steady-state kinetic equation. Then, the macroscopic expansion is described by a kinetic equation of which _velocity moments_ can be taken, in order to obtain much simpler time-dependent equations than one would by including the time-dependent terms directly in the kinetic problem. In this sense, the formalism is analogous to the Braginskii equations, but valid for systems with a long rather than short hot electron mean free path. We reiterate that the kinetic problem in regions I and II was formally derived assuming \(T\ll T_{a}\), which permitted a choice of \({\cal E}_{\rm I/II}\) that guaranteed a Maxwellian distribution in region I, collisions with which are well-approximated by collisions with the full Maxwellian. This ultimately allowed the linearisation of the kinetic problem in region II in terms of collisions with the full Maxwellian. However, we note that when \(T=T_{a}\), the entire distribution function will be a single Maxwellian, _yet_ Eq. (2.54) _would still be satisfied_. This is because electrons in region II would be colliding with Maxwellian electrons with the same temperature in regions I and III. We therefore conclude that the whole formalism is valid when \(T\ll T_{a}\) _and_ when \(T\to T_{a}\). Therefore the formalism can actually accurately model the expansion _outside_ of its formal ordering (which has \(T\ll T_{a}\)), which is characteristic of robust simplifications of kinetic problems, such as the Braginskii equations, which often achieve a level of qualitative correctness even when the mean free path is long and the distribution function is not very close to a Maxwellian. By the same argument one could expect that the formalism here is qualitatively correct somewhat outside of the range of parameters that leads to the ordering. As mentioned in the previous subsection, the boundary condition (2.51) can be applied at \({\cal E}=-e\phi_{m}\) in the QE problem. This solves the problem of how to choose \({\cal E}_{\rm I/II}\) when \(T\) approaches \(T_{a}\) and the well becomes too shallow to contain several \(T\): when solving the QE problem we can always choose \({\cal E}_{\rm I/II}=-e\phi_{m}\). The formalism has been developed specifically with pellet plasmoids in mind, and essentially models plasmoid expansion with 'intermediate' line-integrated densities. An alternate approach, which is more suited to the abstract study of plasmoid expansion, is to consider the limit as the line-integrated density goes to zero or infinity. Then, the ratio \(e\phi_{m}/T_{a}\) is a large or small parameter on which an ordering may be based. When the line-integrated density is very large, the plasmoid and ambient temperatures will equilibrate before the plasmoid density is comparable to the ambient density. Then, the expansion can be described simply with Maxwellian electrons from an early stage. If instead it is very small, the densities become comparable well before the temperatures have equilibrated.
In our case, with an intermediate line-integrated density, the two occur at approximately the same time; certainly one cannot assume \(T=T_{a}\) or \(n_{pm}\approx n_{a}\) from an early stage. When constructing the ordering, the opacity \(\mu\) was given assuming that intra-species collisions dominate; ambient ions collide most quickly with plasmoid ions and ambient electrons collide most quickly with plasmoid electrons. This is the case when the thermal velocities of ions and electrons are disparate. However, if the thermal velocities of an ion population \(k\) and an electron population are comparable, then the friction of the ions on the electrons is actually \(m_{ik}/m_{e}\) times larger than the ion-ion collision frequency (Helander & Sigmar, 2002). The thermal velocities of the ambient ions and plasmoid electrons are actually comparable for a brief window of time where \(T\) is extremely small. However, collisions of the ambient ions with plasmoid electrons in this regime cause rapid heating of the plasmoid electrons, therefore driving \(T\) up and out of the regime where the plasmoid electrons and ambient ions have a comparable thermal velocity. Hence these collisions are negligible outside of the very early stages of the plasmoid expansion, where the ordering is, anyway, not satisfied; we restrict our attention to times later than this.

### Expressing the quasi-equilibrium equation in the variables \((\mathcal{E}_{\parallel},\mathcal{E}_{\perp})\)

The collision operator against \(f_{0}\) in the variables \((v,\theta,z,t)\) for pitch-angle \(\theta\) (assuming that \(f\) is independent of the azimuthal angle of the velocity \(\varphi\)) is given by \[\begin{split} C(f,f_{0})=\frac{e^{4}\ln\Lambda}{4\pi\varepsilon_{0}^{2}m_{e}^{2}}\Bigg\{&\frac{1}{v^{3}}\frac{g^{\prime}(x)}{v_{T}}\frac{1}{2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial f}{\partial\theta}\right)+\\ &\frac{1}{v^{2}}\frac{\partial}{\partial v}\left[\frac{x^{3}g^{\prime\prime}(x)}{v_{T}}\left(f+\frac{T}{m_{e}v}\frac{\partial f}{\partial v}\right)\right]\Bigg\},\end{split} \tag{62}\] where \(x=v/v_{T}\), \(g(x)\) is the function \[g(x)=\int|\mathbf{v}-\mathbf{v}^{\prime}|\,f_{0}\,\mathrm{d}^{3}v^{\prime}=n_{0}v_{T}\left[\left(x+\frac{1}{2x}\right)\mathrm{erf}(x)+\frac{1}{\sqrt{\pi}}\mathrm{e}^{-x^{2}}\right], \tag{63}\] and \[n_{0}=\eta\,\mathrm{e}^{\frac{e\phi}{T}} \tag{64}\] is the density of core electrons (Helander & Sigmar, 2002). We note that when \(x\) is large, i.e. we consider collisions of an electron with an energy much larger than \(T\) with the Maxwellian, then both \(g^{\prime}(x)\) and \(x^{3}g^{\prime\prime}(x)\) are well-approximated by \(n_{0}v_{T}\). Similarly, assuming that collisions with ions are well-approximated by pitch-angle scattering, we have \[C_{e,ik}(f)=\frac{e^{4}\ln\Lambda}{4\pi\varepsilon_{0}^{2}m_{e}^{2}}\left[\frac{1}{v^{3}}Z_{k}^{2}n_{ik}\frac{1}{2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial f}{\partial\theta}\right)\right] \tag{65}\] for ion charge \(Z_{k}\) and density \(n_{ik}\).
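The large-\(x\) limits quoted above are easy to confirm numerically; a small Python sketch (ours; \(n_{0}=v_{T}=1\) for convenience, derivatives by central finite differences):

```python
import numpy as np
from scipy.special import erf

# g(x)/(n0*vT) from Eq. (63); expect g'(x) -> 1 and x^3 g''(x) -> 1 at large x
g = lambda x: (x + 0.5 / x) * erf(x) + np.exp(-x**2) / np.sqrt(np.pi)

h = 1e-4
for x in (1.0, 3.0, 6.0):
    g1 = (g(x + h) - g(x - h)) / (2 * h)            # first derivative
    g2 = (g(x + h) - 2 * g(x) + g(x - h)) / h**2    # second derivative
    print(f"x={x}: g'={g1:.4f}, x^3*g''={x**3 * g2:.4f}")
```

Both printed quantities approach unity as \(x\) grows, consistent with the statement that high-energy electrons see the familiar \(1/v^{3}\) collisionality against the cold Maxwellian.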
The collision operator is given by the divergence of the collisional flux \(\mathbf{F}\) in velocity space:
\[C(f,f_{0})=\nabla_{\mathbf{v}}\cdot\mathbf{F}, \tag{2.66}\]
so it can always be expressed in the form
\[C(f,f_{0})=|J|\nabla_{\mathbf{w}}\cdot\tilde{\mathbf{F}}, \tag{2.67}\]
where \(|J|\) is the Jacobian of the transformation between coordinates \(\mathbf{v}\) and \(\mathbf{w}\),
\[J=\det\left(\frac{\partial w_{i}}{\partial v_{j}}\right), \tag{2.68}\]
and \(\tilde{\mathbf{F}}\) is the collisional flux in \(\mathbf{w}\) phase-space. Noting that
\[\det\left(\frac{\partial(\mathcal{E}_{\parallel},\mathcal{E}_{\perp},\varphi)}{\partial(\mathbf{v})}\right)=m_{e}^{2}v_{\parallel}, \tag{2.69}\]
we seek to transform Eq. (2.62) into the form
\[C(f,f_{0})=Av_{\parallel}\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\cdot\tilde{\mathbf{F}} \tag{2.70}\]
for some constant \(A\). We find that
\[\begin{split} C(f,f_{0})=&\frac{e^{4}\ln\Lambda}{4\pi\varepsilon_{0}^{2}m_{e}}v_{\parallel}\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\cdot\\ &\left[f_{0}\bigg(\mathbf{r}\mathbf{r}\mathcal{E}_{\perp}\frac{v_{\parallel}}{v^{3}}\frac{g^{\prime}(x)}{v_{T}}+T\frac{\mathbf{s}\mathbf{s}}{v^{5}v_{\parallel}}\frac{x^{3}g^{\prime\prime}(x)}{v_{T}}\bigg)\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\left(\frac{f}{f_{0}}\right)\right],\end{split} \tag{2.71}\]
where
\[\mathbf{r}=(1,-1),\quad\mathbf{s}=\left(v_{\parallel}^{2},v_{\perp}^{2}\right) \tag{2.72}\]
and \(\mathbf{r}\mathbf{r}\), \(\mathbf{s}\mathbf{s}\) represent dyadic products. Similarly,
\[C_{e,ik}(f)=\frac{e^{4}\ln\Lambda}{4\pi\varepsilon_{0}^{2}m_{e}}v_{\parallel}\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\cdot\left[f_{0}\left(\mathbf{r}\mathbf{r}\mathcal{E}_{\perp}\frac{v_{\parallel}}{v^{3}}Z_{k}^{2}n_{ik}\right)\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\left(\frac{f}{f_{0}}\right)\right]. \tag{2.73}\]
\(\mathbf{r}\mathbf{r}\) is the tensor associated with pitch-angle scattering, which alters parallel and perpendicular energies such that their sum is unchanged. \(\mathbf{s}\mathbf{s}\) is associated with energy exchange, which affects parallel and perpendicular energies equally.

We must bounce-average the collision operators (2.71) and (2.73). In order to obtain a 'useful' expression, we must be able to commute the divergence in \((\mathcal{E}_{\parallel},\mathcal{E}_{\perp})\) and the orbit integral. The following lemma is useful with regard to commuting the divergence and orbit integral: for a vector-valued function \(\mathbf{F}\), we have
\[\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\cdot\oint\mathbf{F}\,\mathrm{d}z=\oint\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\cdot\mathbf{F}\,\mathrm{d}z+4\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}z_{c}\cdot\mathbf{F}(z_{c}). \tag{2.74}\]
In order for the divergence and orbit integral to commute, we must have \(\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}z_{c}\cdot\mathbf{F}(z_{c})=0\); either \(\mathbf{F}(z_{c})\) or \(\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}z_{c}\) vanishes, or \(\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}z_{c}\) is orthogonal to \(\mathbf{F}(z_{c})\). We observe that the term associated with pitch-angle scattering is proportional to \(v_{\parallel}\), which (by definition) vanishes when \(z=z_{c}\). So, the divergence and orbit integral commute in the pitch-angle scattering term.
As for the energy exchange term, we observe that
\[\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}z_{c}=\frac{\partial z_{c}}{\partial\mathcal{E}_{\parallel}}\begin{pmatrix}1\\ 0\end{pmatrix}, \tag{2.75}\]
and
\[(\mathbf{s}\mathbf{s})(z=z_{c})=\begin{pmatrix}0&0\\ 0&v_{\perp}^{4}\end{pmatrix}, \tag{2.76}\]
which means that with respect to this term,
\[\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}z_{c}\cdot\mathbf{F}(z_{c})\propto\begin{pmatrix}1\\ 0\end{pmatrix}\cdot\left[\begin{pmatrix}0&0\\ 0&v_{\perp}^{4}\end{pmatrix}\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\left(\frac{f}{f_{0}}\right)\right]=0, \tag{2.77}\]
so the orbit integral and divergence also commute for the energy exchange term. Since both \(f\) and \(f_{0}\) are independent of \(z\), they and their derivatives may be brought outside the orbit integral, giving the bounce-averaged QE collision operator
\[\begin{split}\langle C_{\text{QE}}(f)\rangle=\frac{e^{4}\ln\Lambda}{4\pi\varepsilon_{0}^{2}m_{e}\tau}\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\cdot\bigg[& f_{0}\bigg(\mathbf{r}\mathbf{r}\mathcal{E}_{\perp}\oint\frac{v_{\parallel}}{v^{3}}\left(\frac{g^{\prime}(x)}{v_{T}}+\sum_{k}Z_{k}^{2}n_{ik}\right)\,\mathrm{d}z\\ &+T\oint\frac{\mathbf{s}\mathbf{s}}{v^{5}v_{\parallel}}\frac{x^{3}g^{\prime\prime}(x)}{v_{T}}\,\mathrm{d}z\bigg)\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\left(\frac{f}{f_{0}}\right)\bigg]. \end{split} \tag{2.78}\]
The quasi-equilibrium equation is given by setting the above to zero:
\[\begin{split}\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\cdot\bigg[& f_{0}\bigg(\mathbf{r}\mathbf{r}\mathcal{E}_{\perp}\oint\frac{v_{\parallel}}{v^{3}}\left(\frac{g^{\prime}(x)}{v_{T}}+\sum_{k}Z_{k}^{2}n_{ik}\right)\,\mathrm{d}z\\ &+T\oint\frac{\mathbf{s}\mathbf{s}}{v^{5}v_{\parallel}}\frac{x^{3}g^{\prime\prime}(x)}{v_{T}}\,\mathrm{d}z\bigg)\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\left(\frac{f}{f_{0}}\right)\bigg]=0,\end{split} \tag{2.79}\]
which is in the form of an anisotropic steady-state diffusion problem in \((\mathcal{E}_{\parallel},\mathcal{E}_{\perp})\) space:
\[\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\cdot\bigg[D_{\text{QE}}\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\left(\frac{f}{f_{0}}\right)\bigg]=0 \tag{2.80}\]
for
\[D_{\text{QE}}=D_{\text{QE,S}}+D_{\text{QE,F}}, \tag{2.81}\]
where the diffusion tensor associated with pitch-angle scattering is given by
\[D_{\text{QE,S}}=f_{0}\mathcal{E}_{\perp}\oint\frac{v_{\parallel}}{v^{3}}\left(\frac{g^{\prime}(x)}{v_{T}}+\sum_{k}Z_{k}^{2}n_{ik}\right)\,\mathrm{d}z\begin{pmatrix}1&-1\\ -1&1\end{pmatrix}, \tag{2.82}\]
and that associated with energy exchange is given by
\[D_{\text{QE,F}}=f_{0}T\oint\frac{1}{v^{5}v_{\parallel}}\begin{pmatrix}v_{\parallel}^{4}&v_{\parallel}^{2}v_{\perp}^{2}\\ v_{\parallel}^{2}v_{\perp}^{2}&v_{\perp}^{4}\end{pmatrix}\frac{x^{3}g^{\prime\prime}(x)}{v_{T}}\,\mathrm{d}z. \tag{2.83}\]
Together with quasineutrality,
\[\int f\,\mathrm{d}^{3}v=n_{e}=\sum_{k}Z_{k}n_{ik}, \tag{2.84}\]
Eq. (2.80) with the boundary conditions (2.51),(2.52) (noting that we write \(f=f_{\text{II}}\)) provides a unique solution for \(f\) and \(\phi\) in terms of the parameters \(\eta\) and \(T\). However, these parameters are not known _a priori_.

It should be noted that up to this point we have assumed that the potential is monotonically decreasing, which is the case when the density profile is monotonically decreasing. If this is not the case, i.e.
the potential has more than one peak, then there are actually multiple trapped electron populations that must be treated independently. Some electrons can explore the region encompassed by only one peak, and others, still trapped in the potential as a whole, can explore more than one. This situation greatly complicates the kinetic problem and is of secondary importance in this paper, as we are concerned with the expansion of a plasmoid whose potential is initially single-peaked; we expect, and observe, as will be shown in Section 3.4, the profile to remain single-peaked when the high temperature of the ambient plasma is accounted for. There is one exception to the inapplicability of the foregoing model to multiply-peaked electric potential wells: when \(T=T_{a}\). In this case the solution to the QE problem is, anyway, the ambient Maxwellian \(f_{a}\), which is correct even when multiple peaks are present. It will be seen that the cold-fluid model for ions does produce a multiply-peaked electric potential during later stages of the expansion, but at this point \(T\) is of order \(T_{a}\), so the solution to the QE problem is close to a Maxwellian and we expect the resulting expansion to be qualitatively correct.

### The no-net-flux condition

Given some \(\eta\) and \(T\) we may solve the QE problem as specified in the previous subsection. However, most combinations of \(\eta\) and \(T\) are not physically meaningful, since they would not actually establish a steady-state. Quasineutrality requires that there is no net charge, but most combinations of \(\eta\) and \(T\) would cause a very large collisional flux of electrons into or out of the trapped region of phase-space, causing the plasmoid to 'charge up' and quickly violating global quasineutrality. Therefore a closer look at quasineutrality during the establishment of the QE state is required.

The global quasineutrality condition is given by
\[N_{t}+N_{p}=\sum_{k}Z_{k}N_{ik} \tag{2.85}\]
for \(N_{t}\) the line-integrated density of trapped electrons, \(N_{p}\) the line-integrated density of passing electrons, and \(N_{ik}\) the line-integrated density of ion species \(k\). Formally, the magnetic field line we consider is infinite. However, the entire plasmoid structure is localised, with the possible exception of the plasmoid density approaching zero asymptotically (if we use, for example, the self-similar ion density profile from Aleynikov _et al._ (2019)). So, rather than the whole field line, we consider the global quasineutrality condition on some interval \(z\in[-L_{S}/2,L_{S}/2]\) for some \(L_{S}\) much larger than the plasmoid; large enough that the plasmoid density at the endpoints is negligible compared to the ambient density, and the electric potential at the endpoints is negligible compared to \(T\), \(T_{a}\), or \(e\phi_{m}\). In order to maintain global quasineutrality we require
\[\frac{\mathrm{d}N_{t}}{\mathrm{d}t}+\frac{\mathrm{d}N_{p}}{\mathrm{d}t}=\sum_{k}Z_{k}\frac{\mathrm{d}N_{ik}}{\mathrm{d}t}. \tag{2.86}\]
Since the density is constant far from the plasmoid, the terms in the above cease to change as \(L_{S}\to\infty\); we may take \(L_{S}\) arbitrarily large as we never directly evaluate \(N_{p}\) or \(N_{ik}\).
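As a simple illustration of this bookkeeping (ours; the Gaussian \(\phi\) and \(n_{i}\) profiles below are placeholders, not the self-consistent solution), one can evaluate \(N_{p}\) from the passing density \(n_{a}\mathrm{e}^{e\phi/T_{a}}\mathrm{erfc}(\sqrt{e\phi/T_{a}})\), the integrand appearing in Eq. (2.88) below, and infer \(N_{t}\) from Eq. (2.85):

```python
import numpy as np
from scipy.special import erfc
from scipy.integrate import trapezoid

# Placeholder profiles, SI units; not the paper's self-consistent solution.
e, Ta = 1.602e-19, 5e3 * 1.602e-19     # elementary charge, T_a = 5 keV in J
na, Z = 5e19, 1
z = np.linspace(-25.0, 25.0, 4001)     # interval [-L_S/2, L_S/2], L_S = 50 m
phi = 3.8e3 * np.exp(-(z / 2.8) ** 2)  # assumed potential [V]
ni = na + 2e21 * np.exp(-(z / 2.8) ** 2)

n_pass = na * np.exp(e * phi / Ta) * erfc(np.sqrt(e * phi / Ta))
Np = trapezoid(n_pass, z)              # line-integrated passing density
Nt = Z * trapezoid(ni, z) - Np         # trapped line density via Eq. (2.85)
print(f"N_p = {Np:.3e} m^-2, N_t = {Nt:.3e} m^-2")
```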
Using the results from Appendix A, we see that the time derivative of the line-integrated density of trapped electrons is given by
\[\frac{\mathrm{d}N_{t}}{\mathrm{d}t}=\frac{2\pi}{m_{e}^{2}}\frac{\mathrm{d}}{\mathrm{d}t}\int_{0}^{\infty}\int_{0}^{J_{m}}f\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}, \tag{2.87}\]
where \(J_{m}=J(\mathcal{E}_{\parallel}=0)\) is the maximum value of the second adiabatic invariant for a trapped electron, and, knowing that \(f=f_{a}\) in the \(\mathcal{E}_{\parallel}>0\) region, the time derivative of the line-integrated density of passing electrons is given by
\[\frac{\mathrm{d}N_{p}}{\mathrm{d}t}=\frac{\mathrm{d}}{\mathrm{d}t}\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}n_{a}\mathrm{e}^{\frac{e\phi}{T_{a}}}\mathrm{erfc}\left(\sqrt{\frac{e\phi}{T_{a}}}\right)\,\mathrm{d}z. \tag{2.88}\]
The full kinetic equation (2.2) can be bounce-averaged and the left hand side changed to the independent variables \((J,\mathcal{E}_{\perp},t)\) to yield
\[\left.\frac{\partial f}{\partial t}\right|_{J}=\left\langle C(f,f_{\rm I})\right\rangle+\sum_{k}\left\langle C_{e,ik}(f)\right\rangle+\left\langle C(f,f_{\rm II}+f_{\rm III})\right\rangle, \tag{2.89}\]
which, along with the approximation \(C(f,f_{\rm I})\approx C(f,f_{0})\), gives
\[\begin{split}\frac{\mathrm{d}N_{t}}{\mathrm{d}t}=&\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{J_{m}}\left\langle C_{\rm QE}(f)\right\rangle\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}+\\ &\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{J_{m}}\left\langle C(f,f_{\rm II}+f_{\rm III})\right\rangle\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}+\\ &\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\frac{\partial J_{m}}{\partial t}f_{a}(\mathcal{E}_{\perp})\,\mathrm{d}\mathcal{E}_{\perp},\end{split} \tag{2.90}\]
where we have used the fact that the distribution function is continuous: \(f(J=J_{m})=f(\mathcal{E}_{\parallel}=0)=f_{a}(\mathcal{E}_{\parallel}=0)\). We then see that Eq. (2.86) becomes
\[\begin{split}&\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{J_{m}}\left\langle C_{\rm QE}(f)\right\rangle\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}+\\ &\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{J_{m}}\left\langle C(f,f_{\rm II}+f_{\rm III})\right\rangle\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}+\\ &\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}n_{a}\frac{e}{T_{a}}\frac{\partial\phi}{\partial t}\mathrm{e}^{\frac{e\phi}{T_{a}}}\mathrm{erfc}\left(\sqrt{\frac{e\phi}{T_{a}}}\right)\,\mathrm{d}z=\sum_{k}Z_{k}\frac{\mathrm{d}N_{ik}}{\mathrm{d}t},\end{split} \tag{2.91}\]
where the term associated with \(\partial J_{m}/\partial t\) has cancelled out between \(\mathrm{d}N_{t}/\mathrm{d}t\) and \(\mathrm{d}N_{p}/\mathrm{d}t\). The first term on the left hand side is associated with fluxes due to collisions with the plasmoid; we may write
\[\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{J_{m}}\left\langle C_{\rm QE}(f)\right\rangle\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}\sim\nu_{p}(v_{T_{a}})N_{p}. \tag{2.92}\]
The second term on the left hand side is due to heating; we may write
\[\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{J_{m}}\left\langle C(f,f_{\rm II}+f_{\rm III})\right\rangle\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}\sim\nu_{h}\mathrm{e}^{-\nu_{h}t}N_{p}. \tag{2.93}\]
The term on the right hand side is due to the plasma at infinity acting as a source or sink of ions.
The third term on the left hand side is due to the constant replenishment of the passing distribution by plasma at infinity, leading to it always having the form \(f_{a}\). We can estimate this term by noting that \(\exp(e\phi/T_{a})\mathrm{erfc}(\sqrt{e\phi/T_{a}})\leqslant 1\) for \(\phi\geqslant 0\) and using the approximation
\[\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}\frac{e\phi}{T_{a}}\,\mathrm{d}z\sim L_{p}\frac{e\phi_{m}}{T_{a}}\sim L_{p}. \tag{2.94}\]
Writing \(L_{p}=N_{p}/n_{m}\) and using Eq. (3) then yields
\[\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}n_{a}\frac{e}{T_{a}}\frac{\partial\phi}{\partial t}\mathrm{e}^{\frac{e\phi}{T_{a}}}\mathrm{erfc}\left(\sqrt{\frac{e\phi}{T_{a}}}\right)\,\mathrm{d}z\sim\nu_{h}N_{p}\frac{\mu\sqrt{\frac{3m_{i}}{m_{e}}}\nu_{h}t}{\left((\nu_{h}t)^{\frac{3}{2}}+\frac{2}{\sqrt{3}}\sqrt{\frac{m_{i}}{m_{e}}}\mu\right)^{2}}. \tag{2.95}\]
With the plasma parameters used in the Introduction, the above is at most of order \(\nu_{h}N_{p}\), and decreases as time advances; it is of the same order as the heating term (the second on the left hand side of Eq. (2.91)).

The change in \(N_{ik}\) depends upon the (yet unchosen) model for the ions. Of course, the system for plasmoid ions is necessarily conservative (plasmoid ions are localised), so the line-integrated plasmoid ion density is constant. If the system also conserves ambient ions, i.e. the ambient plasma cannot act as a source of ions, then the line-integrated ambient ion densities are constant. On the other hand, if the plasma can act as a source for the ambient ions, then the terms associated with the change in their line-integrated densities are at most of the same order as Eq. (2.95). This is because an ambient ion density \(n_{ik,a}\) is at most of order \(n_{a}/Z_{k}\) and its change is associated purely with the change in the electric potential.

Therefore the most significant term in Eq. (2.91) is the first on the left hand side, which is associated with the frequency with which an electron collides with the plasmoid; the other terms are associated with the heating frequency. Hence Eq. (2.91) is well-approximated by simply a vanishing flux associated with collisions with the plasmoid, which can be expressed as
\[\frac{e^{4}\ln\Lambda}{2\varepsilon_{0}^{2}m_{e}^{3}}\int_{0}^{\infty}\left[D_{\rm QE}\nabla_{(\mathcal{E}_{\parallel},\mathcal{E}_{\perp})}\left(\frac{f}{f_{0}}\right)\right]\bigg|_{\mathcal{E}_{\parallel}=0}\cdot\begin{pmatrix}1\\ 0\end{pmatrix}\,\mathrm{d}\mathcal{E}_{\perp}=0. \tag{2.96}\]
Hence, to maintain global quasineutrality, there can (approximately) be no net collisional flux due to collisions with the plasmoid into the trapped region of phase-space; we call the above the _no-net-flux_ condition. Intuitively this makes sense; a steady-state due entirely to collisional fluxes cannot exist if a consequence of the flux is an immediate violation of quasineutrality. Since we had two free parameters, \(\eta\) and \(T\), the no-net-flux condition fixes one in terms of the other. We choose to keep \(T\) as the free parameter. From Eq. (2.80) it is clear that although the QE problem implies a steady-state, the collisional fluxes themselves do not vanish. This is the sense in which QE is a dynamical steady-state rather than the static steady-state characteristic of a thermal equilibrium.
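Structurally, the no-net-flux condition fixes \(\eta\) as the root of a one-dimensional problem at fixed \(T\). The sketch below (ours) shows only this structure: `net_flux` is a mock stand-in, built from the closed form derived later as Eq. (2.115), for a routine that would actually solve the QE diffusion problem (2.80) for given \((\eta,T)\) and integrate the collisional flux along the separatrix \(\mathcal{E}_{\parallel}=0\):

```python
import numpy as np
from scipy.optimize import brentq

na, Ta = 5e19, 5e3      # m^-3, eV
T = 615.0               # eV

def net_flux(eta):
    # Mock model: a real implementation would solve Eq. (2.80) and
    # evaluate the separatrix flux integral (2.96). Here the closed form
    # of Eq. (2.115) supplies the root, so the structure is illustrated
    # without actually solving the diffusion problem.
    eta_star = na * (T / Ta) ** 1.5 * (2.0 - T / Ta)
    return eta - eta_star

eta_nnf = brentq(net_flux, 1e-8 * na, 1e4 * na)
print(f"eta satisfying no-net-flux: {eta_nnf:.3e} m^-3")
```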
### Numerical solution to the QE problem

The QE problem (2.80), with boundary conditions (2.51), (2.52) (noting that we write \(f=f_{\rm II}\) in region II), a self-consistent potential given by quasineutrality (2.84), and the no-net-flux condition (2.96), was solved numerically. The plasma was assumed to be hydrogenic: there was a single species of singly-charged ion (\(Z=1\)) with the proton mass (\(m_{i}=m_{p}\)), following the profile
\[n_{i}=n_{a}+N_{ic}\frac{1}{L_{p}\sqrt{\pi}}\mathrm{e}^{-\left(\frac{z}{L_{p}}\right)^{2}}, \tag{2.97}\]
where \(L_{p}=2.8\,\mathrm{m}\), \(N_{ic}=10^{22}\,\mathrm{m}^{-2}\), \(T=615\,\mathrm{eV}\), \(n_{a}=5\times 10^{19}\,\mathrm{m}^{-3}\), and \(T_{a}=5\,\mathrm{keV}\). The Gaussian term in the above is consistent with the profile in Aleynikov _et al._ (2019) at \(t=20\,\mathrm{\mu s}\) given these parameters.

Figure 3 shows the properties of the electron distribution function. The top left plot shows the distribution function in velocity space at \(z=0\). \(\mathcal{E}=0\) is indicated by the dashed circle and the trapped-passing separatrix by the vertical dashed lines. The isotropic passing distribution is clearly visible as concentric circles, as is the very isotropic core (\(\mathcal{E}<0\)). Significant flattening of the distribution in the high-energy part of region II is observed. The bottom left plot shows the phase-space trajectories of electrons in \(({\cal E}_{\parallel},{\cal E}_{\perp})\) space, clearly indicating flow into and out of the trapped region. The dashed line indicates \({\cal E}=0\). The right boundary is the trapped-passing separatrix. The top right plot shows the effective phase-space flow velocity \({\bf u}^{*}\), defined via
\[\nabla_{({\cal E}_{\parallel},{\cal E}_{\perp})}\cdot({\bf u}^{*}f)=-\left<C_{\rm QE}(f)\right>, \tag{2.98}\]
i.e.
\[{\bf u}^{*}\propto-\frac{D_{\rm QE}\nabla_{({\cal E}_{\parallel},{\cal E}_{\perp})}\left(\frac{f}{f_{0}}\right)}{f}. \tag{2.99}\]
Electron phase-space flow for \({\cal E}<0\) is very weak, since this region is highly isotropised and essentially conforms to a Maxwellian, which exhibits no collisional phase-space flux. The bottom right plot shows the flux through the trapped-passing separatrix, where
\[\Gamma_{S}=-\frac{e^{4}\ln\Lambda}{2\varepsilon_{0}^{2}m_{e}^{3}}\left[D_{\rm QE,S}\nabla_{({\cal E}_{\parallel},{\cal E}_{\perp})}\left(\frac{f}{f_{0}}\right)\right]\bigg|_{{\cal E}_{\parallel}=0}\cdot\left(\begin{array}{c}1\\ 0\end{array}\right) \tag{2.100}\]
\[\Gamma_{F}=-\frac{e^{4}\ln\Lambda}{2\varepsilon_{0}^{2}m_{e}^{3}}\left[D_{\rm QE,F}\nabla_{({\cal E}_{\parallel},{\cal E}_{\perp})}\left(\frac{f}{f_{0}}\right)\right]\bigg|_{{\cal E}_{\parallel}=0}\cdot\left(\begin{array}{c}1\\ 0\end{array}\right), \tag{2.101}\]
and \(\Gamma=\Gamma_{S}+\Gamma_{F}\). Collisions with the cold Maxwellian always produce an inflow of electrons in region II due to friction (see \(\Gamma_{F}\) in Fig. 3). Pitch-angle scattering may only cause a flow along lines of constant \({\cal E}\), and is seen to eject electrons at low perpendicular energies and to cause an inflow at higher energies. The net flux through the separatrix vanishes due to the no-net-flux condition.

Figure 3: Top left: Numerical distribution function at \(z=0\) in SI units. Top right: effective phase-space flow velocity \({\bf u}^{*}\) (Eq. (2.99)). Bottom left: phase-space trajectories of electrons (i.e. the streamlines of \({\bf u}^{*}\)). Bottom right: collisional flux into the trapped region (Eqs. (2.100),(2.101)). \(v_{c}=\sqrt{2e\phi_{m}/m_{e}}\) is the parallel escape velocity at \(z=0\).

### Analytical solution to the QE problem

The main difficulty in the QE problem is the presence of bounce integrals, which are difficult to evaluate analytically in a self-consistent potential.
In a square well, however, owing to the constancy of the electric potential and plasmoid density, the bounce average operator is the identity, significantly simplifying matters. Since the phase-space domain of the QE problem depends only upon the potential height \(\phi_{m}\), the distribution function obtained as a solution to the QE problem in a square-well potential of height \(\phi_{m}\) may be used as an approximation of the solution to the QE problem in a self-consistent potential.

Assuming that there is a single ion species of charge \(Z\), that hot trapped electrons have speeds much larger than \(v_{T}\), and that the plasmoid density greatly exceeds the ambient density (hence quasineutrality is approximately given by \(n_{0}=Zn_{i}\)), from Eqs. (2.62),(2.65) we see that the bounce-averaged QE collision operator in a square well is given by
\[\langle C_{\rm QE}(f)\rangle=\frac{n_{0}e^{4}\ln\Lambda}{4\pi\varepsilon_{0}^{2}m_{e}^{2}}\left[\frac{1+Z}{v^{3}}\frac{1}{2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial f}{\partial\theta}\right)+\frac{1}{v^{2}}\frac{\partial}{\partial v}\left(f+\frac{T}{m_{e}v}\frac{\partial f}{\partial v}\right)\right]. \tag{2.102}\]
We consider \(\langle C_{\rm QE}(f)\rangle=0\) separately in the \({\cal E}<0\) (i.e. \(v<v_{c}=\sqrt{2e\phi_{m}/m_{e}}\) in a square well) and \({\cal E}>0\) (\(v>v_{c}\)) regions of phase-space, using the most convenient coordinates in each case. It is convenient to write \(f=f_{0}+f_{1}\) and solve for \(f_{1}\); in both regions we neglect \(v\)-diffusion for \(f_{1}\) (i.e. the term proportional to \(T/(m_{e}v)\) in the above), leaving only \(v\)-friction.

In the \(\mathcal{E}>0\) region we use the variables \((v,v_{\parallel}=v\cos\theta)\), meaning we must solve \(\mathcal{D}(f_{1})=0\) for
\[\mathcal{D}(f_{1})=\frac{1+Z}{2}\frac{\partial}{\partial v_{\parallel}}\left[(v^{2}-v_{\parallel}^{2})\frac{\partial f_{1}}{\partial v_{\parallel}}\right]+v\frac{\partial f_{1}}{\partial v}+v_{\parallel}\frac{\partial f_{1}}{\partial v_{\parallel}}. \tag{2.103}\]
It is convenient to consider the limit where \(v\gg v_{\parallel}\), which represents the correct limit in the majority of \(\mathcal{E}>0\), \(\mathcal{E}_{\parallel}<0\) phase-space:
\[\mathcal{D}(f_{1})=\frac{1+Z}{2}v^{2}\frac{\partial^{2}f_{1}}{\partial v_{\parallel}^{2}}+v\frac{\partial f_{1}}{\partial v}, \tag{2.104}\]
which has superposable solutions (that are even in \(v_{\parallel}\)) provided by the separation of variables:
\[f_{k}=C_{k}\mathrm{e}^{-\frac{1}{2}\lambda_{k}v^{2}}\cosh\left(v_{\parallel}\sqrt{\frac{2}{1+Z}\lambda_{k}}\right) \tag{2.105}\]
for constants \(\{C_{k}\}\) and \(\{\lambda_{k}\}\). The boundary condition (2.52) is satisfied by the sum of two solutions:
\[f_{1}=f_{a}(\mathcal{E})\frac{\cosh\left(\frac{v_{\parallel}}{v_{T_{a}}}\sqrt{\frac{4}{1+Z}}\right)}{\cosh\left(\frac{v_{c}}{v_{T_{a}}}\sqrt{\frac{4}{1+Z}}\right)}-f_{0}(\mathcal{E})\frac{\cosh\left(\frac{v_{\parallel}}{v_{T}}\sqrt{\frac{4}{1+Z}}\right)}{\cosh\left(\frac{v_{c}}{v_{T}}\sqrt{\frac{4}{1+Z}}\right)}. \tag{2.106}\]
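As a quick consistency check (ours), one can verify by finite differences that each separable term of Eq. (2.106) satisfies the reduced equation (2.104); normalized units with \(m_{e}=1\):

```python
import numpy as np

# Normalized units, m_e = 1. Each separable term of the form (2.105)
# should satisfy D(f1) = (1+Z)/2 * v^2 * d2f/dvpar^2 + v * df/dv = 0,
# i.e. the reduced equation (2.104).
Z = 1

def term(v, vpar, vth):
    lam = 2.0 / vth**2   # Maxwellian of thermal speed vth, exp(-v^2/vth^2)
    return np.exp(-0.5 * lam * v**2) * np.cosh(vpar * np.sqrt(2 * lam / (1 + Z)))

v, vpar, vth, h = 3.0, 0.4, 1.0, 1e-4
d2 = (term(v, vpar + h, vth) - 2 * term(v, vpar, vth)
      + term(v, vpar - h, vth)) / h**2
d1 = (term(v + h, vpar, vth) - term(v - h, vpar, vth)) / (2 * h)
print(0.5 * (1 + Z) * v**2 * d2 + v * d1)  # ~0 up to finite-difference error
```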
In the \(\mathcal{E}<0\) region we use the variables \((v,\xi=\cos\theta)\). So, we must solve \(\mathcal{G}(f_{1})=0\) where
\[\mathcal{G}(f_{1})=\frac{1+Z}{2}\frac{\partial}{\partial\xi}\left[\left(1-\xi^{2}\right)\frac{\partial f_{1}}{\partial\xi}\right]+v\frac{\partial f_{1}}{\partial v}. \tag{2.107}\]
We note that Legendre polynomials are the eigenfunctions of the operator in \(\xi\), so we write \(f\) in this basis:
\[f_{1}=\sum_{n=0}^{\infty}a_{n}(v)P_{n}(\xi), \tag{2.108}\]
which gives the following equations for \(\{a_{n}(v)\}\):
\[v\frac{\partial a_{n}}{\partial v}-\frac{1+Z}{2}n(n+1)a_{n}=0 \tag{2.109}\]
with the solutions
\[a_{n}=c_{n}v^{\frac{1+Z}{2}n(n+1)} \tag{2.110}\]
for \(\{c_{n}\}\) constants. The continuity of the distribution function at \(v=v_{c}\) provides the expressions for \(c_{n}\):
\[c_{n}=\frac{2n+1}{2}v_{c}^{-\frac{1+Z}{2}n(n+1)}\Bigg[\frac{f_{a}(\mathcal{E}=0)}{\cosh\left(\frac{v_{c}}{v_{T_{a}}}\sqrt{\frac{4}{1+Z}}\right)}\int_{-1}^{1}\cosh\left(\frac{v_{c}}{v_{T_{a}}}\sqrt{\frac{4}{1+Z}}\xi\right)P_{n}(\xi)\,\mathrm{d}\xi-\frac{f_{0}(\mathcal{E}=0)}{\cosh\left(\frac{v_{c}}{v_{T}}\sqrt{\frac{4}{1+Z}}\right)}\int_{-1}^{1}\cosh\left(\frac{v_{c}}{v_{T}}\sqrt{\frac{4}{1+Z}}\xi\right)P_{n}(\xi)\,\mathrm{d}\xi\Bigg]. \tag{2.111}\]
To summarise, the analytical solution to the QE problem in a square well is given by Eq. (2.106) in the \({\cal E}>0\), \({\cal E}_{\parallel}<0\) region, Eq. (2.108) in the \({\cal E}\leqslant 0\) region, and \(f=f_{a}\) in the \({\cal E}_{\parallel}\geqslant 0\) region. The phase-space domain of the QE problem is the same given any potential, so the substitution of \(\sqrt{(2/m_{e})({\cal E}+e\phi_{m})}\) for \(v\) and \(\sqrt{(2/m_{e})({\cal E}_{\parallel}+e\phi_{m})}\) for \(v_{\parallel}\) yields an (approximate) analytical solution to the QE problem valid in a self-consistent potential. We refer to this analytical solution in figures as \(f_{\rm an}\).

Figure 4 shows the analytical distribution function for the same parameters as those used to produce Fig. 3. The top left plot shows the distribution in velocity space at \(z=0\). The top right plot shows the percentage difference from the numerical solution given in Fig. 3. The bottom left plot shows the distribution function at \({\cal E}_{\perp}=0\). The bottom right plot shows the distribution function at \({\cal E}_{\parallel}=-e\phi_{m}\). The qualitative behaviour of the distribution is captured well, in particular the 'flattening' of the distribution function in the high-energy part of region II. The observation that here the contours of the distribution function are horizontal lines in \((v_{\parallel},v_{\perp})\) can be explained by the fact that for the analytical QE distribution function
\[\frac{\partial}{\partial v_{\parallel}}f\left(v_{\parallel},v_{\perp}\right)\propto\frac{2}{v_{T_{a}}^{2}}v_{\parallel}\left(\frac{2}{1+Z}-1\right)+O(v_{\parallel}^{3}), \tag{2.112}\]
which vanishes to lowest order if \(Z=1\).

We note that the simplification made by neglecting the \(v\)-diffusion term for \(f_{1}\) reduces a formerly second-order problem in \(v\) to a first-order problem. Solving the QE problem in the \({\cal E}>0\) part of region II with the continuity boundary condition then fixes the value of \(f\) at \({\cal E}=0\). Similarly, in the \({\cal E}<0\) part of region II we only have the opportunity to apply the continuity boundary condition at \({\cal E}=0\), but not at \({\cal E}={\cal E}_{{\rm I}/{\rm II}}\). As a consequence, as \({\cal E}\to{\cal E}_{{\rm I}/{\rm II}}\), \(f_{1}\to c_{0}\), which does violate this continuity boundary condition. On the other hand, \(c_{0}\) is several orders of magnitude smaller than \(f_{0}\) in region I, so the discontinuity is small. This is purely an artefact of the approximations made to obtain the analytical solution; no such behaviour is seen in the numerical solution.
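A sketch (ours, with arbitrary stand-in values for \(f_{a}(\mathcal{E}=0)\) and \(f_{0}(\mathcal{E}=0)\)) of evaluating the coefficients (2.111) by Gauss-Legendre quadrature, and checking that the series (2.108) reproduces the \(\mathcal{E}>0\) solution at \(v=v_{c}\), i.e. continuity:

```python
import numpy as np

# Stand-in boundary values (ours): f_a(E=0) = 1, f_0(E=0) = 0.3.
Z, vc, vT, vTa = 1, 2.0, 0.5, 3.0
fa0, f00 = 1.0, 0.3
aa = (vc / vTa) * np.sqrt(4.0 / (1 + Z))
ac = (vc / vT) * np.sqrt(4.0 / (1 + Z))

xq, wq = np.polynomial.legendre.leggauss(64)
P = lambda n: np.polynomial.legendre.Legendre.basis(n)

def cn(n):
    # Eq. (2.111) without the v_c^{-...} factor, which cancels against the
    # v^{...} factor of (2.110) when the series is evaluated at v = v_c.
    ia = np.sum(wq * np.cosh(aa * xq) * P(n)(xq))
    ic = np.sum(wq * np.cosh(ac * xq) * P(n)(xq))
    return 0.5 * (2 * n + 1) * (fa0 * ia / np.cosh(aa) - f00 * ic / np.cosh(ac))

xi = 0.7
series = sum(cn(n) * P(n)(xi) for n in range(0, 40, 2))  # odd n vanish
exact = fa0 * np.cosh(aa * xi) / np.cosh(aa) - f00 * np.cosh(ac * xi) / np.cosh(ac)
print(series, exact)  # continuity at v = v_c: the two should agree
```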
### Expressions for \(\eta\) and \(\phi\) in terms of \(T\)

We have found an analytical solution to the QE problem in terms of the lowest-order distribution function \(f_{0}\), which is uniquely determined by the parameters \(\eta\) and \(T\). The electric potential \(\phi\) is as yet unknown, but must be such that quasineutrality (2.84) is satisfied. Additionally, the no-net-flux condition (2.96) must be satisfied. Therefore, we may reduce the number of unknowns in the system from three (\(\eta\), \(T\), and \(\phi\)) to one (which we choose to be \(T\)) by imposing quasineutrality and no-net-flux.

The analytical solution to QE, however, does not capture the 'ejection structure' near \({\cal E}_{\parallel}={\cal E}_{\perp}=0\) characterising the flux of electrons out of trapped phase-space, so the no-net-flux condition may not be used directly. Instead, analysis of the flow of electrons in phase-space allows us to formulate a condition that is approximately equivalent to Eq. (2.96). We note that, owing to the flattening of the distribution function in the high-energy part of region II, pitch-angle scattering acts to scatter newly-trapped electrons to lower parallel velocity while leaving their energy unchanged. Simultaneously the scattered electrons lose energy via friction. This can be seen in the streamline plot of Fig. 3. The net effect is that a large fraction of electrons entering the trapped region of phase-space are eventually drawn into the \({\cal E}<0\) region. In order for there to be no net flux into the trapped region, the same number of electrons entering the \({\cal E}<0\) region must escape from it, eventually being scattered and ejected from trapped phase-space (forming the ejection structure). Therefore, an approximation to the condition (2.96), which states that there is no net flux into the trapped region of phase-space, is that there is no net flux into the \({\cal E}<0\) region of phase-space. Since the flux into and out of this region is characterised by integrals on the \({\cal E}=0\) line, it is not necessary to have a description of the ejection structure, whereas directly applying Eq. (2.96) requires us to integrate along \({\cal E}_{\parallel}=0\), where the ejection structure strongly affects the flux. Since the analytical solution to QE potentially has a discontinuous derivative at \({\cal E}=0\), the flux into the \({\cal E}<0\) region has contributions from \({\cal E}\to 0^{+}\) and \({\cal E}\to 0^{-}\).

We apply this approximation of the no-net-flux condition to the analytical solution to QE in a square well. Since the analytical solution to the QE problem is derived from that posed in a square well, we will calculate the flux in terms of the variables \((v,\xi,\varphi)\). Pitch-angle scattering cannot change the energy of electrons, so the pitch-angle scattering terms of the collision operator cannot contribute to the flux across \({\cal E}=0\). In a square well, the number of electrons entering the \({\cal E}<0\) region of phase-space is equal to the number of electrons entering the \(v<v_{c}\) region of phase space.
From Eq. (2.102) we see that the collisional flux in the \(\hat{\bf v}\) direction at \(v=v_{c}\) (assuming that \(v_{c}\gg v_{T}\)) is given by
\[F=-\frac{n_{0}e^{4}\ln\Lambda}{4\pi\varepsilon_{0}^{2}m_{e}^{2}}\frac{1}{v_{c}^{2}}\left(f+\frac{T}{m_{e}v}\frac{\partial f}{\partial v}\right)\Bigg|_{{\cal E}=0}. \tag{2.113}\]
The phase-space area element on the \(v=v_{c}\) sphere is given by \(v_{c}^{2}\,{\rm d}\xi\,{\rm d}\varphi\), so the net flux into the \(v<v_{c}\) region is given by
\[G=\int_{0}^{2\pi}\int_{-1}^{1}v_{c}^{2}\left(F\big|_{\mathcal{E}=0^{+}}+F\big|_{\mathcal{E}=0^{-}}\right)\,\mathrm{d}\xi\,\mathrm{d}\varphi. \tag{2.114}\]
Setting the above to zero provides the relation between \(\eta\), \(T\), and \(\phi\) required for no net flux into the \(\mathcal{E}<0\) region.

It will be seen that the expression obtained from \(G=0\) in the limit \(Z\to\infty\) is quite accurate, even for \(Z=1\). In this limit the analytical solution is given by \(f=f_{a}\) for \(\mathcal{E}>0\) and \(f=c_{0}+f_{0}\) for \(\mathcal{E}<0\), where \(c_{0}=f_{a}(\mathcal{E}=0)-f_{0}(\mathcal{E}=0)\). Then, \(G=0\) yields the relation
\[\eta=n_{a}\left(\frac{T}{T_{a}}\right)^{\frac{3}{2}}\left(2-\frac{T}{T_{a}}\right). \tag{2.115}\]
Now, quasineutrality allows us to express both \(\eta\) and \(\phi\) in terms of \(T\) and the ion profile. In the variables \((\mathcal{E}_{\parallel},\mathcal{E}_{\perp},z,t)\) the electron distribution is independent of \(z\), so the electron density is a function of \(\phi\), time, and the parameter \(T\) (\(\eta\) being already expressed in terms of \(T\) via the above). Therefore the quasineutrality condition (2.84) demands that
\[n_{e}(\phi,t;T)=\sum_{k}Z_{k}n_{ik}, \tag{2.116}\]
which gives an implicit expression for \(\phi\) at every point, given some \(T\) and some ion density profiles \(\{n_{ik}\}\). We can immediately obtain an expression for the potential height when the plasmoid density is high; in this case the electron density at \(z=0\) is well-approximated by
\[n_{e}(z=0)=\eta\mathrm{e}^{\frac{e\phi_{m}}{T}}, \tag{2.117}\]
so
\[e\phi_{m}=T\left[\ln\left(\frac{n_{e}(z=0)}{n_{a}}\right)+\frac{3}{2}\ln\left(\frac{T_{a}}{T}\right)-\ln\left(2-\frac{T}{T_{a}}\right)\right]. \tag{2.118}\]
This is a modification of the Boltzmann relation for a plasma in thermal equilibrium, the extra contributions accounting for the fact that in the quasi-equilibrium state the distribution function is not necessarily Maxwellian. As expected, the above reduces to the Boltzmann relation for \(T=T_{a}\) (when the distribution becomes Maxwellian), agreeing with the rough estimate (1.4) which was used to justify the ordering leading to QE. As noted earlier, \(e\phi_{m}\) given by (2.118) exceeds that given by the Boltzmann relation when \(T<T_{a}\). The height of the potential in the numerical solution to the QE problem (Fig. 3), which is calculated entirely self-consistently and leads to no net flux into the trapped region, is within \(3\%\) of the estimate (2.118). This is remarkable agreement considering that the estimate was derived in the limit \(Z\to\infty\) while the numerical solution has \(Z=1\). The estimate also correctly predicts the height of the self-consistent potential for the solution to the time-dependent kinetic problem with isotropic electrons given in Arnold _et al._ (2023), which is strong evidence that the QE state was established there too.
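The estimate (2.118) is cheap to evaluate. With the parameters used for the numerical QE solution above, the following sketch (ours) compares it with the plain Boltzmann relation:

```python
import numpy as np

T, Ta = 615.0, 5000.0                    # eV
na, Nic, Lp = 5e19, 1e22, 2.8            # m^-3, m^-2, m
ne0 = na + Nic / (Lp * np.sqrt(np.pi))   # peak density Z*n_i(0) of Eq. (2.97)

ephi_QE = T * (np.log(ne0 / na) + 1.5 * np.log(Ta / T) - np.log(2.0 - T / Ta))
ephi_Boltzmann = T * np.log(ne0 / na)
print(f"e*phi_m, Eq. (2.118): {ephi_QE:.0f} eV")         # ~3.8 keV
print(f"e*phi_m, Boltzmann:   {ephi_Boltzmann:.0f} eV")  # ~2.3 keV
```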
## 3 Plasmoid expansion

### Treatment of ions

Two different models for the ions are considered: a collisionless (Vlasov) system with hot ambient ions but cold plasmoid ions, and a cold-fluid expansion. We restrict our attention to a single ion species of charge \(Z\). The collisionless and fluid models are in opposite regimes of collisionality, which will provide the broadest range of qualitative results for the plasmoid expansion. The collisionless system is, however, the more physically accurate model for the plasmoid expansion since ambient ions have a long mean free path relative to the plasmoid size. In fact, the ratio of plasmoid size to the ambient ion mean free path is the same as for ambient electrons, expressed by the opacity (7). The collisionless system will still model cold plasmoid ions accurately since they will be initialised with zero velocity spread.

For the collisionless ion expansion we solve the Vlasov equation for ions:
\[\frac{\partial f_{i}}{\partial t}+v_{\parallel}\frac{\partial f_{i}}{\partial z}-\frac{Ze}{m_{i}}\frac{\partial\phi}{\partial z}\frac{\partial f_{i}}{\partial v_{\parallel}}=0. \tag{3.1}\]
For the cold-fluid expansion we solve the equations
\[\frac{\partial n_{i}}{\partial t}+\frac{\partial}{\partial z}(n_{i}u_{i})=0, \tag{3.2}\]
\[\frac{\partial u_{i}}{\partial t}+u_{i}\frac{\partial u_{i}}{\partial z}+\frac{Ze}{m_{i}}\frac{\partial\phi}{\partial z}=0. \tag{3.3}\]

### Treatment of electrons on the expansion timescale

Equations (2.29) and (2.53) describe the electron dynamics on the expansion and heating timescales. We note that because they take exactly the same form, Eq. (2.53) and the bounce average of Eq. (2.29) may be combined to yield
\[\frac{\partial f}{\partial t}-\frac{1}{\tau}\frac{\partial J}{\partial t}\frac{\partial f}{\partial\mathcal{E}_{\parallel}}=\langle C(f,f_{\rm II}+f_{\rm III})\rangle \tag{3.4}\]
to describe both regions I and II (writing the electron distribution in both regions as \(f\)). This equation describes the long-term expansion of the plasmoid, which is driven by the heating term on the right hand side.

By solving the QE problem with the restrictions of quasineutrality and the no-net-flux condition, the distribution is known up to the parameter \(T\); the evolution of the electron distribution function is completely characterised by how \(T\) changes in time. In analogy with the Braginskii equations, we obtain an evolution equation for \(T\) by taking moments of the kinetic equation over phase-space. When we take moments, the quasi-equilibrium distribution serves the same role as the near-Maxwellian distribution in the Braginskii equations. In contrast to a _local_ equilibrium distribution, where the parameters \(\eta\) and \(T\) defining the Maxwellian distribution are dependent upon \(z\), in our case (analogous to a global thermal equilibrium) the parameters are independent of \(z\), so we also integrate the kinetic equation over \(z\) as well as the 'momentum-like' variables (we note that the kinetic equation has already been integrated over \(z\) when it is bounce-averaged). We also need only take moments over the trapped region of phase-space since the passing electron distribution is known. As we seek an equation for \(T\), it serves to take the \(\mathcal{E}\)-moment over trapped phase-space. The line-integrated energy density of trapped electrons is given by
\[W_{t}=\int_{V_{t}}\mathcal{E}f\,\mathrm{d}^{3}v\,\mathrm{d}z, \tag{3.5}\]
where \(V_{t}\) is the trapped region of phase-space.
Taking the \(\mathcal{E}\)-moment of Eq. (3.4) yields
\[\frac{\mathrm{d}W_{t}}{\mathrm{d}t}=\left.\frac{\mathrm{d}W_{t}}{\mathrm{d}t}\right|_{\rm adiabatic}+\left.\frac{\mathrm{d}W_{t}}{\mathrm{d}t}\right|_{\rm separatrix}+\left.\frac{\mathrm{d}W_{t}}{\mathrm{d}t}\right|_{\rm heating} \tag{3.6}\]
for
\[\left.\frac{\mathrm{d}W_{t}}{\mathrm{d}t}\right|_{\mathrm{adiabatic}}=-\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{-e\phi_{m}}^{0}\frac{\partial J}{\partial t}f\,\mathrm{d}\mathcal{E}_{\parallel}\,\mathrm{d}\mathcal{E}_{\perp}, \tag{3.7}\]
\[\left.\frac{\mathrm{d}W_{t}}{\mathrm{d}t}\right|_{\mathrm{separatrix}}=\frac{2\pi}{m_{e}^{2}}\frac{\partial J_{m}}{\partial t}\int_{0}^{\infty}\mathcal{E}_{\perp}f_{a}\big|_{\mathcal{E}_{\parallel}=0}\,\mathrm{d}\mathcal{E}_{\perp}, \tag{3.8}\]
\[\left.\frac{\mathrm{d}W_{t}}{\mathrm{d}t}\right|_{\mathrm{heating}}=\int_{V_{t}}\mathcal{E}C(f,f_{\mathrm{II}}+f_{\mathrm{III}})\,\mathrm{d}^{3}v\,\mathrm{d}z. \tag{3.9}\]
The details of the procedure are contained in Appendix A. The names of the terms in Eq. (3.6) are descriptive: 'adiabatic' corresponds to the adiabatic change in the electron energy as the well changes shape, 'separatrix' corresponds to electrons crossing the trapped-passing separatrix due to the changing depth of the potential well \(e\phi_{m}\), and 'heating' corresponds to collisions of \(f\) with the hot electrons.

Although \(W_{t}\) is in principle expressible in terms of \(T\), any tiny change in \(W_{t}\) (and hence \(T\)) results in a huge deviation from quasineutrality due to the exponential dependence of \(n_{0}\) on \(e\phi/T\). Solving the evolution equation for \(W_{t}\) numerically, inverting the relation to find \(T\), and maintaining quasineutrality requires an impractically small timestep. Instead, we derive an energy conservation law for electrons and ions which can be used as an evolution equation for \(T\).

The energy conservation law will contain contributions from both passing and trapped electrons, so it is more convenient to consider the energy contained within some interval \(z\in[-L_{S}/2,L_{S}/2]\) for \(L_{S}\) much larger than the plasmoid. The ambient plasma then acts as an infinite source and sink of electrons and energy for this section of the field line. The passing distribution function on this interval being \(f_{a}\) can be understood as the ambient plasma instantly replenishing the passing distribution if it is altered in any way by interaction with the plasmoid; the ambient plasma essentially acts as a 'thermostat' for the passing distribution. The interval \(L_{S}\) must be large enough that the line-integrated trapped electron energy density \(W_{t}\) changes negligibly as \(L_{S}\) increases (i.e. the trapped electron density is negligible at \(|z|=L_{S}/2\)). It is more convenient to work with the line-integrated _kinetic_ energy
\[K_{t}=W_{t}+\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}e\phi n_{e,t}\,\mathrm{d}z \tag{3.10}\]
(where \(n_{e,t}\) is the trapped electron density) rather than \(W_{t}\), since we expect the sum of kinetic energies to exhibit a conservation law. Taking the time derivative of \(K_{t}\) yields
\[\begin{split}\frac{\mathrm{d}K_{t}}{\mathrm{d}t}&=\frac{\mathrm{d}W_{t}}{\mathrm{d}t}+\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}e\frac{\partial\phi}{\partial t}n_{e,t}\,\mathrm{d}z+\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}e\phi\frac{\partial n_{e,t}}{\partial t}\,\mathrm{d}z\\ &=\frac{\mathrm{d}W_{t}}{\mathrm{d}t}-\left.\frac{\mathrm{d}W_{t}}{\mathrm{d}t}\right|_{\mathrm{adiabatic}}+\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}e\phi\left(Z\frac{\partial n_{i}}{\partial t}-\frac{\partial n_{e,p}}{\partial t}\right)\,\mathrm{d}z,\end{split} \tag{3.11}\]
where we have used the quasineutrality condition (assuming that there is a single species of ions with charge \(Z\)) and the fact that \(n_{e,t}+n_{e,p}=n_{e}\) for \(n_{e,p}\) the passing electron density. The latter two terms correspond to the changing kinetic energies of the ions and passing electrons. The term involving the ion density is given by
\[\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}Ze\phi\frac{\partial n_{i}}{\partial t}\,\mathrm{d}z=-\frac{\partial K_{i}}{\partial t} \tag{3.12}\]
for \(K_{i}\) the line-integrated kinetic energy of the ions. This can be derived by taking velocity moments of Eq. (3.1) or by constructing an energy equation from Eqs. (3.2),(3.3). The same procedure cannot be carried out for the passing electrons since these are restricted to a certain region of phase space. Extending the notion of an orbit integral to electrons with positive parallel energy is straightforward on a finite \(z\) interval:
\[\oint g(\mathcal{E}_{\parallel}>0)\,\mathrm{d}z=2\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}g(\mathcal{E}_{\parallel}>0)\,\mathrm{d}z, \tag{3.13}\]
allowing us to define the second adiabatic invariant \(J\) for positive parallel energies. The line-integrated passing electron energy density on the interval \(z\in[-L_{S}/2,L_{S}/2]\) is given by
\[W_{p}=\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{J_{m}}^{\infty}\mathcal{E}f_{a}\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}, \tag{3.14}\]
hence
\[\frac{\mathrm{d}W_{p}}{\mathrm{d}t}=\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{\infty}\left(\frac{\mathcal{E}}{T_{a}}-1\right)\frac{\partial J}{\partial t}f_{a}\,\mathrm{d}\mathcal{E}_{\parallel}\,\mathrm{d}\mathcal{E}_{\perp}-\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\mathcal{E}_{\perp}\frac{\mathrm{d}J_{m}}{\mathrm{d}t}f_{a}(\mathcal{E}_{\perp})\,\mathrm{d}\mathcal{E}_{\perp}. \tag{3.15}\]
We note that
\[\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{\infty}\frac{\partial J}{\partial t}f_{a}\,\mathrm{d}\mathcal{E}_{\parallel}\,\mathrm{d}\mathcal{E}_{\perp}=-\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}e\frac{\partial\phi}{\partial t}n_{e,p}\,\mathrm{d}z \tag{3.16}\]
and
\[\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\mathcal{E}_{\perp}\frac{\mathrm{d}J_{m}}{\mathrm{d}t}f_{a}(\mathcal{E}_{\perp})\,\mathrm{d}\mathcal{E}_{\perp}=\left.\frac{\mathrm{d}W_{t}}{\mathrm{d}t}\right|_{\mathrm{separatrix}}, \tag{3.17}\]
so, given that
\[K_{p}=W_{p}+\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}e\phi n_{e,p}\,\mathrm{d}z, \tag{3.18}\]
we obtain the energy conservation law on the interval \(z\in[-L_{S}/2,L_{S}/2]\):
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(K_{t}+K_{p}+K_{i}\right)=\left.\frac{\mathrm{d}W_{t}}{\mathrm{d}t}\right|_{\mathrm{heating}}+\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{\infty}\frac{\mathcal{E}}{T_{a}}\frac{\partial J}{\partial t}f_{a}\,\mathrm{d}\mathcal{E}_{\parallel}\,\mathrm{d}\mathcal{E}_{\perp}. \tag{3.19}\]
Evidently, the inclusion of the ion and passing electron energies in the energy conservation law accounts for the absence of \(\mathrm{d}W_{t}/\mathrm{d}t\big|_{\mathrm{adiabatic}}\), since this represents energy that is extracted from trapped electrons during the expansion and given to other species. The absence of the separatrix term is simply due to this contribution cancelling out between \(K_{t}\) and \(K_{p}\).
The second term on the right hand side of the above arises from the fact that passing electrons have their energy altered when passing through the time-varying potential well, and this energy gain (or loss) is absorbed by (or suffered by) the ambient plasma, which continually supplies passing electrons following the distribution \(f_{a}\). The heating due to this effect is essentially negligible when \(T\ll T_{a}\), but provides a considerable fraction of the heating power when \(T\sim T_{a}\). The first term on the right hand side represents the collisional heating of trapped electrons. In Arnold _et al._ (2023), which treated a high-\(Z\) plasmoid, it was shown that the heating rate for cold electrons in a plasmoid was \(3/4\) that expected for cold electrons in a homogeneous plasma, since the acceleration of passing electrons through the potential well decreases their density and collisionality. That is, given that the per-electron heating rate of a cold Maxwellian in a homogeneous plasma is \(3\nu_{h}(T_{a}-T)\), the per-electron heating rate for electrons trapped in the potential well was found to be \((9/4)\nu_{h}(T_{a}-T)\). In our case the distribution function is also highly isotropic for \(\mathcal{E}<0\), so we approximate the collisional heating term by \[\left.\frac{\mathrm{d}W_{t}}{\mathrm{d}t}\right|_{\mathrm{heating}}=\frac{9}{ 4}\nu_{h}N_{\mathcal{E}<0}(T_{a}-T), \tag{3.20}\] where \(N_{\mathcal{E}<0}\) is the line-integrated density of trapped electrons with \(\mathcal{E}<0\). As shown in Section 2.9, the distribution function is somewhat less than \(f_{a}\) in the \(\mathcal{E}>0\) region for \(Z=1\), so the above is an upper bound for the heating rate. Consolidating trapped and passing electron energies into \(K_{e}=K_{t}+K_{p}\) we have the energy conservation law \[\frac{\mathrm{d}}{\mathrm{d}t}\left(K_{e}+K_{i}\right)=Q \tag{3.21}\] for heating power \[Q=\frac{9}{4}\nu_{h}N_{\mathcal{E}<0}(T_{a}-T)+\frac{2\pi}{m_{e}^{2}}\int_{0} ^{\infty}\int_{0}^{\infty}\frac{\mathcal{E}}{T_{a}}\frac{\partial J}{ \partial t}f_{a}\,\mathrm{d}\mathcal{E}_{\parallel}\,\mathrm{d}\mathcal{E}_{ \perp}. \tag{3.22}\] ### Comparison of the system with earlier work At this point a clear comparison can be drawn between this investigation and Aleynikov _et al._ (2019); Arnold _et al._ (2021). In Aleynikov _et al._ (2019); Arnold _et al._ (2021) a cold-fluid system for ions was coupled with an energy conservation law for the plasmoid ions and electrons. The dynamics of the passing electron distribution were neglected save for the assertion that the presence of the ambient plasma resulted in a per-electron heating term \(3\nu_{h}T_{a}\) for plasmoid electrons (only the linear heating stage was treated). The potential was given by the Boltzmann relation and was unbounded as \(|z|\to\infty\) due to the neglect of passing electrons. Here, we use an energy conservation law with modified heating terms and electron kinetic energy derived from a solution to the electron kinetic problem. The electric potential is given by the quasineutrality condition according to the electron density of this distribution, and vanishes as \(|z|\to\infty\). Taking the limit \(n_{a}\to 0\) in Eq. (2.84) and the energy conservation law (3.21) recovers the same system as Aleynikov _et al._ (2019) with the exception of the \(3/4\) factor on the collisional heating term. 
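As a crude illustration of the collisional term alone: if the first term of Eq. (3.22) were the whole heating power and all of it went into the \(\frac{3}{2}NT\) thermal energy of the \(\mathcal{E}<0\) electrons, \(T\) would relax exponentially toward \(T_{a}\) at the rate \(\frac{3}{2}\nu_{h}\). The sketch below (ours; it ignores the expansion, the passing-electron term, and the full \(K_{e}(T)\) relation) uses the value \(\nu_{h}^{-1}=162\,\mu\mathrm{s}\) quoted in the next subsection:

```python
import numpy as np
from scipy.integrate import solve_ivp

nu_h = 1.0 / 162e-6     # [1/s], heating rate of the next subsection
Ta, T0 = 5000.0, 615.0  # eV; T0 is an arbitrary starting temperature

# dT/dt = (3/2)*nu_h*(Ta - T): per-electron rate (9/4)*nu_h*(Ta - T) of
# Eq. (3.20), with (3/2)*T of thermal energy per electron.
sol = solve_ivp(lambda t, T: 1.5 * nu_h * (Ta - T), (0.0, 300e-6), [T0],
                dense_output=True, rtol=1e-8)
for t in (50e-6, 150e-6, 300e-6):
    print(f"t = {t * 1e6:5.0f} us:  T ~ {sol.sol(t)[0]:.0f} eV")
```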
### Numerical solutions of the plasmoid expansion system

Numerical solutions to the system created by coupling the energy conservation law (3.21), quasineutrality (2.116), and one of the systems describing the ions (the Vlasov equation (3.1) or the cold-ion system (3.2),(3.3)) were obtained using the plasma parameters \(n_{a}=5\times 10^{19}\,\mathrm{m}^{-3}\), \(T_{a}=5\,\mathrm{keV}\) and \(N_{ic}=10^{22}\,\mathrm{m}^{-2}\), the same as in Section 2.9. The plasma was once more assumed to be hydrogenic. The heating timescale with these parameters is \(\nu_{h}^{-1}=162\,\mu\mathrm{s}\) and the expansions were run to \(300\,\mu\mathrm{s}\), at which point the plasmoid and ambient temperatures have nearly equalised and the plasmoid density has dropped to nearly the ambient. In the collisionless ion expansion the ambient ion distribution was initialised to a Maxwellian of density \(n_{a}\) and temperature \(T_{a}\). The plasmoid ions were initialised at a temperature of \(50\,\mathrm{eV}\). In both the cold-fluid and collisionless expansions the plasmoid was initialised in the lowest-\(z\) grid cell and the ambient uniformly across \(z\).

The analytical solution to the QE problem given in Section 2.10 was used to calculate densities and energy densities, respectively appearing in the quasineutrality condition (2.116) and the energy conservation law (3.21). We neglect any deviation of \(f\) from \(f_{0}\) in the \({\cal E}<0\) region when evaluating these densities, as the difference is negligible. However, we do take into account the fact that \(f\) is flattened in the \({\cal E}>0\), \({\cal E}_{\parallel}<0\) region, as this will have a relatively large impact on the density and energy density. We use Eq. (2.115) as the expression for \(\eta\) in terms of \(T\) when evaluating \(f_{0}\). Additionally, we expand the \(\cosh\) functions found in the analytical expression of \(f\) to second order in order to obtain analytical expressions for the moments. The electron density is then given by
\[n_{e}=n_{{\cal E}<0}+n_{{\cal E}_{\parallel}<0}^{{\cal E}>0,c}+n_{{\cal E}_{\parallel}<0}^{{\cal E}>0,a}+n_{{\cal E}_{\parallel}>0} \tag{3.23}\]
for
\[n_{{\cal E}<0}=\eta\left({\rm e}^{\frac{e\phi}{T}}{\rm erf}\left(\sqrt{\frac{e\phi}{T}}\right)-\frac{2}{\sqrt{\pi}}\sqrt{\frac{e\phi}{T}}\right), \tag{3.24}\]
\[n_{{\cal E}_{\parallel}<0}^{{\cal E}>0,c}=\frac{2}{\sqrt{\pi}}\eta\left[\sqrt{\frac{e\phi}{T}}-\frac{1}{1+\frac{2}{1+Z}\frac{e\phi_{m}}{T}}\left(\sqrt{\frac{e\phi}{T}}+\frac{1}{3}\frac{2}{1+Z}\left(\frac{e\phi}{T}\right)^{\frac{3}{2}}\right)\right], \tag{3.25}\]
\[n_{{\cal E}_{\parallel}<0}^{{\cal E}>0,a}=\frac{2}{\sqrt{\pi}}n_{a}\frac{1}{1+\frac{2}{1+Z}\frac{e\phi_{m}}{T_{a}}}\left(\sqrt{\frac{e\phi}{T_{a}}}+\frac{1}{3}\frac{2}{1+Z}\left(\frac{e\phi}{T_{a}}\right)^{\frac{3}{2}}\right), \tag{3.26}\]
and
\[n_{{\cal E}_{\parallel}>0}=n_{a}{\rm e}^{\frac{e\phi}{T_{a}}}{\rm erfc}\left(\sqrt{\frac{e\phi}{T_{a}}}\right). \tag{3.27}\]
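A sketch (ours; hydrogenic, \(Z=1\)) of the pointwise quasineutrality solve (2.116) at the density peak, using the decomposition (3.23)-(3.27) with \(\eta(T)\) from (2.115); a simple fixed-point iteration supplies \(\phi_{m}\). The production solver is more involved; this only shows the structure:

```python
import numpy as np
from scipy.special import erf, erfc
from scipy.optimize import brentq

Z, T, Ta, na = 1, 615.0, 5000.0, 5e19          # eV, m^-3
eta = na * (T / Ta) ** 1.5 * (2.0 - T / Ta)    # Eq. (2.115)
ni_peak = na + 1e22 / (2.8 * np.sqrt(np.pi))   # peak ion density, Eq. (2.97)

def n_e(phi, phi_m):                           # Eqs. (3.23)-(3.27); phi in volts
    u, ua = phi / T, phi / Ta
    bc = 1.0 + (2.0 / (1 + Z)) * phi_m / T
    ba = 1.0 + (2.0 / (1 + Z)) * phi_m / Ta
    n1 = eta * (np.exp(u) * erf(np.sqrt(u)) - 2.0 / np.sqrt(np.pi) * np.sqrt(u))
    n2 = (2.0 / np.sqrt(np.pi)) * eta * (
        np.sqrt(u) - (np.sqrt(u) + (2.0 / (3.0 * (1 + Z))) * u**1.5) / bc)
    n3 = (2.0 / np.sqrt(np.pi)) * na * (
        np.sqrt(ua) + (2.0 / (3.0 * (1 + Z))) * ua**1.5) / ba
    n4 = na * np.exp(ua) * erfc(np.sqrt(ua))
    return n1 + n2 + n3 + n4

phi_m = 3000.0                                 # initial guess [V]
for _ in range(30):                            # fixed point: n_e(phi_m) = Z*ni_peak
    phi_m = brentq(lambda p: n_e(p, phi_m) - Z * ni_peak, 1.0, 2e4)
print(f"self-consistent e*phi_m ~ {phi_m:.0f} eV")   # close to Eq. (2.118)
```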
The line-integrated kinetic energies are given by
\[K_{e}=K_{{\cal E}<0}+K_{{\cal E}_{\parallel}<0}^{{\cal E}>0,c}+K_{{\cal E}_{\parallel}<0}^{{\cal E}>0,a}+K_{{\cal E}_{\parallel}>0}, \tag{3.28}\]
where
\[K_{{\cal E}<0}=\frac{3}{2}N_{{\cal E}<0}T-\frac{2}{\sqrt{\pi}}\eta T\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}\left(\frac{e\phi}{T}\right)^{\frac{3}{2}}\,{\rm d}z, \tag{3.29}\]
\[K_{{\cal E}_{\parallel}<0}^{{\cal E}>0,c}=N_{{\cal E}_{\parallel}<0}^{{\cal E}>0,c}T+\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}e\phi n_{{\cal E}_{\parallel}<0}^{{\cal E}>0,c}\,{\rm d}z, \tag{3.30}\]
\[K_{{\cal E}_{\parallel}<0}^{{\cal E}>0,a}=N_{{\cal E}_{\parallel}<0}^{{\cal E}>0,a}T_{a}+\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}e\phi n_{{\cal E}_{\parallel}<0}^{{\cal E}>0,a}\,{\rm d}z, \tag{3.31}\]
and
\[K_{{\cal E}_{\parallel}>0}=\frac{3}{2}N_{{\cal E}_{\parallel}>0}T_{a}+\frac{1}{\sqrt{\pi}}n_{a}T_{a}\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}\sqrt{\frac{e\phi}{T_{a}}}\,{\rm d}z, \tag{3.32}\]
where
\[N_{\mathcal{E}<0}=\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}n_{\mathcal{E}<0}\,\mathrm{d}z, \tag{3.33}\]
\[N_{\mathcal{E}_{\parallel}<0}^{\mathcal{E}>0,c}=\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}n_{\mathcal{E}_{\parallel}<0}^{\mathcal{E}>0,c}\,\mathrm{d}z, \tag{3.34}\]
\[N_{\mathcal{E}_{\parallel}<0}^{\mathcal{E}>0,a}=\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}n_{\mathcal{E}_{\parallel}<0}^{\mathcal{E}>0,a}\,\mathrm{d}z, \tag{3.35}\]
\[N_{\mathcal{E}_{\parallel}>0}=\int_{-\frac{L_{S}}{2}}^{\frac{L_{S}}{2}}n_{\mathcal{E}_{\parallel}>0}\,\mathrm{d}z. \tag{3.36}\]

Figure 5 shows the ion distribution of the collisionless ion expansion at various times. We see that the self-similar flow velocity matches in the regions of highest ion density even at late times. Figures 6 and 7 show quantities derived from the ion distribution or from the flow velocities and densities of the cold-fluid expansion. In the fluid expansion an ion front rapidly forms, resulting in very large density gradients. The front appears to develop an oscillatory character at later times. Sound waves and perhaps even solitons would develop near the ion front and propagate into the ambient plasma, but the \(z\) grid used was too coarse to resolve such waves. In contrast, the collisionless expansion exhibits no steep front owing to the fact that the hot ambient plasma does not 'pile up' on the moving plasmoid; the majority of ambient ions have large enough parallel velocity to either pass over the plasmoid or be reflected from the front much more quickly than the expansion proceeds. In both cases the electron temperature approaches \(T_{a}\) somewhat more quickly than the estimate (1.2), a consequence of the term in Eq. (3.22) corresponding to the energy lost adiabatically by the passing electrons but gained by trapped electrons and ions. This term constitutes the majority of the heating when \(T\sim T_{a}\).

Of particular note, and ultimately what is sought in this investigation, is the energy balance between electrons and ions, plotted in the top left of Figs. 6 and 7. The two lines represent the fraction of the heating energy (\(\int_{0}^{t}Q\,\mathrm{d}t\) for \(Q\) in Eq. (3.21)) deposited into each species, and in all cases the energy balance tends to a near-equal split between electrons and ions, which is quite remarkable considering the opposite collisionality regimes for ions in the fluid and Vlasov models.
This energy balance is similar to those predicted by the self-similar expansions in Aleynikov _et al._ (2019) and Arnold _et al._ (2021). Although not shown here, numerical simulations of the expansion with larger line-integrated plasmoid densities exhibit an energy balance even closer to an equal split. A larger line-integrated plasmoid density also results in the plasmoid and ambient temperatures equilibrating before the plasmoid density drops to the ambient, which furnishes the possibility of modelling the expansion (and wave generation) from that point with the much simplified equations resulting from a Maxwellian electron distribution function.

Figure 5: Ion distribution function of the collisionless ion expansion at various times. The dashed line is the self-similar flow velocity given in Aleynikov _et al._ (2019).

Figure 6: Derived quantities of the collisionless ion expansion. Top left: the relative amounts of energy deposited into the electrons and ions. Top right: the plasmoid electron temperature \(T\) and the estimated temperature evolution given a homogeneous plasma. Bottom: plots of the electric potential and electron density at various times.

Figure 7: Derived quantities of the cold-fluid ion expansion. Top left: the relative amounts of energy deposited into the electrons and ions. Top right: the plasmoid electron temperature \(T\) and the estimated temperature evolution given a homogeneous plasma. Bottom: plots of the electric potential and electron density at various times.

## 4 Discussion and Conclusions

A model for the expansion of a plasmoid produced by fuel pellet ablation has been developed that, despite the high complexity of the full problem, reduces the electron dynamics to a relatively simple steady-state kinetic problem rigorously derived from the ordering of timescales. The long-term evolution of the electrons is described by an energy balance obtained by taking moments of the electron kinetic equation. Ions were described with cold-fluid equations or with the Vlasov equation; in the latter case the large temperature difference between ambient and plasmoid ions can be accounted for.

[Discussion of shock; happens when \(T_{a}\) is small]

The model presented in this paper is contrasted with earlier work that simply assumed a Maxwellian electron distribution for plasmoid electrons and entirely neglected ambient electrons except in a simple heating term. Earlier work also only considered a cold-fluid model for the ions. The largest pitfall of earlier work was the combination of an unbounded (and therefore unphysical) electric potential and the lack of a proper treatment of passing electrons, which called into question the conclusions about the qualitative character of the expansion, most notably the electron to ion energy transfer. The rectification of these issues was the main motivation for developing a model in a more rigorous manner.

It has been shown that during the expansion electrons reach a 'quasi-equilibrium' state: a dynamical steady-state on the fastest collisional timescale which establishes an electron distribution that has no net flux through the trapped-passing separatrix. An analytical solution to the QE electron kinetic problem was obtained and compared to a numerical solution. An estimate of the height of the self-consistent electric potential that supports quasi-equilibrium has been derived.
The estimate is consistent with the Boltzmann relation when the temperatures of the plasmoid electrons and ambient electrons are equal, and is in agreement with the height of the self-consistent potential found in the solution to the time-dependent electron kinetic problem in a high-\(Z\) plasmoid (Arnold _et al._, 2023), providing strong evidence for the establishment of the QE state. The QE kinetic equation and energy balance can therefore be incorporated into established codes to describe electron dynamics in a pellet plasmoid.

The quasineutrality condition, no-net-flux condition, and QE kinetic problem allow a description of the QE distribution function and electric potential in terms of the plasmoid electron temperature \(T\) and the ion densities. Analogous to the Braginskii equations, the evolution equation for \(T\) was obtained by taking the energy moment over the electron kinetic equation that holds on the expansion timescale; this evolution equation takes the form of an energy conservation law for both electrons and ions. The evolution of \(T\) is driven by the energy exchange between passing electrons, trapped electrons, and ions; heating power initially deposited in the plasmoid electrons by collisions with the hot ambient electrons can be redistributed between the species.

When modelling the expansion, collisionless and fluid models for ions were used because of their opposite collisionality regimes; it is expected that the shared qualitative properties of these expansions hold with a more accurate model for the ions. The most important qualitative feature of the expansions is the near-equal split of the heating power between electrons and ions. This energy balance is in agreement with that of the self-similar solutions to the expansion found in Aleynikov _et al._ (2019); Arnold _et al._ (2021). The explanation for this result is that self-similar solutions tend to be 'attractors' for more complicated systems, with the particularly robust predictions being those that do not contain reference to any parameters at all, such as the energy balance (Barenblatt, 1996). It is therefore reasonable to suggest that the energy balance holds with a more accurate model of the ions, perhaps also one including motion transverse to magnetic field lines, which would allow a description of the assimilation of the plasmoid into the core of the device.

Since this energy balance entails a considerable transfer of energy from electrons to ions, we conclude that the ambipolar expansion of a pellet plasmoid is a potent mechanism for the heating of ions on a much faster timescale than that on which electron-ion collisions occur; the expansion happens on the hot electron-hot electron collision timescale, and the resulting ion flow energy is converted to thermal energy on the ion-ion collision timescale, which is approximately \(\sqrt{m_{i}/m_{e}}\) times smaller than the electron-ion collision timescale. Hence fuel pellet injection should be considered not just as a method for replenishing lost plasma, but also as a technique for rapidly heating ions if their temperature is exceeded by that of the electrons.
## Appendix A Calculation of the \(\mathcal{E}\)-moment of the electron kinetic equation on the expansion timescale

The volume element in the variables \((\mathcal{E}_{\parallel},\mathcal{E}_{\perp},z)\) is given by \[\mathrm{d}^{3}v\,\mathrm{d}z=\frac{4\pi}{m_{e}^{2}}\frac{1}{v_{\parallel}}\,\mathrm{d}\mathcal{E}_{\parallel}\,\mathrm{d}\mathcal{E}_{\perp}\,\mathrm{d}z, \tag{A.1}\] so phase-space integrals over the trapped domain \(V_{t}\) may be expressed as \[\begin{split}\int_{V_{t}}h\,\mathrm{d}^{3}v\,\mathrm{d}z&=\frac{4\pi}{m_{e}^{2}}\int_{-\infty}^{\infty}\int_{0}^{\infty}\int_{-e\phi}^{0}\frac{h}{v_{\parallel}}\,\mathrm{d}\mathcal{E}_{\parallel}\,\mathrm{d}\mathcal{E}_{\perp}\,\mathrm{d}z\\ &=\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{-e\phi_{m}}^{0}\oint\frac{h}{v_{\parallel}}\,\mathrm{d}z\,\mathrm{d}\mathcal{E}_{\parallel}\,\mathrm{d}\mathcal{E}_{\perp}\\ &=\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{-e\phi_{m}}^{0}\left\langle h\right\rangle\tau\,\mathrm{d}\mathcal{E}_{\parallel}\,\mathrm{d}\mathcal{E}_{\perp}\\ &=\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{J_{m}}\left\langle h\right\rangle\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}.\end{split} \tag{A.2}\] Hence, for a term \(g\) that is already bounce-averaged, the integral over phase space is given by \[\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{-e\phi_{m}}^{0}\tau g\,\mathrm{d}\mathcal{E}_{\parallel}\,\mathrm{d}\mathcal{E}_{\perp}=\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{J_{m}}g\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}. \tag{A.3}\] We note that the electron kinetic equation on the expansion timescale may be expressed as \[\frac{\partial f}{\partial t}\bigg{|}_{J}=\left\langle C(f,f_{\mathrm{II}}+f_{\mathrm{III}})\right\rangle, \tag{A.4}\] where \(\cdot|_{J}\) indicates a derivative at constant \(J\) rather than constant \(\mathcal{E}_{\parallel}\). Hence its integral over phase space (after being multiplied by \(\mathcal{E}\)) may be expressed as \[\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{J_{m}}\mathcal{E}\frac{\partial f}{\partial t}\bigg{|}_{J}\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}=\int_{V_{t}}\mathcal{E}C(f,f_{\mathrm{II}}+f_{\mathrm{III}})\,\mathrm{d}^{3}v\,\mathrm{d}z, \tag{A.5}\] where we have used the fact that \[\frac{2\pi}{m_{e}^{2}}\int_{0}^{\infty}\int_{0}^{J_{m}}\mathcal{E}\left\langle C(f,f_{\mathrm{II}}+f_{\mathrm{III}})\right\rangle\,\mathrm{d}J\,\mathrm{d}\mathcal{E}_{\perp}=\int_{V_{t}}\mathcal{E}C(f,f_{\mathrm{II}}+f_{\mathrm{III}})\,\mathrm{d}^{3}v\,\mathrm{d}z. \tag{A.6}\] From Eq. (3.5) we find \[\begin{split}\frac{\mathrm{d}W_{t}}{\mathrm{d}t}=-\frac{2\pi}{m_{e}^{2}}&\int_{0}^{\infty}\int_{-e\phi_{m}}^{0}\frac{\partial J}{\partial t}f\,\mathrm{d}\mathcal{E}_{\parallel}\,\mathrm{d}\mathcal{E}_{\perp}+\frac{2\pi}{m_{e}^{2}}\frac{\partial J_{m}}{\partial t}\int_{0}^{\infty}\mathcal{E}_{\perp}f_{a}\big{|}_{\mathcal{E}_{\parallel}=0}\,\mathrm{d}\mathcal{E}_{\perp}\\ &+\int_{V_{t}}\mathcal{E}C(f,f_{\mathrm{II}}+f_{\mathrm{III}})\,\mathrm{d}^{3}v\,\mathrm{d}z,\end{split} \tag{A.7}\] where we have used the fact that \[\frac{\partial\mathcal{E}}{\partial t}\bigg{|}_{J}=-\frac{1}{\tau}\frac{\partial J}{\partial t}\bigg{|}_{\mathcal{E}_{\parallel}}. \tag{A.8}\]
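The change of variables in Eq. (A.2) and the identity (A.8) both rest on the bounce-action relation \(\partial J/\partial\mathcal{E}_{\parallel}=\tau\). A minimal numerical check of this relation (ours, not from the paper; we use a dimensionless parabolic well \(e\phi(z)=1-z^{2}\) with unit electron mass, so the normalization of \(J\) is an assumption):

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def bounce_integrals(E_par, n=20000):
    """Bounce action J = closed-orbit integral of v_par dz and bounce time
    tau = closed-orbit integral of dz / v_par, for a trapped orbit
    (-1 < E_par < 0) in the well e*phi(z) = 1 - z**2."""
    zt = np.sqrt(E_par + 1.0)                        # turning point
    theta = np.linspace(-np.pi / 2, np.pi / 2, n + 2)[1:-1]  # avoid endpoints
    z = zt * np.sin(theta)
    v_par = np.sqrt(2.0 * (E_par + 1.0 - z ** 2))    # parallel speed
    dz = zt * np.cos(theta)                          # dz/dtheta
    J = 2.0 * trapz(v_par * dz, theta)
    tau = 2.0 * trapz(dz / v_par, theta)
    return J, tau

E, dE = -0.5, 1e-4
J_minus, _ = bounce_integrals(E - dE)
J_plus, tau = bounce_integrals(E + dE)
print((J_plus - J_minus) / (2 * dE), tau)  # both approach pi*sqrt(2) ~ 4.443
```

For this harmonic well the bounce time is independent of energy, and the finite-difference derivative of \(J\) indeed reproduces \(\tau\).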
## Funding

This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 - EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. This work was supported by the U.S. Department of Energy under Contract Nos. DE-FG02-04ER54742 and DE-SC0016283.

## Declaration of interests

The authors report no conflict of interest.
2302.12115
Dynamic realization of miscellaneous profile services in elastic optical networks using spectrum partitioning
Optical backbone networks are required to be highly dynamic in supporting requests with flexible bandwidth granularities to cope with the demands of new broadband wireless and fixed access networks. To provide this flexibility, services are offered by taking the requested bandwidth profile into consideration, instead of assigning a fixed amount of bandwidth to each request. New techniques are developed for the resource management of elastic optical networks to realize services with a specified bandwidth profile, consisting of the minimum, average, and maximum required numbers of spectrum slots, in addition to the holding time. In this work, two new schemes are proposed to realize such services, exploiting a probabilistic spectrum partitioning approach. This new probabilistic spectrum partitioning scheme is devised to enhance the chance of accommodating requests and consequently lower the request blocking probability. It enforces different probabilities on the spectrum partitions contributing to a certain service realization. Taking advantage of this probabilistic spectrum partitioning and a profile-based routing, we introduce two multistage spectrum assignment methods to make a certain lightpath meet the requested service profile constraints, considering the time-weighted average of the assigned spectrum slots. The results indicate that our algorithms can successfully realize the requests with a probability of 0.993 for offered loads below 400 erlang.
Behnam Gheysari, Arash Rezaee, Lotfollah Beygi
2023-02-23T15:55:18Z
http://arxiv.org/abs/2302.12115v4
# Dynamic realization of miscellaneous profile services in elastic optical networks using spectrum partitioning

###### Abstract

Optical backbone networks are required to be highly dynamic in supporting requests with flexible bandwidth granularities to cope with the demands of new broadband wireless and fixed access networks. To provide this flexibility, services are offered by taking the requested bandwidth profile into consideration, instead of assigning a fixed amount of bandwidth to each request. New techniques are developed for the resource management of elastic optical networks to realize services with a specified bandwidth profile, consisting of the minimum, average, and maximum required numbers of spectrum slots, in addition to the holding time. In this work, two new schemes are proposed to realize such services, exploiting a probabilistic spectrum partitioning approach. This new probabilistic spectrum partitioning scheme is devised to enhance the chance of accommodating requests and consequently lower the request blocking probability. It enforces different probabilities on the spectrum partitions contributing to a certain service realization. Taking advantage of this probabilistic spectrum partitioning and a profile-based routing, we introduce two multistage spectrum assignment methods to make a certain lightpath meet the requested service profile constraints, considering the time-weighted average of the assigned spectrum slots. The results indicate that our algorithms can successfully realize the requests with a probability of 0.993 for offered loads below 400 erlang.

## 1 Introduction

New technologies such as 5G and the Internet of things, in addition to various applications such as online gaming and data backup, pose heterogeneous demands on telecommunications networks [1]. To cope with these demands, optical backbone networks need to serve requests with certain characteristics such as flexible bandwidth granularities and significantly high dynamism [2]. Classic fixed-grid optical networks are not able to address the aforementioned demands; thus, elastic optical networks (EONs) have emerged [3]. The spectrum-sliced elastic optical path (SLICE) architecture, built on frequency slots, together with dynamic adjustment of transmission parameters such as the modulation format, enables the network to support various data rates, leading to higher spectrum efficiency [4, 5].

### Problem statement

To exploit the potential of an EON in serving numerous customers and providing them with diverse services, implementing optical transmission as a service (TaaS) is highly effective. Developing heuristic techniques to adopt service models, specified by a required minimum, average, and maximum bandwidth together with a certain holding time, realizes optical TaaS in backbone EONs, aiming at the following purposes:

* Allowing service providers greater latitude to develop new policies for resource management, e.g., one may dedicate a specific amount of bandwidth to delay-sensitive applications, considering the minimum required bandwidth specification, and postpone the transmission of less delay-sensitive application data, according to available network resources.
* Significantly reducing the blocking probability, resulting in more offerable services and a higher profit for upper-tier service providers.

### Related works

To address the previously mentioned objectives, routing and spectrum assignment (RSA) is performed based on this new service model.
RSA is concerned with finding proper spectrum slots, considering the continuity and contiguity constraints [5, 6]. According to these constraints, the selected spectrum slots must be adjacent to each other and aligned along the selected path [7, 8]. In order to update the number of assigned spectrum resources according to the network state and the requested service profile, spectrum reallocation is implemented. Also, the modulation level can be dynamically re-configured to increase the spectrum efficiency, considering the optical reach reduction constraint for higher-order modulation formats [9, 10]. To reduce the complexity, in this work we consider the modulation format to be fixed and we only focus on the number of spectrum slots for developing the ability to accommodate miscellaneous profile services.

Dynamic provisioning and release of lightpaths with heterogeneous bandwidths can result in the emergence of vacant isolated spectrum slots. This phenomenon is called spectrum fragmentation, and it is responsible for a significant amount of blocking [11, 12]. Moreover, in this situation, the requests which demand more spectrum slots are more likely to be blocked, which is referred to as the unfairness problem [13]. To considerably reduce spectrum fragmentation and enhance spectrum utilization, hitless spectrum reallocation is an essential EON feature [14, 15]. Spectrum reallocation introduces some implementation complexity compared to fixed spectrum allocation, so additional technologies and methods are needed [16]. Two key enabling technologies are flexible spectrum selective switches, which allow switching arbitrary spectrum slices, and bandwidth-variable transponders (BVTs), which generate paths with variable bit rates [17]. Hitless spectrum reallocation is well supported in the literature by proposed node structures [18] and practical methods [19, 20]. In [21] a protocol to share and synchronize transmission parameters between a transmitter and a receiver is proposed. Also, hitless BVTs with zero loss of data during reconfiguration are presented and demonstrated experimentally in [22, 23].

One of the most effective methods to mitigate the fragmentation and unfairness effects is spectrum partitioning (SP). It divides the whole spectrum into several partitions, each dedicated to a specific group of requests [24, 25]. Utilizing the SP approach could increase the total blocking if it is not deployed properly because, in some cases, the requests related to fully occupied partitions are blocked while there might exist enough free slots in other partitions [26]. To alleviate this problem, some papers suggest sharing resources among partitions. In [27], the authors proposed accommodating the requests in their dedicated partitions using a first-fit (FF) policy and enabling the sharing of spectrum resources among partitions with a last-fit (LF) policy. Exploiting spectrum partitioning along with the service model considered in this work enables an inherent sharing among partitions, since requests are dynamically accommodated based on their profile and the current network state. Each request is specified by a certain profile, including a bandwidth profile, consisting of the minimum, average, and maximum required numbers of spectrum slots, in addition to the holding time. This service model has been investigated from different aspects: its offline planning is studied in [16], and its dynamic implementation, considering traffic shaping, at the edge of the metro network is introduced in [28].
To the best of our knowledge, we are the first to investigate the dynamic realization of this new service model in the backbone network, considering only the optical layer. The proposed fragmentation avoidance and dynamic RSA methods are tailored to these new services.

### The main contributions

In this work, two new schemes are proposed for a certain service profile realization (SPR). Both of these methods exploit a probabilistic SP, by which the chance of accommodating requests is enhanced and consequently the request blocking probability is lowered. One may summarize the main contributions of this paper as follows:

1. **New probabilistic SP scheme:** We enforce different probabilities on the spectrum partitions contributing to a certain service realization.
2. **Two heuristic SPR methods:** We introduce two multistage spectrum assignment schemes to make a certain lightpath meet the required service profile average by applying a new profile-based routing and considering the time-weighted average. The stages of the introduced SPR methods are determined based on either the decision points method (DPM) or the average tracking method (ATM). The DPM minimizes the needed spectrum allocation stages, while the ATM keeps the time-weighted average close to the requested average throughout the holding time.

Using spectrum partitioning while considering this new service model in the backbone network not only improves control over network resources but also enables inherent sharing among partitions, which leads to a blocking reduction in comparison to current network management techniques designed for accommodating traditional services. The results show that the DPM and the ATM improve the blocking probability by more than a factor of seven and by two orders of magnitude, respectively, at 400 erlang, compared to the available spectrum management techniques [13, 26, 27].

The rest of the paper is structured as follows. The network model is introduced in the next section. In Section 3, routing and the initial stage of spectrum assignment are discussed, and in Section 4, the next stages of spectrum assignment, used for meeting the requested average, are elaborated analytically. Numerical results are investigated in Section 5, the complexity of the algorithms is analyzed in Section 6, and Section 7 concludes the paper.

## 2 Network model and preliminary concepts

The optical fiber spectrum is sliced up into \(FS\) frequency slots with equal bandwidth. Every connection request, \(\mathrm{S}_{i}\), is characterized by its quality of service in terms of required bandwidth and holding time, specified in the corresponding service level agreement. More precisely, we consider \(\mathrm{S}_{i}=\{b_{m},b_{Ave},b_{M},H\}\), where \(b_{m}\), \(b_{Ave}\) and \(b_{M}\) stand for the minimum, average and maximum required numbers of contiguous slots needed for accommodating the request, respectively, and \(H\) stands for the holding time. In the network provisioning, \(k\)-shortest paths between all pairs of nodes are computed and partitions are calculated offline. A centralized software-defined network (SDN) controller is implemented with a global view over the network. The SDN controller is responsible for checking the present network state, by gathering information related to network resources and updating the lists, as well as performing path computations. To attain the required flexibility in spectrum assignment and reallocation, the node architecture of Fig. 1 is exploited. Each node is equipped with BVT, flexible add/drop and flexible optical switch technologies to provide the required flexibility [29].
This work is entirely agnostic to the available techniques and mechanisms employed in the implementation of these technologies. In SP schemes, a bin refers to a set of contiguous spectrum slots, where its _bin-size_ denotes the number of slots forming the bin [30]. Indeed, an SP scheme divides the whole spectrum into several partitions, each dedicated to bins with a specific _bin-size_. The partitions are indexed from 1 to \(N\) and the partition number is denoted by \(p\)num. \(B=\{b_{1},b_{2},\ldots,b_{N}\}\) denotes the set of offered network _bin-sizes_, where \(b_{j}\) is the _bin-size_ of the \(j\)th partition and \(b_{1}<b_{2}<\cdots<b_{N}\). To find the number of bins devoted to each partition, the following procedure is proposed to determine these numbers probabilistically. Considering the arrival of the request \(\mathrm{S}_{i}\), \(\mathrm{S}_{i}=\{b_{m},b_{Ave},b_{M},H\}\), bins from the \(j\)th partition contribute to the accommodation of this request if and only if \(m\leq j\leq M\). The probability that the \(j\)th partition could contribute to the accommodation of the request is called the contribution probability and is denoted by \(P_{\mathrm{c}}(j)\). For uniformly distributed service requests, as derived in the appendix, we get \[P_{\mathrm{c}}(j)=\frac{2\cdot j\cdot(N-j+1)}{N\cdot(N+1)} \tag{1}\] from (18). In this work, we split the slots among partitions according to their _bin-sizes_ and contribution probabilities. The numbers of bins and slots, denoted by \(Nb_{j}\) and \(FS_{j}\), respectively, dedicated to the \(j\)th partition are given by \[Nb_{j}=\left\lfloor FS\cdot\frac{P_{\mathrm{c}}(j)}{\sum_{i=1}^{N}b_{i}\cdot P_{\mathrm{c}}(i)}\right\rfloor \tag{2}\] and \[FS_{j}=Nb_{j}\cdot b_{j}. \tag{3}\] Bins of each partition are indexed from 1 to \(Nb_{j}\) and denoted by \(b\)num. Using (2) and (3), there might exist a number of slots that are not included in any partition. These slots are assigned arbitrarily to a few partitions, in order that \[\sum_{j=1}^{N}FS_{j}=FS. \tag{4}\] Our proposed method, which is an updated form of the conventional SP [13, 26], is named spectrum interval partitioning (SIP) because it considers the contribution probability of different partitions according to their position on the spectrum. We use the SIP since it not only reduces fragmentation and unfairness but also enables us to simply keep track of the number of available resources to evaluate the network accommodation capability. As an example, consider the simple network of Fig. 2, where 3, 2, 2, 4, and 3 contiguous slots have been assigned to requests \(S_{1}(A\to B)\), \(S_{2}(A\to B)\), \(S_{3}(A\to D)\), \(S_{4}(B\to D)\), and \(S_{5}(C\to D)\), respectively. Now assume that \(S_{6}=\{2,4,6,90\}\) arrives at node C; although there are 2 free slots on \(C\to D\), this request gets blocked due to the contiguity constraint. By utilizing the SIP scheme as seen in Fig. 3, the whole spectrum is carved up into three partitions with \(b_{1}=2,b_{2}=3\) and \(b_{3}=4\). As shown, the SIP method makes the realization of \(S_{6}\) feasible, using the same spectrum as Fig. 2. The parameters used in this paper are summarized in Table 1.
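For concreteness, a short sketch (ours, not the authors' code) of the SIP sizing step in Eqs. (1)-(3) is given below; the leftover slots of Eq. (4) are only reported here rather than redistributed:

```python
# Sketch of SIP slot partitioning: contribution probabilities P_c(j), then
# the number of bins Nb_j and slots FS_j per partition, per Eqs. (1)-(3).
import numpy as np

def sip_partition(FS, bin_sizes):
    """FS: total slots; bin_sizes: offered bin-sizes b_1 < ... < b_N."""
    b = np.asarray(bin_sizes, dtype=float)
    N = len(b)
    j = np.arange(1, N + 1)
    P_c = 2.0 * j * (N - j + 1) / (N * (N + 1))        # Eq. (1)
    Nb = np.floor(FS * P_c / np.sum(b * P_c)).astype(int)  # Eq. (2)
    FS_j = Nb * bin_sizes                               # Eq. (3)
    leftover = FS - int(FS_j.sum())  # slots to spread so that Eq. (4) holds
    return P_c, Nb, FS_j, leftover

P_c, Nb, FS_j, leftover = sip_partition(360, np.arange(1, 11))
print(Nb, FS_j, leftover)
```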
\begin{table} \begin{tabular}{|c|l|} \hline Notation & Meaning \\ \hline \(FS\) & Total number of optical fiber frequency slots \\ \(\mathrm{S}_{i}\) & The \(i\)th service request \\ \(b_{m},b_{Ave},b_{M}\) & Minimum, average, and maximum required _bin-sizes_ \\ \(N\) & Number of partitions \\ \(p\)num & Partition number \\ \(b\)num & Bin number \\ \(Nb_{j}\) & Number of bins dedicated to the \(j\)th partition \\ \(FS_{j}\) & Number of slots dedicated to the \(j\)th partition \\ \hline \end{tabular} \end{table} Table 1: The definition of the exploited parameters.

Figure 1: The exploited flexible node structure using BVT, flexible add/drop and flexible optical switch technologies.

Figure 2: The five connection requests, \(S_{1}(A\to B)\), \(S_{2}(A\to B)\), \(S_{3}(A\to D)\), \(S_{4}(B\to D)\), and \(S_{5}(C\to D)\), are determined by north-west red lines, blue dots, green crosshatches, black north-east lines, and purple horizontal lines, respectively. A new connection request, \(S_{6}=\{2,3,4,90\}\), arrives at node C and gets blocked since the two remaining free slots on \(C\to D\) do not fulfill the contiguity constraint.

Figure 3: The whole spectrum is divided into three partitions, utilizing the SIP scheme. The first partition has two bins, each composed of two slots; the second and the third partitions have one bin, consisting of three and four slots, respectively. The blocked connection request in Fig. 2, \(S_{6}\), determined by the yellow color, is accommodated with two spectrum slots.

## 3 Profile-based RSA

For the sake of simplicity and performance improvement, the \(k\) shortest paths between all possible sources and destinations in the network are computed offline and are indexed from 1 to \(k\). When a new request arrives, the pre-computed paths are used as a database. This paper suggests two methods for path selection: least-loaded routing (LLR) and profile-based routing (PBR). The LLR chooses the path with the largest number of unoccupied spectrum slots, while the PBR only considers the partitions which can contribute to the accommodation of the request, choosing the path with the largest number of free bins in the \([b_{m},b_{M}]\) interval. If two or more paths have the same value, using each of the mentioned methods, the least-indexed path is chosen. Both proposed approaches are simple and consider the presently available network resources, which increases the probability of accommodating new requests. The bin used to accommodate a specific request is specified by \((p\text{num}_{i},b\text{num}_{i})\), in which \(p\text{num}_{i}\) and \(b\text{num}_{i}\) are, respectively, the partition number and bin number used to accommodate the \(i\)th request. All occupied bins related to a specific \((route,path)\) couple are indicated by an occupied bin vector, \(\text{OBV}_{(route,path)}=\{(p\text{num}_{1},b\text{num}_{1}),...,(p\text{num}_{i},b\text{num}_{i})\}\). The unoccupied bin vector, \(\text{UBV}_{(route,path)}\), can be computed by complementing the \(\text{OBV}_{(route,path)}\) set. For example, assume that the fiber spectrum on all links is divided into three partitions with only one bin each; if \(\text{OBV}_{(1,1)}=\{(1,1),(3,1)\}\), then \(\text{UBV}_{(1,1)}=\{(2,1)\}\). The UBV is the set of all \(\text{UBV}_{(route,path)}\); assuming \(r\) possible _routes_ in the network, \(\text{UBV}=\{\text{UBV}_{(1,1)},\text{UBV}_{(1,2)},...,\text{UBV}_{(1,k)},...,\text{UBV}_{(r,k)}\}\). The pseudocode processes of the routing methods are indicated in Algorithms 1 and 2.
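Since the pseudocode of Algorithms 1 and 2 is unreadable in this version (see the note below), a minimal sketch of the two selection rules under our reading of the text is given here (ours; the data structure carrying each path's free bins as (\(p\)num, _bin-size_) pairs is an assumption):

```python
# Hedged sketch of the LLR and PBR path-selection rules (Algorithms 1-2).
def llr(paths):
    """Least-loaded routing: most free slots (sum of free bin-sizes);
    ties are broken by the lowest path index."""
    return max(range(len(paths)),
               key=lambda i: (sum(b for _, b in paths[i]["free_bins"]), -i))

def pbr(paths, b_m, b_M):
    """Profile-based routing: most free bins with bin-size in [b_m, b_M];
    ties are broken by the lowest path index."""
    def usable(i):
        return sum(1 for _, b in paths[i]["free_bins"] if b_m <= b <= b_M)
    return max(range(len(paths)), key=lambda i: (usable(i), -i))

paths = [{"free_bins": [(1, 2), (2, 3)]},
         {"free_bins": [(3, 4), (3, 4), (1, 2)]}]
print(llr(paths), pbr(paths, b_m=3, b_M=4))  # -> 1 1
```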
At the initial stage of the spectrum assignment scheme, each request is assigned the maximum possible number of slots according to its service profile and available network resources. As illustrated in Algorithm 3, the partitions in the interval \([m,M]\), on the selected path, are investigated in descending order, and if there exists some free bin, it is assigned to the request using FF; otherwise the request is blocked.

[The pseudocode of Algorithms 1-3 (the LLR and PBR routing procedures and the initial spectrum assignment) is garbled in this version. Their inputs include the network topology, the source and destination nodes of the request (_route_), the partition numbers (\(p\)num) and their _bin-sizes_ (\(b_{p\text{num}}\)), the unoccupied bin vector (UBV), and the \(k\)-shortest precalculated paths; the LLR additionally tracks the number of free slots on each path, \(sum_{path}\), with \(sum=\{sum_{1},sum_{2},...,sum_{k}\}\).]

## 4 Service profile realization

### Decision points method

The DPM is designed to keep the number of bin reallocations small and to minimize the possible reallocation delays imposed in the implementation of some reallocation schemes. The DPM is employed to determine when, and to which bin, the reallocation needs to be applied. At the provisioning step, some critical time points, referred to as decision points (_DP_), are calculated. _DP_s are used to ascertain which partitions could realize the requested average at different time points. As shown in Fig. 4, every \(t_{0}\) seconds, based on the relative time position to the _DP_s, the corresponding partitions are investigated to see if there exists any free bin to accommodate the request. In the following, the computations regarding the _DP_s are illustrated.

Let \(d\) and \(b_{d}\) refer to the desired partition, for the realization of the requested service profile, and its _bin-size_. It is desired for the time-weighted average of the assigned _bin-sizes_ to be greater than or equal to \(b_{Ave}\). This can be represented by \[b_{AS}\cdot t+b_{d}\cdot(H-t)\geq b_{Ave}\cdot H, \tag{5}\] rephrased as \[(b_{d}-b_{AS})\cdot t\leq(b_{d}-b_{Ave})\cdot H. \tag{6}\] Based on \(b_{AS}\), requests are divided into three groups: 1. \(b_{AS}<b_{Ave}\), 2. \(b_{AS}=b_{Ave}\), 3. \(b_{AS}>b_{Ave}\).

```
Algorithm 4: Decision points method (DPM)
Inputs: (i) requested service profile S_i = [b_m, b_Ave, b_M, H];
        (ii) partition numbers (pnum) and bin numbers (bnum);
        (iii) selected path of the route (Spath);
        (iv) unoccupied bin vector (UBV); (v) check time (t_0)
Parameters: number of free bins per partition F_pnum, F = {F_1, ..., F_N}
Output: updated UBV

Procedure Service profile realization (SPR)
  if b_AS = b_Ave then
    set flag = 1
  else if b_AS < b_Ave and flag = 0 then
    for d = Ave+1 : M do
      calculate DP_d using (9)
    end for
    if t = n*t_0 and t <= DP_M then
      for d = Ave+1 : M do
        if t <= DP_d then
          for all UBV_(route,Spath) members do
            if d <= pnum <= M then F_pnum = F_pnum + 1 end if
          end for
        end if
      end for
    end if
    if maximum(F) != 0 then
      return a bin of the partition with maximum F, using FF; set flag = 1
    end if
  else
    calculate DP_m using (9)
    if t = n*t_0 and DP_m <= t <= H then
      for all UBV_(route,Spath) members do
        if m <= pnum <= Ave then F_pnum = F_pnum + 1 end if
      end for
      if maximum(F) != 0 then
        return a bin of the partition with maximum F, using FF
      end if
    end if
  end if
  update UBV
```

The requests of the second group remain in the same bin throughout the holding time. For the first group, we investigate the partition numbers larger than \(Ave\), \[Ave+1\leq d\leq M. \tag{7}\]
Figure 4: If a request belongs to the \(b_{AS}<b_{Ave}\) group and \(t\leq DP_{M}\), every \(t_{0}\) seconds all the partitions which can be used in realizing the requested average, shown on the line between _DP_s, are investigated.

Given that \(b_{d}>b_{AS}\), from (6) we get \[t\leq\frac{b_{d}-b_{Ave}}{b_{d}-b_{AS}}\cdot H. \tag{8}\] The _DP_ related to partition \(d\) is denoted by \(DP_{d}\) and can be found using \[DP_{d}=\frac{b_{d}-b_{Ave}}{b_{d}-b_{AS}}\cdot H. \tag{9}\] To realize the requested average, according to (8) and (9), the request should be shifted to a partition whose _bin-size_ is greater than or equal to \(b_{d}\) earlier than \(DP_{d}\) seconds, as shown in Fig. 4. If several partitions satisfy this condition, the bin dedicated to the partition with the greatest number of free bins is selected, using the FF policy. In this method, we check the possibility of reallocating the bin every \(t_{0}\) seconds. After moving the request to a new bin, we set the flag in order not to change the bin anymore. For the third group, (6) is solved for \(m\) as the desired partition, i.e., \(b_{d}=b_{m}\). According to the fact that \(b_{d}<b_{AS}\), we have \[t\geq\frac{b_{d}-b_{Ave}}{b_{d}-b_{AS}}\cdot H, \tag{10}\] and \(DP_{d}\) is calculated using (9). In this case \(DP_{m}\) is used as a single decision point. According to (9) and (10), we can make sure that after \(DP_{m}\) seconds the average is met no matter to which partition in the \([m,Ave]\) interval the request is shifted. After \(DP_{m}\), every \(t_{0}\) seconds, the partitions are checked and the request is shifted to an available bin of the partition in the \([m,Ave]\) interval with the greatest number of free bins, using FF. The pseudocode process is indicated in Algorithm 4.
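A short numeric sketch (ours) of the decision-point computation in Eqs. (8)-(10) may make the two reallocation regimes concrete; the parameter values are illustrative:

```python
# Decision points per Eq. (9): the latest (group b_AS < b_Ave) or earliest
# (group b_AS > b_Ave) time at which moving to a bin of size b_d still
# satisfies the time-weighted average constraint of Eq. (5).
def decision_point(b_d, b_Ave, b_AS, H):
    """DP_d = (b_d - b_Ave) / (b_d - b_AS) * H, Eq. (9)."""
    return (b_d - b_Ave) / (b_d - b_AS) * H

# Group b_AS < b_Ave: moving up to b_d is only useful while t <= DP_d (Eq. 8).
b_AS, b_Ave, H = 2, 4, 90
for b_d in (5, 6, 7):
    print(b_d, decision_point(b_d, b_Ave, b_AS, H))  # 30.0, 45.0, 54.0

# Group b_AS > b_Ave: the request may drop to b_m only after DP_m (Eq. 10).
print(decision_point(b_d=2, b_Ave=4, b_AS=6, H=90))  # DP_m = 45.0
```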
### Average tracking method

One of the major advantages of the DPM is that bin reallocation is implemented at most once for each request, but this advantage comes at the cost of losing the ability to capture some of the network's dynamic behavior. The time-weighted average of the sizes of the assigned bins, up to the second \(t\), is named the assigned average and is denoted by \(AV(t)\). Here we propose a new method to track the requested average throughout the holding time. For each request, we want \(AV(t)\) to be close to \(b_{Ave}\). After the provisioning step, the assigned average is equal to the size of the assigned bin, \(AV(0)=b_{AS}\). Every \(t_{0}\) seconds, requests are sorted, in ascending order, based on _departure time_, defined as \[\textit{departure time}=\textit{arrival time}+H. \tag{11}\] In fact, we prioritize the requested services with less time remaining for realization. Afterwards, based on \(AV(t)\) the requests are divided into two groups: 1. \(AV(t)>b_{Ave}\), 2. \(AV(t)\leq b_{Ave}\).

The requests of the first group are shifted to a partition with a greater number of free bins than their current partition, to make room for the satisfaction of other requests. The assigned average after \(t_{0}\) seconds, \(AV(t+t_{0})\), is computed as \[AV(t+t_{0})=\frac{AV(t)\cdot t+b_{d}\cdot t_{0}}{t+t_{0}}, \tag{12}\] where \(b_{d}\) is the size of the desired bin. We want the assigned average, until the next check time, to be greater than or equal to \(b_{Ave}\), \[\frac{AV(t)\cdot t+b_{d}\cdot t_{0}}{t+t_{0}}\geq b_{Ave}. \tag{13}\] From (13) we get \[b_{d}\geq\frac{b_{Ave}\cdot(t+t_{0})-AV(t)\cdot t}{t_{0}}, \tag{14}\] and the minimum of \(b_{d}\), \(b_{d}(min)\), is calculated as \[b_{d}(min)=\frac{b_{Ave}\cdot(t+t_{0})-AV(t)\cdot t}{t_{0}}. \tag{15}\] So, the requests of the first group are shifted to a free bin whose size is in the \([b_{d}(min),b_{M}]\) interval and which belongs to the partition with the maximum number of free bins. Adding a constant, \(C\), as a margin when requests are grouped, as \[AV(t)>b_{Ave}+C,\] can reduce the frequency of bin reallocation. Let \(AS(t)\) and \(b_{AS}(t)\) refer to the partition number and the size of the bin assigned to the request at the moment \(t\). The requests of the second group are shifted to a free bin with the maximum possible _bin-size_ in the \([b_{AS}(t),b_{M}]\) interval.

```
Algorithm 5: Average tracking method (ATM)
Inputs: (i) requested service profile S_i = [b_m, b_Ave, b_M, H];
        (ii) selected path of the route (Spath);
        (iii) partition numbers (pnum) and bin numbers (bnum);
        (iv) unoccupied bin vector (UBV); (v) check time (t_0);
        (vi) assigned average until t, AV(t);
        (vii) assigned partition number at t, AS(t)
Parameters: number of free bins per partition F_pnum, F = {F_1, ..., F_N}
Output: updated UBV

Procedure Service profile realization (SPR)
  for every t_0 seconds do
    sort the requests using (11)
    if AV(t) >= b_Ave then
      calculate b_d(min) using (15)
      for all UBV_(route,Spath) members do
        if b_d(min) <= b_pnum <= b_M then F_pnum = F_pnum + 1 end if
      end for
      if maximum(F) != 0 then
        return a bin of the partition with maximum F
      end if
    else
      for pnum = M : AS(t)+1 do
        if (pnum, bnum) in UBV_(route,Spath) then
          assign the request, using FF
        end if
      end for
    end if
    update UBV
  end for
```

The pseudocode process and flowchart for this approach are given in Algorithm 5 and Fig. 5.
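The ATM tracking step can be summarized in a short sketch (our reading, not the authors' code), combining the average update of Eq. (12) with the minimum bin-size of Eq. (15):

```python
# ATM core quantities: updated assigned average and the smallest bin-size
# that keeps the average at or above b_Ave until the next check.
def next_average(AV_t, t, b_d, t0):
    """Eq. (12): time-weighted average after t0 more seconds in a b_d bin."""
    return (AV_t * t + b_d * t0) / (t + t0)

def b_d_min(AV_t, t, b_Ave, t0):
    """Eq. (15): minimum bin-size keeping the average >= b_Ave."""
    return (b_Ave * (t + t0) - AV_t * t) / t0

AV_t, t, t0, b_Ave = 3.0, 30.0, 10.0, 4.0
bd = b_d_min(AV_t, t, b_Ave, t0)
print(bd, next_average(AV_t, t, bd, t0))  # 7.0, and the average lands on 4.0
```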
## 5 Simulation results and discussions

To indicate the efficiency of our proposed approaches through numerical simulations and inspection of the obtained results, we carried out intensive simulations in an object-oriented modular discrete-event simulator, called OMNeT++ [31]. The Deutsche Telekom network topology [32], shown in Fig. 6, is employed, comprising 14 nodes and 23 links. The total available optical fiber bandwidth on each link is assumed to be 4.5 THz, which is sliced up into 360 spectrum slots with a bandwidth of 12.5 GHz. Transponders utilize dual-polarization quadrature phase shift keying (DP-QPSK) modulation. The number of paths in the \(k\)-shortest path algorithm, i.e., parameter \(k\), is set to 4. One hundred thousand service requests are generated in each trial. The results either meet the desired confidence interval at a 90% confidence level or the maximum number of independent trials (ten trials) is reached. The reported results are the average of these trial results.

The services are requested according to the network offered bin-sizes, \(B=\{1,2,3,4,5,6,7,8,9,10\}\). The whole fiber spectrum is divided into ten partitions, using the partitioning methods. For our simulation, we assume that services are formed by the generation of three iid random variables, in the interval \([1,10]\), with uniform distribution, which are sorted in ascending order to represent \(b_{m}\), \(b_{Ave}\) and \(b_{M}\), respectively. The holding time of service requests follows an exponential distribution with a mean of \(1/\mu\). The service requests arrive at a Poisson rate of \(\lambda\) and are uniformly distributed among the network nodes. The erlang is used as a metric to demonstrate the traffic intensity and is equal to \(\lambda/\mu\).

Figure 5: The flow chart of the SIP-PBR-ATM algorithm. Figure 6: Deutsche Telekom network topology, consisting of 14 nodes and 23 links [32]. Distance units are in kilometers. Figure 7: The blocking probability performance of our two SPR methods, utilizing the SIP and the conventional SP schemes. The set of network offered bin-sizes is considered to be \(B=\{1,2,3,4,5,6,7,8,9,10\}\). The routing method is fixed to the PBR in this simulation.
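The request-generation model just described can be put into a short sketch (ours, not the simulator code; treating the bin-size draws as integers on \(\{1,\ldots,10\}\) is our assumption, consistent with the offered bin-sizes):

```python
# Workload generator: iid uniform bin-size draws sorted to (b_m, b_Ave, b_M),
# Poisson arrivals at rate lam, exponential holding times with mean 1/mu.
import numpy as np

rng = np.random.default_rng(0)

def generate_requests(n, lam, mu):
    arrivals = np.cumsum(rng.exponential(1.0 / lam, n))  # Poisson process
    H = rng.exponential(1.0 / mu, n)                     # holding times
    profile = np.sort(rng.integers(1, 11, size=(n, 3)), axis=1)
    return [{"b_m": int(p[0]), "b_Ave": int(p[1]), "b_M": int(p[2]),
             "arrival": float(a), "H": float(h)}
            for p, a, h in zip(profile, arrivals, H)]

reqs = generate_requests(n=5, lam=4.0, mu=0.01)  # offered load = lam/mu erlang
print(reqs[0])
```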
In order to evaluate our partitioning scheme, we have simulated our SPR methods with the SIP and the conventional SP, considering the PBR as the routing algorithm, as shown in Fig. 7. Expectedly, the SIP better suits our approach since it takes the service characteristics into account while apportioning the spectrum. Fig. 8 demonstrates the functionality of our presented routing methods. Both of our methods, the ATM and the DPM, perform better when they are implemented with the PBR, because the LLR considers the number of free slots on the whole spectrum, while the PBR only examines the free bins that can take part in the accommodation of the request.

Having established that our SPR approaches show their best performance when implemented using the SIP and the PBR, in Fig. 9 we have depicted our SPR methods along with the conventional partitioning methods, here referred to as CPM, presented in [13, 26], and one of the best partitioning methods in the literature, which benefits from resource management techniques and enables sharing among partitions by the use of a first-last fit reconfiguration mechanism for spectrum assignment, FASA-SP-FLF-RM, introduced in [27]. As a result of the inherent sharing among partitions and the postponing of data transmission related to less delay-sensitive applications, our proposed methods exhibit a significant improvement compared to the CPM and the FASA-SP-FLF-RM methods. It is worth mentioning that the ATM shows better performance than the DPM, mainly because it can better utilize spectrum resources according to the current network state, benefiting from the resource reallocation ability as much as needed.

The spectrum utilization ratio (SUR) is defined as the ratio of the number of utilized spectrum slots to the total number of spectrum slots in the network. Fig. 10 shows the SUR performance of the above-mentioned methods; the SUR is higher for our proposed methods, due to the lower blocking probability and the assignment of more spectrum slots to the requests when the network is less loaded.

Figure 8: The blocking probability performance of our SPR methods utilizing the LLR and the PBR methods with \(k\)=4. The partitioning method is fixed to the SIP in this simulation. Figure 9: The blocking probability performance of the SIP-PBR-ATM, the SIP-PBR-DPM, the FASA-SP-FLF-RM, and the CPM, considering 10 types of requests for the FASA-SP-FLF-RM and the CPM. Contrary to the CPM, the FASA-SP-FLF-RM enables sharing among partitions. Figure 10: The SUR performance of the SIP-PBR-ATM, the SIP-PBR-DPM, the FASA-SP-FLF-RM and the CPM. The whole fiber spectrum is divided into ten partitions, using different partitioning methods.

In Figs. 11 and 12, the difference between the requested maximum and the average value, as well as the difference between the minimum and the average value, is limited to a specific number, called the variation factor (_VF_). From these figures, we can infer that confining the offered services together with expanding the _VF_ could reduce the blocking probability, as more partitions could contribute to the accommodation of requests.

Figure 11: The blocking probability performance of the SIP-PBR-DPM implemented with different _VF_s. The difference between the maximum and the average value, as well as the difference between the minimum and the average value, for requested service profiles, is limited to a specific _VF_. Figure 12: The blocking probability performance of the SIP-PBR-ATM scheme implemented with different _VF_s.

As mentioned before, one of our goals is meeting the requested average over the duration of the holding time. The requests which are accommodated and meet their requested average are referred to as fully realized requests. The realization factor (_RF_) represents the number of fully realized requests divided by the total number of requests. Fig. 13 indicates the _RF_ of requests for the SIP-PBR-ATM and the SIP-PBR-DPM.

## 6 Complexity analysis

The complexity of the previously mentioned algorithms is analyzed by dividing them into two main parts, routing and spectrum assignment. The \(k\)-shortest path computation is a main part of all mentioned routing algorithms; its complexity is \(\mathcal{O}(K\cdot V\cdot(L+V\log V))\), where \(V\) and \(L\) are the numbers of nodes and links in the network, respectively [33]. The complexity of the LLR is equal to that of the PBR, so we only consider the PBR while analyzing the complexity of our algorithms. Algorithm 3 has been considered, collectively, along with Algorithms 4 and 5 for analyzing the spectrum assignment part of our algorithms. Also, to analyze the spectrum assignment part of the FASA-SP-FLF-RM algorithm [27], the first-last-fit algorithm is considered together with the reconfiguration mechanism. The complexity analysis of all mentioned algorithms is given in Table 2, wherein \(U\), \(Z\), and \(NR\) stand for the number of bins on each path of the network routes, the maximum number of bins assigned to any partition in the network, and the maximum number of requests that could be accommodated, respectively.

## 7 Conclusion

We have investigated the realization of a new approach for offering miscellaneous profile services in EONs. The newly designed algorithms satisfy the requested services by exploiting a probabilistic partitioning method. More precisely, our suggested partitioning method considers partition contribution probabilities and also benefits from inherent sharing among partitions when combined with our SPR methods, leading to more fairness and blocking reduction.
Figure 13: The realization factor versus offered network load for the SIP-PBR-ATM and the SIP-PBR-DPM.

We have suggested two different routing methods and showed that considering the profile of the requested service in the routing step can result in a blocking probability reduction. We also designed two methods to realize the requested profile, the DPM and the ATM. The DPM mainly focuses on minimizing the needed spectrum reallocations. On the other hand, the ATM aims at keeping the assigned average close to the requested average throughout the holding time, leading to an improvement in the experienced quality of service. Although our methods demand higher implementation complexity compared to network management techniques designed for accommodating traditional services, they provide more freedom to postpone the transmission of data from less delay-sensitive applications.

## Appendix

In this appendix, we aim at computing \(P_{c}(j)\), assuming the requests are formed by the use of the random variables _min_ and _Max_, which are the minimum and maximum partition numbers that could be used for accommodating a request. Using traffic forecasts and historical trends, we assume that the network traffic distribution is known. The joint probability density function of _min_ and _Max_, \(f_{min,Max}(x,y)\), is easily calculated using our knowledge of the traffic distribution. More precisely, \(P_{c}(j)\) can be determined as \[P_{c}(j)=P(min\leq j\leq Max)=\sum_{y=j}^{N}\sum_{x=1}^{j}f_{min,Max}(x,y). \tag{16}\] Since \(b_{x}\leq b_{y}\), the number of our sample space members, \(\{(b_{1},b_{1}),(b_{1},b_{2}),\ldots,(b_{1},b_{N}),(b_{2},b_{2}),(b_{2},b_{3}),\ldots,(b_{N},b_{N})\}\), can be derived as \[\sum_{n=1}^{N}n=\frac{N\cdot(N+1)}{2}. \tag{17}\] Finally, assuming service requests are distributed uniformly, i.e., \(f_{min,Max}(x,y)\) has a uniform distribution with respect to the _min_ and _Max_ of the requested services, one can get \[\begin{split}& P_{c}(j)=\sum_{y=j}^{N}\sum_{x=1}^{j}f_{min,Max}(x,y)=\\ &\sum_{y=j}^{N}\sum_{x=1}^{j}\frac{1}{\sum_{n=1}^{N}n}=\sum_{y=j}^{N}\sum_{x=1}^{j}\frac{2}{N\cdot(N+1)}=\frac{2\cdot j\cdot(N-j+1)}{N\cdot(N+1)}.\end{split} \tag{18}\]
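As a quick sanity check of the appendix result, the closed form (18) (equivalently Eq. (1)) can be verified by direct enumeration of the uniform sample space; a minimal sketch (ours):

```python
# Enumerate {(min, Max) : 1 <= min <= Max <= N} and compare the empirical
# contribution probability with P_c(j) = 2*j*(N-j+1) / (N*(N+1)).
from itertools import combinations_with_replacement

N = 10
pairs = list(combinations_with_replacement(range(1, N + 1), 2))  # (min, Max)
assert len(pairs) == N * (N + 1) // 2                            # Eq. (17)
for j in range(1, N + 1):
    empirical = sum(1 for lo, hi in pairs if lo <= j <= hi) / len(pairs)
    closed = 2 * j * (N - j + 1) / (N * (N + 1))
    assert abs(empirical - closed) < 1e-12                       # Eq. (18)
print("Eq. (18) verified for N =", N)
```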
2302.05469
A Comparison of Void-Finding Algorithms using Crossing Numbers
We study how well void-finding algorithms identify cosmic void regions and whether we can quantitatively and qualitatively compare the voids they find with dynamical information from the underlying matter distribution. Using the ORIGAMI algorithm to determine the number of dimensions along which dark matter particles have undergone shell-crossing (crossing number) in N-body simulations from the AbacusSummit simulation suite, we identify dark matter particles that have undergone no shell crossing as belonging to voids. We then find voids in the corresponding halo distribution using two different void-finding algorithms: VoidFinder and Voronoi Voids (V2), a ZOBOV-based algorithm. The resulting void catalogs are compared to the distribution of dark matter particles to examine how their crossing numbers depend on void proximity. While both algorithms' voids have a similar distribution of crossing numbers near their centers, we find that beyond 0.25 times the effective void radius, voids found by VoidFinder exhibit a stronger preference for particles with low crossing numbers than those found by V2. We examine two possible methods of mitigating this difference in efficacy between the algorithms. While we are able to partially mitigate the ineffectiveness of V2 by using the distance from the void edge as a measure of centrality, we conclude that VoidFinder more reliably identifies dynamically distinct regions of low crossing number.
Dahlia Veyrat, Kelly A. Douglass, Segev BenZvi
2023-02-10T19:02:04Z
http://arxiv.org/abs/2302.05469v3
# Void-Finding Systematics using Crossing Numbers

###### Abstract

We study how well void-finding algorithms identify cosmic void regions and whether we can quantitatively and qualitatively describe their biases by comparing the voids they find with dynamical information from the underlying matter distribution. Using the ORIGAMI algorithm to determine the number of dimensions along which dark matter particles have undergone shell-crossing (crossing number) in \(N\)-body simulations from the AbacusSummit simulation suite, we identify dark matter particles which have undergone no shell crossing as belonging to voids. We then find voids in the corresponding halo distribution using two different void-finding algorithms: VoidFinder and V\({}^{2}\), a ZOBOV-based algorithm. The resulting void catalogs are compared to the distribution of dark matter particles to examine how their crossing numbers depend on void proximity. While both algorithms' voids have a similar distribution of crossing numbers near their centers, we find that beyond 0.25 times the effective void radius, voids found by VoidFinder exhibit a stronger preference for particles with low crossing numbers than those found by V\({}^{2}\). We examine two possible methods of mitigating this difference in efficacy between the algorithms. While we are able to partially mitigate the ineffectiveness of V\({}^{2}\) by using distance from the void edge as a measure of centrality, we conclude that VoidFinder more reliably identifies dynamically-distinct regions of low crossing number.

Dahlia Veyrat, Kelly A. Douglass, and Segev BenZvi

## 1 Introduction

On cosmic scales, the structure of matter in the observable universe exhibits a complex, web-like distribution (Bond et al., 1996), with distinct filaments stretching between dense clusters of galaxies. In the space between these superstructures, galaxy redshift surveys have found vast, relatively empty regions containing very few galaxies (Joeveer et al., 1978; Gregory & Thompson, 1978; Kirshner et al., 1981). These cosmic voids, which occupy most of the volume of the universe (de Lapparent et al., 1986; Geller & Huchra, 1989), are the result of the gravitational instability in primordial underdense regions, similar to the formation of clusters from the gravitational collapse of primordial overdensities (Zeldovich, 1970; van de Weygaert & Platen, 2011).

The emptiness of voids provides a unique environment of gravitational evolution for cosmological and astrophysical studies. Cosmologically, the lack of large-scale gravitational collapse causes dynamics within voids to remain in the linear regime for a relatively longer time (Goldberg & Vogeley, 2004). Their uniqueness as cosmic structures has made voids well-suited to studies of the Alcock-Paczynski effect (e.g. Lavaux & Wandelt, 2012; Sutter et al., 2012, 2014; Hamaus et al., 2016; Mao et al., 2017; Nadathur et al., 2019), dark energy (e.g. Pisani et al., 2015; Verza et al., 2019), baryon acoustic oscillations (e.g. Nadathur et al., 2019; Zhao et al., 2020, 2021), and weak lensing (e.g. Melchior et al., 2014; Chantavat et al., 2017). Additionally, the uniqueness of voids as an intergalactic environment results in measurably different evolution and properties of galaxies within them (e.g. Hoyle et al., 2005; Rojas et al., 2005; Patiri et al., 2006; Douglass et al., 2019; Habouzit et al., 2020).

A number of different types of algorithms have been used to detect voids in the cosmic web.
The VoidFinder algorithm (El-Ad & Piran, 1997; Hoyle & Vogeley, 2002) finds relatively empty spheres in the distribution of galaxies and combines them into individual voids. Another common strategy is to link low-density regions together using a watershed algorithm. Several implementations using a Voronoi tessellation to approximate local density exist (e.g. Neyrinck, 2008; Sutter et al., 2015; Nadathur et al., 2019). While the aforementioned algorithms find voids geometrically, voids are expected to be dynamically unique structures (Sheth and van de Weygaert, 2004). With simulations, it is possible to define voids relative to the dynamics of the matter within them. Cosmological simulations have been used to predict properties of voids (Ricciardelli et al., 2014; Hamaus et al., 2014) and forecast the cosmological constraining power of voids (Pisani et al., 2015). One method for identifying dynamical voids in simulations is the computation of the number of dimensions along which matter has gravitationally collapsed (Falck et al., 2012). Voids are defined as regions undergoing no shell-crossing.

In this work, we investigate the accuracy of void-finding algorithms in detecting dynamical void regions in the matter distribution from dark matter tracer information. We use an \(N\)-body simulation from the AbacusSummit simulation suite (Maksimova et al., 2021), on which the ORIGAMI algorithm (Falck et al., 2012) is run to determine the particles' crossing numbers -- the number of dimensions along which they have undergone shell-crossing (that is to say, the number of dimensions of gravitational collapse). Then, using the crossing numbers to classify particles as belonging to voids, walls, filaments, or clusters, we compare this classification with the voids identified by two different void-finding algorithms in the Void Analysis Software Toolkit (VAST; Douglass et al., 2022): VoidFinder and V\({}^{2}\). This allows us to quantify the relative accuracy of these algorithms in detecting dynamically distinct regions dominated by low crossing numbers.

The paper is organized as follows. In Section 2, we describe the theory of void evolution relevant to our study, and in Section 3, we describe the AbacusSummit simulation suite and the properties of the simulation used in the crossing number analysis. Sections 4 and 5 present the algorithms used to compute crossing numbers of dark matter particles and to find voids in the distribution of halos, respectively. Results are discussed in Section 6, examining the relationships between void regions defined by void-finding algorithms and the void particles identified by ORIGAMI. In Section 7, we explore ways to mitigate the relatively poor classification of the watershed algorithm, V\({}^{2}\).

## 2 Excursion-Set Formalism

The excursion-set formalism, first proposed by Press and Schechter (1974), provides an analytical model of the gravitational collapse and virialization of dark matter halos. Bond et al. (1991) developed the excursion model to describe the halo mass function, but the model was not applied to a theoretical description of voids until the pioneering work of Sheth and van de Weygaert (2004). The excursion-set model of voids describes the evolution of an initial underdensity in the matter distribution. Sheth and van de Weygaert (2004) find that, as matter is attracted to the overdense surroundings, the typical shape of underdensities becomes more spherical and can be effectively described by a series of spherical shells.
More central shells experience a stronger "repulsion" from void centers, leading to a distinct "wall" feature -- a build-up of expanding matter near the void edge as shells approach and cross each other. We expect, then, to find a strong but narrow preference for high dark matter particle crossing numbers (the number of dimensions along which they have undergone shell-crossing) near void edges, along with the preference for low crossing numbers in voids' more central regions.

## 3 Simulations

The AbacusSummit simulation suite (Maksimova et al., 2021) is a set of over 100 periodic \(N\)-body simulations spanning several box sizes up to \((7.5~{}\mathrm{Gpc}/h)^{3}\), particle mass resolutions down to \(\sim 3\times 10^{8}M_{\odot}\), and implementing several cosmologies. AbacusSummit was produced with the Abacus \(N\)-body algorithm (Garrison et al., 2021) at the Oak Ridge Leadership Computing Facility's Summit supercomputer and is the largest high-accuracy \(N\)-body data set produced to date. We use an AbacusSummit Hugebase simulation, which evolves \(2304^{3}\) dark matter particles of mass \(\sim 5\times 10^{10}M_{\odot}\) within a periodic box of width \(2~{}\mathrm{Gpc}/h\). This simulation uses the Planck 2018 \(\Lambda\)CDM cosmology: \(\Omega_{m}=0.315\), \(h=0.674\), \(\sigma_{8}=0.811\), \(n_{s}=0.965\) (Planck Collaboration et al., 2020). In addition to full particle timeslices, the AbacusSummit data products include halo catalogs produced using the CompaSO halo-finder (Hadzhiyska et al., 2021). We run all void-finding algorithms on the corresponding halo catalog of the Hugebase simulation, which contains \(\sim 1.8\times 10^{7}\) halos.

## 4 Crossing Numbers Using ORIGAMI

The gravitational collapse of the matter distribution in these \(N\)-body simulations results in the crossing of dark matter particle positions as their dynamics become nonlinear. The number of dimensions of shell-crossing, and thus of nonlinearity, determines the classification of dark matter particles into various types of large-scale structures: clusters from three-dimensional collapse, filaments from two-dimensional collapse, and walls from one-dimensional collapse. Particles which have undergone no shell-crossing are defined as belonging to cosmic voids. The ORIGAMI algorithm (Falck et al., 2012) computes the number of shell-crossing dimensions, referred to as the crossing number, or CN, by comparing the final relative positions of pairs of particles with their relative positions on the initial grid. Running the base ORIGAMI algorithm on simulations with more than \(512^{3}\) particles requires a prohibitively large amount of memory. We introduce modifications that compute crossing numbers for subvolumes of periodic simulations by implementing a buffer zone and subsequently merge the subvolume results, allowing ORIGAMI to be run on larger simulations. Crossing numbers in the AbacusSummit simulations were computed using this modified version of ORIGAMI1.

Footnote 1: The modified ORIGAMI is available for download at [https://github.com/dveyrat/origami/tree/subdivide](https://github.com/dveyrat/origami/tree/subdivide).
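As a schematic illustration of the shell-crossing test underlying ORIGAMI, consider a one-dimensional toy version (ours, much simplified: the real algorithm works in 3D and checks particle pairs along several grid directions, not just nearest neighbors):

```python
# Toy 1D shell-crossing test: a particle is flagged as crossed if a neighbor
# that started on one side of it on the initial grid ends up on the other
# side at the final time.
import numpy as np

def crossed_1d(x_init, x_final):
    """x_init: initial (Lagrangian) positions; x_final: evolved positions."""
    order_init = np.argsort(x_init)
    crossed = np.zeros(len(x_init), dtype=bool)
    for a, b in zip(order_init[:-1], order_init[1:]):
        if x_final[a] > x_final[b]:        # pair order flipped -> crossing
            crossed[a] = crossed[b] = True
    return crossed

x0 = np.linspace(0.0, 1.0, 6)
x1 = np.array([0.0, 0.25, 0.45, 0.40, 0.8, 1.0])  # particles 2 and 3 crossed
print(crossed_1d(x0, x1))  # [False False  True  True False False]
```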
The voids found by each algorithm are shown in Figure 1, overlaying the dark matter particles within a 10 Mpc/\(h\)-thick slice of the AbacusSummit Hugebase simulation. Footnote 2: VAST is available for download at [https://github.com/desi-ur/vast](https://github.com/desi-ur/vast). ### VoidFinder VoidFinder, which was originally described by El-Ad & Piran (1997), begins by removing isolated tracers from the catalog, defined as those with a distance to their third-nearest neighbor \(d_{\rm 3NN}>\overline{d_{\rm 3NN}}+1.5\sigma_{d_{\rm 3NN}}\). The remaining tracers are placed on a three-dimensional grid, and a sphere is grown from each empty cell until its surface is bounded by four tracers. Next, maximal spheres, defined as spheres with radii \(r\geq 10\) Mpc/\(h\) that do not overlap a larger maximal sphere by more than 10% of their volume, are selected from the population of spheres. Each maximal sphere is identified as belonging to a void, and that void is built from the union of all spheres which intersect exactly one maximal sphere by at least 50% of their volume. See Hoyle & Vogeley (2002) for a more detailed description of VoidFinder.
Figure 1: A 100 Mpc/\(h\)\(\times\)100 Mpc/\(h\) subsection of a 10 Mpc/\(h\) thick slice of the AbacusSummit Hugebase simulation. The locations of particles with crossing number 0 are shown in blue, 1 in orange, 2 in red, and 3 in violet. The intersections of voids with the center of the slice are shown in blue for VoidFinder (top), V\({}^{2}\)/VIDE (middle), and V\({}^{2}\)/REVOLVER (bottom). ### Voronoi Voids (V\({}^{2}\)) Voronoi Voids, or V\({}^{2}\), is a Python implementation of the ZOBOV algorithm, first described in Neyrinck (2008). V\({}^{2}\) begins by creating a Voronoi tessellation of the tracer distribution, whose cell volumes are used as an estimate of local density. A watershed algorithm is then used to combine groups of adjacent Voronoi cells into "zones," where each cell is put into the same zone as its least dense neighboring cell, and any cell which is less dense than all of its neighbors is identified as a local density minimum that serves as its zone's central cell. Adjacent zones are then linked together by the least-dense pair of adjacent cells between them, whose density is referred to as the "linking density." The collections of linked zones (including single zones) are identified as voids, creating a hierarchy of voids up to one containing the entire catalog. Several methods exist to prune the full hierarchy and extract meaningful voids from it (see Neyrinck, 2008; Sutter et al., 2015; Nadathur et al., 2019). We use two of the more popular pruning methods: VIDE pruning (V\({}^{2}\)/VIDE; Sutter et al., 2015), which sets both a maximum linking density for the zone-linking step and a maximum central density for each void, and REVOLVER pruning (V\({}^{2}\)/REVOLVER; Nadathur et al., 2019), which labels each zone as a unique void and removes any voids with effective radius less than the median void effective radius. ## 6 Results 410,852 voids were found in the AbacusSummit Hugebase simulation by VoidFinder, 79,721 by V\({}^{2}\)/VIDE, and 43,430 by V\({}^{2}\)/REVOLVER. The fraction of dark matter particles with each crossing number in the AbacusSummit Hugebase simulation, and of those in voids defined by each of the void-finding algorithms studied, can be found in Table 1.
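Tallies of this kind reduce to a simple conditional count. A minimal sketch, with randomly generated stand-ins for the particle crossing numbers and the boolean void membership (which in the real analysis come from ORIGAMI and from the void catalogs, respectively):

```python
import numpy as np

rng = np.random.default_rng(0)
cn = rng.integers(0, 4, size=1_000_000)    # stand-in for ORIGAMI output (0-3)
in_void = rng.random(cn.size) < 0.5        # stand-in for void membership

def cn_fractions(values):
    """Fraction of particles with each crossing number 0-3."""
    counts = np.bincount(values, minlength=4)
    return counts / counts.sum()

for name, mask in [("all", np.ones(cn.size, bool)),
                   ("in voids", in_void),
                   ("not in voids", ~in_void)]:
    print(f"{name:13s}", np.round(cn_fractions(cn[mask]), 3))
```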
Throughout the entire simulation, we find that \(\sim\)31% of the dark matter particles have undergone no shell-crossing, \(\sim\)19% have undergone shell-crossing along one dimension, \(\sim\)18% along two dimensions, and \(\sim\)32% along three dimensions. These results are in good agreement with the crossing number distribution trends in Falck and Neyrinck (2015) for similar particle resolutions. Our results, using a simulation with an initial grid spacing \(L/N=(1000~{}\mathrm{Gpc}/h)/1152\), are similar to those in Falck and Neyrinck (2015) for initial grid spacing \(L/N=(100~{}\mathrm{Mpc}/h)/128\) but with a shift towards more void (CN=0) particles and fewer cluster (CN=3) particles, agreeing with the trend for greater initial grid spacing. ### Crossing Number Distribution within Voids We expect the distribution of crossing numbers in void regions to contain an excess of CN=0 particles, because any dark matter particles which are identified by ORIGAMI to have undergone shell-crossing are expected to be part of a gravitationally bound structure: a wall, a filament, or a cluster. To determine the effectiveness of VoidFinder and V\({}^{2}\) at detecting regions with low crossing number, we examine the distribution of crossing numbers inside and outside voids. We observe that the distributions of dark matter particles with CN=0 (void particles) and CN=3 (cluster particles) differ significantly when the particles are categorized by whether they are inside or outside a void (Table 1). The distributions of CN=1 and CN=2 particles differ relatively little between void/non-void regions. While particles in voids are more likely to have CN=0 and less likely to have CN=3 than the background distribution, this preference is significantly stronger in VoidFinder voids, where \(\sim\)47% of particles have CN=0 (compared to \(\sim\)31% in V\({}^{2}\)/VIDE and V\({}^{2}\)/REVOLVER voids, and \(\sim\)31% overall) and only \(\sim\)15% have CN=3 (compared to \(\sim\)32% in V\({}^{2}\)/VIDE voids, \(\sim\)31% in V\({}^{2}\)/REVOLVER voids, and \(\sim\)32% overall). Figure 2 shows the crossing number density profiles of dark matter particles around voids, and Figure 3 shows how the distribution of crossing numbers is related to the normalized distance to void centers, \(r/R_{\mathrm{eff}}\), where the effective radius \(R_{\mathrm{eff}}\) is defined as the radius of a sphere with the same volume as the void. We find that the particles closest to void centers are significantly more likely to have crossing number 0 than 3 for both VoidFinder and V\({}^{2}\). However, while this preference for low crossing numbers extends to nearly \(r=R_{\mathrm{eff}}\) for VoidFinder voids, the distribution around V\({}^{2}\) voids begins to gradually shift toward the background crossing number distribution at \(r\approx 0.25R_{\mathrm{eff}}\) for V\({}^{2}\)/VIDE voids, and essentially from the void center (\(r\approx 0\)) for V\({}^{2}\)/REVOLVER voids. At a normalized distance of \(r\approx 0.8R_{\mathrm{eff}}\), the distribution of crossing numbers in V\({}^{2}\) voids is nearly identical to the background. The expected wall feature of voids is visually evident in both Figures 2 and 3, but it is more prominent around voids found by VoidFinder.
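Stacked profiles of this kind can be built by normalizing each particle's distance to a void center by that void's effective radius (\(R_{\mathrm{eff}}=(3V/4\pi)^{1/3}\) in the real analysis) and tallying crossing numbers in radial bins. A minimal sketch, with randomly generated positions, crossing numbers, void centers, and radii standing in for the real catalogs:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, size=(200_000, 3))   # toy particle positions [Mpc/h]
cn = rng.integers(0, 4, size=len(pos))         # toy crossing numbers (0-3)
centers = rng.uniform(20, 80, size=(5, 3))     # toy void centers
r_eff = rng.uniform(8, 15, size=5)             # toy effective radii [Mpc/h]

bins = np.linspace(0.0, 2.0, 11)               # radial bins in r / R_eff
counts = np.zeros((4, len(bins) - 1))

for c, R in zip(centers, r_eff):
    x = np.linalg.norm(pos - c, axis=1) / R    # normalized distance to void
    for k in range(4):                         # stack counts per crossing number
        counts[k] += np.histogram(x[cn == k], bins=bins)[0]

# Fraction of each crossing number per radial bin (guarding empty bins).
frac = counts / np.maximum(counts.sum(axis=0, keepdims=True), 1)
print(np.round(frac, 3))
```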
There is a clear, sharp increase in preference for higher crossing numbers near \(r=R_{\mathrm{eff}}\), in good agreement with the theory developed in Sheth and van de Weygaert (2004), which predicts a build-up and crossing of mass shells as they expand radially outward from void centers (see Section 2 for details). The wall feature around V\({}^{2}\) voids is both less pronounced and less localized around \(r=R_{\mathrm{eff}}\). We also observe a sharp feature at \(r=0.25R_{\mathrm{eff}}\) in the distributions of crossing numbers in V\({}^{2}\)/VIDE voids. This is a result of the central density cut present in the algorithm, which removes all voids with density above some threshold within \(r<0.25R_{\rm eff}\). As seen in the bottom panel of Figures 2 and 3, the distribution of crossing numbers around voids found by V\({}^{2}\)/REVOLVER, which does not include a central density cut, lacks this feature. ### CN=0 Particles not in Voids While the majority of particles with crossing number 0 are located within VoidFinder voids, \(\sim\)28% do not fall within a void. Further, while particles with lower crossing numbers are more likely to be found within V\({}^{2}\) voids than those with higher crossing numbers, \(\sim\)36% of CN=0 particles do not fall within a V\({}^{2}\)/VIDE void and \(\sim\)15% do not fall within a V\({}^{2}\)/REVOLVER void. To determine whether these particles belong to regions we expect to be classified as void, we investigate their local environments. For each dark matter particle, we investigate its local environment by identifying all particles within 2 Mpc/\(h\) of it, and counting these particles by crossing number. In a void-like environment, we expect fewer neighboring particles with higher crossing numbers, as well as a lower density of particles overall, leading to a lower particle count for CN\(>\)0 particles relative to CN=0 and a lower total particle count. The crossing numbers within 2 Mpc/\(h\) of CN=0 particles inside and outside of voids are shown in Table 2 for both VoidFinder and V\({}^{2}\). Overall, CN=0 particles are found to be located in void-like environments with little clustering (CN\(>\)0 particles). CN=0 particles outside of VoidFinder voids, however, tend to be located in environments with more neighboring particles overall and several times more CN\(>\)0 particles than in the local environments of CN=0 particles in VoidFinder voids. This suggests that, while dynamical information alone classifies all CN=0 particles as belonging to void regions, many are located in significantly denser environments and are therefore not expected to belong to cosmic voids or, consequently, to VoidFinder voids. The local environments of CN=0 particles in V\({}^{2}\) voids, however, are much more similar to those of their undetected counterparts and of CN=0 particles as a whole. Like the overall crossing number distributions in Table 1, this indicates worse classification of void regions by V\({}^{2}\), with no evident environmental distinction between the CN=0 particles inside voids and those outside. ### Void Finding in Redshift Space As an additional check, both void-finding algorithms were run on a version of the halo catalog modified using peculiar velocity information to simulate the effect of peculiar velocities on apparent positions, an effect known as redshift-space distortions (RSDs).
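In the plane-parallel approximation, applying RSDs to a snapshot amounts to shifting each tracer along the line of sight by its peculiar velocity divided by \(aH\). The sketch below assumes a \(z=0\) snapshot (so \(aH=100\,h\) km/s/Mpc and the displacement in Mpc/\(h\) is simply \(v_{\rm los}/100\); at other epochs the factor is \(a(z)H(z)\)) and periodic wrapping; the toy positions and velocities are invented for illustration.

```python
import numpy as np

def to_redshift_space(pos, vel, box=2000.0, axis=2):
    """Shift comoving positions [Mpc/h] along the line of sight by the
    peculiar velocity [km/s]. For a z=0 snapshot, a*H = 100h km/s/Mpc,
    so the displacement in Mpc/h is v_los / 100. Positions wrap
    periodically within the box."""
    s = pos.copy()
    s[:, axis] = (s[:, axis] + vel[:, axis] / 100.0) % box
    return s

rng = np.random.default_rng(1)
pos = rng.uniform(0, 2000.0, size=(1000, 3))   # toy halo positions [Mpc/h]
vel = rng.normal(0, 300.0, size=(1000, 3))     # toy peculiar velocities [km/s]
print(to_redshift_space(pos, vel)[:2])
```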
Because peculiar velocities within voids tend to be directed outwards, voids appear elongated in the line-of-sight direction in redshift space, and we expect a smoothing of features in the radial distribution of crossing numbers around voids. Figure 4 shows how the distribution of crossing numbers is related to normalized distance to void centers found in redshift space. While the distributions are similar to those shown in Figure 3 at low and high \(r/R_{\rm eff}\), the distinct features at \(r/R_{\rm eff}=1\) for VoidFinder and \(r/R_{\rm eff}=0.25\) for V\({}^{2}\)/VIDE appear to be smoothed out, as expected due to the distortion of the voids along the line of sight. This suggests that performing reconstruction of real-space positions in a redshift catalog before running a void-finding algorithm may improve the voids found. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & & particles in & particles not in & particles in & particles not in & particles in & particles not in \\ crossing & all DM & VoidFinder & VoidFinder & V\({}^{2}\)/VIDE & V\({}^{2}\)/VIDE & V\({}^{2}\)/REVOLVER & V\({}^{2}\)/REVOLVER \\ number & particles & voids & voids & voids & voids & voids & voids \\ \hline 0 & 30.8\% & 46.9\% & 16.6\% & 30.6\% & 31.3\% & 31.3\% & 28.7\% \\ 1 & 19.4\% & 22.4\% & 16.6\% & 19.4\% & 19.4\% & 19.4\% & 19.1\% \\ 2 & 18.2\% & 15.6\% & 20.5\% & 18.2\% & 18.1\% & 18.1\% & 18.5\% \\ 3 & 31.6\% & 15.0\% & 46.3\% & 31.8\% & 31.2\% & 31.2\% & 33.6\% \\ \hline \end{tabular} Note. – Distribution of crossing numbers within and outside of voids found by VoidFinder and V\({}^{2}\), as well as the total distribution in the AbacusSummit Hugebase simulation. All uncertainties are below 0.1%, estimated using several AbacusSummit Hugebase realizations. \end{table} Table 1: Crossing number distributions ### Summary of Results We expect void regions to contain an excess of particles with crossing number 0 relative to the background distribution, and both voids found by VoidFinder and voids found by V\({}^{2}\) contain such an excess. In V\({}^{2}\) voids, however, there is a strong excess only in the most central (\(r<0.25R_{\rm eff}\)) regions, while in VoidFinder voids it extends nearly to \(R_{\rm eff}\). Further, there is an excess of CN=3 (cluster) particles near the edge of VoidFinder voids, in agreement with dynamical theories of void evolution. We conclude that VoidFinder identifies void regions more accurately than V\({}^{2}\). ## 7 Mitigating Poor Classification of V\({}^{2}\) Voids ### Linking Density While the V\({}^{2}\) algorithm is mostly parameter-free, there is an input parameter in V\({}^{2}\)/VIDE: the maximum zone-linking density, which limits the density of Voronoi cells which can link adjacent zones into voids. The maximum linking density (see Section 5.2 for details) is defined relative to the mean number density of the catalog, \(\overline{n}\), and is equal to \(0.2\overline{n}\) by default, but can be as low as 0 (allowing no linking of zones, similar to V\({}^{2}\)/REVOLVER) or be undefined (leading to all zones being linked together into a single void). While a lower maximum linking density is expected to limit the merging of voids across denser regions, the growth of V\({}^{2}\) voids up to density maxima occurs during the zone creation step, rather than in the subsequent zone-merging step.
Consequently, varying the amount of zone merging that occurs is not expected to mitigate the inclusion of the dense shell-crossing region in the void volume of V\({}^{2}\)/VIDE voids. We examine the effect of the maximum linking density on the distribution of crossing numbers within V\({}^{2}\)/VIDE voids. These results are shown in Table 3. The percent of particles in voids with a given crossing number does not vary by more than \(\sim\)1% across different linking densities. This is a minor shift compared to the difference between VoidFinder voids and V\({}^{2}\)/VIDE voids (\(\sim\)12% for both CN=0 and CN=3; see Table 1), and still leaves the V\({}^{2}\)/VIDE distribution relatively similar to the distribution of all crossing numbers. \begin{table} \begin{tabular}{c|c c c c} \hline \hline & \multicolumn{4}{c}{maximum linking density} \\ crossing number & \(0.1\overline{n}\) & \(0.2\overline{n}\) & \(0.5\overline{n}\) & \(1.0\overline{n}\) \\ \hline 0 & 32.2\% & 33.0\% & 33.2\% & 33.2\% \\ 1 & 20.9\% & 20.9\% & 20.8\% & 20.8\% \\ 2 & 18.5\% & 18.4\% & 18.2\% & 18.2\% \\ 3 & 28.4\% & 27.7\% & 27.8\% & 27.8\% \\ \hline \end{tabular} Note. – Effect of maximum linking density on the distribution of crossing numbers within V\({}^{2}\)/VIDE voids. Void-finding was done using a \((128\) Mpc/\(h)^{3}\) subvolume of the AbacusSummit Hugebase simulation. All uncertainties are negligibly small, estimated using several AbacusSummit Hugebase realizations. \end{table} Table 3: Crossing number distributions \begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline & & \multicolumn{2}{c}{VoidFinder} & \multicolumn{2}{c}{V\({}^{2}\)/VIDE} & \multicolumn{2}{c}{V\({}^{2}\)/REVOLVER} \\ crossing number & all & inside voids & outside voids & inside voids & outside voids & inside voids & outside voids \\ \hline 0 & 19 \((11,28)\) & 18 \((11,27)\) & 21 \((13,30)\) & 19 \((11,28)\) & 19 \((11,28)\) & 19 \((11,28)\) & 19 \((11,28)\) \\ \(1-3\) & 11 \((0,61)\) & 7 \((0,41)\) & 27 \((3,121)\) & 11 \((0,61)\) & 10 \((0,59)\) & 10 \((0,60)\) & 12 \((0,65)\) \\ \hline total & 32 \((13,86)\) & 28 \((12,67)\) & 51 \((20,146)\) & 33 \((14,87)\) & 32 \((13,85)\) & 32 \((13,86)\) & 34 \((14,90)\) \\ \hline \end{tabular} Note. – Median number of particles within 2 Mpc/\(h\) of particles with CN=0 that fall either inside or outside a void. Values in parentheses indicate the boundaries of the central 68% of the distribution. \end{table} Table 2: Local environment of CN=0 particles
Figure 2: Normalized void density profiles by crossing number for VoidFinder (top), V\({}^{2}\)/VIDE (middle), and V\({}^{2}\)/REVOLVER (bottom) voids.
Figure 3: The distributions of crossing numbers by normalized distance from void centers found in real space by VoidFinder (top), V\({}^{2}\)/VIDE (middle), and V\({}^{2}\)/REVOLVER (bottom). ### Depth-in-Void While the centers of V\({}^{2}\)/VIDE voids have similar crossing number distributions to those of VoidFinder voids, this is only out to roughly \(0.25R_{\rm eff}\) (see the middle panel of Figure 3). Restricting the analysis to central regions using distance from the void center is therefore not an effective modification of V\({}^{2}\), as a significant amount of the void volume is omitted, restricting analysis to a small subset of void regions. Because V\({}^{2}\) voids generally have irregular shapes, distance from the edge (depth-in-void) may serve as a better measure of centrality than distance from the center, which is more effective for the more spherical voids found by VoidFinder. Zaidouni et al. (2023) show that void
galaxies with a normalized distance from the V\({}^{2}\)/VIDE void edge of at least \(0.4d_{\rm max}\) have a distribution of astrophysical properties similar to those of galaxies within VoidFinder voids. We examine the distribution of crossing numbers at different depths within V\({}^{2}\) voids to determine whether depth is a better measure of centrality than distance from the center. The results are shown in Figure 5 for both V\({}^{2}\) pruning methods as well as VoidFinder.
Figure 4: The distributions of crossing numbers by normalized distance from void centers found in redshift space by VoidFinder (top), V\({}^{2}\)/VIDE (middle), and V\({}^{2}\)/REVOLVER (bottom).
Figure 5: Distribution of crossing numbers by normalized depth from void edges within VoidFinder (top), V\({}^{2}\)/VIDE (middle), and V\({}^{2}\)/REVOLVER (bottom) voids. Depths were computed only for a (1 Gpc/\(h\))\({}^{3}\) subvolume of the AbacusSummit Hugebase simulation.
The previously observed trend of lower crossing numbers for more centrally located void particles is apparent, but just as in the distribution based on distance from the void center (Figure 3), the V\({}^{2}\) voids do not exhibit the sharp wall feature observed in VoidFinder voids. Further, the central regions of voids defined using depths (\(1-d/d_{\rm max}<0.25\)) do not favor low crossing numbers as strongly as central regions defined using distance from void centers (\(r/R_{\rm eff}<0.25\)). These results indicate there is no clear method for extracting the dynamically-distinct parts of the void regions from V\({}^{2}\) voids. The shift back to lower crossing numbers at the edge of V\({}^{2}\) voids can be attributed to the fact that these void edges are made up of boundaries between halos' Voronoi cells, an inherently low-density region. ## 8 Conclusions We study the relative accuracy of two different void-finding algorithms in detecting dynamical void regions by examining the positions of their voids relative to non-clustering regions in the dark matter distribution. Using a (2 Gpc/\(h\))\({}^{3}\)\(N\)-body simulation from the AbacusSummit simulation suite (Maksimova et al., 2021), the evolutionary history of each dark matter particle was identified using the ORIGAMI algorithm (Falck et al., 2012), which counts the number of dimensions along which dark matter particles have undergone shell-crossing. Identifying particles with crossing number 0 as belonging to voids, we compare their positions with voids found in the corresponding halo distribution by VoidFinder and V\({}^{2}\), two void-finding algorithms implemented in VAST (Douglass et al., 2022). We find that while both void-finding algorithms produce voids with a central bias towards lower crossing numbers, this bias gradually diminishes at greater distances from V\({}^{2}\) void centers, while the preference remains strong up to the edge of VoidFinder voids. This suggests that V\({}^{2}\) includes the shell-crossing region surrounding voids as part of the void volume. Further, at the void edge defined as \(r/R_{\rm eff}\approx 1\), voids found by VoidFinder have a preference for higher crossing numbers that is even stronger than the background distribution at larger radii, in good agreement with the shell-crossing predictions of the excursion-set formalism by Sheth and van de Weygaert (2004).
This feature is absent in the distribution of crossing numbers around voids found by V\({}^{2}\). Given the relative inability of V\({}^{2}\) to identify dynamically-distinct voids, we attempt several methods to improve the classification of void regions by V\({}^{2}\). The V\({}^{2}\)/VIDE algorithm has one user-set parameter, the maximum linking density. We find that varying the maximum linking density has little effect on the distribution of crossing numbers within voids. We also examine the crossing number distribution as a function of distance from the void edge (depth) rather than distance from the void center. While void depth has been used to provide a definition of a central region with a stronger preference for void-like galaxy properties (Zaidouni et al., 2023), this did not improve the crossing number profiles of V\({}^{2}\)/VIDE voids. We conclude that VoidFinder more effectively identifies the dynamically-distinct regions primarily occupied by dark matter particles with low crossing number. The AbacusSummit Hugebase simulation analyzed in this work is one of the largest to be used in a crossing number analysis. We modified the ORIGAMI algorithm (available for download at [https://github.com/dveyrat/origami/tree/subdivide](https://github.com/dveyrat/origami/tree/subdivide)) to allow it to be run on this \(2304^{3}\) particle set. These modifications enable similar analyses using other AbacusSummit simulations, such as studies of the effects of different void-finding algorithms on cosmological constraints. ## Acknowledgements The authors would like to thank Stephen W. O'Neill, Jr. for his help improving the VoidFinder algorithm in VAST, and Michael Vogeley and Fiona Hoyle for the original VoidFinder code and void-finding expertise. D.V. and S.B. acknowledge support from the U.S. Department of Energy Office of High Energy Physics under the grant DE-SC0008475. K.D. and D.V. acknowledge support from grant 62177 from the John Templeton Foundation. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory.
2302.02306
Fair Spatial Indexing: A paradigm for Group Spatial Fairness
Machine learning (ML) is playing an increasing role in decision-making tasks that directly affect individuals, e.g., loan approvals, or job applicant screening. Significant concerns arise that, without special provisions, individuals from under-privileged backgrounds may not get equitable access to services and opportunities. Existing research studies fairness with respect to protected attributes such as gender, race or income, but the impact of location data on fairness has been largely overlooked. With the widespread adoption of mobile apps, geospatial attributes are increasingly used in ML, and their potential to introduce unfair bias is significant, given their high correlation with protected attributes. We propose techniques to mitigate location bias in machine learning. Specifically, we consider the issue of miscalibration when dealing with geospatial attributes. We focus on spatial group fairness and we propose a spatial indexing algorithm that accounts for fairness. Our KD-tree inspired approach significantly improves fairness while maintaining high learning accuracy, as shown by extensive experimental results on real data.
Sina Shaham, Gabriel Ghinita, Cyrus Shahabi
2023-02-05T05:15:11Z
http://arxiv.org/abs/2302.02306v1
# Fair Spatial Indexing: A paradigm for Group Spatial Fairness ###### Abstract. Machine learning (ML) is playing an increasing role in decision-making tasks that directly affect individuals, e.g., loan approvals, or job applicant screening. Significant concerns arise that, without special provisions, individuals from under-privileged backgrounds may not get equitable access to services and opportunities. Existing research studies _fairness_ with respect to protected attributes such as gender, race or income, but the impact of location data on fairness has been largely overlooked. With the widespread adoption of mobile apps, geospatial attributes are increasingly used in ML, and their potential to introduce unfair bias is significant, given their high correlation with protected attributes. We propose techniques to mitigate location bias in machine learning. Specifically, we consider the issue of miscalibration when dealing with geospatial attributes. We focus on _spatial group fairness_ and we propose a spatial indexing algorithm that accounts for fairness. Our KD-tree inspired approach significantly improves fairness while maintaining high learning accuracy, as shown by extensive experimental results on real data. ## 1. Introduction Recent advances in machine learning (ML) led to its adoption in numerous decision-making tasks that directly affect individuals, such as loan evaluation or job application screening. Several studies (Ghinita et al., 2017; Ghahimi et al., 2017) pointed out that ML techniques may introduce bias with respect to protected attributes such as race, gender, age or income. Recent years have witnessed the introduction of _fairness_ models and techniques that aim to ensure all individuals are treated equitably, focusing especially on conventional protected attributes (like race or gender). However, the impact of geospatial attributes on fairness has not been extensively studied, even though location information is being increasingly used in decision-making for novel tasks, such as recommendations, advertising or ride-sharing. Conventional applications may also often rely on location data, e.g. allocation of local government resources, or crime prediction by law enforcement using geographical features. For example, the Chicago Police Department releases monthly crime datasets (Ghahimi et al., 2017) and classifies neighborhoods based on their crime risk level. Subsequently, the risk level is used to determine vehicle and house insurance premiums, which are increased accordingly and, in turn, result in additional financial hardship for individuals from under-privileged groups. Fairness for geospatial data is a challenging problem, due to two main factors: (i) data are more complex than conventional protected attributes such as gender or race, which are categorical and have only a few possible values; and (ii) the correlation between locations and protected attributes may be difficult to capture accurately, thus leading to hard-to-detect biases. We consider the case of _group fairness_ (Ghahimi et al., 2017), which ensures no significant difference in outcomes occurs across distinct population groups (e.g., females vs. males). In our setting, groups are defined with respect to geospatial regions. The data domain is partitioned into disjoint regions, and each of them represents a group. All individuals whose locations belong to a certain region are assigned to the corresponding group.
In practice, a spatial group can correspond to a zip code, a neighborhood, or a set of city blocks. Our objective is to support arbitrary geospatial partitioning algorithms, which can handle the needs of applications that require different levels of granularity in terms of location reporting. _Spatial indexing_ (Ghahimi et al., 2017; Shaham et al., 2017) is a common approach used for partitioning, and numerous techniques have been proposed that partition the data domain according to varying criteria, such as area, perimeter, data point count, etc. We build upon existing spatial indexing techniques, and adapt the partition criteria to account for the specific goals of fairness. By carefully combining geospatial and fairness criteria in the partitioning strategies, one can obtain spatial fairness while still preserving the useful spatial properties of indexing structures (e.g., fine-level clustering of the data). Specifically, we consider a set of partitioning criteria that combines physical proximity and _calibration error_. Calibration is an essential concept in classification tasks that quantifies the quality of a classifier. Consider a binary classification task, such as a loan approval process. Calibration measures the difference between the observed and predicted probabilities of any given point being labeled in the positive class. If one partitions the data according to some protected attribute, then the expectation would be that the probability should be the same across both groups (e.g., males and females should have an equal chance, on aggregate, to be approved for a loan). If the expected and actual probabilities are different, that represents a good indication of unfair treatment. Our proposed approach builds a hierarchical spatial index structure by using a composite split metric, consisting of both geospatial criteria (e.g., compact area) and miscalibration error. In doing so, it allows ML applications to benefit from granular geospatial information, while at the same time ensuring that no significant bias is present in the learning process. Our specific contributions include: * We identify and formulate the problem of spatial group fairness, an important concept which ensures that geospatial information can be used reliably in a classification task, without introducing, intentionally or not, biases against individuals from underprivileged groups; * We propose a new metric to quantify unfairness with respect to geospatial boundaries, called Expected Neighborhood Calibration Error (ENCE); * We propose a technique for fair spatial indexing that builds on KD-trees and considers both geospatial and fairness criteria, by lowering miscalibration and minimizing ENCE; * We perform an extensive experimental evaluation on real datasets, showing that the proposed approach is effective in enforcing spatial group fairness while maintaining data utility for classification tasks. The rest of the paper is organized as follows: Section 2 provides background and fundamental definitions. Section 3 reviews related work. We introduce the proposed fair index construction technique in Section 4. Section 5 presents the results of our empirical evaluation, followed by conclusions in Section 6. ## 2. Background ### System Architecture We consider a binary classification task \(T\) over a dataset \(D\) of individuals \(u_{1}\),..., \(u_{|D|}\). The feature set recorded for \(u_{i}\) is denoted by \(\mathbf{x}_{i}\in\mathbb{R}^{l}\), and its corresponding label by \(y_{i}\in\{0,1\}\).
Each record consists of \(l\) features, including an attribute called _neighborhood_, which captures an individual's location, and is the main focus of our approach. The sets of all input data and labels are denoted by \(\mathcal{X}\) and \(\mathcal{Y}\), respectively. A classifier \(h(.)\) is trained over the input data resulting in \(h(\mathcal{X})=(\hat{\mathcal{Y}},\mathcal{S})\) where \(\hat{\mathcal{Y}}=\{\hat{y}_{1},...,\hat{y}_{|D|}\}\) is the set of predicted labels (\(\hat{y}_{i}\in\{0,1\}\)) and \(\mathcal{S}=\{s_{1},...,s_{|D|}\}\) is the set of confidence scores (\(s_{i}\in[0,1]\)) for each label. The dataset's neighborhood feature indicates the individual's _spatial group_. We assume the spatial data domain is split into a set of partitions of arbitrary granularity. Without loss of generality, we consider a \(U\times V\) grid overlaid on the map. The grid is selected such that its resolution captures adequate spatial accuracy as required by application needs. A set of neighborhoods is a non-overlapping partitioning of the map that covers the entire space, with the \(i^{th}\) neighborhood denoted by \(N_{i}\), and the set of neighborhoods denoted by \(\mathcal{N}\). Figure 1 illustrates the system overview. Figure 1(a) shows the map divided into 4 non-overlapping partitions \(\mathcal{N}=\{N_{1},N_{2},N_{3},N_{4}\}\). The neighborhood is recorded for each individual \(u_{1}\),..., \(u_{11}\) together with other features, and a classifier is trained over the data. The classifier's output is the confidence score for each entry, which turns into a class label by setting a threshold. ### Fairness Metric Our primary focus is to achieve _spatial group fairness_ using the concept of _calibration_ (Zhou et al., 2017; Zhang et al., 2018) as the metric, described in the following. In classification tasks, it is desirable to have scores indicating the probability that a test data record belongs to a certain class. Probability scores are especially important in ranking problems, where top candidates are selected based on relative quantitative performance. Unfortunately, it is not guaranteed that confidence scores generated by a classifier can be interpreted as probabilities. Consider a binary classifier that indicates an individual's chance of committing a crime after their release from jail (recidivism). If two individuals \(u_{1}\) and \(u_{2}\) get confidence scores 0.4 and 0.8, this cannot be directly interpreted as the likelihood of committing a crime by \(u_{2}\) being twice as high as for \(u_{1}\). Model _calibration_ aims to address precisely this shortcoming. **Definition 1** (Calibration). An ML model is said to be calibrated if it produces calibrated confidence scores. Formally, outcome score \(R\) is _calibrated_ if for all scores \(r\) in support of \(R\) it holds that \[P(y=1|R=r)=r \tag{1}\] This condition means that the set of all instances assigned a score value \(r\) contains an \(r\) fraction of positive instances. Calibration is a group-level metric. Suppose there exist 10 people who have been assigned a confidence score of 0.7. In a well-calibrated model, we expect to have 7 individuals with positive labels among them. Thus, the group as a whole has a probability of 0.7 of being positive, but this does not indicate that every individual in the group has this exact chance of receiving a positive label.
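A minimal sketch of how Definition 1 can be checked empirically: group the confidence scores into bins and compare the mean confidence in each bin against the observed fraction of positives. The scores and labels below are synthetic toy data constructed to be perfectly calibrated.

```python
import numpy as np

def reliability(scores, labels, n_bins=10):
    """Per-bin comparison of mean confidence e(.) against the observed
    fraction of positives o(.), following Definition 1."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (scores >= lo) & (scores < hi)
        if m.any():
            print(f"[{lo:.1f},{hi:.1f})  e={scores[m].mean():.3f}  "
                  f"o={labels[m].mean():.3f}  n={m.sum()}")

rng = np.random.default_rng(0)
s = rng.uniform(size=5000)                       # toy confidence scores
y = (rng.uniform(size=5000) < s).astype(int)     # perfectly calibrated labels
reliability(s, y)
```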
To measure the amount of miscalibration for the whole model or for an output interval, the ratio of two key quantities needs to be calculated: the expected value of the confidence scores and the true fraction of positive labels. Abiding by the convention in (Zhou et al., 2017), we use functions \(o(.)\) and \(e(.)\) to return the true fraction of positive instances and the expected value of confidence scores, respectively. For example, the calibration of the model in Figure 1(b) is computed as: \[\frac{e(h)}{o(h)}=\frac{(\sum_{u\in D}\hat{p}_{u})/|D|}{(\sum_{u\in D}y_{u})/|D|}=\frac{5.2/11}{7/11}\approx.742 \tag{2}\] Perfect calibration is achieved when this ratio is equal to one; ratios above or below one indicate miscalibration.
Figure 1. An example of the miscalibration problem with respect to neighborhoods.
### Problem Formulation Even when a model is overall well-calibrated, it can still lead to unfair treatment of individuals from different neighborhoods. In order to achieve spatial group fairness, we must have a well-calibrated model with respect to _all_ neighborhoods. The existence of calibration error in a neighborhood can result in classifier bias and lead to systematic unfairness against individuals from that neighborhood (in Section 5, we support this claim with real data measurements). **Definition 2** (Calibration for Neighborhoods). Given neighborhood set \(\mathcal{N}=\{N_{1},...,N_{t}\}\), we say that the score \(R\) is calibrated in neighborhood \(N_{i}\) if for all the scores \(r\) in support of \(R\) it holds that \[P(y=1|R=r,N=N_{i})=r,\qquad\forall i\in[1,t] \tag{3}\] The following equations can be used to measure the amount of miscalibration with respect to neighborhood \(N_{i}\): \[\frac{e(h|N=N_{i})}{o(h|N=N_{i})}\quad\text{ or }\quad|e(h|N=N_{i})-o(h|N=N_{i})| \tag{4}\] Going back to the example in Figure 1(d), the calibration amount for neighborhoods \(N_{1}\) to \(N_{4}\) is visualized on a plot. Neighborhood \(N_{4}\) is well-calibrated, whereas the others suffer from miscalibration. **Problem 1**. _Given \(m\) binary classification tasks \(T_{1},T_{2},...,T_{m}\), we seek to partition the space into continuous non-overlapping neighborhoods such that for each decision-making task, the trained model is well-calibrated for all neighborhoods._ ### Evaluation Metrics A commonly used metric to evaluate the calibration of a model is the Expected Calibration Error (ECE) (Han et al., 2017). The goal of ECE (detailed in Appendix A.1) is to understand the validity of output confidence scores. However, our focus is on identifying the calibration error imposed on different neighborhoods. Therefore, we extend ECE and propose the Expected Neighborhood Calibration Error (ENCE) that captures the calibration performance over all neighborhoods. **Definition 3** (Expected Neighborhood Calibration Error). Given \(t\) non-overlapping geospatial regions \(\mathcal{N}=\{N_{1},...,N_{t}\}\) and a classifier \(h\) trained over data located in these neighborhoods, the ENCE metric is calculated as: \[\text{ENCE}=\sum_{i=1}^{t}\frac{|N_{i}|}{|D|}|o(N_{i})-e(N_{i})| \tag{5}\] where \(o(N_{i})\) and \(e(N_{i})\) return the true fraction of positive instances and the expected value of confidence scores for instances in \(N_{i}\)1. Footnote 1: Symbol \(|.|\) denotes absolute value. ## 3. Related Work **Fairness Notions.** There exist two broad categories of fairness notions (Fairness, 1992; Dwork and Goyal, 2015): individual fairness and group fairness.
In group fairness, individuals are divided into groups according to a protected attribute, and a decision is said to be fair if it leads to a desired statistical measure across groups. Some prominent group fairness metrics are calibration (Srivastava et al., 2015), statistical parity (Krizhevsky et al., 2014), equalized odds (Krizhevsky et al., 2014), treatment equality (Beng et al., 2015), and test fairness (Beng et al., 2015). Individual fairness notions focus on treating similar individuals the same way. Similarity may be defined with respect to a particular task (Krizhevsky et al., 2014; Beng et al., 2015). **Spatial Fairness.** Neighborhoods or individual locations are commonly used features for decision-making in government agencies, banks, etc. Unfairness may arise in tasks such as mortgage lending (Lewis et al., 2017), job recruitment (Lewis et al., 2017), admission to schools (Beng et al., 2015), and crime risk prediction (Srivastava et al., 2015). In (Srivastava et al., 2015), recidivism prediction models constructed using data from one location tend to perform poorly when they are used to predict recidivism in another location. The authors in (Srivastava et al., 2015) formulate a loss function for individual fairness in social media and location-based advertisements. Pujol et al. (2017) demonstrate the unequal impact of differential privacy on neighborhoods. Several attempts have been made to apply fairness notions for clustering data points in the Cartesian space. The notion in (Hamilton et al., 2017) defines clustering conducted for a point as fair if the average distance to the points in its own cluster is not greater than the average distance to the points in any other cluster. The authors in (Kang et al., 2017) focus on defining individual fairness for \(k\)-median and \(k\)-means algorithms. Clustering is defined to be individually fair if every point expects to have a cluster center within a particular radius. **Unfairness Mitigation** techniques can be categorized into three broad groups: pre-processing, in-processing, and post-processing. Pre-processing algorithms achieve fairness by focusing on the classifier's input data. Some well-known techniques include suppression of sensitive attributes, change of labels, reweighting, representation learning, and sampling (Hamilton et al., 2017). In-processing techniques achieve fairness during training by adding new terms to the loss function (Hamilton et al., 2017) or including more constraints in the optimization. Post-processing techniques sacrifice the utility of output confidence scores and align them with the fairness objective (Shi et al., 2017). \begin{table} \begin{tabular}{l l} \hline \hline Symbol & Description \\ \hline \(l\) & Number of features \\ \(D=\{u_{1},...,u_{|D|}\}\) & Dataset of individuals \\ \((x_{i},y_{i})\) & (Set of features, true label) for \(u_{i}\) \\ \(D=[\mathcal{X},\mathcal{Y}]\) & Dataset with user features and labels \\ \(\hat{\mathcal{Y}}=\{\hat{y}_{1},..,\hat{y}_{|D|}\}\) & Set of predicted labels \\ \(\mathcal{S}=\{s_{1},...,s_{|D|}\}\) & Set of confidence scores \\ \(\mathcal{N}=\{N_{1},...,N_{t}\}\) & Set of neighborhoods \\ \(U\times V\) & Base grid resolution \\ \(T\) & Binary classification task \\ \(m\) & Number of binary classification tasks \\ \(t\) & Number of neighborhoods \\ \(t_{h}\) & Tree height \\ \hline \hline \end{tabular} \end{table} Table 1. Summary of Notations.
Figure 2. Overview of the proposed mitigation techniques.
## 4. Spatial Fairness Through Indexing We introduce several algorithms that achieve group spatial fairness by constructing spatial index structures in a way that takes into account fairness considerations when performing data domain splits. We choose KD-trees as a starting point for our solutions, due to their ability to adapt to data density, and their property of covering the entire data domain (as opposed to structures like R-trees that may leave gaps within the domain). Figure 2 provides an overview of the proposed solution. Our input consists of a _base grid_ with an arbitrarily-fine granularity overlaid on the map, the attributes/features of individuals in the data, and their classification labels. The attribute set includes individual location, represented as the grid cell enclosing the individual. We propose a suite of three alternative algorithms for fairness, which are applied in the pre-processing phase of the ML pipeline and lead to the generation of new neighborhood boundaries. Once spatial partitioning is completed, the location attribute of each individual is updated, and classification is performed again. The proposed algorithms are: * _Fair KD-tree_ is our primary algorithm and it re-districts spatial neighborhoods based on an initial classification of data over a base grid. Fair KD-tree can be applied to a single classification task. * _Iterative Fair KD-tree_ improves upon Fair KD-tree by refining the initial ML scores at every height of the index structure. It incurs higher computational complexity but provides improved fairness. * _Multi-Objective Fair KD-tree_ enables Fair KD-trees for multiple classification tasks. It leads to the generation of neighborhoods that fairly represent spatial groups for multiple objectives. Next, we prove an important result that applies to all proposed algorithms, which states that any non-overlapping partitioning of the location domain has a weighted average calibration greater or equal to the overall model calibration. The proofs of all theorems are provided in Appendix A. **Theorem 1**. _For a given model \(h\) and a complete non-overlapping partitioning of the space \(\mathcal{N}=\{N_{1},N_{2},..,N_{t}\}\), ENCE is lower-bounded by the overall calibration of the model._ A broader statement can also be proven, showing that further partitioning leads to poorer ENCE performance. **Theorem 2**. _Consider a binary classifier \(h\) and two complete non-overlapping partitionings of the space \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\). If \(\mathcal{N}_{2}\) is a sub-partitioning of \(\mathcal{N}_{1}\), then:_ \[ENCE(N_{1})\leq ENCE(N_{2}) \tag{6}\] _Neighborhood set \(\mathcal{N}_{2}\) is a sub-partitioning of \(\mathcal{N}_{1}\) if for every \(N_{i}\in\mathcal{N}_{1}\), there exists a set of neighborhoods in \(\mathcal{N}_{2}\) such that their union is \(N_{i}\)._
Figure 3. Overview of Fair KD-tree algorithm.
### Fair KD-tree We build a KD-tree index that partitions the space into non-overlapping regions according to a split metric that takes into account the miscalibration metric within the regions resulting after each split. Figure 3 illustrates this approach, which consists of three steps. Algorithm 1 presents the pseudocode of the approach.
```
1:  Input: Grid (U x V), Features (X), Labels (Y), Height (t_h)
2:  Output: New neighborhoods and updated feature set
3:  N_1 <- Grid
4:  Set all neighborhoods in X to N_1
5:  N <- {N_1}
6:  while t_h > 0 do
7:      Scores (S) <- Train ML model on X and Y
8:      N_new <- {}
9:      for N_i in N do
10:         L, R <- SplitNeighborhood(N_i, S, Y, t_h % 2)
11:         N_new <- N_new + {L, R}
12:     N <- N_new
13:     Update neighborhoods in X based on N
14:     t_h <- t_h - 1
15: return N, X
```
**Algorithm 2** Split Neighborhood _Step 1._ The base grid is used as input, where the location of each individual is represented by the identifier of their enclosing grid cell. This location attribute, alongside other features, is used as input to an ML classifier \(h\) for training. The classifier's output is a set of confidence scores \(\mathcal{S}\), as illustrated in Figure 2(a). Once confidence scores are generated, the true fraction of positive instances and the expected value of predicted confidence scores of the model with respect to neighborhoods can be calculated as follows: \[e(h|N=N_{i})=\frac{1}{|N_{i}|}\sum_{u\in N_{i}}s_{u}\qquad\forall i\in[1,t] \tag{7}\] \[o(h|N=N_{i})=\frac{1}{|N_{i}|}\sum_{u\in N_{i}}y_{u}\qquad\forall i\in[1,t] \tag{8}\] where \(t\) is the number of neighborhoods. _Step 2._ This step performs the actual partitioning, by customizing the KD-tree split algorithm with a novel objective function. KD-trees are binary trees where a region is split into two parts, typically according to the median value of the coordinate across one of the dimensions (latitude or longitude). Instead, we select the division index that minimizes the fairness objective, i.e., the miscalibration imbalance defined below. Confidence scores and labels resulting from the previous training step are used as input for the split point decision. For a given tree node, assume the corresponding partition covers \(U^{\prime}\times V^{\prime}\) cells of the entire \(U\times V\) grid. Without loss of generality, we consider partitioning on the horizontal axis (i.e., row-wise). The aim is to find an index \(k\) which groups rows 1 to \(k\) into one node and rows \(k+1\) to \(U^{\prime}\) into another, such that the fairness objective is minimized. Let \(L_{k}\) and \(R_{k}\) denote the left and right regions generated by splitting on index \(k\). The fairness objective for index \(k\) is: \[z_{k}=\big{|}\,|L_{k}|\times|o(L_{k})-e(L_{k})|-|R_{k}|\times|o(R_{k})-e(R_{k})|\,\big{|} \tag{9}\] In the above equation, \(|L_{k}|\) and \(|R_{k}|\) return the number of data entries in the left and right regions, respectively. The intuition behind the objective function is to minimize the model miscalibration difference between the two sides as we heuristically move forward. Two key points about the above function are: (i) calibration is expressed in difference form (rather than as a ratio) due to the possibility of a zero denominator, and (ii) the calibration values are weighted by their corresponding regions' cardinalities. The optimal index \(k^{*}\) is selected as: \[k^{*}=\arg\min_{k}\;z_{k} \tag{10}\]
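A minimal sketch of this split search follows, using the observation that \(|L_{k}|\times|o(L_{k})-e(L_{k})|\) equals the absolute sum of the per-individual gaps \(s_{u}-y_{u}\) over \(L_{k}\); the per-row aggregates below are toy values standing in for Algorithm 2's inputs.

```python
import numpy as np

def fair_split(row_gap, row_cnt):
    """Scan candidate row splits and return the k* of Eq. (10). Here
    row_gap[i] is the sum of (s_u - y_u) over individuals in grid row i,
    so |L_k| * |o(L_k) - e(L_k)| equals |sum of row_gap over rows <= k|."""
    total = row_gap.sum()
    best_k, best_z, left = None, np.inf, 0.0
    for k in range(len(row_gap) - 1):                  # split after row k
        left += row_gap[k]
        if row_cnt[:k + 1].sum() == 0 or row_cnt[k + 1:].sum() == 0:
            continue                                   # skip empty sides
        z = abs(abs(left) - abs(total - left))         # Eq. (9)
        if z < best_z:
            best_k, best_z = k, z
    return best_k, best_z

rng = np.random.default_rng(0)
row_gaps = rng.normal(0, 1, size=8)       # toy per-row calibration gaps
row_counts = rng.integers(1, 50, size=8)  # toy per-row population counts
print(fair_split(row_gaps, row_counts))
```

Using prefix sums in this way keeps each split decision linear in the number of candidate rows.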
_Step 3._ On completion of the fair KD-tree algorithm, the index leaf set provides a non-overlapping partitioning of the map. In this final step, the neighborhood of each individual in the dataset is updated according to the leaf set and used for training. The Fair KD-tree pseudocode is provided in Algorithms 1 and 2. The latter returns the split point based on the fairness objective, and it is called several times in Algorithm 1. This function will also be used within the Iterative Fair KD-tree algorithm. **Theorem 3**. _For a given dataset \(D\), the required number of neighborhoods \(t\) and the model \(h\), the computational complexity of Fair KD-tree is \(\mathcal{O}(|D|\times\lceil\log(t)\rceil)+\mathcal{O}(h)\)._ ### Iterative Fair KD-tree One drawback of the Fair KD-tree algorithm is its sensitivity to the initial execution of the model, which uses the baseline grid to generate confidence scores. Even though the space is recursively partitioned following the initial steps, the scores are not re-computed until the index construction is finalized. The _Iterative_ Fair KD-tree addresses this limitation by re-training the model and computing updated confidence scores after each split (i.e., at each level of the tree). A refined version of the ML scores is used at every height of the tree, leading to a fairer redistricting of the map. Similar to the Fair KD-tree algorithm, the baseline grid is initially used, and all grid cells are considered to be in the same neighborhood (i.e., a single spatial group covering the entire domain). The algorithm is implemented in \(t_{h}\) iterations with the root node corresponding to the initial point (entire domain). As opposed to the Fair KD-tree algorithm that follows Depth First Search (DFS) recursion, the Iterative Fair KD-tree algorithm is based on Breadth First Search (BFS) traversal. Therefore, all nodes at a given height \(i-1\) are completed before moving forward to height \(i\). Suppose we are at the \(i^{th}\) level of the tree, and all nodes at that level are generated. Note that the set of nodes at the same height represents a non-overlapping partitioning of the grid. The algorithm continues by updating the neighborhoods at height \(i\) based on the \(i-1\) level partitioning. Then, the updated dataset is used to train a new model, thus updating confidence scores for each individual. Algorithm 3 presents the Iterative Fair KD-tree algorithm. Let \(\mathcal{N}\) denote the set of all neighborhoods at level \(i\) of the tree. For each neighborhood \(N_{i}\in\mathcal{N}\), Iterative Fair KD-tree splits the region \(N_{i}\) by calling the \(SplitNeighborhood\) function in Algorithm 2. The split is done on the \(x\)-axis if \(i\) is even and on the \(y\)-axis otherwise. The algorithm provides a more effective way of determining a fair neighborhood partitioning by re-training the model at every tree level, but incurs higher computational complexity. **Theorem 4**. _For a given dataset \(D\), the required number of neighborhoods \(t\) and the model \(h\), the computational complexity of Iterative Fair KD-tree is \(\mathcal{O}(|D|\times\lceil\log(t)\rceil)+\lceil\log(t)\rceil\times\mathcal{O}(h)\)._ ### Multi-Objective Fair KD-tree So far, we focused on achieving a fair representation of space given a _single_ classification task. In practice, applications may dictate multiple classification objectives. For example, a set of neighborhoods that are fairly represented in a city budget allocation task may not necessarily result in a fair representation of a map for deriving car insurance premiums.
Next, we show how Fair KD-tree can be extended to incorporate multi-objective decision-making tasks. We devise an alternative method to compute initial scores in Line 8 of Algorithm 2, which can then be called as part of Fair KD-tree in Algorithm 1. A separate classifier is trained over each task to incorporate all classification objectives. Let \(h_{i}\) be the \(i^{th}\) classifier trained over \(D\) and label set \(\mathcal{Y}_{i}\) representing the task \(T_{i}\). The output of the classifier is denoted by \(\mathcal{S}_{i}=\{s_{1}^{i},...,s_{|D|}^{i}\}\), where in \(s_{j}^{i}\), the superscript identifies the set \(\mathcal{S}_{i}\) and the subscript indicates individual \(u_{j}\). Once confidence scores for all models are generated, an auxiliary vector is constructed as follows: \[\mathbf{v}_{i}=\begin{bmatrix}s_{1}^{i}-y_{1}^{i}\\ s_{2}^{i}-y_{2}^{i}\\ \vdots\\ s_{|D|}^{i}-y_{|D|}^{i}\end{bmatrix},\qquad\forall i\in[1...m] \tag{11}\] To facilitate task prioritization, hyper-parameters \(\alpha_{1},...,\alpha_{m}\) are introduced such that \(\sum_{i=1}^{m}\alpha_{i}=1\) and \(0\leq\alpha_{i}\leq 1\). Coefficient \(\alpha_{i}\) indicates the significance of classification task \(T_{i}\). The complete vector used for computing the partitioning is then calculated as \[\mathbf{v}_{tot}=\sum_{i=1}^{m}\alpha_{i}\mathbf{v}_{i}=\begin{bmatrix}\sum_{i=1}^{m}\alpha_{i}(s_{1}^{i}-y_{1}^{i})\\ \sum_{i=1}^{m}\alpha_{i}(s_{2}^{i}-y_{2}^{i})\\ \vdots\\ \sum_{i=1}^{m}\alpha_{i}(s_{|D|}^{i}-y_{|D|}^{i})\end{bmatrix} \tag{12}\] In the above formulation, each row corresponds to a unique individual and captures its role in all classification tasks. Let \(\mathbf{v}_{tot}[u_{i}]\) denote the entry corresponding to \(u_{i}\) in \(\mathbf{v}_{tot}\). Then the split objective function in Eq. 9 becomes \[z_{k}=\big{|}\,|L_{k}|\times|\sum_{u_{i}\in L_{k}}\mathbf{v}_{tot}[u_{i}]|\,-\,|R_{k}|\times|\sum_{u_{i}\in R_{k}}\mathbf{v}_{tot}[u_{i}]|\,\big{|} \tag{13}\] and the optimal split point is selected as \[k^{*}=\arg\min_{k}\;z_{k} \tag{14}\] Vector aggregation is illustrated in Figure 5. **Theorem 5**. _For a given dataset \(D\), the required number of neighborhoods \(t\) and \(m\) classification tasks modelled by \(h_{1},..,h_{m}\), the computational complexity of Multi-Objective Fair KD-tree is \(\mathcal{O}(|D|\times\lceil\log(t)\rceil)+\sum_{i=1}^{m}\mathcal{O}(h_{i})\)._
Figure 4. Overview of Iterative Fair KD-tree algorithm.
Figure 5. Aggregation in Multi-Objective Fair KD-tree.
## 5. Experimental Evaluation ### Experimental Setup We use two real-world datasets provided by EdGap (K ### Mitigation Algorithms #### 5.3.1. Evaluation w.r.t. ENCE Metric ENCE is our primary evaluation metric that captures the amount of calibration error over neighborhoods. Recall that Fair KD-tree and its extension Iterative Fair KD-tree can work for any given classification ML model. We apply the algorithms with Logistic Regression, Decision Tree, and Naive Bayes classifiers to ensure diversity in models. We focus on student ACT performance following the prior work in (Krizhevsky et al., 2017) by setting the threshold to 22 for label generation. Figure 7 provides the results in Los Angeles and Houston on the EdGap dataset. The \(x\)-axis denotes the tree's height used in the algorithm. A larger height corresponds to a finer-grained partitioning. The \(y\)-axis is log-scale.
Figure 8. Performance Evaluation with respect to other indicators.
Figure 9. Impact of features on decision-making.
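Since ENCE (Definition 3) drives this part of the evaluation, here is a minimal sketch of its computation, with toy scores, labels, and neighborhood assignments constructed so that one neighborhood is well-calibrated and the other is not:

```python
import numpy as np

def ence(scores, labels, nbhd):
    """Expected Neighborhood Calibration Error (Definition 3):
    sum over neighborhoods of (|N_i| / |D|) * |o(N_i) - e(N_i)|."""
    scores, labels, nbhd = map(np.asarray, (scores, labels, nbhd))
    total = 0.0
    for n in np.unique(nbhd):
        m = nbhd == n
        total += m.mean() * abs(labels[m].mean() - scores[m].mean())
    return total

# toy check: one well-calibrated and one miscalibrated neighborhood
s = np.array([0.8, 0.8, 0.8, 0.8, 0.2, 0.2, 0.2, 0.2])
y = np.array([1,   1,   1,   0,   1,   1,   0,   0  ])
print(ence(s, y, nbhd=np.array([0, 0, 0, 0, 1, 1, 1, 1])))  # 0.175
```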
Figure 7 demonstrates that both Fair KD-tree and Iterative Fair KD-tree outperform the benchmarks by a significant margin. The improvement percentage increases as the number of neighborhoods increases, which is an advantage of our techniques, since finer spatial granularity is beneficial for most analysis tasks. The intuition behind this trend lies in the overall calibration of the model: given that the trained model is well-calibrated overall, dividing the space into a smaller number of neighborhoods is expected to achieve a calibration error closer to the overall model. This result supports Theorem 1, which states that ENCE is lower-bounded by the overall calibration of the model. Iterative Fair KD-tree behaves better, as confidence scores are updated on every tree level. The improvement achieved compared to Fair KD-tree comes at the expense of higher computational complexity. On average, Fair KD-tree achieves 45% better performance in terms of running time: the time taken for Fair KD-tree with 10 levels is 102 seconds, versus 189 seconds for the iterative version. #### 5.3.2. Evaluation w.r.t. other Indicators In Figure 8 we evaluate fairness with respect to three other key indicators: model accuracy, training miscalibration, and test miscalibration. We focus on logistic regression, one of the most widely adopted classification units. The accuracy of all algorithms follows a similar pattern and increases at greater tree heights. This is expected, as more geospatial information can be extracted at finer granularities. Figure 8(b) shows training miscalibration calculated for the overall model (a lower value of calibration error indicates better performance). Our proposed algorithms have calibration errors comparable to the benchmarks, even though their fairness is far superior. To better understand the underlying trends, Figure 9 provides the heatmap for the tree-based algorithms over 10 different tree heights. The contribution of each feature to decision-making is captured using a color code. One observation is that the model shifts focus to different features based on the height. Such sudden changes can impact the generated confidence scores and, subsequently, the overall calibration of the model. As an example, consider the median KD-tree algorithm at the height of 8 in Los Angeles (Figure 8(b)): there is a sudden drop in training calibration, which can be explained by looking at the corresponding heat map in Figure 9(a). At the height of 8, the influential features on decision-making differ from those at heights 4, 6, and 10, leading to the fluctuation in the model calibration. ### Performance of Multi-Objective Approach When multi-objective criteria are used, we need a methodology to unify the geospatial boundaries generated by each task. Our proposed multi-objective fair partitioning predicated on Fair KD-trees addresses exactly this problem. In our experiments, we use the two criteria of ACT scores and employment percentage of families as the two objectives used for partitioning. These features are separated from the training dataset in the pre-processing phase and are used to generate labels. The threshold for ACT is selected as before (22), and the threshold for label generation based on family employment is set to 10 percent. Figure 10 presents the results of the Multi-Objective Fair KD-tree (to simplify chart notation, we use the 'Fair KD-tree' label). We choose an \(\alpha\) value of 0.5 to give equal weight to both objectives.
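A minimal sketch of the Eq. (11)-(12) aggregation these experiments rely on, with toy scores and labels standing in for the two tasks:

```python
import numpy as np

def aggregate(score_sets, label_sets, alphas):
    """Eq. (11)-(12): per-individual aggregation of the (s - y) gaps
    across m tasks, weighted by task-priority coefficients alpha_i."""
    alphas = np.asarray(alphas, dtype=float)
    assert np.isclose(alphas.sum(), 1.0)
    return sum(a * (np.asarray(s) - np.asarray(y))
               for a, s, y in zip(alphas, score_sets, label_sets))

# toy: two tasks (ACT scores and family employment), equal weights
s1, y1 = np.array([0.9, 0.4, 0.7]), np.array([1, 0, 1])
s2, y2 = np.array([0.3, 0.6, 0.5]), np.array([0, 1, 0])
print(aggregate([s1, s2], [y1, y2], alphas=[0.5, 0.5]))   # v_tot per individual
```

The resulting v_tot vector then replaces the single-task gaps in the split objective of Eq. (13).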
We emphasize that the output of the Multi-Objective Fair KD-tree is a single non-overlapping partitioning of the space representing neighborhoods. Once the neighborhoods are generated, we show the performance with respect to each objective function, i.e., ACT and employment. The first row of the figure shows the performance for varying tree heights in Los Angeles, and the second row corresponds to Houston. The proposed algorithm improves fairness for both objective functions, and the margin of improvement increases as the height of the tree increases. Figure 10. Performance evaluation of multi-objective algorithm. ## 6. Conclusion We proposed an indexing-based technique that achieves spatial group fairness in machine learning. Our technique performs a partitioning of the data domain in a way that takes into account not only geographical features, but also calibration error. Extensive evaluation results on real data show that the proposed technique is effective in reducing unfairness when training on location attributes, and also preserves data utility. In future work, we plan to further investigate custom split metrics for fairness-aware spatial indexing that take into account data distribution characteristics. We will also investigate alternative indexing structures, such as R\({}^{+}\) trees, that completely cover the data domain and provide superior clustering properties.
2303.15553
MoViT: Memorizing Vision Transformers for Medical Image Analysis
The synergy of long-range dependencies from transformers and local representations of image content from convolutional neural networks (CNNs) has led to advanced architectures and increased performance for various medical image analysis tasks due to their complementary benefits. However, compared with CNNs, transformers require considerably more training data, due to a larger number of parameters and an absence of inductive bias. The need for increasingly large datasets continues to be problematic, particularly in the context of medical imaging, where both annotation efforts and data protection result in limited data availability. In this work, inspired by the human decision-making process of correlating new evidence with previously memorized experience, we propose a Memorizing Vision Transformer (MoViT) to alleviate the need for large-scale datasets to successfully train and deploy transformer-based architectures. MoViT leverages an external memory structure to cache history attention snapshots during the training stage. To prevent overfitting, we incorporate an innovative memory update scheme, attention temporal moving average, to update the stored external memories with the historical moving average. For inference speedup, we design a prototypical attention learning method to distill the external memory into smaller representative subsets. We evaluate our method on a public histology image dataset and an in-house MRI dataset, demonstrating that MoViT, applied to varied medical image analysis tasks, can outperform vanilla transformer models across varied data regimes, especially in cases where only a small amount of annotated data is available. More importantly, MoViT can reach performance competitive with ViT using only 3.0% of the training data.
Yiqing Shen, Pengfei Guo, Jingpu Wu, Qianqi Huang, Nhat Le, Jinyuan Zhou, Shanshan Jiang, Mathias Unberath
2023-03-27T19:12:02Z
http://arxiv.org/abs/2303.15553v3
# MoViT: Memorizing Vision Transformers for Medical Image Analysis ###### Abstract The synergy of long-range dependencies from transformers and local representations of image content from convolutional neural networks (CNNs) has led to advanced architectures and increased performance for various medical image analysis tasks due to their complementary benefits. However, compared with CNNs, transformers require considerably more training data, due to a larger number of parameters and an absence of inductive bias. The need for increasingly large datasets continues to be problematic, particularly in the context of medical imaging, where both annotation efforts and data protection result in limited data availability. In this work, inspired by the human decision-making process of correlating new "evidence" with previously memorized "experience", we propose a Memorizing Vision Transformer (MoViT) to alleviate the need for large-scale datasets to successfully train and deploy transformer-based architectures. MoViT leverages an external memory structure to cache history attention snapshots during the training stage. To prevent overfitting, we incorporate an innovative memory update scheme, attention temporal moving average, to update the stored external memories with the historical moving average. For inference speedup, we design a prototypical attention learning method to distill the external memory into smaller representative subsets. We evaluate our method on a public histology image dataset and an in-house MRI dataset, demonstrating that MoViT, applied to varied medical image analysis tasks, can outperform vanilla transformer models across varied data regimes, especially in cases where only a small amount of annotated data is available. More importantly, MoViT can reach performance competitive with ViT using only 3.0% of the training data. In conclusion, MoViT provides a simple plug-in for transformer architectures which may contribute to reducing the training data needed to achieve acceptable models for a broad range of medical image analysis tasks. Keywords: Vision Transformer, External Memory, Prototype Learning, Insufficient Data. ## 1 Introduction With the advent of the Vision Transformer (ViT), transformers have gained increasing popularity in the field of medical image analysis [2], due to their capability of capturing long-range dependencies. However, ViT and its variants require considerably larger dataset sizes to achieve results competitive with convolutional neural networks (CNNs), due to larger model sizes and the absence of convolutional inductive bias [2, 10]. Indeed, ViT performs worse than ResNet [4], a model of similar capacity, on the ImageNet benchmark [11] unless it is pre-trained on JFT-300M [12], a large-scale dataset with 303 million weakly annotated natural images. The drawback of requiring exceptionally large datasets prevents transformer-based architectures from fully realizing their potential in the medical image analysis context, where data collection and annotation continue to pose considerable challenges. To capitalize on the benefits of transformer-based architectures for medical image analysis, we seek to develop an effective ViT framework capable of performing competitively even when only comparatively small datasets are available. In the literature, the problematic requirement for large data is partly alleviated by extra supervisory signals. Data-efficient Image Transformer (DeiT), for example, distills hard labels from a strong teacher transformer [14].
Unfortunately, this approach only applies to problems where a data-costly, high-capacity teacher transformer can be developed. Moreover, DeiT enables the training of student transformers only on mid-size datasets, between 10k and 100k samples, and the performance dramatically declines when the data scale is small [14]. Concurrently, another line of work attempts to introduce the shift, scale, and distortion invariance properties from CNNs to transformers, resulting in a series of hybrid architecture designs [9, 17, 19, 21]. To give a few examples, Van _et al._ fed the features extracted by CNNs into a transformer for multi-view fusion in COVID diagnosis [15]. Barhoumi _et al._ extended a single CNN to multiple CNNs for feature extraction before fusion by a transformer [1]. Importantly, they note that pre-training on ImageNet is still required to fuse convolutional operations with self-attention mechanisms, particularly in the medical context [1]. Yet, pre-training on large-scale medical datasets is practically unaffordable, due to the absence of centralized datasets as well as privacy regulations. Our development to combat the need for large data to train transformer models is loosely inspired by the process that clinicians use when learning how to diagnose medical images from a comparatively limited number of cases. To mimic this human decision-making process, where new information or "evidence" is often conceptually correlated with previously memorized facts or "experience", we present the _Memorizing Vision Transformer_ (MoViT) for efficient medical image analysis. MoViT introduces external memory, allowing the transformer to access previously memorized experience, _i.e._, the keys and values generated by the self-attention heads during training. In the inference stage, the external memory then enhances the instance-level attention by looking up the correlated memorized facts. Introducing external memory enables long-range context to be captured through attention similar to language modeling, which provides supplementary attention to the current ViT and its variants [18, 6, 3]. The contributions of this paper are three-fold, summarized as follows. (1) A novel _Memorizing Vision Transformer_ (MoViT), which introduces storage for past attention cues by caching them into external memory, without introducing additional trainable parameters. (2) A new approach to updating the memory using an _Attention Temporal Moving Average_ scheme, which accumulates attention snapshots and optimizes data in the external memory dynamically. In contrast, previous work, such as [18], is restricted to a random dropping-out scheme to keep a fixed amount of externally memorized events. (3) A new _post hoc_ scheme, _Prototypical Attention Learning_, to distill the large-scale cached data into a representative prototypical subset, which accelerates computation during inference. Experiments are carried out across different modalities, _i.e._, magnetic resonance (MRI) and histopathological images, demonstrating superior performance to vanilla transformer models across all data regimes, especially when only small amounts of training samples are available. ## 2 Methods **Memorizing Vision Transformer.** The Memorizing Vision Transformer (MoViT) accumulates snapshots of attention cues, _i.e._, the
key \(k\) and value \(v\) generated by the attention heads as the memorized "experience", and caches them to an external memory bank in the form of an indexed triplet (\(ID\), \(k\), \(v\)) during the training process. The enumerated index \(ID\) of the data sample and the generated attention fact (\(k,v\)) are prepared for an efficient lookup and update in the subsequent Attention Temporal Moving Average (ATMA) scheme. Importantly, the gradients are not back-propagated into the memory bank, and thus the caching operation adds only a slight training overhead. This approach is practical in that the proposed MoViT can be easily plugged into any Vision Transformer (ViT) or its variants, by replacing one or multiple vanilla transformer blocks with MoViT blocks. An overview of the MoViT framework is presented in Fig. 1. Figure 1: An overview of the proposed Memorizing Vision Transformer (MoViT). **Attention Temporal Moving Average.** To remove stale memorized "experience" [16], and also to prevent overfitting to the training set, we introduce a novel Attention Temporal Moving Average (ATMA) strategy to update the external memory. Current approaches routinely employ a fixed capacity to store triplets, where outdated cached memory triplets are dropped and new facts are taken in randomly. Different from this approach, we improve the mechanism by accumulating all the past snapshots _w.r.t._ index \(ID\), with the introduction of an Exponential Moving Average update [13]. The outdated cached "experience" (\(k_{\text{old}},v_{\text{old}}\)) _w.r.t._ index \(ID\) denotes the fact generated in the previous epoch, and is updated by the subsequently generated facts (\(k_{\text{generated}},v_{\text{generated}}\)) in the proposed ATMA according to: \[\begin{cases}k_{\text{new}}=\alpha_{k}\cdot k_{\text{generated}}+(1-\alpha_{k })\cdot k_{\text{old}},\\ v_{\text{new}}=\alpha_{v}\cdot v_{\text{generated}}+(1-\alpha_{v})\cdot v_{ \text{old}},\end{cases} \tag{1}\] where the subscript "new" denotes the updated facts in the external memory, and \(\alpha_{k}\), \(\alpha_{v}\) are the friction terms. In the smoothing process, both coefficients uniformly follow the ramp-down scheme [8] for a steady update. Specifically, the coefficients are subject to the current training epoch number \(t\), _i.e._, \[\alpha=\begin{cases}1-\alpha_{0}\cdot\exp\left(-t_{0}\left(1-\frac{t}{t_{0}}\right)^{2}\right),&t\leq t_{0},\\ 1-\alpha_{0},&t>t_{0},\end{cases} \tag{2}\] with \(\alpha_{0}=0.01\) and \(t_{0}\) set to 10% of the total training epochs as in previous work [13]. The number of stored facts \(M\) is determined by the dataset scale and the network architecture, _i.e._, \(M=\#(\text{training samples})\times\#(\text{attention heads})\), where the number of attention heads is often empirically set in the range of 3–12 [2], leading to a bounded \(M\). #### 2.0.2 Prototypical Attention Learning. We write all the cached experience from the training stage of MoViT as \(\mathcal{F}=\{(k_{i},v_{i})\}_{i=1}^{M}\). Then, prototypical attention facts refer to a small number of representative facts that describe \(\mathcal{F}\), _i.e._, \(\mathcal{P}=\{(k_{i}^{p},v_{i}^{p})\}_{i=1}^{P}\), where \(P\) represents the total number of prototypes. To distill the externally memorized facts into representative prototypes for efficient inference, we introduce Prototypical Attention Learning (PAL), which is applied _post hoc_ to the external memory after model training.
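To make the caching and update mechanics concrete, the following is a minimal Python sketch of an external memory bank with the ATMA update of Eqs. (1)–(2). This is our own illustration rather than the authors' released code: the class and method names are hypothetical, it assumes PyTorch-style tensors (only for `.detach()`), and it uses a single coefficient shared between keys and values.

```python
import math

class AttentionMemoryBank:
    """Minimal sketch of the external memory with the ATMA update of
    Eqs. (1)-(2). Cached tensors are stored detached (no gradients)."""

    def __init__(self, alpha0=0.01, total_epochs=100):
        self.bank = {}                      # ID -> (k, v) triplet storage
        self.alpha0 = alpha0
        self.t0 = 0.1 * total_epochs        # ramp-down horizon, as in [13]

    def _alpha(self, epoch):
        # Eq. (2): ramp-down coefficient (shared here by keys and values)
        if epoch <= self.t0:
            return 1 - self.alpha0 * math.exp(-self.t0 * (1 - epoch / self.t0) ** 2)
        return 1 - self.alpha0

    def update(self, sample_id, k_gen, v_gen, epoch):
        a = self._alpha(epoch)
        if sample_id in self.bank:          # Eq. (1): temporal moving average
            k_old, v_old = self.bank[sample_id]
            k_new = a * k_gen + (1 - a) * k_old
            v_new = a * v_gen + (1 - a) * v_old
        else:                               # first visit: cache directly
            k_new, v_new = k_gen, v_gen
        self.bank[sample_id] = (k_new.detach(), v_new.detach())
```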
To identify the prototype keys from the cached keys \(\{k_{i}\}_{i=1}^{M}\), we leverage the Maximum Mean Discrepancy (MMD) metric [7] to measure the discrepancy between two distributions. Subsequently, the objective in PAL is steered toward minimizing the MMD metric, _i.e._, \[MMD^{2}=\frac{1}{P^{2}}\sum_{i,j=1}^{P}D(k_{i}^{p},k_{j}^{p})-\frac{2}{PM}\sum _{i,j=1}^{P,M}D(k_{i}^{p},k_{j})+\frac{1}{M^{2}}\sum_{i,j=1}^{M}D(k_{i},k_{j}), \tag{3}\] where \(D(\cdot,\cdot)\) denotes the cosine similarity. We employ a greedy search to find \(\{k_{i}^{p}\}_{i=1}^{P}\) minimizing Eq. (3). To integrate all information after deriving the prototype keys, we leverage a weighted average to derive the associated \(\{v_{i}^{p}\}_{i=1}^{P}\), _i.e._, \[v_{i}^{p}=\sum_{j=1}^{M}w_{j,i}v_{j}\ \ \text{with}\ w_{j,i}=\frac{\exp(D(v_{j},v_{i }^{p})/\tau)}{\sum_{k=1}^{M}\exp(D(v_{k},v_{i}^{p})/\tau)}, \tag{4}\] where the weights \(w_{j,i}\) are normalized by the softmax operation, and the temperature \(\tau\) is a hyper-parameter determining the sharpness of the normalization. #### 2.0.2 Inference Stage. To apply the attention facts \(\mathcal{F}\) or prototypes \(\mathcal{P}\) stored in the external memory during inference, approximate \(k\)-nearest-neighbor (kNN) search is employed to look up the top \(k\) pairs of (key, value) _w.r.t._ the given local queries. In this way, the same batch of queries generated from the test sample is used for both the multi-head self-attentions and the external memory retrievals. With the retrieved keys, the attention matrix is derived by computing the softmax-normalized dot product with each query. Afterwards, we use the attention matrix to compute a weighted sum over the retrieved values. The results from attending to the local context and to the external memories are combined using a learned gate scheme [18]. ## 3 Experiments #### 3.0.1 Datasets. Evaluations are performed on two datasets curated from different modalities. (1) Histology Image Dataset: _NCT-CRC-HE-100K_ is a public Hematoxylin & Eosin (H&E) stained histology image dataset with \(100,000\) patches without overlap, curated from \(N=86\) colorectal cancer samples [5]. All RGB images are scaled to \(224\times 224\) pixels at a magnification of \(20\times\). To simulate various data availability conditions, some experiments use a subset of _NCT-CRC-HE-100K_ as the training set. In terms of the test set, an external public dataset, _CRC-VAL-HE-7K_, with \(7180\) patches from \(N=50\) patients is employed. This dataset was designed to classify the nine tissue categories from histology image patches, and we use the top-1 test accuracy as the evaluation metric. (2) MRI Dataset: This in-house dataset includes \(147\) scans with malignant gliomas curated from \(N=92\) patients. All data has been deidentified properly to comply with the Institutional Review Board (IRB). Each scan contains five MRI sequences, namely T1-weighted (T1w), T2-weighted (T2w), fluid-attenuated inversion recovery (FLAIR), gadolinium enhanced T1-weighted (Gd-T1w), and amide proton transfer-weighted (APTw). Labels are generated manually at the slice level by experienced radiologists. The corresponding slices from each scan are concatenated after co-registration and z-score normalization, resulting in an input size of \(256\times 256\times 5\). A proportion of \(80\%\) of the patients is assigned to the training set, and the remaining \(20\%\) to the test set, _i.e._, \(1770\) training samples and \(435\) test samples.
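Looking back at the PAL step of Eqs. (3)–(4) before turning to the results, here is a minimal NumPy sketch of the greedy MMD search and the softmax-weighted value aggregation. It is our own simplified rendering, not the paper's implementation: the similarity matrices are assumed precomputed, and similarities of cached facts to the already-chosen prototype keys stand in for the similarity term in Eq. (4).

```python
import numpy as np

def greedy_mmd_prototypes(K, P):
    """Sketch of the greedy search for Eq. (3): pick P prototype keys whose
    distribution best matches all M cached keys under the MMD objective.
    K: (M, M) pairwise similarity matrix D(k_i, k_j), e.g. cosine."""
    M = K.shape[0]
    selected = []
    for _ in range(P):
        best_j, best_mmd = None, np.inf
        for j in range(M):
            if j in selected:
                continue
            S = selected + [j]
            n = len(S)
            # Eq. (3); the last term is constant in S, kept for clarity
            mmd2 = (K[np.ix_(S, S)].sum() / n**2
                    - 2.0 * K[S, :].sum() / (n * M)
                    + K.sum() / M**2)
            if mmd2 < best_mmd:
                best_j, best_mmd = j, mmd2
        selected.append(best_j)
    return selected

def prototype_values(V, sim_to_protos, tau=0.5):
    """Eq. (4): softmax-weighted average of cached values per prototype.
    V: (M, d) cached values; sim_to_protos: (M, P) similarities."""
    W = np.exp(sim_to_protos / tau)
    W = W / W.sum(axis=0, keepdims=True)    # normalize over the M cached facts
    return W.T @ V                          # (P, d) prototype values
```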
This is a binary classification task to distinguish malignant gliomas from normal tissue. We use accuracy, area under the ROC curve (AUC), precision, recall, and F1-score as the evaluation metrics. **Implementations.** All experiments are performed on one NVIDIA GeForce RTX 3090 GPU with 24 GB memory. An AdamW optimizer is used with a Cosine Annealing learning rate scheduler, where the initial learning rates are \(2\times 10^{-3}\) for the MRI dataset and \(5\times 10^{-4}\) for the Histology image dataset, with the maximum number of training epochs set to 100. We plug the MoViT into the last layer of ViT-Tiny, _i.e._, 12 transformer layers with 3 attention heads, and set \(k=32\) for the kNN lookup. In PAL, we set the number of prototypes \(P=\#(\text{classes})\times 32\), and the temperature \(\tau=0.5\) for Eq. (4). Comparisons are made to Memorizing Transformer (MT) [18], DeiT [14], and ProtoPFormer, _i.e._, a prototypical part network framework for ViT [20], where the vanilla ViT is regarded as the baseline. To compute the mean and standard deviation of the metrics, all models are trained from scratch for seven random runs. \begin{table} \begin{tabular}{l|c c c} \hline \hline Method & ViT-Tiny & ViT-Small & ViT-Base \\ \hline Baseline & \(96.462_{\pm 0.213}\) & \(95.850_{\pm 0.503}\) & \(94.231_{\pm 0.511}\) \\ MT [18] & \(96.511_{\pm 0.312}\) & \(96.621_{\pm 0.108}\) & \(95.102_{\pm 0.272}\) \\ DeiT [14] & \(96.439_{\pm 0.331}\) & \(96.216_{\pm 0.213}\) & \(93.246_{\pm 0.259}\) \\ ProtoPFormer [20] & \(96.712_{\pm 0.521}\) & \(96.032_{\pm 0.364}\) & \(93.002_{\pm 0.752}\) \\ \hline MoViT (Ours) & \(\mathbf{97.792}_{\pm 0.293}\) & \(\mathbf{97.326}_{\pm 0.138}\) & \(\mathbf{95.989}_{\pm 0.205}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison trained on the entire dataset in terms of test accuracy (%). We employ three ViT configurations, _i.e._, ViT-Tiny, ViT-Small, and ViT-Base. Figure 2: Performance comparisons of MoViT plugged into the last layer of ViT-Tiny with counterparts, using a varying number of training samples from the Histology image dataset (0.1% to 1% data employed). The solid line and shadow regions represent the average and the standard deviation of test accuracy computed from seven random runs, respectively. The dashed line denotes the performance of the baseline (vanilla ViT) trained with the entire dataset, regarded as the performance upper bound. #### 3.2.2 Results on Histology Image Dataset. To simulate the case where only small data is available, as in many medical image analysis tasks, we use a limited proportion of the training set, across varied data regimes, and use the whole _NCT-CRC-HE-100K_ as the test set for a fair comparison. As shown in Fig. 2, MoViT improves over the baseline at any data scale, especially when the number of samples is particularly small, _i.e._, 0.1%, and we observe a similar trend for larger proportions of the data between 1%–100%. Notably, our method can achieve a close margin to the entire-dataset-trained model (96.462%\(\pm\)0.213%) using only 1.0% data (94.927%\(\pm\)0.378%), and a competitive performance (96.341%\(\pm\)0.201%) with 3.0% data. Additionally, our approach also significantly reduces the performance fluctuations, _i.e._, the standard deviation, leading to more stable performance. For example, the standard deviation of vanilla ViT is 20.901% when trained with 0.1% data, versus 5.452% for ours, _i.e._, approximately four times smaller.
Moreover, our method can consistently outperform the state-of-the-art data-efficient transformer DeiT [14] and the pure prototype learning method ProtoPFormer [20]. We notice that Memorizing Transformer (MT) [18] performs worse than the baseline despite achieving almost 100% training accuracy, and the gap becomes significant with \(0.1\%-0.4\%\) data, which we attribute to overfitting. The large margin between the performance of MT and MoViT implies that ATMA and PAL can alleviate the overfitting issues during the memorization of the facts. A performance comparison is also conducted on the entire training set, _i.e._, using 100% of _NCT-CRC-HE-100K_ as the training set, with different ViT configurations. In Table 1, our method consistently outperforms its counterparts by a large margin, which demonstrates its applicability and scalability to large datasets. This suggests that, as a by-product, MoViT scales well across a wide range of data scales. The averaged training times per epoch on ViT-Tiny are 162.61(s) for baseline ViT, 172.22(s) for MT, 109.8(s) for DeiT, 639.4(s) for ProtoPFormer, and 171.49(s) for our approach. Our method can boost performance with a reduced training data scale. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline Method & Accuracy(\(\uparrow\)) & AUC(\(\uparrow\)) & Precision(\(\uparrow\)) & Recall(\(\uparrow\)) & F1-score(\(\uparrow\)) \\ \hline Baseline & 74.01\({}_{\pm 0.40}\) & 80.62\({}_{\pm 0.40}\) & 57.64\({}_{\pm 0.66}\) & 79.01\({}_{\pm 0.68}\) & 66.64\({}_{\pm 0.54}\) \\ MT [18] & 77.92\({}_{\pm 0.36}\) & 84.83\({}_{\pm 0.40}\) & 62.54\({}_{\pm 0.68}\) & 81.84\({}_{\pm 0.64}\) & 70.95\({}_{\pm 0.56}\) \\ DeiT [14] & 78.63\({}_{\pm 0.40}\) & 85.65\({}_{\pm 0.39}\) & 63.07\({}_{\pm 0.67}\) & 84.68\({}_{\pm 0.56}\) & 72.20\({}_{\pm 0.54}\) \\ ProtoPFormer [20] & 77.53\({}_{\pm 0.40}\) & 85.74\({}_{\pm 0.38}\) & 61.12\({}_{\pm 0.67}\) & 86.07\({}_{\pm 0.53}\) & 71.54\({}_{\pm 0.55}\) \\ \hline Ours & **82.05\({}_{\pm 0.36}\)** & **88.38\({}_{\pm 0.30}\)** & **65.94\({}_{\pm 0.66}\)** & **94.43\({}_{\pm 0.36}\)** & **77.67\({}_{\pm 0.47}\)** \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative comparison on the MRI dataset. MoViT achieves the highest performance across all metrics, further suggesting its ability to perform well in applications where limited data is available. **Results on MRI Dataset.** As depicted in Table 2, our proposed MoViT achieves the highest performance in terms of all metrics on the MRI dataset, where the dataset scale is, by nature, relatively small. Specifically, MoViT improves the AUC by a margin of 0.026 over the state-of-the-art transformer, _i.e._, the 0.857 achieved by ProtoPFormer, and achieves better performance (AUC of 0.821) than the baseline using only 30% of the training data. Empirically, our method is also superior to the other methods in generalization ability. **Ablation Study.** To investigate the contribution of each functional block, ablation studies are performed on the MRI dataset. As shown in Table 3, the proposed MoViT benefits from both ATMA and PAL. Although each module alone brings a similar AUC improvement of 0.010 to 0.013, excluding both modules leads to an AUC decline of 0.022. Collectively, the reported results suggest the effectiveness and indispensability of ATMA and PAL. ## 4 Conclusion In conclusion, we show that the use of memory in transformer architectures is beneficial for reducing the amount of training data needed to train generalizable transformer models.
The reduction in data needs is particularly appealing in the context of medical image analysis, where large-scale data continues to pose challenges. Our model, the Memorizing Vision Transformer (MoViT) for medical image analysis, caches and updates relevant key and value pairs during training, and then uses them to enrich the attention context at the inference stage. MoViT's implementation is straightforward: it can easily be plugged into various transformer models to achieve performance competitive with vanilla ViT using much less training data. Consequently, our method has the potential to benefit a broad range of applications in the medical image analysis context. Future work includes a hybrid of MoViT with convolutional neural networks for more comprehensive feature extraction. \begin{table} \begin{tabular}{l c|c c c c c} \hline \hline ATMA & PAL & Accuracy(\(\uparrow\)) & AUC(\(\uparrow\)) & Precision(\(\uparrow\)) & Recall(\(\uparrow\)) & F1-score(\(\uparrow\)) \\ \hline & & 78.63\(\pm\)0.39 & 86.14\(\pm\)0.38 & 62.75\(\pm\)0.68 & 86.07\(\pm\)0.60 & 72.62\(\pm\)0.54 \\ & ✓ & 79.35\(\pm\)0.34 & 87.43\(\pm\)0.31 & 61.92\(\pm\)0.63 & 96.55\(\pm\)0.31 & 75.57\(\pm\)0.49 \\ ✓ & & 79.54\(\pm\)0.38 & 87.13\(\pm\)0.32 & 62.27\(\pm\)0.66 & 95.85\(\pm\)0.31 & 75.49\(\pm\)0.48 \\ ✓ & ✓ & 82.05\(\pm\)0.36 & 88.38\(\pm\)0.30 & 65.94\(\pm\)0.66 & 94.43\(\pm\)0.36 & 77.67\(\pm\)0.47 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablations on the MRI dataset with MoViT-Tiny as the backbone. We can observe that the exclusion of either ATMA or PAL results in decreased performance to varying degrees.
2304.11792
Mixed radial-angular bounds for Hardy-type operators on Heisenberg groups
In this paper, we will study the $n$-dimensional Hardy operator and its dual in mixed radial-angular spaces on Heisenberg groups and obtain their sharp bounds by using the rotation method. Furthermore, the sharp bounds of the $n$-dimensional weighted Hardy operator and the weighted Ces\`{a}ro operator are also obtained.
Zhongci Hang, Xiang Li, Dunyan Yan
2023-04-24T02:34:25Z
http://arxiv.org/abs/2304.11792v1
# Mixed radial-angular bounds for Hardy-type operators on Heisenberg groups Zhongci Hang, Xiang Li, Dunyan Yan **Abstract:** In this paper, we will study the \(n\)-dimensional Hardy operator and its dual in mixed radial-angular spaces on Heisenberg groups and obtain their sharp bounds by using the rotation method. Furthermore, the sharp bounds of the \(n\)-dimensional weighted Hardy operator and the weighted Cesàro operator are also obtained. + Footnote †: _Key words and phrases_: Hardy operator; dual operator; weighted Hardy operator; weighted Cesàro operator; mixed radial-angular space; Heisenberg group. _2020 Mathematics Subject Classification_: Primary 42B25; Secondary 42B20, 47H60, 47B47. ## 1 Introduction The classic Hardy operator and its dual operator are defined by \[H(f)(x):=\frac{1}{x}\int_{0}^{x}f(y)dy,\quad H^{*}(f)(x):=\int_{x}^{\infty} \frac{f(y)}{y}dy,\] for a locally integrable function \(f\) on \(\mathbb{R}\) and \(x\neq 0.\) The classic Hardy operator was introduced by Hardy [1], who showed the following Hardy inequalities \[\|H(f)\|_{L^{p}}\leq\frac{p}{p-1}\|f\|_{L^{p}},\quad\|H^{*}(f)\|_{L^{p}}\leq p \|f\|_{L^{p}},\] where \(1<p<\infty\) and the constants \(\frac{p}{p-1}\) and \(p\) are best possible. Faris [2] first extended the Hardy-type operator to higher dimensions; Christ and Grafakos [3] gave an equivalent version of the \(n\)-dimensional Hardy operator \(\mathcal{H}\) for nonnegative functions \(f\) on \(\mathbb{R}^{n}\), \[\mathcal{H}f(x):=\frac{1}{\Omega_{n}|x|^{n}}\int_{|y|<|x|}f(y)dy,\quad x\in\mathbb{ R}^{n}\backslash\{0\},\] where \(\Omega_{n}=\frac{\pi^{\frac{n}{2}}}{\Gamma(1+\frac{n}{2})}\) is the volume of the unit ball in \(\mathbb{R}^{n}\). By direct computation, the dual operator of \(\mathcal{H}\) can be defined by setting, for a nonnegative function \(f\) on \(\mathbb{R}^{n}\), \[\mathcal{H}^{*}(f)(x):=\int_{|y|\geq|x|}\frac{f(y)}{\Omega_{n}|y|^{n}}dy,\quad x\in \mathbb{R}^{n}\backslash\{0\}.\] Christ and Grafakos [3] proved that the norms of \(\mathcal{H}\) and \(\mathcal{H}^{*}\) on \(L^{p}(\mathbb{R}^{n})\) \((1<p<\infty)\) are also \(\frac{p}{p-1}\) and \(p\), the same as in the 1-dimensional case and likewise independent of the dimension. The sharp weak estimate for \(\mathcal{H}\) was obtained by Zhao et al. [4]: for \(1\leq p\leq\infty\), \[\|\mathcal{H}(f)\|_{L^{p,\infty}}\leq 1\times\|f\|_{L^{p}},\] where 1 is the best constant. In recent years, research on problems related to the Hardy operator has received increasing attention. In [5], Hardy et al. provided the early development and applications of Hardy's inequalities. In this paper, we will investigate the sharp bounds for Hardy-type operators in the setting of the Heisenberg group, which plays an important role in several branches of mathematics. Now let us introduce some basic facts about the Heisenberg group which will be used in the following. The Heisenberg group \(\mathbb{H}^{n}\) is a non-commutative nilpotent Lie group with underlying manifold \(\mathbb{R}^{2n}\times\mathbb{R}\), equipped with the group law \[x\circ y=\left(x_{1}+y_{1},\ldots,x_{2n}+y_{2n},x_{2n+1}+y_{2n+1}+2\sum_{j=1 }^{n}\left(y_{j}x_{n+j}-x_{j}y_{n+j}\right)\right)\] and the dilations \[\delta_{r}\left(x_{1},x_{2},\ldots,x_{2n},x_{2n+1}\right)=\left(rx_{1},rx_{2}, \ldots,rx_{2n},r^{2}x_{2n+1}\right),\quad r>0,\] where \(x=(x_{1},\cdots,x_{2n},x_{2n+1})\) and \(y=(y_{1},\cdots,y_{2n},y_{2n+1})\).
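As a quick, self-contained illustration of these conventions (our own addition, not part of the paper), the following NumPy sketch implements the group law and the dilations, and numerically checks that \(\delta_{r}\) is a group automorphism and that the homogeneous norm \(|\cdot|_{h}\) introduced just below scales as \(|\delta_{r}x|_{h}=r|x|_{h}\).

```python
import numpy as np

n = 2  # the Heisenberg group H^n lives on R^(2n+1)

def group_law(x, y):
    """x o y on H^n: Euclidean sum in the first 2n coordinates, plus a
    symplectic correction term in the last coordinate."""
    z = x[:2*n] + y[:2*n]
    t = (x[-1] + y[-1]
         + 2 * np.sum(y[:n] * x[n:2*n] - x[:n] * y[n:2*n]))
    return np.append(z, t)

def dilation(r, x):
    """delta_r: scale the first 2n coordinates by r, the last by r^2."""
    return np.append(r * x[:2*n], r**2 * x[-1])

def norm_h(x):
    """Homogeneous norm |x|_h = [ (sum_i x_i^2)^2 + x_{2n+1}^2 ]^(1/4)."""
    return (np.sum(x[:2*n]**2)**2 + x[-1]**2) ** 0.25

rng = np.random.default_rng(0)
x, y = rng.normal(size=2*n+1), rng.normal(size=2*n+1)
r = 1.7
# delta_r is an automorphism: delta_r(x o y) = delta_r(x) o delta_r(y)
assert np.allclose(dilation(r, group_law(x, y)),
                   group_law(dilation(r, x), dilation(r, y)))
# the norm is homogeneous of degree 1: |delta_r x|_h = r |x|_h
assert np.isclose(norm_h(dilation(r, x)), r * norm_h(x))
```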
The Haar measure on \(\mathbb{H}^{n}\) coincides with the usual Lebesgue measure on \(\mathbb{R}^{2n+1}\). We denote the measure of any measurable set \(E\subset\mathbb{H}^{n}\) by \(|E|\). Then \[|\delta_{r}(E)|=r^{Q}|E|,\quad d(\delta_{r}x)=r^{Q}dx,\] where \(Q=2n+2\) is called the homogeneous dimension of \(\mathbb{H}^{n}\). The Heisenberg distance, derived from the norm \[|x|_{h}=\left[\left(\sum_{i=1}^{2n}x_{i}^{2}\right)^{2}+x_{2n+1}^{2}\right]^{ 1/4},\] where \(x=(x_{1},x_{2},\cdots,x_{2n},x_{2n+1})\), is given by \[d(p,q)=d\left(q^{-1}p,0\right)=\left|q^{-1}p\right|_{h}.\] This distance \(d\) is left-invariant in the sense that \(d(p,q)\) remains unchanged when \(p\) and \(q\) are both left-translated by some fixed vector on \(\mathbb{H}^{n}\). Furthermore, \(d\) satisfies the triangle inequality [10] \[d(p,q)\leq d(p,x)+d(x,q),\quad p,x,q\in\mathbb{H}^{n}.\] For \(r>0\) and \(x\in\mathbb{H}^{n}\), the ball and sphere with center \(x\) and radius \(r\) on \(\mathbb{H}^{n}\) are given by \[B(x,r)=\left\{y\in\mathbb{H}^{n}:d(x,y)<r\right\}\] and \[S(x,r)=\left\{y\in\mathbb{H}^{n}:d(x,y)=r\right\},\] respectively, and we have \[|B(x,r)|=|B(0,r)|=\Omega_{Q}r^{Q},\] where \[\Omega_{Q}=\frac{2\pi^{n+\frac{1}{2}}\Gamma(n/2)}{(n+1)\Gamma(n)\Gamma((n+1)/2)}\] is the volume of the unit ball \(B(0,1)\) on \(\mathbb{H}^{n}\) and \(\omega_{Q}=Q\Omega_{Q}\) (see [11]). More about the Heisenberg group can be found in [6], [8] and [9]. The \(n\)-dimensional Hardy operator and its dual operator on the Heisenberg group were defined by Wu and Fu [7]: \[\mathcal{H}_{h}f(x):=\frac{1}{\Omega_{Q}|x|_{h}^{Q}}\int_{|y|_{h}<|x|_{h}}f(y)dy,\quad\mathcal{H}_{h}^{*}f (x):=\int_{|y|_{h}\geq|x|_{h}}\frac{f(y)}{\Omega_{Q}|y|_{h}^{Q}}dy, \tag{1}\] where \(x\in\mathbb{H}^{n}\backslash\{0\}\) and \(f\) is a locally integrable function on \(\mathbb{H}^{n}\). They proved that \(\mathcal{H}_{h}\) and \(\mathcal{H}_{h}^{*}\) are bounded from \(L^{p}(\mathbb{H}^{n})\) to \(L^{p}(\mathbb{H}^{n})\), \(1<p\leq\infty\). Moreover, \[\|\mathcal{H}_{h}\|_{L^{p}(\mathbb{H}^{n})\to L^{p}(\mathbb{H}^{n})}=\frac{p}{p-1},\quad\|\mathcal{H}_{h}^{*}\|_{L^{p}(\mathbb{H}^{n})\to L^{p}(\mathbb{H}^{n})}=p. \tag{2}\] This is the same as the result on \(\mathbb{R}^{n}\). In [14], Chu et al. defined the \(n\)-dimensional weighted Hardy operator on the Heisenberg group \(\mathcal{H}_{hw}\) and the \(n\)-dimensional weighted Cesàro operator on the Heisenberg group \(\mathcal{H}_{hw}^{*}\). Let us recall their definitions. **Definition 1**.: _Let \(w:[0,1]\rightarrow[0,\infty)\) be a function and \(f\) a measurable function on \(\mathbb{H}^{n}\)._
_The \(n\)-dimensional weighted Hardy operator on the Heisenberg group \(\mathcal{H}_{hw}\) is defined by_ \[\mathcal{H}_{hw}f(x):=\int_{0}^{1}f(\delta_{t}x)w(t)dt,\quad x\in\mathbb{H}^{n}.\] **Definition 2**.: _For a nonnegative function \(w:[0,1]\rightarrow(0,\infty)\) and a measurable complex-valued function \(f\) on \(\mathbb{H}^{n}\), the \(n\)-dimensional weighted Cesàro operator is defined by_ \[\mathcal{H}^{*}_{hw}f(x):=\int_{0}^{1}\frac{f(\delta_{1/t}x)}{t^{Q}}w(t)dt,\quad x\in \mathbb{H}^{n},\] _which satisfies_ \[\int_{\mathbb{H}^{n}}f(x)(\mathcal{H}_{hw}g)(x)dx=\int_{\mathbb{H}^{n}}g(x)( \mathcal{H}^{*}_{hw}f)(x)dx,\] _where \(f\in L^{p}(\mathbb{H}^{n}),g\in L^{q}(\mathbb{H}^{n}),1<p<\infty,q=p/(p-1)\), \(\mathcal{H}_{hw}\) is bounded on \(L^{p}(\mathbb{H}^{n})\), and \(\mathcal{H}^{*}_{hw}\) is bounded on \(L^{q}(\mathbb{H}^{n})\)._ Recently, many operators in harmonic analysis have been proved to be bounded on mixed radial-angular spaces; for instance, Duoandikoetxea and Oruetxebarria [13] built extrapolation theorems on mixed radial-angular spaces to study the boundedness of a large class of weighted-bounded operators. In [12], Wei and Yan studied the sharp bounds for the \(n\)-dimensional Hardy operator and its dual in mixed radial-angular spaces on Euclidean space. Inspired by them, we will investigate the sharp bounds for the \(n\)-dimensional Hardy operator and its dual operator in mixed radial-angular spaces on Heisenberg groups. Now, we give the definition of mixed radial-angular spaces on the Heisenberg group. **Definition 3**.: _For any \(n\geq 2\), \(1\leq p,\bar{p}\leq\infty\), the mixed radial-angular space \(L^{p}_{|x|_{h}}L^{\bar{p}}_{\theta}(\mathbb{H}^{n})\) consists of all functions \(f\) in \(\mathbb{H}^{n}\) for which_ \[\|f\|_{L^{p}_{|x|_{h}}L^{\bar{p}}_{\theta}(\mathbb{H}^{n})}:=\left(\int_{ 0}^{\infty}\left(\int_{S^{Q-1}}|f(r,\theta)|^{\bar{p}}d\theta\right)^{\frac{p}{\bar{p}}}r ^{Q-1}dr\right)^{\frac{1}{p}}<\infty,\] _where \(S^{Q-1}\) denotes the unit sphere in \(\mathbb{H}^{n}\)._ Next, we will provide the main results of this article. ## 2 Mixed radial-angular bounds for \(\mathcal{H}_{h}\) and \(\mathcal{H}^{*}_{h}\). **Theorem 1**.: _Let \(n\geq 2,1<p,\bar{p}_{1},\bar{p}_{2}<\infty\). Then \(\mathcal{H}_{h}\) is bounded from \(L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})\) to \(L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})\). Moreover,_ \[\|\mathcal{H}_{h}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n}) \to L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})}=\left(\frac{pQ^{ 1/\bar{p}_{2}-1/\bar{p}_{1}}}{p-1}\right)\left(\frac{2\pi^{n+\frac{1}{2}} \Gamma(n/2)}{(n+1)\Gamma(n)\Gamma((n+1)/2)}\right)^{1/\bar{p}_{2}-1/\bar{p}_{1 }}.\] **Theorem 2**.: _Let \(n\geq 2,1<p,\bar{p}_{1},\bar{p}_{2}<\infty\). Then \(\mathcal{H}_{h}^{*}\) is bounded from \(L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})\) to \(L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})\). Moreover,_ \[\|\mathcal{H}_{h}^{*}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n} )\to L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})}=pQ^{1/\bar{p}_{2 }-1/\bar{p}_{1}}\left(\frac{2\pi^{n+\frac{1}{2}}\Gamma(n/2)}{(n+1)\Gamma(n) \Gamma((n+1)/2)}\right)^{1/\bar{p}_{2}-1/\bar{p}_{1}}.\] Proof of Theorem 1.: Set \[g(x)=\frac{1}{\omega_{Q}}\int_{\mathbb{S}^{Q-1}}f(\delta_{|x|_{h}}\theta)d \theta,\quad x\in\mathbb{H}^{n}, \tag{3}\] then \(g\) is a radial function.
Moreover, we have \[\|g\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})} =\left(\int_{0}^{\infty}\left(\int_{\mathbb{S}^{Q-1}}|g(r,\theta) |^{\bar{p}_{1}}d\theta\right)^{p/\bar{p}_{1}}r^{Q-1}dr\right)^{1/p} =\left(\int_{0}^{\infty}\left(\omega_{Q}|g(r)|^{\bar{p}_{1}}\right)^{p/\bar{p}_{1}}r^{Q-1}dr\right)^{1/p} =\omega_{Q}^{1/\bar{p}_{1}}\left(\int_{0}^{\infty}|g(r)|^{p}r^{Q- 1}dr\right)^{1/p},\] where \(g(r)\) is defined by \(g(r)=g(x)\) for any \(x\in\mathbb{H}^{n}\) with \(|x|_{h}=r\). By using Hölder's inequality (with \(\bar{p}_{1}^{\prime}\) the conjugate exponent of \(\bar{p}_{1}\)), we have \[\|g\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})} =\omega_{Q}^{1/\bar{p}_{1}}\left(\int_{0}^{\infty}\left|\frac{1}{ \omega_{Q}}\int_{S^{Q-1}}f(\delta_{r}\theta)d\theta\right|^{p}r^{Q-1}dr\right) ^{1/p} =\omega_{Q}^{1/\bar{p}_{1}-1}\left(\int_{0}^{\infty}\left|\int_{S ^{Q-1}}f(\delta_{r}\theta)d\theta\right|^{p}r^{Q-1}dr\right)^{1/p} \leq\omega_{Q}^{1/\bar{p}_{1}-1}\left(\int_{0}^{\infty}\left(\int_{S^{Q-1}} |f(\delta_{r}\theta)|^{\bar{p}_{1}}d\theta\right)^{p/\bar{p}_{1}}\left(\int_{S ^{Q-1}}d\theta\right)^{p/\bar{p}_{1}^{\prime}}r^{Q-1}dr\right)^{1/p} =\left(\int_{0}^{\infty}\left(\int_{S^{Q-1}}|f(\delta_{r}\theta) |^{\bar{p}_{1}}d\theta\right)^{p/\bar{p}_{1}}r^{Q-1}dr\right)^{1/p} =\|f\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})}.\] By a change of variables, we can get \[\mathcal{H}_{h}g(x) =\frac{1}{\Omega_{Q}|x|_{h}^{Q}}\int_{|y|_{h}<|x|_{h}}\left(\frac{1} {\omega_{Q}}\int_{\mathbb{S}^{Q-1}}f(\delta_{|y|_{h}}\theta)d\theta\right)dy =\frac{1}{\Omega_{Q}|x|_{h}^{Q}}\int_{0}^{|x|_{h}}\int_{S(0,1)}\left( \frac{1}{\omega_{Q}}\int_{\mathbb{S}^{Q-1}}f(\delta_{r}\theta)d\theta\right)r^ {Q-1}dy^{{}^{\prime}}dr =\frac{1}{\Omega_{Q}|x|_{h}^{Q}}\int_{0}^{|x|_{h}}\int_{\mathbb{S}^{Q-1}}f(\delta _{r}\theta)r^{Q-1}d\theta dr =\mathcal{H}_{h}f(x).\] Thus, we obtain \[\frac{\left\|\mathcal{H}_{h}(f)\right\|_{L^{p}_{|x|_{h}}L^{\tilde{p}_{2}}_{ \theta}(\mathbb{H}^{n})}}{\left\|f\right\|_{L^{p}_{|x|_{h}}L^{\tilde{p}_{1}}_{ \theta}(\mathbb{H}^{n})}}\leq\frac{\left\|\mathcal{H}_{h}(g)\right\|_{L^{p}_ {|x|_{h}}L^{\tilde{p}_{2}}_{\theta}(\mathbb{H}^{n})}}{\left\|g\right\|_{L^{p} _{|x|_{h}}L^{\tilde{p}_{1}}_{\theta}(\mathbb{H}^{n})}}.\] This implies that the operator \(\mathcal{H}_{h}\) and its restriction to radial functions have the same norm from \(L^{p}_{|x|_{h}}L^{\tilde{p}_{1}}_{\theta}\) to \(L^{p}_{|x|_{h}}L^{\tilde{p}_{2}}_{\theta}\); without loss of generality, we can assume that \(f\) is a radial function in the rest of the proof. Consequently, we have \[\left\|\mathcal{H}_{h}f\right\|_{L^{p}_{|x|_{h}}L^{\tilde{p}_{2}}_ {\theta}(\mathbb{H}^{n})} =\left(\int_{0}^{\infty}\left(\int_{S^{Q-1}}|\mathcal{H}_{h}(f)(r,\theta)|^{\tilde{p}_{2}}d\theta\right)^{p/\tilde{p}_{2}}r^{Q-1}dr\right)^{ \frac{1}{p}} =\left(\int_{0}^{\infty}\left(\int_{S^{Q-1}}|\mathcal{H}_{h}(f)(r )|^{\tilde{p}_{2}}d\theta\right)^{p/\tilde{p}_{2}}r^{Q-1}dr\right)^{1/p} =\omega_{Q}^{1/\tilde{p}_{2}}\left(\int_{0}^{\infty}|\mathcal{H}_ {h}(f)(r)|^{p}r^{Q-1}dr\right)^{1/p},\] where \(\mathcal{H}_{h}(f)(r)\) is defined by \(\mathcal{H}_{h}(f)(r)=\mathcal{H}_{h}(f)(x)\) for any \(|x|_{h}=r\).
Next, we use another form of the Hardy operator, \[\mathcal{H}_{h}(f)(r)=\frac{1}{|B(0,r)|}\int_{B(0,r)}f(y)dy,\quad r>0;\] by changing variables, we have \[\mathcal{H}_{h}(f)(r)=\frac{1}{\Omega_{Q}}\int_{B(0,1)}f(\delta_{r}y)dy,\quad r>0.\] Using Minkowski's inequality, we can get \[\left\|\mathcal{H}_{h}f\right\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{ \theta}(\mathbb{H}^{n})} =\omega_{Q}^{1/\bar{p}_{2}}\left(\int_{0}^{\infty}\left|\frac{1}{ \Omega_{Q}}\int_{B(0,1)}f(\delta_{r}y)dy\right|^{p}r^{Q-1}dr\right)^{1/p} =\frac{\omega_{Q}^{1/\bar{p}_{2}}}{\Omega_{Q}}\left(\int_{0}^{ \infty}\left|\int_{B(0,1)}f(\delta_{r}y)dy\right|^{p}r^{Q-1}dr\right)^{1/p} \leq\frac{\omega_{Q}^{1/\bar{p}_{2}}}{\Omega_{Q}}\int_{B(0,1)} \left(\int_{0}^{\infty}|f(\delta_{|y|_{h}}r)|^{p}r^{Q-1}dr\right)^{1/p}dy =\frac{\omega_{Q}^{1/\bar{p}_{2}}}{\Omega_{Q}}\int_{B(0,1)}\left( \int_{0}^{\infty}|f(r)|^{p}r^{Q-1}dr\right)^{1/p}|y|_{h}^{-Q/p}dy =\frac{\omega_{Q}^{1/\bar{p}_{2}-1/\bar{p}_{1}}}{\Omega_{Q}}\int_ {B(0,1)}|y|_{h}^{-Q/p}dy\,\|f\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}} =\frac{p}{p-1}\omega_{Q}^{1/\bar{p}_{2}-1/\bar{p}_{1}}\|f\|_{L^{p }_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}} =\left(\frac{pQ^{1/\bar{p}_{2}-1/\bar{p}_{1}}}{p-1}\right)\left( \frac{2\pi^{n+\frac{1}{2}}\Gamma(n/2)}{(n+1)\Gamma(n)\Gamma((n+1)/2)}\right)^ {1/\bar{p}_{2}-1/\bar{p}_{1}}\left\|f\right\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_ {\theta}}.\] Therefore, we have \[\left\|\mathcal{H}_{h}f\right\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}( \mathbb{H}^{n})}\leq\left(\frac{pQ^{1/\bar{p}_{2}-1/\bar{p}_{1}}}{p-1} \right)\left(\frac{2\pi^{n+\frac{1}{2}}\Gamma(n/2)}{(n+1)\Gamma(n)\Gamma((n+1) /2)}\right)^{1/\bar{p}_{2}-1/\bar{p}_{1}}\left\|f\right\|_{L^{p}_{|x|_{h}}L^{ \bar{p}_{1}}_{\theta}}.
\tag{4}\] On the other hand, for \(0<\epsilon<1\), take \[f_{\epsilon}(x)=\begin{cases}0,&|x|_{h}\leq 1,\\ |x|_{h}^{-\left(\frac{Q}{p}+\epsilon\right)},&|x|_{h}>1,\end{cases}\] then \[\|f_{\epsilon}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}}^{p}=\frac{\omega_{Q}^{p/\bar{p}_{1}}}{p\epsilon},\] and \[\mathcal{H}_{h}(f_{\epsilon})(x)=\begin{cases}0,&|x|_{h}\leq 1,\\ \Omega_{Q}^{-1}|x|_{h}^{-\frac{Q}{p}-\epsilon}\int_{|x|_{h}^{-1}<|y|_{h}<1}|y| _{h}^{-\frac{Q}{p}-\epsilon}dy,&|x|_{h}>1.\end{cases}\] So, we have \[\left\|\mathcal{H}_{h}(f_{\epsilon})\right\|_{L^{p}_{|x|_{h}}L^{\bar{ p}_{2}}_{\theta}(\mathbb{H}^{n})} =\frac{\omega_{Q}^{1/\bar{p}_{2}}}{\Omega_{Q}}\left(\int_{r>1} \left|r^{-\frac{Q}{p}-\epsilon}\int_{r^{-1}<|y|_{h}<1}|y|_{h}^{-\frac{Q}{p}- \epsilon}dy\right|^{p}r^{Q-1}dr\right)^{1/p} \geq\frac{\omega_{Q}^{1/\bar{p}_{2}}}{\Omega_{Q}}\left(\int_{r> \frac{1}{\epsilon}}\left|r^{-\frac{Q}{p}-\epsilon}\int_{\epsilon<|y|_{h}<1}|y|_ {h}^{-\frac{Q}{p}-\epsilon}dy\right|^{p}r^{Q-1}dr\right)^{1/p} =\frac{\omega_{Q}^{1/\bar{p}_{2}}}{\Omega_{Q}}\left(\int_{r> \frac{1}{\epsilon}}r^{-p\epsilon-1}dr\right)^{1/p}\int_{\epsilon<|y|_{h}<1}|y| _{h}^{-\frac{Q}{p}-\epsilon}dy =\frac{\omega_{Q}^{1+1/\bar{p}_{2}}}{\Omega_{Q}}\left(\int_{r> \frac{1}{\epsilon}}r^{-p\epsilon-1}dr\right)^{1/p}\int_{\epsilon}^{1}r^{Q-1- \frac{Q}{p}-\epsilon}dr =\epsilon^{\epsilon}\frac{1-\epsilon^{Q-\frac{Q}{p}-\epsilon}}{1- \frac{1}{p}-\frac{\epsilon}{Q}}\omega_{Q}^{1/\bar{p}_{2}-1/\bar{p}_{1}}\|f_{ \epsilon}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}}.\] Thus, we have obtained \[\left\|\mathcal{H}_{h}\right\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})\to L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})}\geq\epsilon^{\epsilon}\frac{1-\epsilon^{Q-\frac{Q}{p}-\epsilon}}{1-\frac{1}{p}-\frac{\epsilon}{Q}}\omega_{Q}^{1/\bar{p}_{2}-1/\bar{p}_{1}}.\] Since \(\epsilon^{\epsilon}\to 1\) as \(\epsilon\to 0\), by letting \(\epsilon\to 0\), we have \[\left\|\mathcal{H}_{h}\right\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})\to L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})} \geq\frac{p}{p-1}\omega_{Q}^{1/\bar{p}_{2}-1/\bar{p}_{1}} =\left(\frac{pQ^{1/\bar{p}_{2}-1/\bar{p}_{1}}}{p-1}\right)\left( \frac{2\pi^{n+\frac{1}{2}}\Gamma(n/2)}{(n+1)\Gamma(n)\Gamma((n+1)/2)}\right)^ {1/\bar{p}_{2}-1/\bar{p}_{1}}. \tag{5}\] Combining (4) and (5), we get \[\left\|\mathcal{H}_{h}\right\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})\to L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})}=\left(\frac{pQ^{1/\bar{p}_{2}-1/\bar{p}_{1}}}{p-1}\right) \left(\frac{2\pi^{n+\frac{1}{2}}\Gamma(n/2)}{(n+1)\Gamma(n)\Gamma((n+1)/2)} \right)^{1/\bar{p}_{2}-1/\bar{p}_{1}}.\] This completes the proof of Theorem 1. Proof of Theorem 2.: The proof of Theorem 2 is similar to that of Theorem 1, so we omit the details. ## 3 Mixed radial-angular bounds for \(\mathcal{H}_{hw}\) and \(\mathcal{H}_{hw}^{*}\). **Theorem 3**.: _Let \(w:[0,1]\rightarrow(0,\infty)\) be a function, \(n\geq 2,1<p,\bar{p}_{1},\bar{p}_{2}<\infty\). Then the \(n\)-dimensional weighted Hardy operator on the Heisenberg group \(\mathcal{H}_{hw}\) is bounded from \(L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})\) to \(L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})\).
Moreover,_ \[\|\mathcal{H}_{hw}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}( \mathbb{H}^{n})\to L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})}= Q^{1/\bar{p}_{2}-1/\bar{p}_{1}}\left(\frac{2\pi^{n+\frac{1}{2}} \Gamma(n/2)}{(n+1)\Gamma(n)\Gamma((n+1)/2)}\right)^{1/\bar{p}_{2}-1/\bar{p}_{1}}\times\int_{0}^{1}t^{-\frac{Q}{p}}w(t)dt.\] **Theorem 4**.: _Let \(w:[0,1]\rightarrow(0,\infty)\) be a function, \(n\geq 2,1<p,\bar{p}_{1},\bar{p}_{2}<\infty\). Then the \(n\)-dimensional weighted Cesàro operator on the Heisenberg group \(\mathcal{H}^{*}_{hw}\) is bounded from \(L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})\) to \(L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})\). Moreover,_ \[\|\mathcal{H}^{*}_{hw}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta} (\mathbb{H}^{n})\to L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})}= Q^{1/\bar{p}_{2}-1/\bar{p}_{1}}\left(\frac{2\pi^{n+\frac{1}{2}} \Gamma(n/2)}{(n+1)\Gamma(n)\Gamma((n+1)/2)}\right)^{1/\bar{p}_{2}-1/\bar{p}_{1}}\times\int_{0}^{1}t^{-Q(1-\frac{1}{p})}w(t)dt.\] The proofs of Theorem 3 and Theorem 4 are analogous, and both are similar to the proof of Theorem 1; as a representative case, we give here the proof of Theorem 4. Proof of Theorem 4.: As in the proof of Theorem 1, we have \[\|\mathcal{H}^{*}_{hw}f\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H }^{n})}=\omega_{Q}^{1/\bar{p}_{2}}\left(\int_{0}^{\infty}|\mathcal{H}^{*}_{hw}(f)(r )|^{p}r^{Q-1}dr\right)^{1/p},\] where \(\mathcal{H}^{*}_{hw}(f)(r)\) is defined by \(\mathcal{H}^{*}_{hw}(f)(r)=\mathcal{H}^{*}_{hw}(f)(x)\) for any \(|x|_{h}=r\). Using Minkowski's inequality, we can get that \[\|\mathcal{H}^{*}_{hw}f\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta} (\mathbb{H}^{n})}= \omega_{Q}^{1/\bar{p}_{2}}\left(\int_{0}^{\infty}\left|\int_{0}^{ 1}\frac{f(\delta_{1/t}r)}{t^{Q}}w(t)dt\right|^{p}r^{Q-1}dr\right)^{1/p} \leq \omega_{Q}^{1/\bar{p}_{2}}\int_{0}^{1}\left(\int_{0}^{\infty}|f( \delta_{1/t}r)|^{p}r^{Q-1}dr\right)^{1/p}t^{-Q}w(t)dt = \omega_{Q}^{1/\bar{p}_{2}}\int_{0}^{1}\left(\int_{0}^{\infty}|f(r )|^{p}r^{Q-1}dr\right)^{1/p}t^{-Q+Q/p}w(t)dt = \omega_{Q}^{1/\bar{p}_{2}-1/\bar{p}_{1}}\int_{0}^{1}\left(\int_{0 }^{\infty}\omega_{Q}^{p/\bar{p}_{1}}|f(r)|^{p}r^{Q-1}dr\right)^{1/p}t^{-Q+Q/p} w(t)dt = \omega_{Q}^{1/\bar{p}_{2}-1/\bar{p}_{1}}\int_{0}^{1}t^{-Q(1-\frac{ 1}{p})}w(t)dt\,\|f\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}} = Q^{1/\bar{p}_{2}-1/\bar{p}_{1}}\left(\frac{2\pi^{n+\frac{1}{2}} \Gamma(n/2)}{(n+1)\Gamma(n)\Gamma((n+1)/2)}\right)^{1/\bar{p}_{2}-1/\bar{p}_{1}}\times\int_{0}^{1}t^{-Q(1-\frac{1}{p})}w(t)dt\,\|f\|_{L^{p}_{|x|_{h}}L^{ \bar{p}_{1}}_{\theta}}.\] Therefore, we have \[\|\mathcal{H}^{*}_{hw}f\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}( \mathbb{H}^{n})}\leq Q^{1/\bar{p}_{2}-1/\bar{p}_{1}}\left(\frac{2\pi^{n+\frac{1}{2}} \Gamma(n/2)}{(n+1)\Gamma(n)\Gamma((n+1)/2)}\right)^{1/\bar{p}_{2}-1/\bar{p}_{1}}\times\int_{0}^{1}t^{-Q(1-\frac{1}{p})}w(t)dt\,\|f\|_{L^{p}_{|x|_{h} }L^{\bar{p}_{1}}_{\theta}}.\] On the other hand, set \[C=\|\mathcal{H}^{*}_{hw}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})\to L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^{n})}<\infty;\] then for \(f\in L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})\), we obtain \[\|\mathcal{H}^{*}_{hw}f\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{\theta}(\mathbb{H}^ {n})}\leq C\|f\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}(\mathbb{H}^{n})}.\] For any \(\epsilon>0\), take \[f_{\epsilon}(t)=\begin{cases}0,&t\leq 1,\\
t^{-\left(\frac{Q}{p}+\epsilon\right)},&t>1.\end{cases}\] Then \[\|f_{\epsilon}\|^{p}_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}}=\frac{\omega_{Q }^{p/\bar{p}_{1}}}{p\epsilon},\] and \[\mathcal{H}^{*}_{hw}(f_{\epsilon})(x)=\begin{cases}0,&|x|_{h}\leq 1,\\ |x|_{h}^{-\frac{Q}{p}-\epsilon}\int_{|x|_{h}^{-1}<t<1}t^{\frac{Q}{p}+\epsilon- Q}w(t)dt,&|x|_{h}>1.\end{cases}\] So we have \[C\|f_{\epsilon}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}} \geq\|\mathcal{H}^{*}_{hw}f_{\epsilon}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{2}}_{ \theta}} =\omega_{Q}^{1/\bar{p}_{2}}\left(\int_{r>1}\left|r^{-\frac{Q}{p} -\epsilon}\int_{r^{-1}<t<1}t^{\frac{Q}{p}+\epsilon-Q}w(t)dt\right|^{p}r^{Q-1}dr \right)^{1/p} \geq\omega_{Q}^{1/\bar{p}_{2}}\left(\int_{r>\frac{1}{\epsilon}} \left|r^{-\frac{Q}{p}-\epsilon}\int_{\epsilon<t<1}t^{\frac{Q}{p}+\epsilon-Q}w( t)dt\right|^{p}r^{Q-1}dr\right)^{1/p} =\omega_{Q}^{1/\bar{p}_{2}}\left(\int_{r>\frac{1}{\epsilon}}r^{-p \epsilon-1}dr\right)^{1/p}\int_{\epsilon<t<1}t^{\frac{Q}{p}+\epsilon-Q}w(t)dt.\] Since \(\int_{r>\frac{1}{\epsilon}}r^{-p\epsilon-1}dr=\epsilon^{p\epsilon}/(p\epsilon)\) and \(\|f_{\epsilon}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}}=\omega_{Q}^{1/\bar{p}_{1}}(p\epsilon)^{-1/p}\), we have \[C\|f_{\epsilon}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}}\geq\epsilon^{\epsilon}\,\omega_{Q}^{1/\bar{p}_{2}-1/\bar{p}_{1}}\int_{\epsilon<t<1}t^{\frac{Q}{p}+\epsilon-Q}w(t)dt\,\|f_{\epsilon}\|_{L^{p}_{|x|_{h}}L^{\bar{p}_{1}}_{\theta}}.\] This implies that \[\epsilon^{\epsilon}\,\omega_{Q}^{1/\bar{p}_{2}-1/\bar{p}_{1}}\int_{\epsilon<t<1}t^{\frac{Q}{p}+\epsilon-Q}w(t)dt\leq C.\] Letting \(\epsilon\to 0\), we have \[\omega_{Q}^{1/\bar{p}_{2}-1/\bar{p}_{1}}\int_{0}^{1}t^{\frac{Q}{p}-Q}w(t)dt\leq C.\] Thus, we have finished the proof of Theorem 4. ### Acknowledgements This work was supported by National Natural Science Foundation of China (Grant No. 12271232) and Shandong Jianzhu University Foundation (Grant No. X20075Z0101).
2307.08793
On Detecting Interstellar Scintillation in Narrowband Radio SETI
To date, the search for radio technosignatures has focused on sky location as a primary discriminant between technosignature candidates and anthropogenic radio frequency interference (RFI). In this work, we investigate the possibility of searching for technosignatures by identifying the presence and nature of intensity scintillations arising from the turbulent, ionized plasma of the interstellar medium (ISM). Past works have detailed how interstellar scattering can both enhance and diminish the detectability of narrowband radio signals. We use the NE2001 Galactic free electron density model to estimate scintillation timescales to which narrowband signal searches would be sensitive, and discuss ways in which we might practically detect strong intensity scintillations in detected signals. We further analyze the RFI environment of the Robert C. Byrd Green Bank Telescope (GBT) with the proposed methodology and comment on the feasibility of using scintillation as a filter for technosignature candidates.
Bryan Brzycki, Andrew P. V. Siemion, Imke de Pater, James M. Cordes, Vishal Gajjar, Brian Lacki, Sofia Sheikh
2023-07-17T19:23:43Z
http://arxiv.org/abs/2307.08793v1
# On Detecting Interstellar Scintillation in Narrowband Radio SETI ###### Abstract To date, the search for radio technosignatures has focused on sky location as a primary discriminant between technosignature candidates and anthropogenic radio frequency interference (RFI). In this work, we investigate the possibility of searching for technosignatures by identifying the presence and nature of intensity scintillations arising from the turbulent, ionized plasma of the interstellar medium (ISM). Past works have detailed how interstellar scattering can both enhance and diminish the detectability of narrowband radio signals. We use the NE2001 Galactic free electron density model to estimate scintillation timescales to which narrowband signal searches would be sensitive, and discuss ways in which we might practically detect strong intensity scintillations in detected signals. We further analyze the RFI environment of the Robert C. Byrd Green Bank Telescope (GBT) with the proposed methodology and comment on the feasibility of using scintillation as a filter for technosignature candidates. astrobiology -- technosignature -- SETI -- extraterrestrial intelligence ## 1 Introduction The Search for Extraterrestrial Intelligence (SETI) aims to answer one of the most important scientific questions: are we alone in the universe? Complementing other subfields of astrobiology in the attempt to detect life outside our planet, radio SETI strives to detect and constrain the existence of technosignatures, signals that betray the presence of intelligent extraterrestrial civilizations. Radio and microwave astronomy has played an important role in modern SETI since the initial suggestion by Cocconi & Morrison (1959) to search near the neutral hydrogen line at 1.42 GHz for continuous narrowband emission. Out of the whole electromagnetic spectrum, radio frequencies are a strong candidate for searches since such emission is expected to arise from advanced civilizations for a portion of their technological activity1, radio photons are efficient to produce, and radio waves travel relatively unimpeded by the atmosphere, dust, and the ISM (Oliver & Billingham, 1971; Siemion et al., 2014). Narrowband emission is particularly tantalizing as a discriminant from natural astrophysical radio phenomena, whose emission bandwidth is usually, at minimum, hundreds of Hz at microwave frequencies due to broadening effects (Tarter, 2001). From the relative ease with which our own civilization produces continuous, Hz-width signals, we anticipate that extraterrestrial civilizations will similarly emit narrowband signals. Footnote 1: Judging from the technological development of our own civilization, we expect intelligent civilizations to emit radio waves as intentional transmissions or as unintentional leakage from normal activity. Since the first dedicated radio search for technosignatures by Drake (1961), SETI experiments have vastly expanded along multiple axes to cover larger frequency bandwidths, higher resolutions, and additional signal types (Werthimer et al., 1985; Tarter, 2001; Siemion et al., 2013; Wright et al., 2014; MacMahon et al., 2018; Price et al., 2018; Gajjar et al., 2021).
The Breakthrough Listen (BL) initiative began in 2016 as the most comprehensive SETI search program to date, observing with large instantaneous bandwidths at facilities across the world, including the Robert C. Byrd Green Bank Telescope (GBT) in West Virginia, USA and the CSIRO Parkes telescope in New South Wales, Australia (Worden et al., 2017; MacMahon et al., 2018; Price et al., 2018). While the technology used in radio SETI has developed and improved throughout the decades, the requirements for a theoretical technosignature detection have not changed significantly. Narrowband signals are assumed to be non-natural in origin, but there is yet an ever-present background of human-made radio interference (RFI), comprising both ground- and space-based transmissions. Having a robust way of differentiating technosignature candidates from RFI is paramount if we are to ever have a convincing detection (Horowitz and Sagan, 1993). The primary strategy for RFI rejection in radio SETI is sky localization. If a signal is detected in multiple telescope directions, it is considered RFI, since a bona fide extra-solar technosignature should originate from a single location on the sky. To this end, BL uses ON-OFF observations, in which different pointings on the sky are observed in a cadence according to an ABABAB or ABACAD pattern (Enriquez et al., 2017; Price et al., 2020). To further tighten the directional filter, we require that a signal must appear in all 3 ON (A) observations to be considered a candidate. For a directional filter to properly work, signals must be continuous throughout the observational cadence. Ideally, a candidate would be detected in repeat observations localized in the sky, requiring even longer signal durations. However, as with terrestrial emissions, extra-solar narrowband signals could appear pulsed or otherwise have low duty-cycles. In such cases, signals could appear in only one or two ON observations in a cadence and for a subsection of those observations, causing them to be missed by current filters. On the other hand, RFI can also appear in only ON observations. For example, RFI signals could exhibit intensity modulations that follow the observational cadence of 5 minutes a pointing, a false positive that would pass the directional filter. While we observe false positives like this in practice, having directional requirements still serves as an interpretable basis for determining candidates, which would induce follow-up observations for potential re-detection. This raises the question: can we differentiate narrowband signals as RFI based on morphology alone? Since ETI signals must travel to us through interstellar space, are there effects that would be observable and sufficiently unique compared to RFI modulations? One possibility is that radio frequency scattering effects, such as diffractive scintillation and spectral broadening, could imprint on extra-solar narrowband signals, altering them enough to be resolved and distinguished from terrestrial RFI. A signal filter based on astrophysical properties would be an important tool, when applicable, for evaluating candidate technosignatures. For signals that fail the directional filter, a scattering-based filter might preserve missed candidates; for those that pass, it would amplify the likelihood of a true detection. Radio wave scattering has been studied extensively since the onset of radio astronomy.
Weak scattering from the ionosphere and solar wind or interplanetary medium (IPM) was observed to scintillate radio emission from stars (Smith, 1950; Hewish et al., 1964). Pulsars themselves were discovered during one such study, and subsequent pulsar observations revealed strong scattering from the ISM (Hewish et al., 1968; Scheuer, 1968; Roberts and Ables, 1982). Since then, much of our understanding of ISM scattering has come about by observing pulsars, especially by analyzing pulse broadening and intensity fluctuations in time-frequency space (Narayan, 1992). This observational work has led to models describing the stochastic nature of scintillation and broadening. Plasma effects on narrowband signals have been analyzed by Cordes and Lazio (1991) and Cordes et al. (1997). Spectral broadening from the IPM has been observed in the transmissions of artificial probes and studied extensively (Goldstein, 1969; Woo and Armstrong, 1979; Harmon and Coles, 1983; Woo, 2007).

For the ISM, scintillation has been historically interesting to SETI as a factor that changes the detectability of a technosignature. Most of the time, the signal intensity is reduced, but occasionally the intensity will spike as a result of constructive interference. Cordes and Lazio (1991) recommend multiple observations spaced in time to maximize the chance of catching at least one detection.

In this work, we investigate the parameter space of scattering relevant to narrowband radio SETI and investigate whether resolved scattering effects can be used to flag technosignature candidates in the proverbial haystack of RFI. In Section 2, we review scattering theory relevant to narrowband signals. In Section 3, we introduce methods for identifying the presence of scintillation in radio spectrogram data and for producing synthetic scintillated intensity time series. In Section 4, we present an approach for estimating likely scattering properties as a function of observation parameters using the NE2001 model. In addition to examining theoretical properties of scintillated narrowband signals, in Section 5, we perform a statistical analysis on detected narrowband signals in multiple radio bands using the GBT. We compare properties of real RFI signals with those of theoretical scintillated ETI signals to determine the conditions under which scattering effects can be used as effective SETI filters. Finally, we summarize our results, discuss limitations, and give recommendations on potential scintillation-based technosignature searches in Section 6.

While examples in this paper use certain values for observational parameters, such as observation length and time resolution, the methods developed in this work are meant to be broadly applicable to various radio observations. As such, we provide a Python library blscint2 that implements many of the key components of our scintillation search methodology.

## 2 Scattering Theory and SETI

Observational and theoretical work on radio scattering has been done to characterize both the bulk power spectrum of electron density fluctuations as well as the effect of localized ionized scattering structures along the line of sight (Rickett, 2007). In this work, we limit our considerations to the wavenumber spectrum of ISM plasma fluctuations as a first order approximation of scattering along any line of sight. The dominant effect causing radio scattering in ionized plasma is refraction due to variations in electron density.
The changes in refractive index give rise to changes in phase as a plane radio wave passes through the scattering layer. These phase variations, along with path-induced phase delays, are propagated to the observer's plane, creating an interference pattern. Since ionized plasma is a complex, stochastic medium, it is most useful to describe the power spectrum of turbulent scales. In practice, it is common to use the phase structure function:

\[D_{\phi}(x,y)=\langle[\phi(x+x^{\prime},y+y^{\prime})-\phi(x,y)]^{2}\rangle_{x^{\prime},y^{\prime}}, \tag{1}\]

where \(x,y\) are coordinates in the scattering plane. This equation can also be expressed in terms of a vector baseline \(\mathbf{r}=\langle x,y\rangle\), which is useful when describing interferometer measurements. For single dish measurements, this "baseline" is set by the relative transverse velocity \(V_{T}\) of the diffraction pattern during an observation of length \(\tau\), so that \(r=V_{T}\tau\). Here, we assume that the pattern is effectively "frozen," in that \(V_{T}\) dominates the intrinsic random motion of material in the scattering medium.

The structure function is usually taken to be a power law in length scale, so that

\[D_{\phi}(r)\propto r^{\alpha} \tag{2}\]

for some power \(\alpha\) (Rickett, 1990; Narayan, 1992). The phase spectrum of the scattering medium determines the type of diffraction pattern seen by the observer, so it is important to constrain this at a high level. A common assumption is that ionized scattering media are isotropic and follow Kolmogorov turbulence, such that energy cascades from large turbulent structures with an outer length scale down to an inner length scale. Long-term pulsar observations show evidence that ISM scattering exhibits a Kolmogorov spectrum over many orders of magnitude (Ramachandran et al., 2006). Kolmogorov turbulence is described by \(\alpha=5/3\) in Equation 2.

Another important case of turbulence is the square-law regime, for which \(\alpha=2\). This typically applies when the spatial scale probed by the observation (i.e. \(r=V_{T}\tau\)) is smaller than the inner scale. This regime yields convenient analytical expressions for scattering behavior, such as the spectral broadening function being a Gaussian. Some ISM scattering studies have accordingly used Gaussian models derived using \(\alpha=2\) as approximations for the Kolmogorov case (\(\alpha=5/3\); Roberts & Ables, 1982; Cordes, 1986; Gupta et al., 1994).
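As a toy numerical check of Equations 1 and 2, one can synthesize a one-dimensional phase screen with a power-law spectrum and verify that its empirical structure function recovers the input exponent. This sketch is purely illustrative: the screen-generation recipe and function names are our own and are not part of blscint.

```python
import numpy as np

rng = np.random.default_rng(0)

def powerlaw_phase_screen(n, alpha):
    """Toy 1D phase screen: a phase power spectrum P(k) ~ k^-(alpha+1)
    yields a structure function D_phi(r) ~ r^alpha for 0 < alpha < 2."""
    k = np.fft.rfftfreq(n)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-(alpha + 1) / 2)
    noise = rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)
    return np.fft.irfft(amp * noise, n)

def structure_function(phi, max_lag):
    """Empirical D_phi(r) per Equation 1, averaged over positions."""
    lags = np.arange(1, max_lag)
    return lags, np.array([np.mean((phi[lag:] - phi[:-lag]) ** 2) for lag in lags])

phi = powerlaw_phase_screen(2 ** 16, alpha=5 / 3)
lags, d_phi = structure_function(phi, max_lag=64)
slope = np.polyfit(np.log(lags), np.log(d_phi), 1)[0]
print(f"recovered alpha ~ {slope:.2f} (target 5/3)")
```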
### Weak and Strong Scattering

Since turbulence and scattering are inherently stochastic processes, it helps to compare characteristic scales to describe the underlying physics. The so-called diffractive length scale \(r_{\rm diff}\) is defined as the characteristic transverse distance over which the root mean square phase difference is 1 rad. This can be compared with the Fresnel radius \(r_{F}\), which describes the size of the largest cross-section along the observer-source path for which waves arrive coherently in free space, with path-induced phase delays less than \(\pi\). If \(r_{\rm diff}\gg r_{F}\), we are in the weak scattering regime, in which refractive phase changes are small compared to path-induced phase differences and the characteristic size of a coherent emission patch on the sky is \(r_{F}\) (Narayan, 1992). If \(r_{\rm diff}\ll r_{F}\), we are instead in the strong scattering regime, in which the characteristic coherent patch size becomes \(r_{\rm diff}\), and plasma-induced phase changes span many radians over the Fresnel radius.

The strength of scattering depends on a variety of factors, such as the free electron number density, the strength of turbulence, the emission frequency, and the distance of the source. Along a given line of sight, the scattering strength increases and eventually transitions from weak to strong (Cordes & Lazio, 1991). The transition distance, for which \(r_{\rm diff}\sim r_{F}\), depends on the emission frequency.

In the strong scattering regime, there are two types of scintillation. Diffractive scintillation is relatively fast (of order minutes to hours) and requires a compact source, such as a pulsar, while refractive scintillation is weaker and slower (of order days to years) (Narayan, 1992). Diffractive scintillation arises from multi-path propagation from emission across the scattering medium, while refractive scintillation is a larger-scale geometric effect that can itself modulate diffractive scintillation effects. Since potential narrowband ETI emission would have a compact source, we focus on strong diffractive scintillation in this paper. The "modulation index" \(m_{d}\) is the root mean square of the fractional flux variation due to scintillation. In weak scattering, \(m_{d}\ll 1\), whereas in strong scattering, \(m_{d}\sim 1\).

### Effects of Strong Scintillation on Narrowband Signals

Pulsar observations are effective probes of intensity scintillations in time and frequency given their persistent, broadband signals. On the other hand, since narrowband signals are by definition restricted in spectral extent, we are mostly limited to studying temporal effects. To guide the discussion, we can write a basic model for the intensity of a scintillated narrowband signal:

\[I_{\rm scint}(t)=g(t)S+N(t), \tag{3}\]

where \(g(t)\) is the scintillation gain, \(S\) is the fixed intensity of the original signal, and \(N(t)\) is the background noise.

One observable effect is that for independent observations, the detected signal intensity will follow an exponential probability density function (PDF):

\[f_{g}(g)=\exp(-g)H(g), \tag{4}\]

where \(H\) is the Heaviside step function (Cordes & Lazio, 1991; Cordes et al., 1997). If we assume a continuous-wave (CW) transmitter and think of radio waves as complex phasors, we start with signals of constant amplitude modulus. As the signal refracts at different points across the scattering medium, it picks up random phase changes. Due to multi-path propagation, many independent de-phased versions of the signal are summed together at the observing plane. The asymptotic result is that an ISM scintillated signal can be modeled as a random complex Gaussian variable, whose amplitude follows a Rayleigh distribution and whose intensity therefore follows an exponential distribution (Goodman, 1975).

Another effect arising from the statistical power density spectrum of plasma turbulence is that the diffraction pattern at the observing plane has a spatial autocorrelation function (ACF) with a characteristic spatial scale \(r_{\rm diff}\). Though this work limits discussion to the effects on narrowband signals, strong diffractive scintillations also have a spectral ACF with a characteristic scintillation bandwidth (also known as the decorrelation bandwidth). For a single dish telescope taking a long radio observation, the diffraction pattern will sweep across the telescope at a relative transverse velocity, so that observations display a temporal ACF in diffracted intensity.
In terms of the phase structure function, the temporal ACF of \(g\) is given by

\[\Gamma_{I}(\tau)=|\Gamma_{E}(\tau)|^{2}=\exp\left[-D_{\phi}(V_{T}\tau)\right] \tag{5}\]

in the Rayleigh limit (Cordes & Lazio, 1991; Coles et al., 2010). Note that in this work, we use the normalized autocorrelation. The ACF thus has a representative timescale \(\Delta t_{d}=r_{\rm diff}/V_{T}\) over which scintillation occurs. By convention, \(\Delta t_{d}\) is measured as the half-width at \(1/e\)-height of the ACF, which has been historically estimated to be a Gaussian function. In other words,

\[\Gamma_{\rm sq}(\tau)=\exp\left[-\left(\frac{\tau}{\Delta t_{d}}\right)^{2}\right]. \tag{6}\]

However, under the Kolmogorov assumption, it is more precise to use

\[\Gamma_{\rm k}(\tau)=\exp\left[-\left|\frac{\tau}{\Delta t_{d}}\right|^{5/3}\right]. \tag{7}\]

The Kolmogorov form is near-Gaussian, as shown in Figure 1. In this work, we use the Kolmogorov form \(\Gamma_{\rm k}\) throughout, but all methods can be performed with the square-law form as well.

Figure 1: Comparison of the Kolmogorov and square-law ACF models. Both functions are computed using a scintillation timescale of \(\Delta t_{d}=30\:\rm s\) and a time resolution of \(\Delta t=4.65\:\rm s\). The \(1/e\)-height is shown as a dotted line.

We note that an additional scattering effect on narrowband signals is spectral broadening. This causes power at a single frequency to spread over a bandwidth

\[\Delta\nu_{sb}=C_{2}/(2\pi\Delta t_{d}), \tag{8}\]

where \(C_{2}\) is a constant of order unity that depends on the scattering medium; \(C_{2}=2.02\) is used in Cordes & Lazio (1991). However, at microwave frequencies, spectral broadening is typically smaller than commonly used frequency resolutions in SETI, so this effect would be difficult to observe except in lines of sight with extreme scattering.

## 3 Identifying Strong Scintillation in Detected Signals

Since scintillation is inherently stochastic, we have to use statistical indicators to identify its presence in a detected narrowband signal. Accordingly, we extract time series intensity data from signals in radio Stokes I spectrograms and identify several "diagnostic statistics" that probe the theoretical asymptotic behavior described in Section 2.2. For our scintillation analysis, we think of each signal detected within an observation of length \(\tau_{\rm obs}\) and spectrogram time resolution \(\Delta t\) as a sequence of \(N_{t}=\tau_{\rm obs}/\Delta t\) statistically dependent random intensity samples drawn from the asymptotic distributions.

### Diagnostic Statistics

Given time series intensity data for a detected narrowband signal, we can compute _diagnostic statistics_ for the expected asymptotic behavior of a scintillated signal. This process is analogous to feature engineering in machine learning, where these statistical "features" are designed to have a physical basis behind them. The closer a given diagnostic statistic is to the expected asymptotic value, the higher the likelihood that the original signal is scintillating. As such, we can create thresholds using these statistics to function as filters for interesting candidate signals. In this paper, we offer a few examples of useful diagnostic statistics, but note that the list is in no way exhaustive and that there may be other interesting statistical features that help determine whether a given signal may be exhibiting scintillations. These can be found in Table 1, as well as asymptotic values in the absence of noise.
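Both ACF models are straightforward to encode. A minimal sketch in Python (the function names are ours, chosen for illustration; blscint provides its own implementations):

```python
import numpy as np

def acf_square_law(tau, dt_d):
    """Square-law ACF model, Equation 6."""
    return np.exp(-(tau / dt_d) ** 2)

def acf_kolmogorov(tau, dt_d):
    """Kolmogorov ACF model, Equation 7; dt_d is the half-width at 1/e height."""
    return np.exp(-np.abs(tau / dt_d) ** (5 / 3))

# Both models pass through 1/e at tau = dt_d by construction
tau = np.linspace(0, 120, 400)
assert np.isclose(acf_kolmogorov(30.0, 30.0), np.exp(-1))
diff = np.abs(acf_kolmogorov(tau, 30.0) - acf_square_law(tau, 30.0)).max()
print(f"max difference between the two models: {diff:.3f}")
```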
First, we want statistics that can probe the expected exponential distribution of intensities. For this discussion, assume that the time series for an idealized scintillated signal is normalized to mean 1. The standard deviation of intensity samples lends itself naturally to evaluating the degree of scintillation and tends to 1 for a normalized exponential distribution. In other words, \(m_{d}=(\langle g(t)^{2}\rangle/\langle g(t)\rangle^{2}-1)^{1/2}\sim 1\) for strong diffractive scintillation.

For a strongly scintillated signal, we expect to see complete destructive interference, leading to a minimum intensity near 0. In reality, signals are embedded in random voltage noise, so that during periods of destructive interference, measured intensities can actually be below the mean noise level. As a necessary pre-processing step to help isolate signal intensities (Section 5.1), we subtract the noise mean from data spectrograms, which can result in minimum signal "intensities" that are negative.

Another statistical measure that addresses this directly is the Kolmogorov-Smirnov (K-S) statistic, which is used to compare a sample distribution to a target ideal distribution using the empirical cumulative distribution function (ECDF). In this case, we compute the K-S statistic against an ideal exponential distribution with rate \(\lambda=1\), keeping in mind that our time series have an assumed mean of 1. In practice, we do not know the actual mean intensities of our signals, so we can only estimate a sample mean as we normalize the time series to mean 1. So, instead of using established tables of statistic values to determine p-values, we use the statistic itself to set thresholds. The lower the K-S statistic for an intensity time series, the closer the intensities are to being exponentially distributed.

We must note that the assumption of an unmodulated CW signal, or at least a high-duty cycle signal, is important for these statistics. For example, radio transmissions on Earth are usually modulated, so for such signals, the exponential intensity distribution arising from scintillation would be convolved with the distribution of the modulation. If the modulation is faster than the spectrogram time resolution \(\Delta t\), then the modulation averages out within time bins, essentially giving us a CW signal. However, if the timescale of modulation is in between \(\Delta t\) and \(\tau_{\rm obs}\), it is likely that the intensities of the scintillated modulated signal would no longer be exponential at the observer.

A scintillated signal will yield a flux time series with a characteristic ACF width equal to \(\Delta t_{d}\). From time series signal data, we can compute the ACF at all lags \(k\), normalized to 1 at lag 0. We can then compare the empirical ACF with the theoretical model \(\Gamma_{\rm k}\) by using raw values or by fitting with least squares. In the presence of noise, the ACF spikes at lag 0 compared to non-zero lags, since the random fluctuations add in quadrature. This is especially significant for low intensity signals. Instead of only using raw (normalized) ACF values, it is therefore more reliable to fit \(\Gamma_{\rm k}\) and the noise spike in one shot using least squares and to derive the corresponding scintillation timescale \(\Delta t_{d}\).
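Before turning to the ACF fit, the intensity-distribution statistics above can be sketched in a few lines (a simplified version with names of our own choosing; scipy's "expon" distribution defaults to rate \(\lambda=1\)):

```python
import numpy as np
from scipy import stats

def intensity_diagnostics(ts):
    """Intensity-based diagnostic statistics for a time series (see Table 1)."""
    ts = np.asarray(ts, dtype=float)
    ts = ts / ts.mean()  # normalize to a sample mean of 1, as described above
    ks_stat, _ = stats.kstest(ts, "expon")  # vs. unit-rate exponential
    return {"std": ts.std(), "min": ts.min(), "ks": ks_stat}

# A unit-exponential sample should sit near the asymptotic values in Table 1
rng = np.random.default_rng(0)
print(intensity_diagnostics(rng.exponential(size=128)))  # std ~ 1, min ~ 0, ks ~ 0
```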
Following the treatment in Reardon et al. (2019), we fit the following expression to the empirical ACF:

\[\Gamma_{\rm k,n}(\tau)=A\Gamma_{\rm k}(\tau)\Lambda(\tau,\tau_{\rm obs})+W\delta(\tau), \tag{9}\]

where \(A\), \(W\) are multiplicative factors, \(\delta\) is the Kronecker delta or discrete unit impulse function, and \(\Lambda\) is the triangle function with zeros at \(\pm\tau_{\rm obs}\) used to model the sample autocorrelation. The least squares fit gives values for \(A\), \(W\), and \(\Delta t_{d}\) within \(\Gamma_{\rm k}\). This process yields results consistent with those obtained when lag 0 is first excluded from the fit, as is also commonly done (Rickett et al., 2014). Since detected signals may be RFI and have complex ACFs, having values for \(A\) and \(W\) can help us identify and exclude poor fits (i.e. if \(A\) is close to 0, it is unlikely that the signal's ACF truly matches \(\Gamma_{\rm k}\)).

| Statistic | Data Type | Theoretical Behavior | Asymptotic Value |
| --- | --- | --- | --- |
| Standard Deviation (RMS) | Intensity | Exponential | 1 |
| Minimum | Intensity | Exponential | 0 |
| Kolmogorov-Smirnov Statistic | Intensity | Exponential | 0 |
| Autocorrelation Function ACF(\(\tau\)) | Autocorrelation | Near-Gaussian | \(\Gamma_{I}(\tau)\) |
| Least Squares Fit for \(\Delta t_{d}\) | Autocorrelation | Near-Gaussian | \(\Delta t_{d}\) |

Table 1: Diagnostic statistics chosen to probe theoretical scintillation effects. Note. – For each statistic, we list the type of data used for computation, the theoretical behavior of that data type, and the asymptotic value of the statistic (in the absence of noise) as the observation length goes to infinity.

### Constraints on Identifying Scintillation

There are various factors at play that affect the possibility of detecting scintillation. The first is that the time resolution must be high enough to sufficiently resolve scintles (scintillation maxima). Similarly, the integration time per observation has to be long enough to collect enough scintles for better convergence to the theoretical ACF. However, the observation length should be short enough that the receiver gain is stable. Gain fluctuations would change the underlying noise as well as the detected signal intensities over time. While this is an effect that can theoretically be corrected for using data at signal-free frequencies, for practical purposes, it is simpler to limit the observation length such that we can assume gain stability. This further avoids the potential problem of basing calculations on a "signal-free" region in time-frequency space that in actuality is occupied by dim RFI that escaped detection.

The detected narrowband signal must be bright enough to compute accurate statistics while embedded in noise. Noise fluctuations in the time series representation of a scintillated signal's intensity will move the empirical distribution away from exponential and mask the ACF structure. Note that since the ACF of white noise is an impulse at lag 0 and the ACF operation is linear for uncorrelated functions, we can still fit a scaled version of the ideal profile \(\Gamma_{\mathrm{k}}\) for a scintillated signal's ACF, adding an additional term to fit for the noise impulse. However, for signals with low signal-to-noise ratios (S/N), the impulse will be the overwhelming part of the extracted ACF, which can make it harder to obtain an accurate fit.
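A sketch of the one-shot fit of Equation 9, assuming an empirical ACF normalized to 1 at lag 0 (the initial guesses and function names are illustrative; blscint's production fit may differ in detail):

```python
import numpy as np
from scipy.optimize import curve_fit

def acf_model(tau, A, W, dt_d, tau_obs):
    """Equation 9: scaled Kolmogorov ACF times the triangle function,
    plus a noise impulse at lag 0."""
    gamma_k = np.exp(-np.abs(tau / dt_d) ** (5 / 3))
    triangle = np.clip(1 - np.abs(tau) / tau_obs, 0, None)
    impulse = (tau == 0).astype(float)
    return A * gamma_k * triangle + W * impulse

def fit_timescale(acf, dt, tau_obs):
    """Least-squares fit for A, W, and dt_d, given an empirical ACF
    sampled at lags 0, dt, 2*dt, ..."""
    lags = np.arange(acf.size) * dt
    popt, _ = curve_fit(
        lambda tau, A, W, dt_d: acf_model(tau, A, W, dt_d, tau_obs),
        lags, acf, p0=[1.0, 0.1, 10 * dt], maxfev=10000,
    )
    return dict(zip(["A", "W", "dt_d"], popt))
```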
As one might expect in radio SETI, the RFI environment is a significant obstacle for detection. Our present tools for detecting narrowband signals make simplifying assumptions as to the kinds of signals that we hope to be sensitive to. Broadband RFI can be modulated at different frequencies, so sometimes a bright enough broadband signal passes our S/N thresholds and is falsely flagged as a "narrowband" detection. Broadband RFI can also overlap real narrowband signals, severely distorting the extracted intensity time series data. It is also possible that certain modulation schemes in narrowband RFI present confounding factors for scintillation detection; perhaps some forms of RFI already appear to be scintillated (at least according to the theoretical properties identified). In Section 5, we perform an initial analysis of the narrowband RFI environment at the GBT, computing the various diagnostic statistics and comparing them with those predicted for scintillated signals.

### Synthesizing Scintillated Signals with Autoregressive-to-Anything (ARTA)

Since observations are necessarily limited in time, we have a finite number of samples per target. Furthermore, we work with large search parameter spaces for which there is a trade-off between the length of time per target and the number of targets searched. Unless a specific pointing is otherwise scientifically interesting, it may be more useful to spend a shorter integration time on a larger number of pointings. Taken together, in most cases, we will be working with a low number of time samples per observation, which implicitly adds measurement error to each diagnostic statistic.

Figure 2: Synthetic scintillated intensities (\(N=10^{5}\)) generated using ARTA, using a sample interval of \(\Delta t=4.65\) s and scintillation timescale \(\Delta t_{d}=30\) s. **Top**: Synthetic intensity time series data, showing first 1000 samples. **Bottom left**: Histogram of intensities, showing the expected exponential distribution. **Bottom right**: Sample ACF plotted up to lag 64, with the target ACF \(\Gamma_{\mathrm{k}}\) shown overlaid.

We would like to better understand the relationship between observation parameters, the scintillation timescale, and the expected natural error in our diagnostic statistics. Since there are a number of factors involved, it is difficult to quantify the expected errors analytically. Instead, we designed a method to create synthetic scintillated time series data, allowing us to compute the empirical distribution for each diagnostic statistic and observe the corresponding spread from the asymptotic values.

Theoretical studies have created models of scintillation phase screens and simulated light waves passing through each screen as a function of space and frequency, such as Coles and Filice (1984), Hamidouche and Lestrade (2007), Coles et al. (2010), and Ravi and Deshpande (2018). While this gives the best physical intuition for a given set of parameters, for our work, we need to be able to quickly produce a large quantity of synthetic scintillated narrowband signals over different scintillation and observation parameters. Since we are specifically interested in asking when scintillation might be detectable for SETI, we choose to rely on predictions from established theory to more efficiently create synthetic data rather than to generate our own rigorous simulations, although this may be a valuable direction for the future.
One method to produce synthetic scintillated data is to first compute the power spectrum \(S\) of scintillations using a Fast Fourier Transform (FFT) of the target autocorrelation (in the voltage domain, \(\Gamma_{\rm k}^{1/2}\)). One may then produce a complex voltage time series by taking the inverse FFT of complex Gaussian noise multiplied by \(S^{1/2}\). Finally, taking the squared magnitude of the voltage series yields an intensity time series following an exponential distribution and ACF of \(\Gamma_{\rm k}\). While this method is relatively straightforward and satisfies asymptotic scintillation properties, we would like to present an alternative synthesis technique that may have broader uses in SETI for future applications.

Synthetic time series data following overarching statistical distributions can be produced using autoregressive models. Cario and Nelson (1996) developed a model called the "autoregressive to anything" (ARTA) process for generating time series data with arbitrary marginal distribution and autocorrelation structure (up to a specified number of lags). While this work focuses on the effects of scintillation on CW narrowband signals, having the ability to match arbitrary target distributions for first and second-order statistics could be useful for SETI applications that aim to model other astrophysical effects or even certain types of RFI.

In our case, the target marginal distribution is exponential and the autocorrelation structure is the near-Gaussian curve \(\Gamma_{\rm k}\). We construct ARTA processes to model the noise-free scintillation gain \(g(t)\) of a 100% modulated narrowband signal over time. In the style of Equation 3, we can produce synthetic intensities with \(I(t)=g(t)S\), for any choice of signal intensity \(S\). Figure 2 shows an example of synthetic scintillated intensities generated in this way with \(S=1\), along with a histogram and ACF plot demonstrating the asymptotic behavior.

To construct an ARTA process \(Y_{t}\), we provide a marginal distribution with cumulative distribution function (CDF) \(F_{Y}\) and an autocorrelation structure \(\rho_{Y}=(\mathrm{Corr}[Y_{t},Y_{t+1}],\ldots,\mathrm{Corr}[Y_{t},Y_{t+p}])\), where \(p\) is the number of lags specified (Cario and Nelson, 1996). Since the model is computed numerically, \(\rho_{Y}\) is finite, and the model will only attempt to match the ACF up to lag \(p\). The computation involves solving the Yule-Walker equations for a \(p\times 1\) vector of autoregressive process parameters, which in turn requires inverting a \(p\times p\) matrix. This limits the number of lags out to which we can effectively compute, but for scintillation analysis, this will rarely be an issue. While this procedure results in an ARTA process with correlations close to \(\rho_{Y}\), Cario and Nelson (1996) describe methods to improve convergence to the target correlations.
By perturbing the input correlations to the model and doing a grid search in the parameter space, we can arrive numerically at final correlations that have higher accuracy. In this work and in blscint routines, we choose to forego this additional step, since it increases computational time significantly without much reward. Since using a finite observation length means that, by definition, we are performing small-sample experiments, any marginal increase in the asymptotic correlation accuracy is quickly overshadowed by intrinsic sampling error.

With this tool, for any set of parameters \((\Delta t,\tau_{\mathrm{obs}},\Delta t_{d})\), we can create datasets with many time series realizations to analyze the measurement error implicit in our limited-length observations. Note that we control the observational parameters, such as \(\Delta t\) and \(\tau_{\mathrm{obs}}\), but not the scintillation timescale \(\Delta t_{d}\). This implies that we should choose observational parameters in such a way that we minimize our measurement error with respect to the most likely scintillation timescales. So to make this process most useful, we should attempt to estimate the most likely or most detectable scintillation timescales; this is addressed in more detail in Section 4.

The parameter spaces involved are vast, but we can focus on representative values close to those commonly used in radio SETI today. In other words, we try to only make slight perturbations to observational parameters used by modern spectrogram searches and similarly limit the range of scintillation timescales to practically consider. Ideally, it will be possible to directly analyze SETI observations taken for other purposes for evidence of scintillation using the methods developed in this paper.

For example, suppose we want to evaluate our sensitivity to scintillation timescales in the range of 10-100 s. The high spectral resolution data format used by BL has 2.79 Hz and 18.3 s resolution for 5 minutes, resulting in 16 time samples per observation. If we instead take observations for 10 minutes at 4.65 s resolution, yielding 128 time samples, our diagnostic statistics are more accurate and sensitive to a larger range of scintillation timescales. With these parameters, we create synthetic noise-free time series observations with ARTA, compute the diagnostic statistics, and plot histograms of each as a function of scintillation timescale as shown in Figure 3.

Figure 3: Histograms of diagnostic statistics computed using \(N=1000\) ARTA-produced intensity time series realizations for representative scintillation timescales of 10, 30, and 100 s. Each time series is produced using \(\Delta t=4.65\) s and \(\tau_{\rm obs}=600\) s and does not include additive background noise. We plot histograms of the standard deviation, minimum, Kolmogorov-Smirnov statistic, and least squares fit for the scintillation timescale, computed for each time series realization.

The different scintillation timescales yield observable differences in the empirical probability density function for each diagnostic statistic. Panels 1-3 all show diagnostic statistics that target the asymptotic exponential distribution of intensities. As the scintillation timescale decreases and approaches the time resolution, each scintle will generally be covered by individual time samples. As \(\Delta t_{d}\sim\Delta t\), the ACF structure becomes irrelevant and the observed intensity samples better match the theoretical intensity distribution. In each of Panels 1-3, the 10 s histogram is the tightest around the asymptotic statistic value, whereas the 100 s histogram has the largest spread and general deviation from the asymptotic value. As the scintillation timescale increases relative to the time resolution, more samples cover individual scintles, and so the ACF structure reduces the apparent exponentiality of the intensities within a single observation or time series realization.
Panel 4 shows the least squares fit for the scintillation timescale; this similarly has the largest error for the largest scintillation timescales, since there are fewer scintles during the same observation length. Once again, note that here, the diagnostic statistics are calculated for time series intensities with no additive background noise to observe how a low sample count affects the measurement error.

## 4 Exploring the Parameter Space of ISM Scintillation with NE2001

The likelihood of detecting scintillation depends heavily on our physical location in our Galaxy and the lines of sight along which we observe. To determine the best targets for detecting scintillation, we need to estimate the quantitative effects of scintillation on narrowband signals in various directions on the sky. This depends on the plasma free electron number density and strength of turbulence along the line of sight. Cordes & Lazio (2002) developed the NE2001 free electron density model for our Galaxy, based on pulsar observations and scattering studies. NE2001 models various Galactic features and estimates the dispersion measure (DM) and characteristic scattering scales to distance \(d\) along any given line of sight through the Galaxy. The scattering scales computed include the scintillation timescale, spectral broadening, scintillation bandwidth, and temporal broadening. This allows us to uniquely estimate the asymptotic statistical properties of scintillation, which can help decide promising targets for scintillation analysis.

Given a distance \(d\) and Galactic coordinates \((l,b)\), the publicly available code for the NE2001 model estimates the expected scintillation timescale and bandwidth at frequency \(\nu=1\:\mathrm{GHz}\) and transverse velocity \(V_{T}=100\:\mathrm{km/s}\). From this point, we have the scaling relation:

\[\Delta t_{d}\propto\nu^{2/\alpha}V_{T}^{-1}, \tag{10}\]

where \(\alpha=5/3\) for Kolmogorov turbulence and \(\alpha=2\) for square-law turbulence (Cordes et al., 1997; Coles et al., 2010). With Equation 10, we can scale raw NE2001 values to estimate scintillation properties for specific observational setups.

We would like to narrow the parameter space of possible observing configurations and scintillation timescales to those that are most amenable to detection with current facilities. With the NE2001 model, we can estimate scintillation properties for a given set of input parameters, including the sky direction, distance, frequency, and transverse velocity. However, these inputs constitute an enormous parameter space, with no clear _a priori_ preference from a SETI perspective. Even with bounds for each individual parameter, it would be prohibitively computationally expensive to calculate properties across each combination of potential parameters. Instead, we choose to use Monte Carlo sampling over the parameter space, using enough samples to sufficiently capture the core statistics of the distribution of scintillation properties.

For sampling, we fix a sky direction \((l,b)\) and a target radio frequency band. We then sample the frequency \(\nu\) uniformly within that band (as a narrowband signal could be found anywhere in the band).
In this paper, we will refer to common radio bands used with the GBT, including L (1.15-1.73 GHz), S (1.73-2.6 GHz), C (3.95-8.0 GHz), and X (8.0-11.6 GHz) (GBT Support Staff, 2017; MacMahon et al., 2018). For the distance \(d\), we have to specify a maximum distance \(d_{\mathrm{max}}\), but the minimum distance \(d_{\mathrm{tr}}\) is that at which weak scattering transitions to strong scattering. We can sample uniformly from \([d_{\mathrm{tr}},d_{\mathrm{max}}]\), but we can also attempt to match the potential distribution of distances at which ETI might actually occur. For example, we can sample distances based on the expected distribution of stellar number densities along the line of sight through the Galaxy. For this, we use model parameters from Gowanlock et al. (2011), who adapted a model from Carroll & Ostlie (2007) that matches the observed density in the solar neighborhood. To see the effects on our sampling, we can also sample by stellar mass density, though this is less precise, since we typically expect ETI to reside around less massive stars. We use the model provided in McMillan (2016) to compute stellar mass density along a line of sight. In Figure 4, we compare these models as a function of distance along Galactic coordinates \((l,b)=(1,0)\), showing them alongside NE2001-generated scintillation timescales. As expected, the mass density profile is significantly sharper than the number density, but both more heavily weight the Galactic center region compared to uniform distance sampling.

Figure 4: Comparison between methods for distance sampling, including uniformly, by stellar number density, and by stellar mass density. We use a line of sight of \((l,b)=(1,0)\) out to a distance of 20 kpc. Bottom panel shows NE2001-produced scintillation timescales as a function of distance.

Finally, the transverse velocity \(V_{T}\) is perhaps the hardest to constrain in general. For scintillation, \(V_{T}\) depends on the relative transverse velocities of the source, observer, and scattering screen, each of which is difficult to predict. A representative transverse velocity for Galactic pulsars is about 100 km/s (Cordes, 1986). The transverse velocity for an ETI source, especially in our solar neighborhood, might be of order 10 km/s instead (Cordes & Lazio, 1991; Cordes & Rickett, 1998). Depending on the line of sight, for sources far across the Galaxy (i.e. 10 kpc or so), differential Galactic rotation can add components to the transverse velocity of order 100 km/s as well. An emitter's orbital velocity and spin velocity can also contribute. Since all of these independent effects are non-trivial and stochastic, we can at best set heuristic transverse velocity ranges and sample uniformly between them, understanding that even the limits themselves are only useful to an order of magnitude.

Taking all these parameters together, we can create sampled distributions for each scintillation scale. Figure 5 shows a realization of Monte Carlo simulations for C-band in the \((1,0)\) direction with \(N=10000\) realizations, using a number density-based weighting on distance samples. We use a maximum distance of 20 kpc and a transverse velocity range of 10 to 150 km/s.

Figure 5: Set of Monte Carlo-sampled distributions of scintillation parameters at C-band, using \(N=10000\) realizations. We use a line of sight of \((l,b)=(1,0)\) out to a distance of 20 kpc, and transverse velocities are uniformly sampled between 10 and 150 km/s. Dashed line shows median value, dotted lines show interquartile range (IQR).

It is readily apparent that the resultant distributions are significantly skewed. For example, short distances from the observer will lead to long scintillation timescales.
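A minimal sketch of this sampling procedure, assuming for brevity a single fiducial NE2001 output (the full procedure re-queries NE2001 at each sampled distance and weights distances by stellar number density; the fiducial value below is purely hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def scale_timescale(dt_d_fiducial, nu_ghz, v_t_kms, alpha=5 / 3):
    """Scale an NE2001 timescale (fiducial: 1 GHz, 100 km/s) via Equation 10."""
    return dt_d_fiducial * nu_ghz ** (2 / alpha) * (100.0 / v_t_kms)

DT_D_NE2001 = 20.0  # s at 1 GHz, 100 km/s; illustrative value for one sightline

n = 10_000
nu = rng.uniform(3.95, 8.0, n)     # C-band frequencies, GHz
v_t = rng.uniform(10.0, 150.0, n)  # heuristic transverse velocity range, km/s
dt_d = scale_timescale(DT_D_NE2001, nu, v_t)

q1, med, q3 = np.percentile(dt_d, [25, 50, 75])
print(f"median dt_d = {med:.1f} s, IQR = [{q1:.1f}, {q3:.1f}] s")
```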
Since the goal of the parameter space analysis is to evaluate the observational setup that gives us the best likelihood for detecting scintillation in narrowband signals, we focus on the central statistics. For skewed distributions, we choose to calculate the median and interquartile ranges (IQR) as representative values for each scale. From Figure 5, we conclude that signals at C-band in the direction \((1,0)\) are likely to have scintillation timescales ranging between 10 and 28 s. Indeed, since this is the IQR, only half of the sampled timescales lie in that range, and there is an implicit bias towards the lower end of that range and below. What this really tells us is that if we are searching in that sky direction and at that frequency, we should make sure to choose observational parameters so that we are sensitive to scintillation timescales between 10 and 28 s. Also, note that spectral broadening is of order 0.01 Hz, which is negligible compared to typical spectral resolutions used in modern radio SETI. With this tool, we can estimate which range of scintillation timescales to target for a given sky direction and frequency band.

## 5 Temporal Analysis of Detected Narrowband RFI

To evaluate whether it is viable to detect scattering effects like scintillation in detected narrowband signals, we must characterize the standard RFI environment within which SETI observations are taken. The majority of narrowband RFI is generated from communication applications; it is therefore common for RFI to show intensity modulation in frequency or time. Depending on the nature of this modulation and the free electron column density along a line of sight, RFI could confound the detection of actual scintillated extra-solar signals. We must therefore analyze the RFI environment, regardless of sky direction, with respect to temporal statistics that can be used to identify the presence of ISM scintillation. In this paper, we focus on RFI present in GBT observations, which comprise a significant fraction of BL data.

We must note that it is technically possible that any given detected signal in this "RFI" analysis is actually a technosignature. However, we can confidently say that the overwhelming majority of signals encountered will be anthropogenic in origin. Furthermore, in this analysis, we take observations in a direction where \(\Delta t_{d}\) is long compared to \(\tau_{\rm obs}\). This way, detected signals will not be modulated by ISM scintillation within a single observation, so whether or not a given signal is a technosignature is irrelevant to our analysis.

### Finding and Characterizing Signals

In this section, we outline the general process for detecting signals and extracting intensity time series data, from which we can compute diagnostic statistics and run our scintillation analysis. Figure 6 demonstrates the step-by-step process on a real GBT RFI signal.

The first step in analyzing the RFI environment is curating a dataset of detected signals. We need some form of energy detection to pinpoint the frequencies and preferably the drift rates of narrowband signals. The most common method for detection used by BL is the tree deDoppler code turboSETI3, which efficiently implements a matched filter for linearly drifting narrowband signals (Enriquez et al., 2017; Enriquez and Price, 2019). turboSETI gives us the signal frequency at the beginning of the observation and the best-fit drift rate.
However, to extract intensity data for scintillation analysis, we additionally need the frequency bandwidth that the signal occupies. Ultimately, we aim to construct a "bounding box" of sorts around each narrowband signal. Since narrowband signals can have an overarching Doppler drift rate, these bounding boxes are defined by a starting central frequency, a drift rate, and a signal bandwidth. In time-frequency space, these become bounding parallelograms, since we take the signal bandwidth to follow the extracted drift rate at each time step. Given a fit for the drift rate, we can de-drift a spectrogram containing the signal by shifting each individual spectrum accordingly, reducing the problem to finding the frequency bandwidth that overwhelmingly captures the signal's power.

There is no singular correct way to bound radio signals found in spectrogram data. There are many morphologies of narrowband signals, such as those with unstable oscillator frequencies or varying intrinsic bandwidths. Spectral leakage also affects bright signals and spreads the power into neighboring spectral bins. Background noise and nearby spurious signals can additionally complicate the bandwidth calculation. Signal bound estimation has been done before in radio astronomy. For pulsars, van Straten et al. (2012) measure the size of individual pulses as the width at a user-specified fraction of the peak intensity. In one of the rare instances of bandwidth estimation in narrowband SETI, Pinchuk et al. (2019) calculate signal bounds at the 5\(\sigma\)-level, regardless of the detected signal's peak S/N.

Our goal is to find the tightest frequency bounds that do not exclude a significant amount of signal power, so that we can accurately represent the intensity behavior over time. If our bounds are too tight, we risk excluding and distorting information; if they are too loose, noise fluctuations can take over and wash out the signal. In this work, we choose to bound signals at 1% of their maximal intensity. First, we de-drift and integrate a spectrogram along the time axis to get a spectrum centered on the signal. To make a fit of the noise background, we first exclude most of the bright data points with sigma clipping up to 3\(\sigma\). Then, we fit a straight line to the remaining points and obtain the final corrected spectrum by subtracting this fit from the original spectrum. The signal bounds are calculated as the frequency bins on the left and right of the signal center whose intensities dip below 1% of the maximum intensity in the corrected spectrum. This method is balanced, capturing most of the power from signals that have apparent bandwidths ranging from a few Hz to a kHz. Figure 6B shows an example of such a fit.

Figure 6: Steps used in signal intensity analysis. **A**: Detected narrowband signal, in GBT data. **B**: De-drifted signal from panel A, with computed bounding frequencies in dashed white lines. **C**: Frame from panel B, normalized using the background noise along the frequency axis. **D**: Time series intensities computed by integrating power in panel C between the bounding frequencies and normalized to a mean intensity of 1. **E**: Sample ACF computed from panel D.

To analyze the properties of a signal's intensity over time, we need to isolate the signal as best as possible from the noise background. To estimate the noise background, we use sigma clipping along the frequency axis to calculate the mean and standard deviation of noise at each timestep.
We then normalize the de-drifted spectrogram at every sub-spectrum by subtracting the corresponding noise mean and dividing by the corresponding noise standard deviation. Theoretically, this standardizes the instrument response over the course of the observation and centers the background intensity to 0. It also serves as a crude way of filtering out simple broadband interference. Figure 6C shows the resulting spectrogram. To get the intensity time series for a signal, we integrate the normalized spectrogram along the frequency axis between the computed frequency bounds, resulting in a 1D array of length \(N_{t}\). To standardize the analysis, we additionally normalize this time series to have a mean of 1, as shown in Figure 6D. From the normalized time series, we compute the ACF (Figure 6E). With these two together, we can calculate all the diagnostic statistics to compare with theoretical scintillation properties.

It is important to note that since we attempt to normalize the noise background of the spectrogram to a mean of 0 via subtraction, we may end up with negative values in our final extracted time series. Since we cannot remove the noise fluctuation entirely, the time series intensities will always be affected by noise in this way. Normalizing the time series to a mean of 1 can have the additional effect of making the negative "intensities" even more negative. Nevertheless, we choose to compute diagnostic statistics using the normalized time series.

### Observation Details

In this exploration of RFI properties, we are investigating the distribution of diagnostic statistics in real, detected RFI signals to evaluate whether these statistics can be used to identify the presence of scintillation. We must therefore ensure that our observations are unlikely to contain any scintillated signals. For this reason, and for additional convenience, we choose to observe towards the north celestial pole (NCP). We verified with NE2001 that the expected scintillation timescales are long compared to desired observation parameters. For instance, at 1 GHz (L-band), a signal at 1 kpc with \(V_{T}=100\) km/s would show a scintillation timescale of 702 s. The other bands we use at the GBT (S, C, and X) correspond to even longer expected timescales due to the frequency scaling.

The process for identifying scintillation can be performed over many observational timescales. In our case, we focus our analysis on data resolutions close to those used typically by BL. BL normally runs analysis on 5 minute integrations at a frequency resolution of 2.79 Hz and a time resolution of 18.2 s, for 16 pixels or time samples per observation. We use the same frequency resolution, but extend the data by taking 10 minute integrations at a resolution of 4.65 s, so that we get 128 samples per observation. Having more time samples leads to better diagnostic statistics and better time resolution but requires significantly more data storage.

For this work, we used the GBT to take 10 minute observations of the NCP each at L and C-band on 2022 May 16. To find narrowband signals, we use turboSETI with a detection threshold of S/N=10 to search up to maximum drift rates of \(\pm 5\) Hz/s. As an additional step, we exclude detections of the so-called "DC bin" in each coarse channel, a vertical artifact of the FFT performed during fine channelization.
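For reference, the core of the extraction procedure from Section 5.1 can be summarized in a short sketch, assuming a de-drifted Stokes I spectrogram array of shape (time, frequency); the function names are ours, and blscint implements the full pipeline:

```python
import numpy as np
from astropy.stats import sigma_clip

def bounding_bins(spec):
    """Frequency bounds at 1% of the baseline-corrected peak (Section 5.1)."""
    spectrum = spec.sum(axis=0)
    clipped = sigma_clip(spectrum, sigma=3)       # exclude bright (signal) bins
    good = ~np.ma.getmaskarray(clipped)
    x = np.arange(spectrum.size)
    baseline = np.polyval(np.polyfit(x[good], spectrum[good], 1), x)
    corrected = spectrum - baseline
    peak = int(corrected.argmax())
    above = corrected >= 0.01 * corrected[peak]
    left = np.where(~above[:peak])[0]             # last bin below 1% on the left
    right = np.where(~above[peak:])[0]            # first bin below 1% on the right
    f_lo = left[-1] + 1 if left.size else 0
    f_hi = peak + right[0] if right.size else spectrum.size
    return f_lo, f_hi

def extract_time_series(spec):
    """Normalized intensity time series between the bounding frequencies."""
    clipped = sigma_clip(spec, sigma=3, axis=1)   # per-timestep noise statistics
    mean = np.ma.mean(clipped, axis=1).data[:, None]
    std = np.ma.std(clipped, axis=1).data[:, None]
    normalized = (spec - mean) / std
    f_lo, f_hi = bounding_bins(spec)
    ts = normalized[:, f_lo:f_hi].sum(axis=1)
    return ts / ts.mean()
```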
### Empirical Results

Using the procedure described in Section 5.1, we compute diagnostic statistics for detected signals in GBT observations taken at L and C-bands. For convenience, in this discussion, we will refer to detected GBT signals as "RFI". While these observations are very unlikely to contain scintillated signals, we cannot necessarily rule out the presence of technosignatures in our data. Nevertheless, we can comfortably say that the vast majority of signals are human-created interference.

To best compare with our expectations for scintillated narrowband signals, we create synthetic GBT observations with scintillated signals produced using the methods in Section 3.3 and run them through the same analysis pipeline. For the synthetic signals, we construct separate datasets using \(\Delta t_{d}=\) 10, 30, and 100 s, as in Figure 3.

The synthesis process described in Section 3.3 does not take noise into consideration. In this work, we treat narrowband signals as additional power that is present on top of the noise background. As such, we assume that the effects of ISM scintillation are imprinted on the signal independently from the noise background. To construct a synthetic observation, we compute a realization of a scintillated signal's intensity over time using ARTA and inject a signal with those intensities onto a radio spectrogram with a realistic noise background, following Equation 3. We use the Python package setigen4 to inject artificial signals and compare directly with real GBT observations (Brzycki et al., 2022). For each scintillation timescale, we generate \(N=1000\) signals with zero drift rate and an S/N matching our turboSETI detection threshold. We calculate diagnostic statistics for the artificial signals in the same way that we do for detected RFI.

Footnote 4: https://github.com/bbrzycki/setigen

The histogram comparisons for each diagnostic statistic at L and C-bands are shown in Figures 7 and 8. The bold, black histograms show the non-DC RFI samples in the respective frequency band, whereas the thinner histograms represent the synthetic signal datasets. The less the RFI distributions intersect with the scintillated signal distributions, the better our methodology can distinguish a true scintillated signal.

Figure 7: Histograms of diagnostic statistics for detected L-band signals with S/N\(\geq\)25. For each statistic, the distribution from detected RFI is shown in black. Plotted for comparison are distributions from synthetic scintillated signals at S/N=25 with scintillation timescales of 10 s (blue), 30 s (orange), and 100 s (green). Across all diagnostic statistics, it would be difficult to distinguish a true scintillated signal from RFI given the L-band RFI distributions.

Figure 8: Histograms of diagnostic statistics for detected C-band signals with S/N\(\geq\)25. For each statistic, the distribution from detected RFI is shown in black. Plotted for comparison are distributions from synthetic scintillated signals at S/N=25 with scintillation timescales of 10 s (blue), 30 s (orange), and 100 s (green). It could be possible to distinguish a true scintillated signal from RFI given the C-band RFI distributions.

At a glance, C-band RFI has better separation than L-band RFI from the scintillated signal distributions, across all diagnostic statistics. In particular, for C-band, the statistics pertinent to the exponential distribution of scintillated intensities (standard deviation, minimum, K-S statistic) have relatively well-defined separations. These can be used to set thresholds (or target ranges) for each statistic, which can be combined to help filter detected signals for scintillation candidates. While the fitted scintillation timescale distributions intersect appreciably, in practice, thresholds can still be set using synthetic signal distributions and used as filters.

Comparatively, a significant portion of the L-band RFI occupies the same ranges of statistics as the synthetic signals. This means that existing RFI would confound the detection of real scintillated signals with these methods. From our observations, we find that lower frequencies (such as L and S bands) have a relatively higher density of RFI with many morphologies, and this could be causing the distributions of statistics to appear broader and more irregular than those for C-band RFI.
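As an illustration of the injection step, the sketch below generates a gain series via the FFT method from Section 3.3 (standing in for the full ARTA machinery) and injects it into a synthetic frame with setigen. The frame parameters are illustrative, and the setigen calls follow its documented interface, though details may vary between versions:

```python
import numpy as np
import setigen as stg
from astropy import units as u

rng = np.random.default_rng(0)

def scintillation_gains(n, dt, dt_d):
    """Gains g(t) with exponential marginals and ACF ~ Gamma_k, via the
    FFT recipe of Section 3.3 (voltage-domain ACF is Gamma_k^(1/2))."""
    lags = np.fft.fftfreq(n, d=1 / (n * dt))   # symmetric lag grid
    acf_voltage = np.exp(-np.abs(lags / dt_d) ** (5 / 3) / 2)
    S = np.abs(np.fft.fft(acf_voltage))        # power spectrum, kept >= 0
    noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    voltage = np.fft.ifft(noise * np.sqrt(S))
    g = np.abs(voltage) ** 2
    return g / g.mean()

# A 10-minute frame at the resolutions used in this work
frame = stg.Frame(fchans=256, tchans=128, df=2.79 * u.Hz, dt=4.65 * u.s,
                  fch1=6 * u.GHz)
frame.add_noise(x_mean=10)

g = scintillation_gains(128, dt=4.65, dt_d=30.0)
intensity = frame.get_intensity(snr=25)
frame.add_signal(
    stg.constant_path(f_start=frame.get_frequency(128), drift_rate=0),
    # t_profile: assumes setigen passes an array of times in seconds
    lambda t: intensity * g[np.minimum((t / 4.65).astype(int), g.size - 1)],
    stg.gaussian_f_profile(width=20 * u.Hz),
    stg.constant_bp_profile(level=1),
)
```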
## 6 Discussion

### Observational Recommendations for Scintillated Technosignature Searches

The empirical RFI distributions suggest that at the GBT, higher frequencies will be better for creating statistics-based thresholds.5 The RFI environment at C and X-bands is less dense and less diverse than that at L and S-bands. However, scattering strength decreases with increasing frequency, which lengthens the scintillation timescales (Equation 10). There is also a trade-off in choosing which frequencies to search: higher frequencies have more favorable RFI properties but require either longer observations or pointings with more scattering.

Footnote 5: For other telescope sites, a similar RFI analysis would need to be conducted in order to draw similar insights about RFI vs. frequency.

For each observing band, the RFI environment sets unavoidable statistics thresholds. At L-band, for instance, it is possible that there is no sky direction and no target scintillation timescale amenable for a scintillated technosignature search. While the properties of the local RFI environment certainly vary as a function of time and location, our observations suggest that lower frequencies may always be difficult to use. Specifically, the empirical L-band RFI distributions covered the ideal asymptotic value for each diagnostic statistic, implying that no variation of observational parameters could unambiguously distinguish an appreciable fraction of RFI from real scintillated signals. On the other hand, for C-band and above, we must tend towards longer observing lengths or point towards regions of higher scattering, such as the Galactic center, in order to capture enough scintles. As discussed by Gajjar et al. (2021), there are a multitude of reasons that an ETI detection might be most likely towards the Galactic center, making this an attractive option for a scintillated technosignature search.

As the field of radio SETI grows and as new technosignature candidates are found, more work is being done in signal verification and follow-up analysis (Sheikh et al., 2021; Tao et al., 2022). To this end, beyond dedicated searches for scintillation, the methods introduced in this paper may also be used as supplementary analysis for other radio SETI searches. For example, given an interesting narrowband detection that passes some SETI filters, one might ask additionally whether the signal is ISM-scintillated. Following the steps in this work and using blscint, one could estimate likely scintillation timescales along the observation's line of sight at the detected signal frequency.
Then, one could generate synthetic ARTA datasets to set diagnostic statistic thresholds and compare how the statistics for the detected signal measure up. Assuming the signal was still compelling after these steps, it would be prudent to do a similar detected RFI analysis using the same telescope, frequency band, observation length, and time resolution to check for RFI with confounding modulation. While emission from distant sources along the Galactic plane has the best chance of exhibiting detectable scintillation within individual observations, these methods constitute a concrete framework for evaluating the likelihood of scintillation in signals from any observational radio SETI campaign.

### The Impact of Models on Designing Observational Campaigns

The effectiveness of a dedicated search for scintillated technosignatures will depend on how well we can estimate the most likely values for \(\Delta t_{d}\) as a function of sky direction and frequency. The fewer unknown degrees of freedom in our Monte Carlo sampling procedure (Section 4), the better the timescale estimates will be. For example, if we wanted to estimate what timescales are possible for emission near a particular known star, we would already begin with the location \((l,b)\) and distance \(d\). The only major parameters left would be the target frequency range (which we can control) and the effective transverse velocity. By constraining sampling parameters, one can get tighter bounds for scintillation timescales and tune observation parameters accordingly.

Our Monte Carlo procedure for scattering strength estimates relies on the NE2001 electron density model. While NE2001 remains a popular choice, the YMW16 model from Yao et al. (2017) has emerged as another prominent Galactic electron density model. There have been studies comparing both, such as Deller et al. (2019) and Price et al. (2021), particularly with regards to DM and distance estimation applied to new pulsar datasets. While YMW16 benefits from more recent data, when compared to independent pulsar measurements, both models have their own systematic estimation biases that depend on the location in the Galaxy (Price et al., 2021).

The key difference for this work is that NE2001 uses scattering measurements in its fit and estimates scattering properties throughout the Galaxy (Cordes and Lazio, 2002). YMW16 specifically avoids using scattering measurements, arguing that the majority of scattering arises from relatively thin features along the line of sight and therefore cannot be used to appropriately describe the large-scale distribution of scattering (Yao et al., 2017). However, the YMW16 model still attempts to estimate pulse broadening timescales by simplistically applying an empirical \(\tau\)-DM relation, resulting in unreliable scattering values, especially for fast radio bursts (Ocker et al., 2021). While it may be difficult to develop a model that robustly constrains the effects of scattering along any line of sight in the Galaxy, doing so even to an order of magnitude would be crucial for designing scintillation search strategies for SETI, as well as for evaluating whether existing narrowband detections could benefit from scintillation analysis. As new pulsars are discovered and new Galactic electron density models are produced, we suggest that attention should still be given to scattering measurements and predictions.
### Building on the Analysis Pipeline While it involves many steps, the method for search and intensity extraction described in this paper is relatively straightforward. We rely on standard deDoppler search methods (e.g. turboSETI) to both find and characterize signal paths in one shot. Since we are searching for a stochastic effect, keeping the processing simple is not necessarily a detriment. However, our pipeline will still flag bright broadband signals that are able to exceed our S/N threshold. The philosophical question on whether a broadband impulse that contains sharp spectral features could be considered narrowband notwithstanding, using additional pre-processing to detect broadband signal features could better standardize the types of signals passing through the intensity extraction pipeline. Machine learning (ML) could be used to aid scintillated searches, such as for creating initial classifications of signal type and eventually even for doing final candidate analysis. In particular, deep learning techniques, such as convolutional neural networks (CNNs), have been used effectively on a variety of tasks using radio spectrograms (Zhang et al., 2018; Harp et al., 2019; Brzycki et al., 2020; Pinchuk and Margot, 2022; Ma et al., 2023). CNNs could be used to filter out spectrograms with clear broadband emission and would be relatively straightforward to integrate into the pipeline, as sketched below. There is certainly an avenue for complementing domain-based statistical features with computer vision methods, as is done in time-domain SETI (Giles and Walkowicz, 2019). ML techniques could also be applied to the extracted time series or even to the raw signal spectrogram to directly classify likely scintillation candidates. From the standpoint of interpretability, having a set of diagnostic statistics with direct links to the expected theoretical behavior of scintillated narrowband signals provides us with intuitive filter thresholds, whereas a direct ML approach might not. However, used in tandem with our methods for producing synthetic scintillated signals, supervised ML algorithms such as random forest classifiers could be used to rank each of our diagnostic statistics in their importance towards correctly distinguishing scintillated signals from RFI (Breiman, 2001). This could be a valuable future direction for scintillation-based searches and may very well be a function of each observatory's unique RFI environment.
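As a sketch of what such a pre-filter might look like, the following minimal CNN (in PyTorch) classifies fixed-size spectrogram snippets as broadband or narrowband before intensity extraction; the input shape, architecture, and class set are illustrative assumptions and not components of the existing pipeline:

```python
import torch
import torch.nn as nn

class BroadbandFilterCNN(nn.Module):
    """Minimal CNN sketch for pre-filtering spectrogram snippets
    (here 16 time bins x 256 frequency bins, an assumed shape) into
    broadband vs. narrowband classes before intensity extraction."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # -> (8, 8, 128)
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # -> (16, 4, 64)
        )
        self.classifier = nn.Linear(16 * 4 * 64, 2)  # two classes

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = BroadbandFilterCNN()
batch = torch.randn(4, 1, 16, 256)   # normalized dynamic spectra
logits = model(batch)                 # shape (4, 2)
```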
### Implications and Future Directions In this work, we only focus on searching for strong scintillation on high duty-cycle narrowband signals. Since the ionosphere and IPM tend to modulate intensity only slightly in most cases, we identified strong scintillation from the ISM as detectable from 100% intensity modulations. Analysis of the RFI environment at the GBT suggests that weakly scintillated extra-solar signals would be difficult to distinguish from existing interference, while strongly scintillated signals can be separated along multiple diagnostic statistics. A common procedure during signal verification of an interesting candidate is to search for other signals close in frequency that are similar in morphology (Sheikh et al., 2021). Along these lines, the possibility of simultaneous ETI signals at multiple frequencies is interesting from the perspective of a scintillation analysis. For signals separated by less than the scintillation bandwidth, we should see the same intensity modulation over time. However, for signals separated by more than the scintillation bandwidth, we would receive different intensity time series that still have the same overall scintillation timescale. With our tool to estimate scintillation timescales and bandwidths, if we were to detect multiple spectrally-nearby scintillation candidates within the same observation, we would have yet another way to contextualize the detected signals and determine whether they might actually be technosignatures. We limit our search methodology to high duty-cycle signals, so that any fluctuations in intensity are purely due to scintillation. If an ETI transmitter is attempting to send information, the initial signal will already be modulated. This could also confound the presence of scintillation. However, we argue that along the lines of sight and distances for which we would expect narrowband signals to be scintillated, the identification of scintillation is itself a message. An ETI civilization advanced enough to transmit a message through interstellar space should understand the effects of plasma on radio emission, since it would distort the initial transmission and hinder communication. With this in mind, an ETI beacon might instead transmit a pure, unmodulated signal, expecting that other civilizations could detect the presence of scintillation in an artificial, narrowband signal. Instead of explicitly encoding a message in the narrowband signal, the mere presence of scintillation would communicate the message: "we are here." Radio scattering from ionized plasma presents in other ways, such as broadband modulation and dispersion. While broadband SETI searches are relatively less common, as we explore new regions of the potential SETI signal parameter space, scintillation could be searched along the frequency axis analogously to our search along the time axis. The scintillation bandwidth, the spectral analogue of the scintillation timescale, does not vary as a function of transverse velocity, so parameter estimation may be less uncertain (Cordes and Lazio, 1991). Broadband signal searches are also able to use coarser frequency resolutions than narrowband searches, though they would likely have to use much finer time resolutions. We hope that this work will lead to more discussion and theoretical work on other ways in which the actual radio emission that we receive can be used to identify the extra-solar origin of technosignatures. Beyond scattering, there are still properties of radio emission, such as polarization, that are only beginning to be considered in depth from a SETI perspective (Tao et al., 2022). Whether it is because certain effects are stochastic or because human radio emission exploits every facet of radio light possible for communication, extracting non-trivial information from a radio signal's detailed morphology has been and will remain difficult. We may need to push the limits of detectability along hitherto unexplored axes to discover the first technosignature. ## 7 Acknowledgements Breakthrough Listen is managed by the Breakthrough Initiatives, sponsored by the Breakthrough Prize Foundation. The Green Bank Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc. We thank the staff at the Green Bank Observatory for their operational support. S.Z.S. acknowledges that this material is based upon work supported by the National Science Foundation MPS-Ascend Postdoctoral Research Fellowship under Grant No. 2138147.
2310.02117
Symmetric Single Index Learning
Few neural architectures lend themselves to provable learning with gradient based methods. One popular model is the single-index model, in which labels are produced by composing an unknown linear projection with a possibly unknown scalar link function. Learning this model with SGD is relatively well-understood, whereby the so-called information exponent of the link function governs a polynomial sample complexity rate. However, extending this analysis to deeper or more complicated architectures remains challenging. In this work, we consider single index learning in the setting of symmetric neural networks. Under analytic assumptions on the activation and maximum degree assumptions on the link function, we prove that gradient flow recovers the hidden planted direction, represented as a finitely supported vector in the feature space of power sum polynomials. We characterize a notion of information exponent adapted to our setting that controls the efficiency of learning.
Aaron Zweig, Joan Bruna
2023-10-03T14:59:00Z
http://arxiv.org/abs/2310.02117v1
# Symmetric Single Index Learning ###### Abstract Few neural architectures lend themselves to provable learning with gradient based methods. One popular model is the single-index model, in which labels are produced by composing an unknown linear projection with a possibly unknown scalar link function. Learning this model with SGD is relatively well-understood, whereby the so-called information exponent of the link function governs a polynomial sample complexity rate. However, extending this analysis to deeper or more complicated architectures remains challenging. In this work, we consider single index learning in the setting of symmetric neural networks. Under analytic assumptions on the activation and maximum degree assumptions on the link function, we prove that gradient flow recovers the hidden planted direction, represented as a finitely supported vector in the feature space of power sum polynomials. We characterize a notion of information exponent adapted to our setting that controls the efficiency of learning. ## 1 Introduction Quantifying the advantage of neural networks over simpler learning systems remains a primary question in deep learning theory. Specifically, understanding their ability to discover relevant low-dimensional features out of high-dimensional inputs is a particularly important topic of study. One facet of the challenge is explicitly characterizing the evolution of neural network weights through gradient-based methods, owing to the nonconvexity of the optimization landscape. The single index setting, long studied in economics and biostatistics [25], offers the simplest setting where non-linear feature learning can be characterized explicitly. In this setting, functions of the form \(x\mapsto f(\langle x,\theta^{*}\rangle)\), where \(\theta^{*}\in\mathcal{S}_{d-1}\) represents a hidden direction in high-dimensional space and \(f\) a certain non-linear link function, are learned via a student with an identical architecture \(x\mapsto f(\langle x,\theta\rangle)\), under certain data distribution assumptions, such as Gaussian data. Gradient flow and gradient descent [4, 13, 29] in this setting can be analyzed by reducing the high-dimensional dynamics of \(\theta\) to dimension-free dynamics of appropriate _summary statistics_, given in this case by the scalar correlation \(\langle\theta,\theta^{*}\rangle\). The efficiency of gradient methods in this setting, measured either in continuous time or independent samples, is controlled by two main properties. First, the correlation at initialization, which typically scales as \(\frac{1}{\sqrt{d}}\) for standard assumptions. Second, the information exponent \(s_{f}\) of \(f\) [2, 4, 7, 10, 11, 13], which measures the number of effective vanishing moments of the link function, leading to a sample complexity of the form \(O(d^{s-1})\) for generic values of \(s\). While this basic setup has been extended along certain directions, e.g. relaxing the structure on the input data distribution [9, 29], considering the multi-index counterpart [1, 2, 3, 11], or learning the link function with semi-parametric methods [7, 21], these extensions are all fundamentally associated with fully-connected shallow neural networks. Such an architecture, for all its rich mathematical structure, also comes with important shortcomings. In particular, it is unable to account for predefined symmetries in the target function that the learner wishes to exploit.
This requires specialized neural architectures enforcing particular invariances, setting up novel technical challenges to carry out the program outlined above. In this work, we consider arguably the easiest form of symmetry, given by permutation invariance. The primary architecture for this invariance is DeepSets [30], which is necessarily three layers by definition and therefore not a simple extension of the two layer setting. In order to quantify the notion of 'symmetric' feature learning in this setting, we introduce a symmetric single index target, and analyze the ability of gradient descent over a DeepSets architecture to recover it. Under appropriate assumptions on the model, initialization and data distribution, we combine the previous analyses with tools from symmetric polynomial theory to characterize the dynamics of this learning problem. Our primary theorem is a proof of efficient learning under gradient flow, with explicit polynomial convergence rates controlled by an analogue of information exponent adapted to the symmetric setting. Combined with other contemporary works, this result solidifies the remarkable ability of gradient descent to perform feature learning under a variety of high-dimensional learning problems. ## 2 Setup ### Notation For \(z\in\mathbb{C}\), we will use \(\overline{z}\) to denote the complex conjugate, with the notation \(z^{*}\) always being reserved to denote a special value of \(z\) rather than an operation. For complex matrices \(A\) we will use \(A^{\dagger}\) to denote the conjugate transpose. The standard inner product on \(\mathbb{C}^{N}\) is written as \(\langle\cdot,\cdot\rangle\), whereas inner products on \(L^{2}(\gamma)\) spaces for some probability measure \(\gamma\) will be written as \(\left\langle\cdot,\cdot\right\rangle_{\gamma}\). Furthermore, for \(h\) a vector and \(p(x)\) a vector-valued function, we will use \(\left\langle h,p\right\rangle_{\gamma}\) as shorthand for the notation \(\left\langle h,p(\cdot)\right\rangle_{\gamma}\). ### Regression setting and Teacher function We consider a typical regression setting, where given samples \((x,y)\in\mathcal{X}\times\mathbb{C}\) with \(y=F(x)\), we seek to learn a function \(F_{w}\) with parameter \(w\in\mathbb{C}^{M}\) by minimizing some expected loss \(E_{x\sim\nu}\left[L(F(x),F_{w}(x))\right]\). Note that we consider complex-valued inputs and parameters because they greatly simplify the symmetric setting (see Proposition 2.3), hence we will also assume \(\mathcal{X}\subseteq\mathbb{C}^{N}\). Both \(F\) and \(F_{w}\) will be permutation invariant functions, meaning that \(F(x_{\pi(1)},\ldots,x_{\pi(N)})=F(x_{1},\ldots,x_{N})\) for any permutation \(\pi:\{1,\ldots,N\}\to\{1,\ldots,N\}\). Typically, the single index setting assumes that the trained architecture will exactly match the true architecture (e.g. as in [4]), but below we will see why it's necessary to consider separate architectures. For that reason, we'll consider separately defining the teacher \(F\) and the student \(F_{w}\). The first ingredient is the family of power sum polynomials: **Definition 2.1**.: _For \(k\in\mathbb{N}\) and \(x\in\mathbb{C}^{N}\), the normalized powersum polynomial is defined as_ \[p_{k}(x)=\frac{1}{\sqrt{k}}\sum_{n=1}^{N}x_{n}^{k}\.\] Let \(p(x)=[p_{1}(x),p_{2}(x),\ldots]\) be an infinite dimensional vector of powersums, and consider a fixed vector \(h^{*}\in\mathbb{C}^{\infty}\) of unit norm.
Then our teacher function \(F\) will be of the form \[F:\mathcal{X} \to\mathbb{C} \tag{1}\] \[x \mapsto F(x):=f(\left\langle h^{*},p(x)\right\rangle) \tag{2}\] for some scalar link function \(f:\mathbb{C}\to\mathbb{C}\). \(F\) may thus be understood as a single-index function in the feature space of powersum polynomials. ### DeepSets Student Function Let us recall the typical structure of a DeepSets network [30], where for some maps \(\Phi:\mathcal{X}\to\mathbb{C}^{M}\) and \(\rho:\mathbb{C}^{M}\to\mathbb{C}\), the standard DeepSets architecture is of the form: \[x\mapsto\rho\left(\Phi_{1}(x),\ldots,\Phi_{M}(x)\right). \tag{3}\] The essential restriction is that \(\Phi\) is a permutation invariant mapping, typically of the form \(\Phi_{m}(x)=\sum_{n=1}^{N}\phi_{m}(x_{n})\) for some map \(\phi_{m}:\mathbb{C}\to\mathbb{C}\). In order to parameterize our student network as a DeepSets model, we will make the simplest possible choices, while preserving its non-linear essence. To define our student network, we consider the symmetric embedding \(\Phi\) as a one-layer neural network with no bias terms: \[\Phi_{m}(x)=\sum_{n=1}^{N}\sigma(a_{m}x_{n})\, \tag{4}\] for i.i.d. complex weights sampled uniformly from the complex circle \(a_{m}\sim S^{1}\) and some activation \(\sigma:\mathbb{C}\rightarrow\mathbb{C}\). Given some link function \(g:\mathbb{C}\rightarrow\mathbb{C}\), we'll consider the mapping \(\rho\) as: \[\rho_{w}(\cdot)=g(\langle w,\cdot\rangle)\, \tag{5}\] where \(w\in\mathbb{C}^{M}\) are our trainable weights. Putting it all together, our student network thus becomes \[F_{w}:\mathcal{X} \rightarrow\mathbb{C}\] \[x \mapsto F_{w}(x):=g(\langle w,\Phi(x)\rangle). \tag{6}\] In other words, \(F_{w}\) corresponds to a DeepSets network where the first and third layer weights are frozen, and only the second layer weights (with no biases) are trained. The first fact we need is that, through simple algebra, the student may be rewritten in the form of a single-index model. **Proposition 2.2**.: _There is a matrix \(A\in\mathbb{C}^{\infty\times M}\) depending only on the activation \(\sigma\) and the frozen weights \(\{a_{m}\}_{m=1}^{M}\) such that_ \[g(\langle w,\Phi(x)\rangle)=g(\langle Aw,p(x)\rangle). \tag{7}\] ### Hermite-like Identity In the vanilla single index setting, the key to giving an explicit expression for the expected loss (for Gaussian inputs) is a well-known identity of Hermite polynomials [15, 22]. If \(h_{k}\) denotes the Hermite polynomial of degree \(k\), this identity takes the form \[\langle h_{k}(\langle\cdot,u\rangle),h_{l}(\langle\cdot,v\rangle) \rangle_{\gamma_{n}}=\delta_{kl}k!\langle u,v\rangle^{k}\, \tag{8}\] where \(u,v\in\mathbb{R}^{n}\) and \(\gamma_{n}\) is the standard Gaussian distribution on \(n\) dimensions. In our setting, as it turns out, one can establish an analogous identity by considering a different input probability measure and a bound on the degree of the link function. We will choose our input domain \(\mathcal{X}=(S^{1})^{N}\), and the input distribution we will consider is the set of eigenvalues of a Haar-distributed unitary matrix in dimension \(N\)[12], or equivalently the squared Vandermonde density over \(N\) copies of the complex unit circle [20]. We'll interchangeably use the notation \(\mathbb{E}_{x\sim V}[f(x)\overline{g(x)}]=\langle f,g\rangle_{V}\).
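Proposition 2.2 amounts to re-expanding the power series of \(\sigma\) in the powersum basis, and is easy to sanity-check numerically. Below is a minimal sketch, assuming a finite series truncation and an unconjugated pairing \(\langle u,v\rangle=\sum_{i}u_{i}v_{i}\) (a convention choice made for simplicity; the paper's conjugation conventions may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 10, 20, 15                     # set size, network width, series truncation

c = rng.standard_normal(K) / np.arange(1, K + 1)   # activation coefficients c_k, k >= 1
a = np.exp(2j * np.pi * rng.random(M))             # frozen first-layer weights on S^1
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # trainable weights
x = np.exp(2j * np.pi * rng.random(N))             # one input set on the unit circle

ks = np.arange(1, K + 1)
# Normalized powersums p_k(x) = (1/sqrt(k)) * sum_n x_n^k
p = (x[None, :] ** ks[:, None]).sum(axis=1) / np.sqrt(ks)
# A_{km} = c_k * sqrt(k) * a_m^k, so that <w, Phi(x)> = <A w, p(x)>
A = c[:, None] * np.sqrt(ks)[:, None] * (a[None, :] ** ks[:, None])

def sigma(z):
    """Truncated analytic activation sigma(z) = sum_k c_k z^k."""
    return sum(c[k - 1] * z**k for k in ks)

Phi = np.array([sigma(a_m * x).sum() for a_m in a])   # Phi_m(x) = sum_n sigma(a_m x_n)
lhs = np.sum(w * Phi)          # <w, Phi(x)>
rhs = np.sum((A @ w) * p)      # <A w, p(x)>
assert np.allclose(lhs, rhs)   # Proposition 2.2 holds exactly for the truncated series
```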
**Proposition 2.3**.: _Consider \(h,\tilde{h}\in\mathbb{C}^{\infty}\) with bounded \(L_{2}\) norm. For exponents \(k,l\) with \(k\leq\sqrt{N}\), if \(h\) is only supported on the first \(\sqrt{N}\) elements, then:_ \[\langle\langle h,p\rangle^{k},\langle\tilde{h},p\rangle^{l}\rangle_{V}=\delta_ {kl}k!\langle h,\tilde{h}\rangle^{k}. \tag{9}\] The crucial feature of this identity is that the assumptions on support and bounded degree only apply to \(\langle h,p\rangle^{k}\), with no restrictions on the other term. In our learning problem, we can use this property to make these assumptions on the teacher function, while requiring no bounds on the terms of the student DeepSets architecture. In order to take advantage of the assumptions on the support of \(h\) and the degree in the above proposition, we need to make the following assumptions on our teacher link function \(f\) and our true direction \(h^{*}\): **Assumption 2.4**.: _The link function \(f\) is analytic and only supported on the first \(\sqrt{N}\) degree monomials, i.e._ \[f(z)=\sum_{j=1}^{\sqrt{N}}\frac{\alpha_{j}}{\sqrt{j!}}z^{j} \tag{10}\] _Furthermore, the vector \(h^{*}\) is only supported on the first \(\sqrt{N}\) elements._ Although this assumption is required to apply the orthogonality property for our loss function in the following sections, we note that in principle, including exponentially small terms of higher degree in \(f\) or higher index in \(h^{*}\) should have negligible effect. Moreover, one should interpret this assumption as silently disappearing in the high-dimensional regime \(N\to\infty\). For simplicity, we keep this assumption to make cleaner calculations and leave the issue of these small perturbations to future work. ### Information Exponent Because Proposition 2.3 takes inner products of monomials, it alludes to a very simple characterization of information exponent. Namely: **Definition 2.5**.: _Consider an analytic function \(f:\mathbb{C}\to\mathbb{C}\) that can be written in the form_ \[f(z)=\sum_{j=0}^{\infty}\frac{\alpha_{j}}{\sqrt{j!}}z^{j} \tag{11}\] _Then the information exponent is defined as \(s=\inf\{j\geq 1:\alpha_{j}\neq 0\}\)._ Similar to the Gaussian case [4, 7], the information exponent \(s\) will control the efficiency of learning. Assuming \(|\alpha_{s}|\) is some non-negligible constant, the value of \(s\) will be far more important in governing the convergence rate. ### Choosing a learnable loss There are two subtleties to choosing an appropriate loss function. Namely, the necessity of a correlational loss (with regularization), and the necessity of choosing the student and teacher link functions to be distinct. At first glance, it is tempting to simply define a loss of the form \[\tilde{L}(w)=\mathbb{E}_{x\sim V}|F(x)-F_{w}(x)|^{2}=\mathbb{E}_{x\sim V}\left[|f (\langle h^{*},p(x)\rangle)-f(\langle Aw,p(x)\rangle)|^{2}\right]. \tag{12}\] However, the DeepSets student model is not degree limited; that is, the support of \(Aw\) is not restricted to the first \(\sqrt{N}\) terms of the powersum expansion. In other words, expanding this loss will require calculating the term \(\|f(\langle Aw,p\rangle)\|_{V}^{2}\), which will contain high degree terms that cannot be controlled with Proposition 2.3. One could avoid this issue by choosing the activation such that \(Aw\) only contains low-index terms, but we want to consider larger classes of activations and enforce fewer restrictions. One can instead consider a correlational loss.
In this case, in order to make the objective have a bounded global minimum, it's necessary to either regularize \(w\), or project at every step of SGD, which is the strategy taken in Damian et al. [10]. In our setting, this projection would correspond to projecting \(w\) to the ellipsoid surface \(\|Aw\|=1\). This projection would require solving an optimization problem at every timestep [23]. To avoid this impracticality, we instead consider regularization. Then with complete knowledge of the link function \(f\), specifically its monomial coefficients, we can now define the correlational loss \[\hat{L}(w)=\mathbb{E}_{x\sim V}\left[-\operatorname{Re}\Bigl\{f(\langle h^{*},p(x)\rangle)\overline{f(\langle Aw,p(x)\rangle)}\Bigr\}\right]+\sum_{j=1}^{\sqrt{N}}\frac{|\alpha_{j}|^{2}}{2}\|Aw\|^{2j}. \tag{13}\] This loss enjoys benign optimization properties, as shown by the following proposition: **Proposition 2.6**.: _If there exist coprimes \(k,l\) with \(\alpha_{k},\alpha_{l}\neq 0\), and \(h^{*}\) is in the range of \(A\), then \(\hat{L}\) exclusively has global minima at all \(w\) such that \(Aw=h^{*}\)._ However, unlike the real case, complex weights cause issues for learning this objective. Namely, this objective can be written as a non-convex polynomial in \(\cos\theta\), where \(\theta\) is the angle of \(\langle Aw,h^{*}\rangle\) in polar coordinates. Therefore, we consider a different choice of student link function that will enable a simpler analysis of the dynamics. For the choice of \(g(z)=\frac{\alpha_{s}}{|\alpha_{s}|\sqrt{s!}}z^{s}\), we instead consider the loss: \[L(w) =\mathbb{E}_{x\sim V}\left[-\operatorname{Re}\Bigl\{f(\langle h^{*},p(x)\rangle)\overline{g(\langle Aw,p(x)\rangle)}\Bigr\}\right]+\frac{|\alpha_{s}|}{2}\|Aw\|^{2s} \tag{14}\] \[=-|\alpha_{s}|\operatorname{Re}\{\langle Aw,h^{*}\rangle^{s}\}+\frac{|\alpha_{s}|}{2}\|Aw\|^{2s}. \tag{15}\] We note that Dudeja & Hsu [13] used a similar trick of a correlational loss containing a single orthogonal polynomial in order to simplify the learning landscape. The global minima of this loss, and in fact the dynamics of gradient flow on it, will be explored in the sequel.
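The closed form in Eq. (15) depends on \(w\) only through \(Aw\), which makes it easy to probe directly. A minimal sketch, assuming a Hermitian inner product for \(\langle Aw,h^{*}\rangle\) (a convention choice) and illustrative values of \(s\) and \(\alpha_{s}\); it also exhibits the \(s\)th-root-of-unity invariance discussed in Remark 4.3 below:

```python
import numpy as np

rng = np.random.default_rng(1)
s, alpha_s, dim = 3, 1.0, 8   # illustrative information exponent and coefficient

def loss_from_Aw(Aw, h_star):
    """Closed form of Eq. (15); m = <Aw, h*> is the summary statistic."""
    m = np.vdot(h_star, Aw)   # Hermitian pairing (conjugates h_star)
    return (-abs(alpha_s) * np.real(m**s)
            + 0.5 * abs(alpha_s) * np.linalg.norm(Aw) ** (2 * s))

h_star = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
h_star /= np.linalg.norm(h_star)

omega = np.exp(2j * np.pi / s)                       # primitive s-th root of unity
print(loss_from_Aw(h_star, h_star))                  # global minimum, -|alpha_s|/2
print(loss_from_Aw(omega * h_star, h_star))          # identical value: s-fold symmetry
v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
print(loss_from_Aw(v / np.linalg.norm(v), h_star))   # generically larger
```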
## 3 Related Work ### Single Index Learning The conditions under which single-index model learning is possible have been well-explored in previous literature. The main assumptions that enable provable learning under gradient flow / gradient descent are monotonicity of the link function [16, 17, 27, 29] or Gaussian input distribution [4]. The former assumption essentially corresponds to the setting where the information exponent \(s=1\), as it will have positive correlation with a linear term. Under the latter assumption, the optimal sample complexity was achieved in Damian et al. [10], with study of learning when the link function is not known in Bietti et al. [7]. When both assumptions are broken, the conditions on the input distribution of rotation invariance or approximate Gaussianity are nevertheless sufficient for learning guarantees [9]. But more unusual distributions, especially in the complex domain that is most convenient for symmetric networks, are not well studied. ### Symmetric Neural Networks The primary model for symmetric neural networks was introduced in Zaheer et al. [30] as the DeepSets model. There are many similar models that enforce permutation invariance [19, 24, 26], though we focus on DeepSets because of its relationship with the power sum polynomials and orthogonality [31]. We are not aware of any other works that demonstrate provable learning of symmetric functions under gradient-based methods. ## 4 Provably Efficient Recovery with Gradient Flow ### Defining the Dynamics The gradient methods considered in Arous et al. [4], Ben Arous et al. [6] are analyzed by reducing to a dimension-free dynamical system of the so-called summary statistics. For instance, in the vanilla single-index model, the summary statistics reduce to the scalar correlation between the learned weight and the true weight. In our case, we have three variables, owing to the fact that the correlation is complex and represented by two scalars, and a third variable controlling the norm of the weight since we aren't using projection. Note that although our weight vector \(w\) is complex, we still apply regular gradient flow to the pair of weight vectors \(w_{R},w_{C}\) where \(w=w_{R}+iw_{C}\). Furthermore, we use the notation \(\nabla:=\nabla_{w}=\nabla_{w_{R}}+i\nabla_{w_{C}}\). With that in mind, we can summarize the dynamics of our gradient flow in the following Theorem. **Theorem 4.1**.: _Given a parameter \(w\), consider the summary statistics \(m=\langle Aw,h^{*}\rangle\in\mathbb{C}\) and \(v=\|P_{h^{*}}^{\perp}Aw\|^{2}\) where \(P_{h^{*}}^{\perp}\) is projection onto the orthogonal complement of \(h^{*}\). Let the polar decomposition of \(m\) be \(re^{i\theta}\)._ _Then, given the preconditioned gradient flow_ \[\dot{w}=-\frac{1}{s|\alpha_{s}|}(A^{\dagger}A)^{-1}\nabla L(w)\, \tag{16}\] _the summary statistics obey the following system of ordinary differential equations:_ \[\dot{r} =(1-\delta)r^{s-1}\cos s\theta-(v+r^{2})^{s-1}r\, \tag{17}\] \[\frac{d}{dt}\cos s\theta =(1-\delta)sr^{s-2}(1-\cos^{2}s\theta)\, \tag{18}\] \[\dot{v} =2\delta r^{s}\cos s\theta-2(v+r^{2})^{s-1}v\, \tag{19}\] _where \(\delta:=1-\|P_{A}h^{*}\|^{2}\) and \(P_{A}\) is the projection onto the range of \(A\)._ The proof is in Appendix D. The main technical details come from using Wirtinger calculus to determine how the real and imaginary parts of \(w\) evolve under the flow. Additionally, the correct preconditioner (intuitive from the linear transform of \(w\)) is crucial for reducing the dynamics to only three summary statistics, and converting to dynamics on \(\cos s\theta\) rather than \(\theta\) itself simplifies the description of the learning in the next section dramatically. ### Provable Learning These dynamics naturally motivate the question of learning efficiency, measured in convergence rates in time in the case of gradient flow. Our main result is that, under some assumptions on the initialization of the frozen weights \(\{a_{m}\}_{m=1}^{M}\) and the initialized weight vector \(w_{0}\), the efficiency is controlled by the initial correlation with the true direction and the information exponent, just as in the Gaussian case. **Theorem 4.2**.: _Consider a fixed \(\epsilon>0\). Suppose the initialization of \(w_{0}\) and \((a_{m})_{m=1}^{M}\) are such that:_ 1. _Small correlation and anti-concentration at initialization:_ \(0<r_{0}\leq 1\)_,_ 2. _Initial phase condition:_ \(\cos s\theta_{0}\geq 1/2\)_,_ 3. _Initial magnitude condition for_ \(Aw\)_:_ \(v_{0}=1-r_{0}^{2}\)_,_ 4.
_Small Approximation of optimal error:_ \(\delta\leq\min(\epsilon/2,O(s^{-s}r_{0}^{4}))\)_._ _Then if we run the gradient flow given in Theorem 4.1 we have \(\epsilon\) accuracy in the sense that:_ \[r_{T}\geq 1-\epsilon\,\ \cos s\theta_{T}\geq 1-\epsilon\,\ v_{T}\leq\epsilon \tag{20}\] _after time \(T\), where depending on the information exponent \(s\):_ \[T\leq\begin{cases}O\left(\log\frac{1}{\epsilon}\right)&s=1\,\\ O\left(2^{s^{2}}r_{0}^{-4s}+\log\frac{1}{\epsilon}\right)&s>1\.\end{cases} \tag{21}\] **Remark 4.3**.: _We note that we only recover \(\cos s\theta\approx 1\), rather than a guarantee that \(\theta\approx 0\), and so the hidden direction is only determined up to scaling by an \(s\)th root of unity. This limitation may appear to be an issue with the choice of the student link function \(g\), but it is unavoidable: if the teacher link function is \(f(z)=\frac{1}{\sqrt{s!}}z^{s}\), one can calculate that for any choice of \(g\), \(L(w)\) is invariant to scaling \(w\) by an \(s\)th root of unity._ ### Initialization Guarantees In order to apply the gradient flow bound proved in Theorem 4.2, it only remains to understand when the assumptions on initialization are met. Unlike the single-index setting with Gaussian inputs, the initial correlation is not guaranteed to be on the scale of \(\frac{1}{\sqrt{N}}\), but will depend on the activation function and the random weights in the first layer. Let us introduce the assumptions we'll need: **Assumption 4.4**.: _We assume an analytic activation \(\sigma(z)=\sum_{k=0}^{\infty}c_{k}z^{k}\), with the notation \(\sigma_{+}:=\max_{1\leq k\leq N}|c_{k}|\sqrt{k}\) and \(\sigma_{-}:=\min_{1\leq k\leq\sqrt{N}}|c_{k}|\sqrt{k}\). We further assume:_ 1. \(c_{k}=0\) _iff_ \(k=0\)_,_ 2. \(\sigma\) _analytic on the unit disk,_ 3. \(1/\sigma_{-}=O(\mathrm{poly}(N))\)_,_ 4. \(\sum_{k=N+1}^{\infty}k|c_{k}|^{2}\leq e^{-O(\sqrt{N})}\)_._ The first two conditions are simply required for the application of Proposition 2.3, as the powersum vector \(p\) is built out of polynomials induced by the activation and does not include a constant term. The latter two conditions concern the decay of the coefficients of \(\sigma\), in the sense that the decay must start slow but eventually become very rapid. These conditions are necessary mainly for ensuring the Small Approximation of optimal error condition: **Lemma 4.5**.: _Let \(\sigma\) satisfy Assumption 4.4, and assume \(M=O(N^{3})\). Then for any unit norm \(h^{*}\in\mathbb{C}^{\infty}\) that is only supported on the first \(\sqrt{N}\) elements, with probability \(1-2\exp(-O(N))\):_ \[1-\|P_{A}h^{*}\|^{2}\leq e^{-O(\sqrt{N})}\.\] Lastly, we can choose an initialization scheme for \(w\) which handily ensures the remaining assumptions we need to apply Theorem 4.2. The crucial features of \(\sigma\) are similar to the previous result. Namely, we want the initial correlation \(r_{0}\) to be non-negligible because this directly controls the runtime of gradient flow. Slow initial decay with fast late decay of the \(\sigma\) coefficients directly implies that \(Aw_{0}\) has a lot of mass in the first \(\sqrt{N}\) indices and very little mass past the first \(N\) indices. These requirements rule out, say, \(\exp\) as an analytic activation because the coefficients decay too rapidly. **Lemma 4.6**.: _Suppose \(w\) is sampled from a standard complex Gaussian on \(M\) variables._
It follows that if we set \(w_{0}=\frac{w}{\|Aw\|}\), and use the summary statistics from Theorem 4.1, then with probability \(1/3-2\exp(-O(N))\) and any \(h^{*}\) as in Lemma 4.5_ 1. \(1\geq r_{0}\geq c\frac{\sigma_{-}}{\sigma_{+}\sqrt{M}}\) _for some universal constant_ \(c>0\)_,_ 2. \(\cos s\theta_{0}\geq 1/2\)_,_ 3. \(v_{0}=1-r_{0}^{2}\)_._ Finally, we consider a straightforward choice of \(\sigma\) that meets Assumption 4.4 so that we can arrive at an explicit complexity bound on learning: **Corollary 4.7** (Non-asymptotic Rates for Gradient Flow).: _Consider \(\xi=1-\frac{1}{\sqrt{N}}\) and the specific choice of activation_ \[\sigma(z)=\arctan\xi z+\xi z\arctan\xi z\.\] _Suppose we initialize \(w\) from a standard complex Gaussian in dimension \(M\) with \(M=O(N^{3})\), and \(\{a_{m}\}_{m=1}^{M}\sim S^{1}\) iid. Furthermore, treat \(s\) and \(\epsilon\) as constants relative to \(N\). Then with probability \(1/3-2\exp(-O(N))\), we will recover \(\epsilon\) accuracy in time_ \[T\leq\begin{cases}O\left(\log\frac{1}{\epsilon}\right)&s=1\\ O\left(2^{s^{2}}N^{7s}+\log\frac{1}{\epsilon}\right)&s>1\.\end{cases} \tag{22}\] Proof.: By Proposition H.5, the activation \(\sigma\) given in the corollary statement satisfies Assumption 4.4, so we can apply Lemma 4.6 and Lemma 4.5 to satisfy the requirements of Theorem 4.2. In particular, the fourth condition is given by assuming \(e^{-O(\sqrt{N})}\leq\min(\epsilon/2,O(s^{-s}r_{0}^{4}))\) which is true when \(s\) is constant, and \(\epsilon\) and \(r_{0}\) are at most polynomial compared to \(N\). Note that \(\sigma_{+}=O(1)\) and \(\sigma_{-}=O\left(\frac{1}{N^{1/4}}\right)\), so it follows that \(r_{0}\geq O\left(\frac{1}{N^{7/4}}\right)\) with probability \(1/3-2\exp(-O(N))\). Conditioning on this bound gives the desired bound on the time for \(\epsilon\) accuracy. Hence, we have a rate that, for \(s=O(1)\), is not cursed by dimensionality to recover the true hidden direction \(h^{*}\). As mentioned above, there are two caveats to this recovery: \(w\) is only recovered up to an \(s\)th root of unity, and to directly make predictions of the teacher model would require using the teacher link function rather than using the student model directly. Since this result concerns gradient flow over the population loss, a natural question is what barriers exist that stymie the SGD analysis of recent single index papers [4, 9, 10]. These works treat the convergence of SGD by a standard drift and martingale argument, where the drift follows the population gradient flow, and the martingales are shown to be controlled via standard concentration inequalities and careful arguments around stopping times. Applying these tactics to a discretized version of the dynamics given in Theorem 4.1 mainly runs into an issue during the first phase of training. Unlike in Arous et al. [4], where the drift dynamics have the correlation monotonically increasing towards \(1\), at the start of our dynamics the correlation magnitude \(r\) and the "orthogonal" part of the learned parameter \(v\) are both decreasing (with high probability over the initialization). Showing that this behavior doesn't draw the model towards the saddle point where \(r=0\) requires showing that \(v\) decreases meaningfully faster than \(r\), i.e. showing that \(\frac{d}{dt}\log\frac{r^{2}}{v}\) is positive.
It's not clear what quality of bounds the martingale concentration inequalities would provide for this quantity, and we leave for future work whether the six-stage proof of the dynamics behavior could be successfully discretized. ## 5 Experiments To study an experimental setup for our setting, we consider the student-teacher setup outlined above with gradient descent. We consider \(N=25\), \(M=100\), and approximate the matrix \(A\) by capping the infinite number of rows at \(150\), which was sufficient for \(1-\|P_{A}h^{*}\|^{2}\leq 0.001\) in numerical experiments. For the link function \(f\), we choose its only non-zero monomial coefficients to be \(\alpha_{3}=\alpha_{4}=\alpha_{5}=\frac{1}{\sqrt{3}}\). And correspondingly, \(g\) simply has \(\alpha_{3}=1\) and all other coefficients at zero. We choose for convenience an activation function such that \(A_{km}=\left(\frac{N-1}{N}\right)^{k}a_{m}^{k}\). We make this choice because, while obeying all the assumptions required in Assumption 4.4, this choice implies that the action of \(A\) on the elementary basis vectors \(e_{j}\) for \(1\leq j\leq\sqrt{N}\) is approximately distributed the same. This choice means that \(\|P_{A}h^{*}\|\) is less dependent on the choice of \(h^{*}\), and therefore reduces the variance in our experiments when we choose \(h^{*}\) uniformly among unit norm vectors with support on the first \(\sqrt{N}\) elements, i.e. uniformly from the complex sphere in degree \(\sqrt{N}\). Under this setup, we train full gradient descent on \(50000\) samples from the Vandermonde \(V\) distribution for \(20000\) iterations. The only parameter to be tuned is the learning rate, and we observe over the small grid of \([0.001,0.0025,0.005]\) that a learning rate of \(0.0025\) performs best for both models in terms of probability of \(r\) reaching approximately \(1\), i.e. strong recovery. As described in Theorem 4.1, we use preconditioned gradient descent using \((A^{\dagger}A)^{-1}\) as the preconditioner, which can be calculated once at the beginning of the algorithm and is an easy alteration to vanilla gradient descent to implement. We use the pseudoinverse for improved stability in calculating this matrix, although we note that this preconditioner doesn't introduce stability issues into the updates of our summary statistics, even in the case of gradient descent. Indeed, even if one considers the loss \(L(w)\) under an empirical expectation rather than full expectation, the gradient \(\nabla L(w)\) can still be seen to be written in the form \(A^{\dagger}v\) for some vector \(v\). If one preconditions this gradient by \((A^{\dagger}A)^{-1}\), and observes that the summary statistics \(m\) and \(v\) both depend on \(Aw\) rather than \(w\) directly, it follows that the gradient update on these statistics is always of the form \(A(A^{\dagger}A)^{-1}A^{\dagger}=P_{A}\), so even in the empirical case this preconditioner doesn't introduce exploding gradients.
Figure 1: The learning trajectory, over ten independent runs, of the three summary statistics in the case of our chosen loss function \(L\), and the trajectory of the \(r\) statistic for the more complicated loss function \(\hat{L}\).
## 6 Discussion ### Experimental Results The outcomes of our experiments are given in Figure 1. We observe very high rates of strong recovery using the loss \(L\). For the loss \(\hat{L}\), we note that \(r\) often becomes stuck, indicating the model has reached a local minimum.
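The qualitative shape of these trajectories, including the brief initial dip in \(r\) discussed below, can be cross-checked by integrating the summary-statistic ODEs of Theorem 4.1 directly. A minimal sketch with illustrative values of \(s\), \(\delta\), and the initialization (valid for \(s\geq 2\)):

```python
import numpy as np
from scipy.integrate import solve_ivp

s, delta = 3, 1e-4   # illustrative information exponent and approximation error

def summary_dynamics(t, y):
    """ODE system (17)-(19) of Theorem 4.1 in (r, u, v), with u = cos(s*theta)."""
    r, u, v = y
    dr = (1 - delta) * r**(s - 1) * u - (v + r**2)**(s - 1) * r
    du = (1 - delta) * s * r**(s - 2) * (1 - u**2)
    dv = 2 * delta * r**s * u - 2 * (v + r**2)**(s - 1) * v
    return [dr, du, dv]

r0 = 0.05                          # weak initial correlation
y0 = [r0, 0.6, 1 - r0**2]          # cos(s*theta_0) >= 1/2 and v0 = 1 - r0^2
sol = solve_ivp(summary_dynamics, (0.0, 1e4), y0, rtol=1e-8, atol=1e-10)
r_T, u_T, v_T = sol.y[:, -1]
print(f"r={r_T:.4f}  cos(s*theta)={u_T:.4f}  v={v_T:.2e}")  # expect ~1, ~1, ~0
```

The initial decrease of both \(r\) and \(v\), followed by a sharp rise of \(r\) towards \(1\), mirrors the search-then-descend behavior visible in Figure 1.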
We note that our analysis is somewhat pessimistic, as the experimental gradient descent on \(L(w)\) will often achieve near perfect accuracy even if \(\cos s\theta_{0}<0\). This is mainly an issue of proof technique: although \(\cos s\theta\) is always increasing under the dynamics, \(r\) is necessarily decreasing for as long as \(\cos s\theta\) is negative. It is quite subtle to control whether \(\cos s\theta\) will become positive before \(r\) becomes extremely small, and the initialization of \(r\) is the main feature that controls the runtime of the model. However, the empirical results suggest that a chance of success \(>1/2\) is possible under a more delicate analysis. On the other hand, the analysis given in the proof of Theorem 4.2 does accurately capture the brief dip in the value of \(r\) in the initial part of training, when the regularization contributes more to the gradient than the correlation until \(\cos s\theta\) becomes positive. Because we can only run experiments on gradient descent rather than gradient flow, we observe the phenomenon of search vs descent studied in Arous et al. [4], where the correlation term \(r\) increases very slowly at first and then abruptly rises. For the model trained with \(\hat{L}\), we observe that there is much greater likelihood of failure in the recovery, as \(r\) appears to become stuck below the optimal value of \(1\). ### Extensions The success of this method of analysis depends heavily on the Hermite-like identity in Proposition 2.3. In general, many of the existing results analyzing single index models need to assume either Gaussian inputs, or uniformly distributed inputs on the Boolean hypercube (see for example Abbe et al. [2]). In some sense, this work cements the inclusion of the Vandermonde distribution in this set of measures that enable clean analysis. The proof techniques for these three measures are quite disparate, so it remains open to determine if there is a wider class of "nice" distributions where gradient dynamics can be successfully analyzed. Additionally, the success of the multi-layer training in Bietti et al. [7], Mahankali et al. [21] suggests that simultaneously training the frozen first layer weights may not prohibit the convergence analysis. The matrix \(A\) depends on the first layer weights through a Vandermonde matrix (see \(X\) in the proof of Lemma 4.5), and the simple characterization of the derivative of a Vandermonde matrix alludes to further possibilities for clean analysis. ### Limitations A first limitation is the focus of this work on complex inputs, analytic activations, and a fixed input distribution (namely the squared Vandermonde density). Although complex analytic functions are less commonly studied in the literature, they do still appear in settings like quantum chemistry [5, 18]. Regarding the focus on the Vandermonde distribution, we note this is similar to the vanilla single-index setting's restriction to Gaussian inputs, under which the theory is at its most powerful and simplest, while the understanding of non-Gaussian data is still nascent. A second limitation is that this work focuses on input distributions over sets of scalars, whereas typically symmetric neural networks are applied to sets of high-dimensional vectors. Proposition 2.3 does not work out of the box for these settings without a high-dimensional analogue of the inner product \(\langle\cdot,\cdot\rangle_{V}\) with similar orthogonality properties.
It is possible to define such inner products on the so-called multisymmetric powersums with similar orthogonality [31], and we leave to future work the question of whether such inner products could grant similar guarantees about the learning dynamics in this more realistic setting. ## 7 Conclusion In this work we've shown a first positive result that quantifies the ability of gradient descent to perform symmetric feature learning, by adapting and extending the tools of two-layer single index models. In essence, this is made possible by a 'miracle', namely the fact that certain powersum expansions under the Vandermonde measure enjoy the same semigroup structure as Hermite polynomials under the Gaussian measure (Proposition 2.3), leading to a dimension-free summary statistic representation of the loss. Although the resulting dynamics are more intricate than in the Euclidean setting, we are nonetheless able to establish quantitative convergence rates to 'escape the mediocrity' of initialization, recovering the same main ingredients as in previous works [1, 4], driven by the information exponent. To our knowledge, this is the first work to show how learning with gradient based methods provably succeeds in this fully non-linear (i.e. not in the NTK regime) setting. Nevertheless, there are many lingering questions. As discussed, one limitation of the analysis is the reliance on gradient flow rather than gradient descent. We hope that in future work we'll be able to effectively discretize the dynamics, made more challenging by the fact that one must track three parameters rather than simply the correlation. Still, we observe theoretically and empirically that the symmetric single index setting demands a number of unusual choices, such as a correlational loss and distinct student and teacher link functions, in order to enable efficient learning. And in a broader scheme, if one remembers the perspective of DeepSets as a very limited form of a three-layer architecture, the issue of provable learning for deeper, more realistic architectures stands as a very important and unexplored research direction, with Transformers with planted low-dimensional structures appearing as the next natural question.
2303.05998
Combining visibility analysis and deep learning for refinement of semantic 3D building models by conflict classification
Semantic 3D building models are widely available and used in numerous applications. Such 3D building models display rich semantics but no façade openings, chiefly owing to their aerial acquisition techniques. Hence, refining models' façades using dense, street-level, terrestrial point clouds seems a promising strategy. In this paper, we propose a method of combining visibility analysis and neural networks for enriching 3D models with window and door features. In the method, occupancy voxels are fused with classified point clouds, which provides semantics to voxels. Voxels are also used to identify conflicts between laser observations and 3D models. The semantic voxels and conflicts are combined in a Bayesian network to classify and delineate façade openings, which are reconstructed using a 3D model library. Unaffected building semantics is preserved while the updated one is added, thereby upgrading the building model to LoD3. Moreover, Bayesian network results are back-projected onto point clouds to improve points' classification accuracy. We tested our method on a municipal CityGML LoD2 repository and the open point cloud datasets: TUM-MLS-2016 and TUM-FAÇADE. Validation results revealed that the method improves the accuracy of point cloud semantic segmentation and upgrades buildings with façade elements. The method can be applied to enhance the accuracy of urban simulations and facilitate the development of semantic segmentation algorithms.
Olaf Wysocki, Eleonora Grilli, Ludwig Hoegner, Uwe Stilla
2023-03-10T16:01:30Z
http://arxiv.org/abs/2303.05998v1
# Combining Visibility Analysis and Deep Learning for Refinement of Semantic 3D Building Models by Conflict Classification ###### Abstract Semantic 3D building models are widely available and used in numerous applications. Such 3D building models display rich semantics but no facade openings, chiefly owing to their aerial acquisition techniques. Hence, refining models' facades using dense, street-level, terrestrial point clouds seems a promising strategy. In this paper, we propose a method of combining visibility analysis and neural networks for enriching 3D models with window and door features. In the method, occupancy voxels are fused with classified point clouds, which provides semantics to voxels. Voxels are also used to identify conflicts between laser observations and 3D models. The semantic voxels and conflicts are combined in a Bayesian network to classify and delineate facade openings, which are reconstructed using a 3D model library. Unaffected building semantics is preserved while the updated one is added, thereby upgrading the building model to LoD3. Moreover, Bayesian network results are back-projected onto point clouds to improve points' classification accuracy. We tested our method on a municipal CityGML LoD2 repository and the open point cloud datasets: TUM-MLS-2016 and TUM-FACADE. Validation results revealed that the method improves the accuracy of point cloud semantic segmentation and upgrades buildings with facade elements. The method can be applied to enhance the accuracy of urban simulations and facilitate the development of semantic segmentation algorithms. 3D reconstruction, MLS point clouds, Semantic 3D building models, CityGML, Deep learning, LoD3 building models, Window and door reconstruction, Building models refinement. ## 1 Introduction Semantic 3D building models at levels of detail (LoD) 1 and 2 are widespread1 and commonly applied in urban-related studies [11]. Such 3D models are frequently reconstructed using a combination of 2D building footprints and multi-view stereo (MVS) or airborne laser scanning (ALS) techniques, as in the example of more than eight million reconstructed buildings in Bavaria, Germany [14]. This reconstruction strategy enables detailed modeling of roof surfaces but renders generalized facades neglecting openings such as windows and doors, as shown in Figure 1b. Footnote 1: [https://github.com/OloOcki/awesome-citygml](https://github.com/OloOcki/awesome-citygml) Reconstructing facade elements becomes a key factor enabling automatic 3D building modeling at LoD3, for which an increasing demand has been expressed by numerous applications including estimating heating demand [20], preserving cultural heritage [15], calculating solar potential [16], and testing automated driving functions [21]. Since point clouds are deemed as one of the best data sources for 3D modeling, dense, street-level mobile laser scanning (MLS) point clouds appear to be especially suitable for at-scale facade reconstruction [22]. For this purpose, however, point clouds require semantic classification, which has been recently approached using machine and deep learning methods yielding promising results [15, 16]. Yet, these methods can have limited accuracy when classifying objects that are translucent (e.g., windows) or have an inadequate amount of training data (e.g., doors).
On the other hand, points' rays intersecting with 3D models can provide geometrical cues about possible facade openings, but without differentiating between classes, such as window, door, or underpass [17]. In this paper, we present a strategy that combines both ray- and region-based methods for conflict classification [23]. This approach leads to refinement of both 3D building models and segmented point clouds' accuracy; our contributions are as follows: * a CityGML-compliant strategy for upgrading LoD2 to LoD3 models by model-driven 3D window and door reconstruction; * a method classifying conflicts between laser observations and 3D building models using deep learning networks; * a method improving the semantic segmentation results of deep learning networks by analyzing ray-traced points and 3D building models.
Figure 1: Facade in a photo and a 3D building model: a) Oblique image [1], b) semantic building model at LoD2 [1].
## 2 Related Work The internationally used CityGML standard establishes the LoD of semantic 3D city objects (Groger et al., 2012). One of the chief differences between LoD2 and LoD3 is the presence of facade openings in the latter. In our case, we search for absent facade elements in the input models using point clouds and then carry out their reconstruction. Therefore, we deem methods as related if they deal with detecting missing features (Section 2.1) and facade reconstruction using point clouds (Section 2.2). ### Visibility analysis using point clouds Hebel et al. (2013) employ visibility analysis to detect changes between different point cloud epochs. The method addresses the uncertainty of ALS measurements using the Dempster-Shafer theory (DST). Ray tracing on a voxel grid is introduced to identify _occupied_, _empty_, and _unknown_ states per epoch. Based on the epochs comparison, they distinguish _consistent_, _disappeared_, and _appeared_ states. Visibility analysis is utilized to remove dynamic objects from point clouds, too (Gehrung et al., 2017). MLS observations' rays are traced on an efficient octree grid structure introduced by Hornung et al. (2013). Each traced ray provides occupancy probabilities, which are accumulated per voxel using the Bayesian approach. Moving objects are removed based on decreasing occupancy probability of ray-traversed voxels. A multimodal approach to visibility analysis is proposed by Tuttas et al. (2015). They investigate how to monitor the progress of a construction site using photogrammetric point clouds and building information modeling (BIM) models. The Bayesian approach and the octree grid structure are employed to analyze the points' rays and vector models. The as-is (point cloud) to as-planned (3D model) comparison differentiates between _potentially built_, _not visible_, and _not built_ model parts. In our previous work (Wysocki et al., 2022a), we introduced visibility analysis to refine semantic 3D building models with underpasses using MLS point clouds. The method compares ray-traced points with building objects on an octree grid in a probabilistic fashion. Contours of underpasses are identified based on an analysis of conflicts between laser observations and building models, supported by vector road features.
3D point clouds, however, provide an immediate 3D environment representation, which makes them one of the best datasets for urban mapping (Xu and Stilla, 2021). When analyzing laser observations, openings are often assumed to represent holes due to their translucent characteristic or face-intruded position (Tuttas and Stilla, 2013; Fan et al., 2021). For example, windows are detected based on building interior points, which imply opening existence (Tuttas and Stilla, 2013). Borders of openings are delineated based on the ray tracing of interior points and the detected facade plane in point clouds. Zolanvari et al. (2018) propose a slicing method to identify openings using horizontal or vertical cross-sections. The method finds facade planes using the RANSAC algorithm and removes noisy points based on their deviations from the planes. Gaps occurring in horizontal or vertical cross-sections delineate possible openings. Layout graphs are proposed by Fan et al. (2021) to identify facade structures. Spatial relations among detected objects are encoded and exploited by the Bayesian framework to deduce the whole facade layout. Recently, however, data-driven methods based on machine and deep learning approaches have provided promising results for classifying point clouds, especially when using the self-attention mechanism (Zhao et al., 2021). These great strides have influenced facade segmentation of point clouds, too (Grilli and Remondino, 2020; Martone et al., 2020). Modified versions of the DGCNN deep learning architecture are proposed to classify facade elements in point clouds (Grilli and Remondino, 2020). The method employs features stemming from machine learning approaches to improve deep learning network accuracy. Little research attention has been given to investigating the automatic upgrade of LoD2 to LoD3 building models using point clouds, except, to the best of our knowledge, our previous works refining overall facade geometry (Wysocki et al., 2021a,b) and reconstructing underpasses (Wysocki et al., 2022a). However, related work is proposed by Hensel et al. (2019) for detecting and reconstructing openings, not by point clouds but by exploiting the textures of semantic city models. They apply the Faster R-CNN deep neural network to identify the bounding boxes of windows and doors on textured CityGML building models. To minimize inaccuracies in the alignment of openings, they apply mixed-integer linear programming. Then, bounding boxes serve as reconstructed opening elements in LoD3 building models. ## 3 Methodology In contrast to our previous work devoted to refining building models with underpasses (Wysocki et al., 2022a), in this paper we focus on detecting and reconstructing outstanding facade openings, such as windows and doors. Moreover, our method refines point cloud segmentation by back-projecting classified conflicts onto the input point clouds. As presented in Figure 2, the method evaluates and assigns uncertainties to the input datasets (Section 3.1). While a neural network is trained on points representing facade elements (Section 3.2), the points ray tracing process performs probabilistic classification of a scene into _occupied_, _empty_, and _unknown_ voxels (Section 3.3). Subsequently, labeled voxels are compared to segmented points to derive _static_ and remove _dynamic_ points in voxels (Section 3.5). The voxels are also compared to vector 3D models to identify _confirmed_, _empty_, and _unknown_ voxel labels (Section 3.4). 
If _conflicted_ and _static_ features exist, probabilistic classification is carried out, where a Bayesian network identifies _unmodeled openings_ and _other objects_ (Section 3.6). These are back-projected to the point cloud, refining its segmentation accuracy. If the Bayesian network detects windows or doors, shape extraction is conducted (Section 3.7); otherwise, another module can be triggered, such as the underpass reconstruction (Wysocki et al., 2022a). Opening shape extraction is followed by shape generalization, which delineates fitting borders for 3D reconstruction (Section 3.7). Window and door 3D models are automatically fitted to shapes based on the respective geometry and opening class (Section 3.9). Afterward, unchanged and new semantics are assigned to geometries, following the CityGML standard for LoD3 (Groger et al., 2012). ### Data with uncertainties Uncertainties in laser measurements and vector objects can stem from various sources, such as imprecise metadata, data transformations, and acquisition techniques. Uncertainties are application-dependent, too. Therefore, the proposed facade refinement involves uncertainties concerning the global positioning accuracy of point clouds and building models. To quantify these uncertainties, we introduce the confidence interval (CI), which is estimated using the confidence level (CL), its associated z value (\(z\)), standard deviation (\(\sigma\)), and mean (\(\mu\)). Let \(\sigma_{1}\) be the location uncertainty of point clouds, and \(\sigma_{2}\) the location uncertainty of 3D model walls. These are estimated based on the assumed point cloud global registration error \(e_{1}\) and the global location error of 3D model walls \(e_{2}\). Then, the facade's CI is calculated based on \(\sigma=\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}\). The maximum upper and lower bounds are given by [\(\mu_{i}-2\sigma_{i},\mu_{i}+2\sigma_{i}\)], assuming operation in the L1 norm and a Gaussian distribution (Suveg and Vosselman, 2000). \(CL_{1}\) and \(CL_{2}\) quantify the operator's confidence level in true-value deviations for laser measurements and 3D model walls, respectively. Depending on the CL value, corresponding \(z_{i}\) values are assumed. The division of \(\mu_{i}\) by \(z_{i}\) estimates the standard deviation \(\sigma_{i}\) value (Hazra, 2017).
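A minimal sketch of this uncertainty quantification, assuming the stated errors play the role of the means \(\mu_{i}\) and that two-sided z values are used (the exact conventions are our assumptions, and the numeric inputs are illustrative):

```python
from scipy.stats import norm

def facade_confidence_interval(e1, e2, cl1=0.90, cl2=0.90):
    """Sketch of the Section 3.1 uncertainty quantification. The assumed
    global errors e1 (point cloud registration) and e2 (model wall location)
    act as the means mu_i; dividing by the two-sided z value of the chosen
    confidence level gives sigma_i = mu_i / z_i."""
    z1 = norm.ppf(0.5 + cl1 / 2.0)
    z2 = norm.ppf(0.5 + cl2 / 2.0)
    sigma1, sigma2 = e1 / z1, e2 / z2
    sigma = (sigma1**2 + sigma2**2) ** 0.5         # sigma = sqrt(s1^2 + s2^2)
    bounds = [(e1 - 2 * sigma1, e1 + 2 * sigma1),  # [mu_i - 2*s_i, mu_i + 2*s_i]
              (e2 - 2 * sigma2, e2 + 2 * sigma2)]
    return sigma, bounds

sigma, bounds = facade_confidence_interval(e1=0.03, e2=0.10)  # meters, illustrative
print(sigma, bounds)
```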
Finally, using a softmax output layer, we obtain an output vector of probabilities for each predicted class, which becomes fundamental for running our conflict classification approach (Section 3.5).

### Ray tracing

Point ray tracing is performed to identify absent structures in existing 3D building models (Figure 4). To enable comparison between these modalities, we employ a 3D occupancy grid. The grid adapts its size to the input data since it utilizes an octree structure. 3D voxels are the octree structure's leaves, and their size \(v_{s}\) is selected based on the relative accuracy of laser observations. Every laser observation is traced from the sensor position \(s_{i}\), following the orientation vector \(r_{i}\), to the reflecting point \(p_{i}=s_{i}+r_{i}\). Voxels containing \(p_{i}\) are labeled as _occupied_ (blue), those traversed by a ray as _empty_ (pink), and the untraversed ones as _unknown_ (gray). The labels are assigned based on a probability score that considers multiple laser observations \(z_{i}\), which are updated using the prior probability \(P(n)\) and the previous estimate \(L(n|z_{1:i-1})\). The final score is controlled using log-odd values \(L(n)\) and clamping thresholds \(l_{min}\) and \(l_{max}\) (Hornung et al., 2013; Tuttas et al., 2015):

\[L(n|z_{1:i})=max(min(L(n|z_{1:i-1})+L(n|z_{i}),l_{max}),l_{min}) \tag{1}\]

where

\[L(n)=log\left[\frac{P(n)}{1-P(n)}\right] \tag{2}\]

Figure 2: Workflow of the presented method.

Figure 3: Semantic segmentation result for the facade in Figure 1.

Figure 4: Points ray tracing on a vector-populated octree grid from the sensor position \(s_{i}\) to the hit point \(p_{i}\). Adapted from (Wysocki et al., 2022).

The grid is vector-populated by inserting 3D model faces and their quantified uncertainties (Section 3.1). Hence, each face has an assigned facade's maximal deviation range (upper CI) and its confidence level (CL). Ultimately, the grid's 3D voxels include attributes such as location, size, as well as state probability stemming from laser observations and a building model.

### Voxels to model comparison

As shown in Figure 4, each voxel is analyzed in relation to its intersection with a facade: _occupied_ voxels that intersect with facades are labeled as _confirmed_ (green); _empty_ voxels that intersect with facades are labeled as _conflicted_ (red); _unknown_ voxels hold their status, as they represent unmeasured space. Voxels are projected onto the intersected facade, forming the _model comparison_ texture map layer with the respective voxel labels: _confirmed_, _conflicted_, and _unknown_ (Figure 5). The cell spacing of a texture map follows the projection of the voxel grid to the plane.

### Voxels to point cloud comparison

Ray tracing provides physical, per-voxel occupancy indicators, while semantic segmentation yields educated, per-point semantic classes. Both of these sources provide their semantic information with a probability measure. The fusion of voxels and points is conducted to transfer per-point semantic classes to occupancy voxels and suppress the impact of dynamic points (Figure 6). The rationale behind this fusion is that _static_, occupied voxels (yellow) are building-related, while _dynamic_, unoccupied voxels (gray) represent moving objects, such as pedestrians or cars, and can be suppressed by multiple laser observations, as shown by Gehrung et al. (2017) and in Figure 7. Semantic points are inserted into the voxel grid to enable comparison between the two representations.
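Returning for a moment to the occupancy update of Equations 1 and 2: it amounts to a few lines per voxel. The sketch below is our minimal illustration, using the log-odd and clamping values that are quoted later in Section 4.2.

```python
import math

def logodd(p):
    """Equation 2: log-odd value of a probability."""
    return math.log(p / (1.0 - p))

def update(L_prev, l_meas, l_min=-2.0, l_max=3.5):
    """Equation 1: clamped log-odds update for one voxel."""
    return max(min(L_prev + l_meas, l_max), l_min)

# example: a voxel hit repeatedly, starting from the uniform prior P = 0.5
L, l_occ = logodd(0.5), 0.85            # l_occ = 0.85 corresponds to P_occ ~ 0.7
for _ in range(10):
    L = update(L, l_occ)
P = 1.0 - 1.0 / (1.0 + math.exp(L))     # back from log-odds to probability
print(L, P)                             # L clamps at l_max = 3.5, i.e. P_max ~ 0.97
```

Clamping keeps voxels responsive: a voxel saturated at \(l_{max}\) needs only a few _empty_ observations to flip back, which is what suppresses transient objects over repeated scans.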
Then, the median probability score \(P(B)\) is derived from the point classes within each voxel. The occupancy probability \(P(A)\) and the median probability of each class \(P(B)\) are two dependent events, for which the existence probability score \(P_{ex}\) is calculated as \(P_{ex}(A\cap B)=P(A)\cdot P(B|A)\). Voxels are deemed _static_ if the existence probability score \(P_{ex}\) is greater than or equal to the static threshold probability, \(P_{ex}\geq P_{static}\); otherwise, voxels represent the _dynamic_ state. Points within _dynamic_ voxels are assigned to the _other_ class and are back-projected to the input point cloud. The _static_ voxels obtain the point class that scores the greatest probability \(P(B)\) within a voxel (Figure 7). _Static_ voxels with semantics are projected onto the facade, forming the _points comparison_ texture map layer with labels corresponding to the classes, as shown in the example of windows (orange) in Figure 8. As in the _model comparison_ layer (Section 3.4), the cell spacing of a texture map follows the projection of the voxel grid to the plane.

### Probabilistic classification: the Bayesian approach

_Model comparison_ and _points comparison_ textures are utilized to identify facade openings using a Bayesian network (BayNet). The network estimations are also back-projected onto semantic point clouds to enhance their segmentation accuracy. As shown in Figure 9, the designed BayNet comprises one target (red), two input (yellow), one decision (blue), and two output nodes (green). Each directed link represents a causal relationship between the \(X\) and the \(Y\) nodes. The conditional probability table (CPT) prescribes weights for each state and node combination (gray). The target _opening_ state is calculated using the joint probability distribution \(P(X,Y)\) and the CPT. The marginalization process is used to calculate the probability of the target node \(Y\) being in the _opening_ state \(y\). The process sums conditional probabilities of the states \(x\) stemming from parent nodes \(X\) (Stritih et al., 2020). Since the network consists of texture layers with state probabilities, the data evidence represents so-called soft evidence (Stritih et al., 2020). In an inference process, soft evidence is added to update the joint probability distribution. This process provides the most likely node states by estimating the posterior probability distribution (PPD). Pixel classes from the _model comparison_ and _points comparison_ textures form clusters if they have a neighbor in any of the eight directions of the pixel. Co-occurring _conflicted_, _window_, and _door_ cluster classes lead to a high probability of unmodeled openings. This output is used for further opening 3D modeling and is back-projected onto segmented point clouds as either the _window_ or _door_ class. On the other hand, co-occurring _confirmed_, _window_, and _door_ clusters lead to a low probability of existing openings.

Figure 5: Texture representing _confirmed_, _conflicted_, and _unknown_ areas identified on a facade.

Figure 6: Fusion of voxels with per-point semantic information (yellow), while suppressing dynamic points (gray) using a probability score and measurement accumulation.

Figure 7: _Dynamic_, noisy points (gray) separated from _static_, building-related points (yellow).

Figure 8: Texture showing one of the _static_ voxel classes, _window_, on a facade.
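A minimal sketch of the voxel-point fusion rule described above (our illustration; the class scores and helper names are placeholders, only the \(P_{ex}\) formula and the threshold logic follow the text):

```python
import numpy as np

def fuse_voxel(p_occupancy, point_probs, p_static=0.7):
    """Fuse ray-tracing occupancy with per-point semantic probabilities.

    p_occupancy : P(A), occupancy probability of the voxel
    point_probs : dict mapping class -> per-point softmax scores inside the voxel
    """
    # median probability per class, then the winning class B within the voxel
    medians = {c: float(np.median(p)) for c, p in point_probs.items()}
    best_class = max(medians, key=medians.get)
    p_b = medians[best_class]                 # P(B|A) in the text's notation
    p_ex = p_occupancy * p_b                  # P_ex(A n B) = P(A) * P(B|A)
    if p_ex >= p_static:
        return "static", best_class           # building-related; keep the class
    return "dynamic", "other"                 # back-project points as 'other'

# hypothetical voxel dominated by window points
state, label = fuse_voxel(0.9, {"window": np.array([0.80, 0.85, 0.70]),
                                "wall":   np.array([0.20, 0.10, 0.30])})
print(state, label)   # -> static window  (P_ex = 0.9 * 0.8 = 0.72 >= 0.7)
```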
These clusters are also back-projected to improve the accuracy of semantically segmented point clouds: either as the _molding_ class, if close to an opening, or otherwise as the _wall_ class. The low probability \(P_{low}\) and the high probability \(P_{high}\) labels are assigned to clusters based on the probability threshold \(P_{t}\): \(P_{high}>P_{t}\geq P_{low}\).

### Openings shape extraction

The high probability clusters \(P_{high}\) are extracted from a Bayesian probability texture as opening shape candidates. Adding to existing shape indices (Basaraner and Cetinkaya, 2017), we introduce the completeness index, which measures the ratio \(r_{cp}\) of the outer shape area to the inner-holes area. The candidates are rejected if their area is smaller than the chosen area threshold value \(b_{s}\) and if their completeness index score \(r_{cp}\) is smaller than \(r_{cp_{t}}\).

### Openings shape generalization

The extracted candidates can still display distorted, noisy shapes. A morphological opening operation is applied to minimize the effect of spiky and weakly connected contours. Subsequently, these shapes are generalized to minimum bounding boxes, for which a modified rectangularity index (Basaraner and Cetinkaya, 2017) is calculated. The modification considers the relation of the bounding box sides \(a\) to \(b\), where outliers are rejected based on the upper \(PE_{up}\) and lower \(PE_{lo}\) percentiles of the index score.

### Model-driven 3D reconstruction

Identified bounding boxes are used as fitting boundaries for window and door 3D models, which are loaded from a predefined library. The opening model's coordinate origin is reset and placed in the bottom left corner of the model. The offset to global coordinates is calculated between the opening model origin and the bottom left corner of the respective bounding box. After the shift, a rotation is performed as the difference between the facade's face orientation and the opening model orientation. Aligned 3D models are scaled to fit the bounding box boundaries, as presented in Figure 10 and Figure 11.

### Semantic modeling

Since 3D solid libraries of openings are employed for 3D reconstruction, we opt to model them as solid geometries, too, following the CityGML encoding recommendation (Special Interest Group 3D, 2020). Based on the identified opening class, windows and doors are assigned to the respective CityGML _Window_ and _Door_ classes; as such, they link to the building entity (Groger et al., 2012). The unchanged semantics of input elements are preserved, except for the LoD, which is upgraded to LoD3.

## 4 Experiments

### Datasets

The method was tested using MLS point clouds and governmental CityGML building models at LoD2 representing the Technical University of Munich (TUM) main campus, Munich, Germany. The acquired LoD2 building models were created using 2D cadastre footprints and aerial measurements (Roschlaub and Batscheider, 2016). LoD3 door and window models were extracted from the manually modeled, open LoD3 city model of Ingolstadt, Germany 2. The open TUM-MLS-2016 dataset (Zhu et al., 2020) was transformed into the global coordinate reference system (CRS) and used to perform point cloud ray tracing.

Figure 9: Input nodes (yellow) and CPT estimate the probability of opening space (red) in BayNet: if (blue) the probability is high, doors or windows are unmodeled; otherwise, areas indicate other objects (green).

Figure 10: Reconstructed 3D windows for facade A.

Figure 11: Reconstructed 3D windows and doors for facade B.
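The fitting behind Figures 10 and 11 reduces to a translate-and-scale of a library model onto each bounding box (the facade rotation step is omitted here for brevity); a minimal sketch under these simplifying assumptions, with illustrative names:

```python
import numpy as np

def fit_opening_model(model_pts, bbox_min, bbox_max):
    """Fit a library opening model, given as (N, 3) points, into a bounding box.

    The model origin is first moved to its own bottom-left corner, then the
    model is shifted to the box corner and scaled to the box extents.
    """
    model_pts = np.asarray(model_pts, float)
    pts = model_pts - model_pts.min(axis=0)       # origin at the model's corner
    extent = pts.max(axis=0)
    span = np.asarray(bbox_max) - np.asarray(bbox_min)
    scale = span / np.where(extent > 0, extent, 1.0)   # guard flat dimensions
    return pts * scale + np.asarray(bbox_min)     # scaled and placed in the box

# unit-square window placed into a 1.2 m x 1.8 m opening at (4.0, 0.0, 1.0)
window = [[0, 0, 0], [1, 0, 0], [0, 0, 1], [1, 0, 1]]
print(fit_opening_model(window, bbox_min=[4.0, 0.0, 1.0], bbox_max=[5.2, 0.0, 2.8]))
```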
The TUM-FACADE dataset was deployed for training, as it comprises facade-annotated point clouds (Wysocki et al., 2022b). For computational reasons, we subsampled the original dataset, removing all the redundant points within a 5 \(cm\) distance. In this way, we compressed an initial dataset of about 118 million points to a still reasonable but lightweight version of about 10 million points. The subsampled point cloud was divided into 70% training and 30% validation sets (Figure 12).

Footnote 2: [https://github.com/savenw/lod3-road-space-models](https://github.com/savenw/lod3-road-space-models)

Additionally, 17 available classes were consolidated into seven representative facade classes: _molding_ was merged with _decoration_; _wall_ included _drainpipe_, _outer ceiling surface_, and _stairs_; _floor_ comprised _terrain_ and _ground surface_; _other_ was merged with _interior_ and _roof_; _blinds_ were added to _window_; while _door_ remained intact.

### Parameter settings

The uncertainties of the true facade locations were estimated considering the global registration error of MLS point clouds and building models: for point clouds these were set to \(e_{1}=0.3\)\(m\), \(\mu_{1}=0.15\)\(m\), \(CL_{1}=90\%\), and \(z_{1}=1.64\); for building models they were set to \(e_{2}=0.03\)\(m\), \(\mu_{2}=0.015\)\(m\), \(CL_{2}=90\%\), and \(z_{2}=1.64\). This yielded the facades' upper CI score of 0.2 \(m\) at \(CL=90\%\). Ray casting was employed on a grid with the voxel size set to \(v_{s}=0.1\)\(m\), considering the opening size, the point cloud density, and its relative accuracy. The voxels were initialized with a uniform prior probability of \(P=0.5\). Log-odd values were set to \(l_{occ}=0.85\) for the _occupied_ and \(l_{emp}=-0.4\) for the _empty_ state, corresponding to \(P_{occ}=0.7\) and \(P_{emp}=0.4\), respectively. Clamping parameters were set to \(l_{min}=-2\) and \(l_{max}=3.5\), corresponding to \(P_{min}=0.12\) and \(P_{max}=0.97\), respectively, following (Tuttas et al., 2015; Hornung et al., 2013); an exemplary implementation is provided in our repository 3. For the fusion of voxels and points, the static threshold was set to \(P_{static}=0.7\), while the _empty_ voxels' occupancy probability was fixed to 0.4 for processing acceleration.

Footnote 3: [https://github.com/OloOcki/conflict-mls-citymml-detection](https://github.com/OloOcki/conflict-mls-citymml-detection)

As regards the semantic segmentation procedure, taking into consideration the main characteristics of the buildings, the classes to be detected, and following Grilli et al. (2019), we identified 0.8 \(m\) as the optimal neighborhood search radius \(r_{i}\) for the features _roughness, volume density, omnivariance, planarity_, and _surface variation_, while 0.4 \(m\) was used for _verticality_. The proposed BayNet has two input soft evidence layers: _points comparison_ and _model comparison_ textures. These had associated confidence levels, which scored 70% and 90% for the _points_ and _model comparison_ layers, respectively. The opening state probability was defined by the probability threshold \(P_{t}=0.7\). The opening candidates' area threshold value \(b_{s}\) was set to 0.3 \(m^{2}\), while the completeness threshold score \(r_{cp_{t}}\) was set to 0.1, to suppress noisy, patchy clusters. Over-elongated bounding boxes were suppressed by calculating the modified rectangularity index, where the upper \(PE_{up}\) and lower \(PE_{lo}\) percentiles were set to the 95th and 5th, respectively.
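Plugging the stated values into the Section 3.1 formulas reproduces the reported upper CI of roughly 0.2 \(m\); a quick check (our illustration):

```python
import math

z = 1.64                       # z value for CL = 90%
mu1, mu2 = 0.15, 0.015         # half the assumed global errors e1 = 0.3 m, e2 = 0.03 m
s1, s2 = mu1 / z, mu2 / z      # sigma_i = mu_i / z_i
sigma = math.hypot(s1, s2)     # sigma = sqrt(sigma_1^2 + sigma_2^2)
print(s1, s2, sigma, 2 * sigma)   # 2*sigma ~ 0.18 m, consistent with the ~0.2 m upper CI
```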
### Validation of improved semantic segmentation

Semantic segmentation results were validated on unseen ground-truth point clouds of the TUM-FACADE dataset. For evaluation, we use the overall accuracy (OA); the F1 score per class; and the average precision (\(\mu\)P), recall (\(\mu\)R), F1 score (\(\mu\)F1), and intersection over union (\(\mu\)IoU). The _arch_ and _column_ classes were omitted in the validation, since they were absent in the ground-truth building. As shown in Table 1, the Point Transformer (PT) network (Zhao et al., 2021) served as the baseline of the validation. The presented feature-extended version of the PT network (PT+Ft.) served as an input for the proposed conflict classification (CC) method.

### Validation of openings reconstruction

Reconstructed openings were validated using manually modeled ground-truth building openings (Table 3). The detection rate was calculated against all existing facade openings (AO), established by on-site inspection, and against the openings actually measured (MO) by the laser scanner (Table 2). The validation was performed for facades A, B, and C, shown in Figure 10, Figure 11, and Figure 15, respectively. As shown in Table 2, facade openings were correctly detected with an estimated 92% detection rate relative to the total measured openings (DR-MO) and a 79% detection rate for all openings (DR-AO). Roughly a 1% false alarm rate for both measured (FR-MO) and all openings (FR-AO) was noted. The experiments corroborate that DR was dependent on the density of measurements per facade: for the densely covered facade A it reached a 90% DR-AO and detected 100% of the measured openings (DR-MO); for the highly occluded side-facade C it reached 28% and 50%, respectively (see Table 2 and Figure 15). The method significantly improves reconstruction performance in comparison to reconstruction conducted only on segmented point clouds of the baseline PT architecture, as shown in Figure 13. When compared to the ground-truth openings, the proposed reconstruction reached roughly 90% accuracy (Table 3); yet, the method is limited when windows are only partially measured (e.g., blinds before windows), as exemplified by several windows in the third row in Figure 13b. The back-projected, classified conflicts increased the accuracy of semantic point cloud segmentation by approximately 12% (Table 1). Note that the precision and intersection over union scores for CC remained similar to the PT+Ft. scores, while the F1 score for _floor_ dropped by about 6%. Remarkably, the proposed CC method improves segmentation of the _window_, _door_, and _other_ classes by approximately 11%, 12%, and 21%, respectively.
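For reference, the detection-rate (DR) and false-alarm-rate (FR) figures follow from simple counts per facade; a sketch with hypothetical numbers (our illustration, not the Table 2 data):

```python
def rates(detected, false_alarms, measured, all_openings):
    """Detection and false-alarm rates against measured (MO) and all (AO) openings."""
    return {
        "DR-MO": detected / measured,
        "DR-AO": detected / all_openings,
        "FR-MO": false_alarms / measured,
        "FR-AO": false_alarms / all_openings,
    }

# hypothetical facade: 40 openings exist on site, 34 were actually measured
print(rates(detected=31, false_alarms=1, measured=34, all_openings=40))
```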
## 6 Conclusion

Our work has led us to the conclusion that refinement is a promising alternative to a from-scratch reconstruction. The refinement preserves input semantics, minimizes model-specific planarity issues, and enables consistent city model updates. Moreover, existing LoD3 elements can be extracted and directly employed as refinement features for buildings at lower LoDs. The validation shows that the method reaches a high accuracy of 92% in detecting observable windows and a low false alarm rate of approximately 1%. Refined point clouds also score a low false negative rate, indicated by a high recall score of 79%. This trait of our method could be of particular importance for feature-dependent applications, where robustness is favored over visualization, such as in simulations of automated driving functions (Schwab and Kolbe, 2019). On the other hand, facade occlusions and the laser range could limit the method's applicability for visualization-oriented purposes, where a further prediction of unseen objects could be employed. Experiments corroborate that combining visibility analysis with a region-based approach improves segmentation accuracy. In the future, we plan to embed occupancy information directly into the training of the deep neural network. Furthermore, including radial point cloud features (e.g., intensity) in training datasets could facilitate detecting windows covered by blinds. Tested facades presented challenging, varying measuring conditions; for similar facade and opening styles, the method is expected to provide comparable results. Yet, the testing sample size implies that caution must be exercised. It is worth noting that _static_ objects which do not contribute to facade elements but are adjacent to them (e.g., traffic signs, bus shelters) can negatively influence the semantic back-projection results. To further our research, we plan to test the method on a higher number of facades.

Figure 13: Windows reconstructed using: a) only the neural network output, b) the proposed method.

Figure 14: Semantic segmentation based on: a) only a neural network, b) the proposed method.

Figure 15: Openings reconstruction for facade C in the presence of occluding objects: a) photo, b) refined facade.

\begin{table} \begin{tabular}{l|c c c|c} \hline Facade & A & B & C & **Total** \\ \hline \(m\)IoU & 88.4\% & 94.8\% & 97.2\% & **89.6\%** \\ \(\mu\)IoU & 79.1\% & 83.4\% & 85.7\% & **80.3\%** \\ \hline \end{tabular} \end{table} Table 3: Validation of reconstructed openings using the median (\(m\)IoU) and average (\(\mu\)IoU) intersection over union.

###### Acknowledgements.

This work was supported by the Bavarian State Ministry for Economic Affairs, Regional Development and Energy within the framework of the IuK Bayern project _MoFa3D - Mobile Erfassung von Fassaden mittels 3D Punktwolken_, Grant No. IUK643/001. Moreover, the work was conducted within the framework of the Leonhard Obermeyer Center at the Technical University of Munich (TUM). We gratefully acknowledge the Geoinformatics team at the TUM for the valuable insights and for providing the CityGML datasets.
2310.14775
Many-body quantum interference route to the two-channel Kondo effect: Inverse design for molecular junctions and quantum dot devices
Molecular junctions -- whether actual single molecules in nanowire break junctions or artificial molecules realized in coupled quantum dot devices -- offer unique functionality due to their orbital complexity, strong electron interactions, gate control, and many-body effects from hybridization with the external electronic circuit. Inverse design involves finding candidate structures that perform a desired function optimally. Here we develop an inverse design strategy for generalized quantum impurity models describing molecular junctions, and as an example, use it to demonstrate that many-body quantum interference can be leveraged to realize the two-channel Kondo critical point in simple 4- or 5-site molecular moieties. We show that remarkably high Kondo temperatures can be achieved, meaning that entropy and transport signatures should be experimentally accessible.
Sudeshna Sen, Andrew K. Mitchell
2023-10-23T10:23:27Z
http://arxiv.org/abs/2310.14775v2
# Many-body quantum interference route to the two-channel Kondo effect: Inverse design for molecular junctions and quantum dot devices

###### Abstract

Molecular junctions - whether actual single molecules in nanowire break junctions or artificial molecules realized in coupled quantum dot devices - offer unique functionality due to their orbital complexity, strong electron interactions, gate control, and many-body effects from hybridization with the external electronic circuit. Inverse design involves finding candidate structures that perform a desired function optimally. Here we develop an inverse design strategy for generalized quantum impurity models describing molecular junctions, and as an example, use it to demonstrate that many-body quantum interference can be leveraged to realize the two-channel Kondo critical point in simple 4- or 5-site molecular moieties. We show that remarkably high Kondo temperatures can be achieved, meaning that entropy and transport signatures should be experimentally accessible.

Nanoelectronic circuits are quantum devices featuring a nanostructure with a few confined and typically strongly correlated degrees of freedom coupled to source and drain metallic leads [1; 2; 3; 4; 5]. For molecular junctions, a single molecule can bridge the gap in a nanowire [6]. The electrical conductance of such a junction is controlled by the structure and chemistry of the molecule, through which a current must pass [7]. A range of physics can be realized in such systems - including Coulomb blockade and various Kondo effects [8; 9; 10; 11; 4; 12], quantum interference [13; 14; 15; 16; 17], and phase transitions [18; 19]. This presents the tantalizing possibility of devices at the limit of miniaturization that leverage inherently quantum effects to provide enhanced functionality as switches [20; 21; 22; 23], transistors [4; 5], diodes and rectifiers [24; 25; 26; 27; 28; 29], and even as tools for chemical analysis [30]. A grand challenge in the field is to find molecular species that can form robust junctions to perform a desired function optimally [31]. Simple artificial molecular junctions can also be fabricated in semiconductor coupled quantum dot (QD) devices [32; 33]. The design of such systems need not obey chemical structure principles [34], and they benefit from _in-situ_ tunability [35]. They can also be integrated with other components to realize more exotic effects, such as fractionalization at the two-channel Kondo (2CK) quantum critical point [36], which results from the frustration of screening when a single spin-\(\frac{1}{2}\) degree of freedom is coupled to two independent conduction electron channels [37]. The 2CK effect has gained prominence recently as a route to engineer many-body Majorana zero modes in nanostructures [38; 39; 40; 41]. Spectacular experimental realizations of 2CK physics in QD systems [42; 43; 44] have however required the use of a 'quantum box' or metallic island to provide a reservoir of many interacting electrons [45; 46]. Can the 2CK effect be realized in simpler QD systems without the use of these components? If so, what is the minimum number of interacting sites needed? Can we find molecular moieties that realize 2CK physics when placed in a junction?

_Model.-_ Molecular junctions and QDs are described by generalized quantum impurity models [47] of the form \(\hat{H}=\hat{H}_{\text{mol}}+\hat{H}_{\text{leads}}+\hat{H}_{\text{hyb}}+\hat{H}_{\text{gate}}\).
Here we formulate the isolated molecule as an extended Hubbard Hamiltonian,

\[\hat{H}_{\text{mol}}=\sum_{\sigma=\uparrow,\downarrow}\sum_{m,n}t_{mn}d^{\dagger}_{m\sigma}d_{n\sigma}+\tfrac{1}{2}\sum_{m,n}U_{mn}\hat{n}_{m}\hat{n}_{n} \tag{1}\]

where \(d^{(\dagger)}_{m\sigma}\) annihilates (creates) an electron on molecule orbital \(m\) with spin \(\sigma\) and \(\hat{n}_{m}=\sum_{\sigma}d^{\dagger}_{m\sigma}d_{m\sigma}\) is a number operator. Single-particle processes are parameterized by \(t_{mn}\) whereas \(U_{mn}\) embodies electronic interactions. The gate voltage \(V_{g}\) controls the charge on the molecule via \(\hat{H}_{\text{gate}}=V_{g}\sum_{m}\hat{n}_{m}\). The leads are described by continua of free fermions, \(\hat{H}_{\text{leads}}=\sum_{\alpha,\sigma,k}\epsilon_{k}c^{\dagger}_{\alpha\sigma k}c_{\alpha\sigma k}\), with \(\alpha=s,d\) for source and drain. The molecule frontier orbital \(d_{f_{\alpha}\sigma}\) couples to a local orbital \(c_{\alpha\sigma}\) of lead \(\alpha\) via \(\hat{H}_{\text{hyb}}=\sum_{\alpha,\sigma}V_{\alpha}(d^{\dagger}_{f_{\alpha}\sigma}c_{\alpha\sigma}+\text{H.c.})\), where \(c_{\alpha\sigma}=\frac{1}{V_{\alpha}}\sum_{k}V_{k}c_{\alpha\sigma k}\). Strong electron interactions [3; 4] produce rich many-body physics but also preclude brute force solutions [47]. Inverse design is challenging because physical properties then depend in a highly nontrivial way on the delicate interplay of many microscopic parameters. It is a formidable task to find a set of model parameters that yield specific device functionalities. However, if only the _low-temperature_ behavior is of interest, then simpler low-energy effective models may be used [48; 49]. The connection between effective model parameters and low-temperature physical properties is more transparent. Here we focus on one such scenario, where the low-temperature physics that we seek is that of the 2CK critical point [50]. The condition for obtaining this behavior in molecular junctions is simply stated in terms of the low-energy effective 2CK model. Inverse design then consists of finding the set of microscopic model parameters satisfying this condition. We show that this is achievable in remarkably simple systems, with just a few interacting degrees of freedom, and without the interacting electron reservoirs used previously in experiments [42; 43; 44].

_Effective models.-_ An odd number of electrons can be accommodated on the molecule by tuning gate voltages, such that the ground state of \(\hat{H}_{\text{mol}}\) is a unique spin-doublet state. At low temperatures, effective spin-flip Kondo exchange interactions and potential scattering are generated, described by a generalized 2CK model [51],

\[\hat{H}_{\text{eff}}=\hat{H}_{\text{leads}}+\sum_{\alpha,\beta}\left[J_{\alpha\beta}\,\hat{\vec{S}}\cdot\hat{\vec{s}}_{\alpha\beta}+W_{\alpha\beta}\sum_{\sigma}c^{\dagger}_{\beta\sigma}c_{\alpha\sigma}\right] \tag{2}\]

where \(\hat{\vec{S}}\) is a spin-\(\frac{1}{2}\) operator for the molecule ground state doublet and \(\hat{\vec{s}}_{\alpha\beta}=\frac{1}{2}\sum_{ss^{\prime}}c^{\dagger}_{\beta s^{\prime}}\vec{\sigma}_{s^{\prime}s}c_{\alpha s}\) are conduction electron spin operators. We refer to the \(J_{\alpha\beta}\) and \(W_{\alpha\beta}\) terms as exchange and potential scattering, respectively. The form of Eq. 2 is guaranteed by \(SU(2)\) spin symmetry if only the most RG relevant terms are considered [17].
Since \(J_{sd}=J_{ds}\) and \(W_{sd}=W_{ds}\) by hermiticity, the low-energy behavior of such molecular junctions is controlled by just six effective parameters. The 2CK critical point arises for equal antiferromagnetic Kondo interactions \(J_{ss}=J_{dd}>0\), but when the source-drain mixing terms vanish, \(J_{sd}=W_{sd}=0\) [50; 52]. In molecular junctions or coupled QD devices, the 2CK effect should be realizable when the molecule or QD has a net spin-\(\frac{1}{2}\) ground state and when the effective model parameters satisfy these conditions. \(W_{ss}\) and \(W_{dd}\) are RG irrelevant and play no role in the following.

_Quantum interference (QI) and conductance nodes.-_ Single-molecule junctions often exhibit QI phenomena, with the most dramatic effect being electrical conductance nodes due to the destructive interference of competing transport pathways through the molecule [13; 14; 15; 16]. However, such a description of the QI and transport is typically on the single-particle level encoded by the real-space hopping matrix \(t_{nm}\) [53], and is inapplicable for interacting systems displaying Coulomb blockade or Kondo effects. Many-body QI [17; 54] is naturally richer since it applies in Fock space, which has a much higher complexity, and provides new channels for QI (e.g. between particles and holes). On the level of the effective 2CK model Eq. 2, many-body QI can cause any of the parameters \(J_{\alpha\beta}\) and \(W_{\alpha\beta}\) to vanish. \(J_{sd}=W_{sd}=0\) must produce a conductance node because then the charge in the leads is separately conserved. The 2CK critical point therefore arises at a conductance node, which can be driven by many-body QI. We dub this the QI-2CK effect.

_Perturbative solution.-_ We consider first the perturbative derivation of the effective 2CK parameters from those of the bare model by means of a generalized Schrieffer-Wolff transformation (SWT) [48]. This is done by projecting the full model for the junction onto the spin-doublet molecule ground states, eliminating virtual excitations to second order in \(\hat{H}_{\text{hyb}}\). In the Supplementary Information (SI) [51] we formulate this problem in an efficient way that does not require full diagonalization of \(\hat{H}_{\text{mol}}\), but only uses information on the ground state energy and wavefunction of the isolated molecule. Comparatively large systems can then be treated by using methods that target ground state properties [55; 56; 57]. Eq. 2 is obtained by SWT with effective parameters \(J_{\alpha\beta}\equiv V_{\alpha}V_{\beta}j_{\alpha\beta}\) and \(W_{\alpha\beta}\equiv V_{\alpha}V_{\beta}w_{\alpha\beta}\) that can be calculated from many-body scattering amplitudes \(A^{\sigma\alpha\beta}=p^{\sigma\alpha\beta}-h^{\sigma\alpha\beta}\), which involve the tunneling of both particles (\(p\)) and holes (\(h\)) with spin \(\sigma\) through the molecule from lead \(\alpha\) to lead \(\beta\). We may write \(j_{\alpha\beta}=2(A^{\uparrow\alpha\beta}-A^{\downarrow\alpha\beta})\) and \(w_{\alpha\beta}=\frac{1}{2}(A^{\uparrow\alpha\beta}+A^{\downarrow\alpha\beta})\), with the \(p\) and \(h\) amplitudes obtainable in closed form as detailed in the SI [51]. Many-body QI can appear here in different ways: through the vanishing of individual \(p\) or \(h\) processes due to interference of competing Fock space propagators, by a cancellation of terms with different spin, or by a cancellation of \(p\) and \(h\) amplitudes for a given process.
In fact, particle-hole (\(ph\)) symmetry guarantees the latter, since then \(p^{\sigma\alpha\beta}=h^{-\sigma\alpha\beta}\) and hence \(W_{ss}=W_{dd}=W_{sd}=0\) in Eq. 2. A system is \(ph\)-symmetric when its Hamiltonian is invariant to the \(ph\) transformations \(d_{n\sigma}\to e^{i\phi_{n\sigma}}d^{\dagger}_{n\sigma}\) for all \(n\sigma\) (with suitable phases \(\phi_{n\sigma}\)). The celebrated Coulson-Rushbrooke pairing theorem [58] is a statement about \(ph\) symmetry, with \(p\) and \(h\) excitations appearing symmetrically around the ground state for molecules satisfying the 'starring rule' [54; 59]. A system may exhibit \(ph\) symmetry if the molecular structure encoded by the single-particle adjacency matrix \(t_{mn}\) can be accommodated on a bipartite graph. Therefore \(ph\)-symmetric systems must _not_ have odd loops.

_Satisfying the 2CK condition.-_ Since \(ph\) symmetry implies \(W_{sd}=0\), we search for \(ph\)-symmetric systems in which \(J_{sd}=0\) can also be achieved. In addition we want \(J_{ss}=J_{dd}\) for the 2CK effect, so we consider only \(sd\)-symmetric molecular moieties. As a simple starting point we study \(M\)-site Hubbard chains with constant nearest-neighbour hopping \(t\), local Coulomb repulsion \(U\), and local potential \(\epsilon=-U/2\). Leads \(s\) and \(d\) are connected to molecule sites \(1\) and \(M\). For odd \(M\) the ground state around \(V_{g}=0\) is a unique spin-doublet and we numerically perform the SWT as shown in the SI [51]. The system is \(ph\)-symmetric at \(V_{g}=0\) such that \(W_{\alpha\beta}=0\). We also find \(J_{ss}=J_{dd}>0\). Although \(J_{sd}\) is always finite, we find that its sign alternates for \(M=1,3,5,7,...\). In particular, \(J_{sd}<0\) for \(M=3\) but \(J_{sd}>0\) for \(M=5\). One might anticipate that interpolating between \(M=3\) and \(M=5\) might yield a sweet spot solution where \(J_{sd}=0\). Avoiding odd loops and preserving \(sd\) symmetry, this can be achieved by connecting sites \(1\) to \(4\) and \(2\) to \(5\), viz:

\[\hat{H}_{\rm mol}=\frac{U}{2}\sum_{m=1}^{5}\left(\hat{n}_{m}-1\right)^{2}+t\sum_{\sigma}\sum_{m=1}^{4}\left(d_{m\sigma}^{\dagger}d_{m+1\sigma}+{\rm H.c.}\right)+t^{\prime}\sum_{\sigma}\left(d_{1\sigma}^{\dagger}d_{4\sigma}+d_{2\sigma}^{\dagger}d_{5\sigma}+{\rm H.c.}\right)\;. \tag{3}\]

For small \(t^{\prime}/t\) we expect small perturbations to the \(M=5\) chain solution, whereas for large \(t^{\prime}/t\) the next-next-nearest-neighbour tunneling provides a shortcut through the chain so that only \(3\) sites are needed to connect the \(s\) and \(d\) leads. Numerical results of the SWT are presented in Fig. 1, together with a schematic illustration of the junction. At \(ph\) symmetry, \(V_{g}=0\), the effective model parameters are plotted as a function of \(t^{\prime}/t\) in the right panel. We indeed confirm that \(j_{sd}=0\) at a special value \(t^{\prime}=t^{\prime}_{c}\) (black line).

Figure 1: The simplest molecular moiety to exhibit the 2CK effect with 5 interacting active orbitals. The effective molecule-lead Kondo interactions \(j_{ss}\) and \(j_{dd}\) are equal and antiferromagnetic (blue line), while source-drain mixing terms vanish due to many-body QI. Potential scattering \(w_{sd}\) (red) vanishes at gate voltage \(V_{g}=0\) by particle-hole symmetry, whereas exchange cotunneling \(j_{sd}\) (black) vanishes on tuning the couplings \(t^{\prime}/t\). Obtained here via SWT and plotted for \(U/t=1\).
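As an independent sanity check on Eq. 3 (our illustration, not the paper's SWT/NRG machinery), the 5-site cluster is small enough (\(4^{5}=1024\) Fock states) for brute-force exact diagonalization. The sketch below builds \(\hat{H}_{\rm mol}\) via a Jordan-Wigner construction and confirms the two-fold degenerate spin-doublet ground state at \(V_{g}=0\); the parameter values are illustrative.

```python
import numpy as np
from functools import reduce

M, n_modes = 5, 10                            # 5 sites x 2 spins; Hilbert space dim 2**10
I2, Z = np.eye(2), np.diag([1.0, -1.0])
CR = np.array([[0.0, 0.0], [1.0, 0.0]])       # creation operator for a single mode

def cdag(mode):
    """Jordan-Wigner creation operator; mode index = 2*site + spin."""
    return reduce(np.kron, [Z] * mode + [CR] + [I2] * (n_modes - mode - 1))

def build_H(t=0.5, tp=0.4, U=5.0):
    cds = [cdag(m) for m in range(n_modes)]
    dim = 2 ** n_modes
    H = np.zeros((dim, dim))
    # hopping: nearest-neighbour chain plus the t' bonds (1,4) and (2,5)
    bonds = [((i, i + 1), t) for i in range(M - 1)] + [((0, 3), tp), ((1, 4), tp)]
    for (i, j), amp in bonds:
        for s in (0, 1):
            hop = cds[2 * i + s] @ cds[2 * j + s].T   # c_i^dag c_j (matrices are real)
            H += amp * (hop + hop.T)
    # interaction: (U/2) * (n_m - 1)^2 on every site, i.e. V_g = 0 with eps = -U/2
    for m in range(M):
        nm = cds[2 * m] @ cds[2 * m].T + cds[2 * m + 1] @ cds[2 * m + 1].T
        H += 0.5 * U * (nm - np.eye(dim)) @ (nm - np.eye(dim))
    return H

E = np.linalg.eigvalsh(build_H())
print(E[:3])   # E[0] ~= E[1] < E[2]: a two-fold degenerate spin-doublet ground state
```

The two-fold degeneracy is expected here from the Lieb-Mattis theorem: the graph is bipartite with sublattice imbalance one, giving total spin \(S=\frac{1}{2}\) at half filling.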
In the left panel we show the gate evolution of the same parameters at \(t^{\prime}=t^{\prime}_{c}\), with the 2CK conditions being satisfied here at \(V_{g}=0\).

_Non-perturbative solution: NRG.-_ To confirm the existence of a 2CK critical point in this simple 5-site molecular cluster, we turn to the non-perturbative solution of the full molecular junction involving Eq. 3 using NRG [60], where we set \(t=\frac{1}{2}\) and the conduction electron bandwidth \(D=1\) from now on. Numerical results are presented in Fig. 2. In panel (a) we compare SWT predictions for the critical \(t^{\prime}_{c}\) with those obtained by NRG for different interaction strengths \(U\), showing excellent agreement. In particular, we note that the 2CK critical point can be realized for _any_ finite \(U\). Interestingly, we find that \(t^{\prime}_{c}\to t\) as \(U\to 0\). The \(U=0\) limit of Eq. 3 is studied in the SI [51]: we find \(t^{\prime}=t\) is a singular point of the non-interacting model with strictly decoupled molecular degrees of freedom that give a finite \(T=0\) entropy and a QI-driven conductance node. With interactions switched on, the critical \(t^{\prime}_{c}\) is no longer at \(t\) but we still find a residual \(T=0\) entropy and a conductance node - now characterizing the 2CK critical fixed point. Panel (c) shows the molecular contribution to the entropy \(S_{\rm mol}\) as a function of \(T\) at the critical point for different \(V\). The critical point can be realized for any combination of \(V\) and \(U\) (in panel (c) we take fixed \(U\)), and in all cases we find \(S_{\rm mol}=\frac{1}{2}\ln(2)\) for \(T\ll T_{\rm K}\), with \(T_{\rm K}\) the critical Kondo temperature. This unusual value for the entropy is a hallmark of the free Majorana fermion localized on the molecule at low temperatures at the 2CK critical point [38; 50]. For small molecule-lead coupling, \(T_{\rm K}\) is small and we have an extended intermediate \(\ln(2)\) plateau corresponding to the local moment regime of Eq. 2. Remarkably however, at larger \(V\) the Kondo temperature can be boosted to large (non-universal) values and local moment physics is entirely eliminated. This scenario lies outside of the regime described by Eq. 2, suggesting that the interference giving rise to criticality is a topological feature of the geometry in Eq. 3. In Fig. 2(b) we plot the evolution of the Kondo temperature with \(8V^{2}/U\equiv J_{K}\) (where \(J_{K}\) is the SWT Kondo coupling for a single Anderson impurity [47]), showing that a maximum value \(T_{\rm K}\sim 10^{-2}U\) can be realized for all values of \(U\) considered when \(J_{K}\sim 1\). A weak-strong coupling duality [61] is found on further increasing \(J_{K}\) - see dashed and dotted lines in Fig. 2(b). Note that the critical point is a non-Fermi liquid and as such is not perturbatively connected to the \(U=0\) limit: even though the critical point can be realized at small \(U\), we find that \(T_{\rm K}\to 0\) as \(U\to 0\).

Figure 2: 2CK critical point driven by QI. (a) Critical coupling \(t^{\prime}_{c}\) as a function of \(U/t\), with NRG results (points) validating SWT predictions (line). (b) 2CK Kondo temperature \(T_{\rm K}\) vs \(8V^{2}/U\equiv J_{K}\) for different \(U/t\) obtained by NRG. Dashed line is \(T_{\rm K}/U\sim\exp[-4/J_{K}]\), valid for \(J_{K}<1\), whereas the dotted lines show \(T_{\rm K}/U\sim\exp[-aJ_{K}]\) with \(a\equiv a(U)\sim\mathcal{O}(1)\) for \(J_{K}>1\). (c) Entropy \(S_{\rm mol}\) vs \(T/U\) for different \(V/U\) at the 2CK critical point for \(U/t=10\), showing a residual \(\frac{1}{2}\ln(2)\).

_Gate control and entropy measurement.-_ With \(t^{\prime}\) tuned to the 2CK critical point at \(t^{\prime}_{c}\), we can vary the gate voltage \(V_{g}\) in the vicinity of \(V_{g}=0\). This perturbation drives the system away from the 2CK fixed point and towards a standard Kondo strong-coupling Fermi liquid (FL) state on the scale of \(T^{*}\). From NRG we find [51],

\[T^{*}\sim V_{g}^{4}\qquad:\quad T^{*}\ll T_{\rm K} \tag{4}\]

which holds in the universal critical regime. Along this FL crossover, physical properties are universal scaling functions of \(T^{*}/T\) and hence \(V_{g}/T^{1/4}\). For the pure 2CK model in this regime, bosonization methods give an exact result for the entropy change from the critical point [38],

\[\Delta S\left(\frac{T^{*}}{T}\right)=\frac{T^{*}}{T}\left[\psi\left(\frac{1}{2}+\frac{T^{*}}{T}\right)-1\right]-\ln\left[\frac{1}{\sqrt{\pi}}\Gamma\left(\frac{1}{2}+\frac{T^{*}}{T}\right)\right] \tag{5}\]

with \(\Gamma\) (\(\psi\)) the gamma (digamma) function. The form of this crossover is entirely characteristic of the 2CK critical point [52]. Using Eq. 4, this crossover can be achieved by fixing \(T\) (\(\ll T_{\rm K}\)) and detuning \(V_{g}\) (which controls \(T^{*}\)). This is shown in the top panel of Fig. 3, which compares NRG results for the junction (line) to Eq. 5 (points). Recent progress has been made in observing entropic signatures in nanoelectronics experiments, by exploiting local Maxwell relations which connect the entropy change for a process to measurable changes in the charge [62; 63; 64]. Since the gate voltage \(V_{g}\) couples to the total molecule charge \(\hat{N}=\sum_{m}\hat{n}_{m}\), the change in entropy induced by scanning \(V_{g}\) as in Fig. 3(a) follows as \(\Delta S=-\int dV_{g}\ dN/dT\). The quantity \(dN/dT\) is shown in Fig. 3(b). Application of the Maxwell relation yields the blue-dashed line in Fig. 3(a), which agrees perfectly with the direct entropy calculation. We argue that the molecular system is well suited to this because \(T_{\rm K}\) can be boosted to large values, meaning that the universal critical regime should be experimentally accessible.

_Transport.-_ At the 2CK critical point, the series conductance through the molecular junction vanishes due to the many-body QI node. However, a nontrivial transport signature is picked up along the FL crossover by detuning the gate voltage. NRG results for the junction conductance \(G_{c}(T)\) as a function of \(T\) at fixed detuning \(V_{g}\) are shown in Fig. 4(a). The maximum conductance of \(2e^{2}/h\) for a single electron transistor is recovered at low temperatures \(T\ll T^{*}\) in all cases. Fig. 4(b) shows the gate evolution of the conductance \(G_{c}(V_{g})\) at fixed \(T\) (\(\ll T_{\rm K}\)), and is the analogous plot to Fig. 3(a). The exact solution of the pure 2CK model along the FL crossover [38] yields a prediction for the conductance [65],

\[G_{c}\left(\frac{T^{*}}{T}\right)=\frac{2e^{2}}{h}\times\left(\frac{T^{*}}{T}\right)\psi^{\prime}\left(\frac{1}{2}+\frac{T^{*}}{T}\right)\, \tag{6}\]

where \(T^{*}\) depends on \(V_{g}\) via Eq. 4 and \(\psi^{\prime}\) is the trigamma function. This expression matches essentially perfectly with NRG data for the full molecular junction in Fig. 4.
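Both universal crossover functions, Eqs. 5 and 6, are elementary to evaluate; a short sketch (our illustration) with the limiting values checked numerically:

```python
import numpy as np
from scipy.special import digamma, gammaln, polygamma

def delta_S(x):
    """Eq. 5: entropy change along the FL crossover, x = T*/T."""
    return x * (digamma(0.5 + x) - 1.0) - (gammaln(0.5 + x) - 0.5 * np.log(np.pi))

def G_c(x):
    """Eq. 6: series conductance in units of 2e^2/h, x = T*/T."""
    return x * polygamma(1, 0.5 + x)

x = np.array([1e-4, 1.0, 1e4])
print(delta_S(x))   # -> 0 at the critical point, -> -ln(2)/2 deep in the FL regime
print(G_c(x))       # -> 0 at the critical point, -> 1 (i.e. 2e^2/h) in the FL regime
```

The \(-\frac{1}{2}\ln 2\) limit of \(\Delta S\) is exactly the loss of the residual Majorana entropy discussed above, and \(G_{c}\to 2e^{2}/h\) recovers the single-electron-transistor maximum quoted for \(T\ll T^{*}\).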
Finally, from the Maxwell relation \(dN/dT=-dS/dV_{g}\) we can use Eqs. 4-6 to prove the exact conductance-charge relation [63] in the universal FL crossover regime,

\[\frac{dN}{dT}\sim\frac{V_{g}^{3}}{T}\left(1-\frac{G_{c}(V_{g},T)}{2e^{2}/h}\right)\, \tag{7}\]

meaning that experimental conductance data can be translated into \(dN/dT\) (see Fig. 3(b), dotted line) and then integrated to extract the entropy.

_Inverse design.-_ The above results establish the existence of the QI-2CK effect in a simple molecular moiety with exact \(ph\) and \(sd\) symmetry. In a more general setting, however, we can use inverse design to search for candidate systems that satisfy the 2CK conditions. This can be done by setting up a loss function, for example \(\mathcal{L}=j_{sd}^{2}+w_{sd}^{2}+(j_{ss}-j_{dd})^{2}\), which is minimized when the 2CK conditions on the effective model parameters are met. We then minimize this function with respect to the bare model parameters by gradient descent (GD). In practice this involves finding the derivatives of \(j_{\alpha\beta}\) and \(w_{\alpha\beta}\) with respect to \(t_{mn}\) and \(U_{mn}\), which can be achieved using differentiable programming techniques [66]. In the SI [51] we show that this can be implemented very efficiently within our improved SWT scheme. Using this methodology, we could find a family of low-symmetry molecular junctions involving just 4 interacting sites [51], a representative example of which is shown in Fig. 5. By fine-tuning the gate voltage \(V_{g}\) in this structure we predict 2CK criticality. We did not find any 2CK critical systems involving 1, 2, or 3 sites. A non-perturbative extension utilizing 'differentiable NRG' [67] to optimize bare model parameters directly via GD could be used to bypass the SWT approximation.

_Conclusion.-_ The 2CK critical point can be realized by exploiting many-body QI effects in simple molecular junctions or coupled quantum dot devices featuring a few tunnel-coupled, interacting orbitals. QI effects can be manipulated by tuning gate voltages to switch between a perfect node and perfect Kondo resonant transmission. Inverse design can be used to search automatically for systems displaying desired functionality. The molecular moieties we identified are not intended to be atomistic models of any real molecule. However, the inverse design approach could be integrated with chemical databases to search for realistic candidate molecular junctions [68]. Our results open the door to designer devices utilizing many-body QI effects, for example simple structures exhibiting three-channel Kondo [69] or two-impurity Kondo [70; 71] effects, or lattice extensions describing non-Fermi liquid materials [72]. Inverse design could be used to optimize the performance of nanoscale transistors, rectifiers, spintronics devices and other quantum devices.

Figure 3: (a) Entropy change \(\Delta S\) as the molecular junction is driven away from the critical point by increasing gate voltage \(V_{g}\). NRG results (line) compared with the analytic result Eq. 5 (points). (b) \(dN/dT\) from NRG (line), compared with the prediction via conductance from Eq. 7 (dotted line). The dashed line in the top panel is obtained by integrating \(dN/dT\) over \(V_{g}\). Plotted for \(U/t=10\), \(V/U=0.15\), \(t^{\prime}=t^{\prime}_{c}\), \(T=10^{-6}\ll T_{\rm K}\).

Figure 4: Series conductance along the FL crossover (a) as a function of temperature for different gate voltages; and (b) as a function of gate voltage at fixed \(T=10^{-6}\ll T_{\rm K}\); compared with Eq. 6. Shown for \(U/t=10\), \(V/U=0.15\), \(t^{\prime}=t^{\prime}_{c}\).
_Acknowledgments.-_ This work was supported by the Irish Research Council through the Laureate Award 2017/2018 grant IRCLA/2017/169 (AKM), and by the Science and Engineering Research Board, India (SRG/2022/000495, MTR/2022/000638), and IIT(ISM) Dhanbad [FRS(175)/2022-2023/PHYSICS] (SS). We thank Jonas Rigo for enlightening discussions.
2303.03331
A self-gravitating system composed of baryonic and dark matter analysed from the post-Newtonian Boltzmann equations
We study the Jeans gravitational instability for a mixture of baryonic and dark matter particles, in the post-Newtonian approximation. We adopt a kinetic model consisting of a coupled system of post-Newtonian collisionless Boltzmann equations, for each species, coupled to the post-Newtonian Poisson equations. We derive the stability criterion, accounting for both post-Newtonian corrections and the presence of dark matter. It is shown that both effects give rise to smaller Jeans masses, in comparison with the standard Jeans criterion, meaning that a smaller mass is needed to begin the gravitational collapse. Taking advantage of that, we confront the model with the observational stability of Bok globules, and show that the model correctly reproduces the data.
Gilberto M. Kremer, Kamel Ourabah
2023-03-06T18:01:44Z
http://arxiv.org/abs/2303.03331v1
# A self-gravitating system composed of baryonic and dark matter analysed from the post-Newtonian Boltzmann equations

###### Abstract

We study the Jeans gravitational instability for a mixture of baryonic and dark matter particles, in the post-Newtonian approximation. We adopt a kinetic model consisting of a coupled system of post-Newtonian collisionless Boltzmann equations, for each species, coupled to the post-Newtonian Poisson equations. We derive the stability criterion, accounting for both post-Newtonian corrections and the presence of dark matter. It is shown that both effects give rise to smaller Jeans masses, in comparison with the standard Jeans criterion, meaning that a smaller mass is needed to begin the gravitational collapse. Taking advantage of that, we confront the model with the observational stability of Bok globules, and show that the model correctly reproduces the data.

## I Introduction

The analysis of instabilities of self-gravitating fluids is an old subject in the literature which goes back to the pioneering work of Jeans [1], who determined from the hydrodynamic equations coupled with the Newtonian Poisson equation a dispersion relation where one solution is interpreted as a growth of mass density perturbations in time. One is referred to the books [2; 3; 4] for a description of the mass density perturbations which grow exponentially in time - known as the Jeans instability - which is associated with the gravitational collapse of self-gravitating interstellar gas clouds, where the outwards pressure force becomes smaller than the inwards gravitational force. Although formulated 120 years ago, the process of Jeans instability still constitutes an active area of research, revisited from various modern standpoints. One may cite for instance its generalisation to general relativity [5] and to an expanding universe background [5; 6; 7], its formulation in the language of kinetic theory [8; 9], and its generalisation to alternative theories of gravity [10; 11; 12; 13], where it is regularly used to constrain the free parameters of the theory. Presently the matter content of the Universe is known to be composed of baryonic matter - which consists of all categories of atoms - and dark matter - a still unknown component which does not interact with electromagnetic radiation. Cold dark matter is important in structure formation, since it interacts only through gravity, collapses earlier, and forms the seeds where baryons fall later. The formation of structures would occur later if cold dark matter were not present. The Jeans instability for a system composed of baryonic and dark matter particles was investigated within the framework of a coupled system of collisionless Boltzmann equations and the Newtonian Poisson equation in [14; 15; 16], and by using a hybrid quantum-classical fluid approach in [17]. The Jeans instability was also investigated on the basis of the first post-Newtonian hydrodynamic equations [18; 19] and of the first [20] and second [21] post-Newtonian Boltzmann equations coupled with the Newtonian and post-Newtonian Poisson equations. In the present work the Jeans instability for a system consisting of baryonic and cold dark matter is analysed within the framework of the system of first post-Newtonian collisionless Boltzmann equations coupled with the Newtonian and post-Newtonian Poisson equations.
From a perturbation analysis of the one-particle distribution functions and gravitational potentials in terms of plane wave representations, a dispersion relation is obtained and the post-Newtonian Jeans mass is derived. As an application of the post-Newtonian Jeans mass for the system of baryons and dark matter, the observational stability data of Bok globules are investigated. The paper has the following structure: in Section II the coupled system of post-Newtonian Boltzmann and Poisson equations is introduced, as well as the equilibrium Maxwell-Juttner distribution functions. The perturbations from equilibrium background states of the one-particle distribution functions and of the gravitational potentials are the subject of Section III. In Section IV the perturbations are represented as plane waves of small amplitudes, from which a dispersion relation is derived, and the influence of the post-Newtonian approximation on the Jeans mass of the baryonic and dark matter system is investigated. The application of the theoretical prediction for the post-Newtonian Jeans mass to the observational stability data of Bok globules is the topic of Section V. In Section VI the conclusions of the work are stated.

## II The system of Boltzmann equations

We are interested in analysing a system composed of baryonic and dark matter particles within the framework of collisionless post-Newtonian Boltzmann equations, since dark matter interacts with other particles only through gravity. To that end we introduce the subscripts \(b\) and \(d\) to denote baryonic and dark matter, respectively, so that the post-Newtonian Boltzmann equation for the one-particle distribution function \(f_{\alpha}=f(\mathbf{x},\mathbf{v}_{\alpha},t)\) of constituent \(\alpha=b,d\) - defined in the phase space spanned by the spatial coordinates \(\mathbf{x}\) and particle three-velocity \(\mathbf{v}_{\alpha}\) - reads [20; 22; 23; 24]

\[\frac{\partial f_{\alpha}}{\partial t}+v_{i}^{\alpha}\frac{\partial f_{\alpha}}{\partial x^{i}}+\frac{\partial U}{\partial x^{i}}\frac{\partial f_{\alpha}}{\partial v_{i}^{\alpha}}+\frac{1}{c^{2}}\bigg{[}\left(v_{\alpha}^{2}-4U\right)\frac{\partial U}{\partial x^{i}}-4v_{i}^{\alpha}v_{j}^{\alpha}\frac{\partial U}{\partial x^{j}}-3v_{i}^{\alpha}\frac{\partial U}{\partial t}+2\frac{\partial\Phi}{\partial x^{i}}+\frac{\partial\Pi_{i}}{\partial t}+v_{j}^{\alpha}\left(\frac{\partial\Pi_{i}}{\partial x^{j}}-\frac{\partial\Pi_{j}}{\partial x^{i}}\right)\bigg{]}\frac{\partial f_{\alpha}}{\partial v_{i}^{\alpha}}=0. \tag{1}\]

Here the Newtonian \(U\) and the post-Newtonian \(\Phi\) and \(\Pi_{i}\) gravitational potentials satisfy the Poisson equations

\[\nabla^{2}U = -\frac{4\pi G}{c^{2}}\left(\overset{\circ}{T}{}_{b}^{00}+\overset{\circ}{T}{}_{d}^{00}\right), \tag{2}\]
\[\nabla^{2}\Phi = -2\pi G\left(\overset{\circ}{T}{}_{b}^{00}+\overset{\circ}{T}{}_{b}^{ii}+\overset{\circ}{T}{}_{d}^{00}+\overset{\circ}{T}{}_{d}^{ii}\right), \tag{3}\]
\[\nabla^{2}\Pi^{i} = -\frac{16\pi G}{c}\left(\overset{\circ}{T}{}_{b}^{0i}+\overset{\circ}{T}{}_{d}^{0i}\right)+\frac{\partial^{2}U}{\partial t\partial x^{i}}. \tag{4}\]

Above, \(G\) is the universal gravitational constant and we have denoted the \(1/c^{n}\)-order contribution to the energy-momentum tensor of constituent \(\alpha\) by \(\overset{\circ}{T}{}_{\alpha}^{\mu\nu}\).
The energy-momentum tensor of constituent \(\alpha\) is defined in terms of the one-particle distribution function \(f_{\alpha}\) and of the particle four-velocity \(u_{\alpha}^{\mu}\) by [23; 26]

\[T_{\alpha}^{\mu\nu}=m_{\alpha}^{4}c\int u_{\alpha}^{\mu}u_{\alpha}^{\nu}f_{\alpha}\frac{\sqrt{-g}\,d^{3}u_{\alpha}}{u_{\alpha}^{0}}, \tag{5}\]

where \(m_{\alpha}\) denotes the rest mass of a particle of constituent \(\alpha\). The components of the particle four-velocity of constituent \(\alpha\) in the first post-Newtonian approximation read [23; 25]

\[u_{\alpha}^{0}=c\left[1+\frac{1}{c^{2}}\left(\frac{v_{\alpha}^{2}}{2}+U\right)\right],\qquad u_{\alpha}^{i}=\frac{u_{\alpha}^{0}v_{\alpha}^{i}}{c}. \tag{6}\]

For a relativistic gas the one-particle distribution function at equilibrium is determined by the Maxwell-Juttner distribution function (see e.g. [26]). In a stationary equilibrium background where the hydrodynamic velocity vanishes, the Maxwell-Juttner distribution function for constituent \(\alpha\) in the first post-Newtonian approximation is given by [27]

\[f_{MJ}^{\alpha}=f_{0}^{\alpha}\left\{1-\frac{\sigma_{\alpha}^{2}}{c^{2}}\left[\frac{15}{8}+\frac{3v_{\alpha}^{4}}{8\sigma_{\alpha}^{4}}+\frac{2Uv_{\alpha}^{2}}{\sigma_{\alpha}^{4}}\right]\right\}, \tag{7}\]

where \(f_{0}^{\alpha}\) is the Maxwellian distribution function

\[f_{0}^{\alpha}=\frac{\rho_{0}^{\alpha}}{m_{\alpha}^{4}(2\pi\sigma_{\alpha}^{2})^{\frac{3}{2}}}e^{-\frac{v_{\alpha}^{2}}{2\sigma_{\alpha}^{2}}}, \tag{8}\]

which is given in terms of the mass density \(\rho_{0}^{\alpha}\), the gas particle three-velocity \({\bf v}_{\alpha}\), and the dispersion velocity \(\sigma_{\alpha}\) of constituent \(\alpha\). The expression for the invariant integration element of the energy-momentum tensor (5) in the first post-Newtonian approximation reads [27]

\[\frac{\sqrt{-g}\,d^{3}u_{\alpha}}{u_{\alpha}^{0}}=\left\{1+\frac{1}{c^{2}}\left[2v_{\alpha}^{2}+6U\right]\right\}\frac{d^{3}v_{\alpha}}{c}. \tag{9}\]

## III Field perturbations

In this section we shall consider perturbations from equilibrium background states of the one-particle distribution functions and of the gravitational potentials. The subscript zero will denote the background states and the subscript one the perturbed states, namely

\[f({\bf x},{\bf v}_{\alpha},t)=f^{\alpha}_{MJ}({\bf x},{\bf v}_{\alpha},t)+f^{\alpha}_{1}({\bf x},{\bf v}_{\alpha},t),\quad\alpha=b,d \tag{10}\]
\[U({\bf x},{\bf v},t)=U_{0}({\bf x})+U_{1}({\bf x},{\bf v},t), \tag{11}\]
\[\Phi({\bf x},{\bf v},t)=\Phi_{0}({\bf x})+\Phi_{1}({\bf x},{\bf v},t), \tag{12}\]
\[\Pi_{i}({\bf x},{\bf v},t)=\Pi^{0}_{i}({\bf x})+\Pi^{1}_{i}({\bf x},{\bf v},t). \tag{13}\]
We begin by introducing the representations (10) - (13) into the Boltzmann equation (1) for constituent \(\alpha\) and find that the resulting background equation is identically satisfied if \(\nabla U_{0}=0\), \(\nabla\Phi_{0}=0\) and \(\nabla\Pi^{0}_{i}=0\), while the perturbed equation reduces to

\[\frac{\partial f^{\alpha}_{1}}{\partial t}+v^{\alpha}_{i}\frac{\partial f^{\alpha}_{1}}{\partial x^{i}}+\frac{\partial U_{1}}{\partial x^{i}}\frac{\partial f^{0\alpha}_{MJ}}{\partial v^{\alpha}_{i}}-\frac{2v_{\alpha}^{2}f^{\alpha}_{0}}{\sigma^{2}_{\alpha}c^{2}}\left(\frac{\partial U_{1}}{\partial t}+v^{\alpha}_{i}\frac{\partial U_{1}}{\partial x^{i}}\right)+\frac{1}{c^{2}}\Bigg{[}\left(v^{2}_{\alpha}-4U_{0}\right)\frac{\partial U_{1}}{\partial x^{i}}+2\frac{\partial\Phi_{1}}{\partial x^{i}}-3v^{\alpha}_{i}\frac{\partial U_{1}}{\partial t}+\frac{\partial\Pi^{1}_{i}}{\partial t}-4v^{\alpha}_{i}v^{\alpha}_{j}\frac{\partial U_{1}}{\partial x^{j}}+v^{\alpha}_{j}\left(\frac{\partial\Pi^{1}_{i}}{\partial x^{j}}-\frac{\partial\Pi^{1}_{j}}{\partial x^{i}}\right)\Bigg{]}\frac{\partial f^{\alpha}_{0}}{\partial v^{i}_{\alpha}}=0. \tag{14}\]

Here \(f^{0\alpha}_{MJ}\) is the background Maxwell-Juttner distribution function

\[f^{0\alpha}_{MJ}=f^{\alpha}_{0}\left\{1-\frac{\sigma^{2}_{\alpha}}{c^{2}}\left[\frac{15}{8}+\frac{3v^{4}_{\alpha}}{8\sigma^{4}_{\alpha}}+2\frac{U_{0}v^{2}_{\alpha}}{\sigma^{4}_{\alpha}}\right]\right\}. \tag{15}\]

For the Poisson equations (2) and (3) we adopt the "Jeans swindle" (see e.g. [3; 4]) and consider that they are valid only for the perturbed gravitational potentials and distribution functions. First we have to evaluate the components of the energy-momentum tensor of each constituent and next insert the perturbed values into the Poisson equations, which leads to

\[\nabla^{2}U_{1}=-4\pi G\sum_{\alpha=b}^{d}m^{4}_{\alpha}\int f^{\alpha}_{1}d^{3}v_{\alpha}, \tag{16}\]
\[\nabla^{2}\Pi^{i}_{1}=-16\pi G\sum_{\alpha=b}^{d}m^{4}_{\alpha}\int v^{\alpha}_{i}f^{\alpha}_{1}d^{3}v_{\alpha}+\frac{\partial^{2}U_{1}}{\partial t\partial x^{i}}, \tag{17}\]
\[\nabla^{2}\Phi_{1}=-2\pi G\sum_{\alpha=b}^{d}m^{4}_{\alpha}\int\left(4v^{2}_{\alpha}+8U_{0}\right)f^{\alpha}_{1}d^{3}v_{\alpha}+4\pi G\sum_{\alpha=b}^{d}\rho^{\alpha}_{0}U_{1}. \tag{18}\]

## IV Plane wave representations

We represent the instabilities as plane waves of frequency \(\omega\), wave number vector \({\bf k}\) and small amplitudes \(\overline{f^{\alpha}_{1}},\overline{U}_{1},\overline{\Phi}_{1}\) and \(\overline{\Pi^{i}_{1}}\):

\[f^{\alpha}_{1}({\bf x},{\bf v},t)=\overline{f}^{\alpha}_{1}e^{i({\bf k}\cdot{\bf x}-\omega t)},\quad U_{1}({\bf x},{\bf v},t)=\overline{U}_{1}e^{i({\bf k}\cdot{\bf x}-\omega t)}, \tag{19}\]
\[\Phi_{1}({\bf x},{\bf v},t)=\overline{\Phi}_{1}e^{i({\bf k}\cdot{\bf x}-\omega t)},\quad\Pi^{i}_{1}({\bf x},{\bf v},t)=\overline{\Pi^{i}_{1}}e^{i({\bf k}\cdot{\bf x}-\omega t)}. \tag{20}\]
From the insertion of the above plane wave representations into the perturbed Boltzmann equation for constituent \(\alpha\) (14) we get the following equation, which gives the perturbed amplitude of the distribution function of constituent \(\alpha\) in terms of the perturbed gravitational potentials:

\[({\bf v}_{\alpha}\cdot{\bf k}-\omega)\overline{f}_{1}^{\alpha}-\frac{f_{0}^{\alpha}}{\sigma_{\alpha}^{2}}\bigg{\{}({\bf v}_{\alpha}\cdot{\bf k})\overline{U}_{1}\bigg{[}1-\frac{\sigma_{\alpha}^{2}}{c^{2}}\bigg{(}\frac{15}{8}+\frac{3v_{\alpha}^{4}}{8\sigma_{\alpha}^{4}}-\frac{v_{\alpha}^{2}}{2\sigma_{\alpha}^{2}}+\frac{2v_{\alpha}^{2}U_{0}}{\sigma_{\alpha}^{4}}\bigg{)}\bigg{]}+\frac{1}{c^{2}}\bigg{[}v_{\alpha}^{2}\omega\overline{U}_{1}+2({\bf v}_{\alpha}\cdot{\bf k})\overline{\Phi}_{1}-\omega v_{i}^{\alpha}\overline{\Pi}^{1}_{i}\bigg{]}\bigg{\}}=0. \tag{21}\]

Without loss of generality we consider the wave number vector in the \(x\)-direction, i.e., \({\bf k}=(\kappa,0,0)\), insert the perturbed amplitude of the distribution function of constituent \(\alpha\) from (21) and the plane wave representations of the gravitational potentials into the Poisson equations (16) - (18), and integrate the resulting equations, yielding

\[\kappa^{2}\overline{\Pi}_{x}^{1}=\frac{16\pi G\omega}{\kappa}\sum_{\alpha=b}^{d}\frac{\rho_{0}^{\alpha}}{\sigma_{\alpha}^{2}}\bigg{\{}\bigg{[}I_{2}^{\alpha}-\frac{3\sigma_{\alpha}^{2}}{2c^{2}}\left(I_{6}^{\alpha}+\frac{5}{4}I_{2}^{\alpha}\right)-\frac{4U_{0}}{c^{2}}\left(I_{2}^{\alpha}+I_{4}^{\alpha}\right)\bigg{]}\overline{U}_{1}+\frac{I_{2}^{\alpha}}{c^{2}}\left[2\overline{\Phi}_{1}-\frac{\omega}{\kappa}\overline{\Pi}_{x}^{1}\right]\bigg{\}}-\kappa\omega\overline{U}_{1}, \tag{22}\]

\[\kappa^{2}\overline{U}_{1}=4\pi G\sum_{\alpha=b}^{d}\frac{\rho_{0}^{\alpha}}{\sigma_{\alpha}^{2}}\bigg{\{}\bigg{[}I_{2}^{\alpha}+\left(I_{0}^{\alpha}+I_{2}^{\alpha}\right)\frac{\omega^{2}}{c^{2}\kappa^{2}}-\frac{3\sigma_{\alpha}^{2}}{2c^{2}}\bigg{(}I_{6}^{\alpha}+\frac{4}{3}I_{4}^{\alpha}+\frac{31}{12}I_{2}^{\alpha}\bigg{)}-\frac{4U_{0}}{c^{2}}\left(I_{2}^{\alpha}+I_{4}^{\alpha}\right)\bigg{]}\overline{U}_{1}+\frac{I_{2}^{\alpha}}{c^{2}}\bigg{[}2\overline{\Phi}_{1}-\frac{\omega}{\kappa}\overline{\Pi}_{x}^{1}\bigg{]}\bigg{\}}, \tag{23}\]

\[\kappa^{2}\overline{\Phi}_{1}=4\pi G\sum_{\alpha=b}^{d}\rho_{0}^{\alpha}\overline{U}_{1}+16\pi G\sum_{\alpha=b}^{d}\rho_{0}^{\alpha}\bigg{\{}2\bigg{(}I_{0}^{\alpha}+I_{2}^{\alpha}+\frac{I_{4}^{\alpha}}{2}\bigg{)}\frac{\omega^{2}}{\kappa^{2}c^{2}}+I_{2}^{\alpha}+I_{4}^{\alpha}-\frac{3\sigma_{\alpha}^{2}}{2c^{2}}\bigg{(}I_{8}^{\alpha}+\frac{7}{3}I_{6}^{\alpha}+\frac{\omega^{2}}{c^{2}\kappa^{2}}(I_{0}^{\alpha}+I_{2}^{\alpha})\bigg{)}\bigg{\}}\overline{U}_{1}+\frac{16\pi G}{c^{2}}\sum_{\alpha=b}^{d}\rho_{0}^{\alpha}\left(I_{2}^{\alpha}+I_{4}^{\alpha}+I_{2}^{\alpha}\frac{U_{0}}{\sigma_{\alpha}^{2}}\right)\left[2\overline{\Phi}_{1}-\frac{\omega}{\kappa}\overline{\Pi}_{x}^{1}\right]. \tag{24}\]

The components \(\overline{\Pi}_{y}^{1}\) and \(\overline{\Pi}_{z}^{1}\) vanish, and above we have introduced the integrals

\[I_{n}^{\alpha}(\kappa,\omega)=\frac{2}{\sqrt{\pi}}\int_{0}^{\infty}\frac{x^{n}e^{-x^{2}}}{x^{2}-(\omega/\sqrt{2}\sigma_{\alpha}\kappa)^{2}}dx, \tag{25}\]

where \(x=v_{x}^{\alpha}/\sqrt{2}\sigma_{\alpha}\).
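Since the dispersion relation below is analysed in the limit \(\omega_{*}=0\), the integrals (25) then reduce to Gaussian moments. A short numerical sketch (Python with SciPy, for illustration) confirms the values quoted after Eq. (27):

```python
import numpy as np
from scipy.integrate import quad

def I_n(n, w):
    """Integrals of Eq. (25); w = omega / (sqrt(2) sigma_alpha kappa). For real
    nonzero w the integrand has a pole (a principal-value or Landau prescription
    would then be needed); here we only use the regular limit w = 0 with n >= 2."""
    val, _ = quad(lambda x: x**n * np.exp(-x**2) / (x**2 - w**2), 1e-12, np.inf)
    return 2.0 / np.sqrt(np.pi) * val

for n in (2, 4, 6, 8):
    print(f"I_{n}(0) = {I_n(n, 0.0):.4f}")
# -> 1, 1/2, 3/4 and 15/8; the first three are the values quoted below Eq. (27)
```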
Equations (22) - (24) for the amplitudes \(\overline{\Pi}_{x}^{1}\), \(\overline{U}_{1}\) and \(\overline{\Phi}_{1}\) compose an algebraic system of equations which admits a solution if the determinant of the coefficients of the amplitudes vanishes. Up to \(1/c^{2}\) order this yields the dispersion relation

\[\kappa_{*}^{4}-\kappa_{*}^{2}\bigg{\{}I_{2}^{d}\bigg{(}1+\frac{I_{2}^{b}\rho_{0}^{b}\sigma_{d}^{2}}{I_{2}^{d}\rho_{0}^{d}\sigma_{b}^{2}}\bigg{)}+\frac{\sigma_{d}^{2}}{c^{2}}\bigg{[}I_{2}^{d}\bigg{(}1+\frac{I_{2}^{b}\rho_{0}^{b}\sigma_{d}^{2}}{I_{2}^{d}\rho_{0}^{d}\sigma_{b}^{2}}\bigg{)}\bigg{(}\frac{33}{8}+4\frac{U_{0}}{\sigma_{d}^{2}}\bigg{)}+6I_{4}^{d}\bigg{(}1+\frac{I_{4}^{b}\rho_{0}^{b}}{I_{4}^{d}\rho_{0}^{d}}\bigg{)}-\frac{3}{2}I_{6}^{d}\bigg{(}1+\frac{I_{6}^{b}\rho_{0}^{b}}{I_{6}^{d}\rho_{0}^{d}}\bigg{)}-4\frac{U_{0}}{\sigma_{d}^{2}}I_{4}^{d}\bigg{(}1+\frac{I_{4}^{b}\rho_{0}^{b}\sigma_{d}^{2}}{I_{4}^{d}\rho_{0}^{d}\sigma_{b}^{2}}\bigg{)}\bigg{]}\bigg{\}}-\frac{\sigma_{d}^{2}}{c^{2}}\bigg{[}I_{0}^{d}\omega_{*}^{2}\bigg{(}1+\frac{I_{0}^{b}\rho_{0}^{b}\sigma_{d}^{2}}{I_{0}^{d}\rho_{0}^{d}\sigma_{b}^{2}}\bigg{)}+2I_{2}^{d}\bigg{(}1+\frac{\rho_{0}^{b}}{\rho_{0}^{d}}-\omega_{*}^{2}\bigg{)}\bigg{(}1+\frac{I_{2}^{b}\rho_{0}^{b}\sigma_{d}^{2}}{I_{2}^{d}\rho_{0}^{d}\sigma_{b}^{2}}\bigg{)}\bigg{]}=0. \tag{26}\]

The dispersion relation relates the dimensionless frequency \(\omega_{*}=\omega/\sqrt{4\pi G\rho_{0}^{d}}\) with the dimensionless wavenumber \(\kappa_{*}=\kappa/\kappa_{J}\), which are given in terms of the dark matter Jeans wave number defined by \(\kappa_{J}=\sqrt{4\pi G\rho_{0}^{d}}/\sigma_{d}\). The reason to take the dark matter to build the dimensionless quantities is that the dark matter begins to collapse into a complex network of dark matter halos well before the ordinary matter. From the analysis of the dispersion relation (26) we infer two distinct regimes: in one of them the frequency assumes real values, implying that the perturbations propagate as harmonic waves in time, while in the other the frequency has purely imaginary values and the perturbations grow or decay in time. Here we are interested in analysing the case where an instability happens, which corresponds to the Jeans instability and is related to the minimum mass for which an overdensity begins the gravitational collapse. In this case the limiting value of the frequency where the instability occurs is \(\omega_{*}=0\), and the dispersion relation (26) reduces to

\[\kappa_{*}^{4}-\bigg{(}1+\frac{\rho_{0}^{b}\sigma_{d}^{2}}{\rho_{0}^{d}\sigma_{b}^{2}}\bigg{)}\kappa_{*}^{2}+2\frac{\sigma_{d}^{2}}{c^{2}}\bigg{\{}1+3\bigg{(}1+\frac{\rho_{0}^{b}}{\rho_{0}^{d}}\bigg{)}\kappa_{*}^{2}+\frac{\rho_{0}^{b}}{\rho_{0}^{d}}\bigg{[}1+\frac{\sigma_{d}^{2}}{\sigma_{b}^{2}}\bigg{(}1+\frac{\rho_{0}^{b}}{\rho_{0}^{d}}\bigg{)}\bigg{]}+\frac{U_{0}}{\sigma_{d}^{2}}\bigg{(}1+\frac{\rho_{0}^{b}\sigma_{d}^{2}}{\rho_{0}^{d}\sigma_{b}^{2}}\bigg{)}\kappa_{*}^{2}\bigg{\}}=0, \tag{27}\]

since the integrals have the values \(I_{2}^{b}=I_{2}^{d}=1\), \(I_{4}^{b}=I_{4}^{d}=1/2\) and \(I_{6}^{b}=I_{6}^{d}=3/4\).
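As a sanity check on (27), one can solve it numerically as a quadratic in \(\kappa_{*}^{2}\). The sketch below (with illustrative parameter values only) does so and recovers the Newtonian limit (30) when the \(1/c^{2}\) terms are switched off by sending \(c\to\infty\):

```python
import numpy as np

def kappa_star_sq_roots(sigma_b2, sigma_d2, r, U0, c=2.998e8):
    """Roots of the omega_* = 0 dispersion relation (27), a quadratic in
    y = kappa_*^2. Here r = rho_0^b / rho_0^d; squared velocities in m^2/s^2."""
    A = 1.0 + r * sigma_d2 / sigma_b2
    eps = 2.0 * sigma_d2 / c**2
    b = -A + eps * (3.0 * (1.0 + r) + (U0 / sigma_d2) * A)
    c0 = eps * (1.0 + r * (1.0 + (sigma_d2 / sigma_b2) * (1.0 + r)))
    return np.roots([1.0, b, c0])

# Illustrative inputs: sigma_b^2 ~ 1.8e8, sigma_d^2 ~ 8.5e6 m^2/s^2,
# rho_b/rho_d ~ 1/5.5, and U_0 = sigma_d^2 (the Virial-theorem choice of Sec. V)
y_pn = max(kappa_star_sq_roots(1.8e8, 8.5e6, 1 / 5.5, 8.5e6).real)
y_newt = max(kappa_star_sq_roots(1.8e8, 8.5e6, 1 / 5.5, 8.5e6, c=1e30).real)
print(np.sqrt(y_pn), np.sqrt(y_newt))  # PN value vs sqrt(1 + r sd^2/sb^2), Eq. (30)
```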
The minimum mass for an overdensity to start the gravitational collapse is related to the real positive value of \(\kappa_{*}\) obtained from (27), which by considering terms up to the \(1/c^{2}\) order reads

\[\kappa_{*}=\bigg{(}1+\frac{\rho_{0}^{b}\sigma_{d}^{2}}{\rho_{0}^{d}\sigma_{b}^{2}}\bigg{)}^{-\frac{1}{2}}\bigg{\{}1+\frac{\sigma_{d}^{2}}{c^{2}}\bigg{(}4+\frac{U_{0}}{\sigma_{d}^{2}}\bigg{)}+\frac{\rho_{0}^{b}\sigma_{d}^{2}}{\rho_{0}^{d}\sigma_{b}^{2}}\bigg{[}1+\frac{\sigma_{b}^{2}}{c^{2}}\bigg{(}4+\frac{U_{0}}{\sigma_{b}^{2}}\bigg{)}\bigg{]}\bigg{\}}. \tag{28}\]

From (28) two limiting cases are interesting to analyse. The first one arises when only one component is present, which we choose as the dark matter component; in this case we take a vanishing mass density of the baryonic matter, \(\rho_{0}^{b}=0\), and get

\[\kappa_{*}=1+\frac{\sigma_{d}^{2}}{c^{2}}\bigg{(}4+\frac{U_{0}}{\sigma_{d}^{2}}\bigg{)}, \tag{29}\]

which is the expression given in [20]. On the other hand, without the relativistic corrections, i.e. neglecting the \(1/c^{2}\) terms in (28), we have

\[\kappa_{*}=\sqrt{1+\frac{\rho_{0}^{b}\sigma_{d}^{2}}{\rho_{0}^{d}\sigma_{b}^{2}}}, \tag{30}\]

and we recover the result given in [14; 15]. The Jeans mass is associated with the mass contained in a sphere of radius equal to the wavelength of the mass perturbation. By building the ratio of the Jeans masses corresponding to the dark-baryonic system \(M_{J}^{db}\) and the dark matter system \(M_{J}^{d}\) we get, up to \(1/c^{2}\) terms,

\[\frac{M_{J}^{db}}{M_{J}^{d}}=\bigg{(}1+\frac{\rho_{0}^{b}}{\rho_{0}^{d}}\bigg{)}\bigg{(}1+\frac{\rho_{0}^{b}\sigma_{d}^{2}}{\rho_{0}^{d}\sigma_{b}^{2}}\bigg{)}^{-\frac{5}{2}}\bigg{\{}1-3\frac{\sigma_{d}^{2}}{c^{2}}\bigg{(}\frac{U_{0}}{\sigma_{d}^{2}}+4\bigg{)}+\frac{\rho_{0}^{b}\sigma_{d}^{2}}{\rho_{0}^{d}\sigma_{b}^{2}}\bigg{[}1-3\frac{\sigma_{d}^{2}}{c^{2}}\bigg{(}\frac{U_{0}}{\sigma_{d}^{2}}+4\frac{\sigma_{b}^{2}}{\sigma_{d}^{2}}\bigg{)}\bigg{]}\bigg{\}}. \tag{31}\]
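The two post-Newtonian results (28) and (31) are straightforward to evaluate. The following sketch (with assumed illustrative parameter values, not the fitted ones of Table 2) implements both and reproduces the one-component limit (29):

```python
import numpy as np

C = 2.998e8  # speed of light [m/s]

def kappa_star_pn(sb2, sd2, r, U0):
    """kappa_* of Eq. (28); r = rho_0^b / rho_0^d."""
    q = r * sd2 / sb2
    return (1 + q) ** (-0.5) * (1 + (sd2 / C**2) * (4 + U0 / sd2)
                                + q * (1 + (sb2 / C**2) * (4 + U0 / sb2)))

def jeans_mass_ratio(sb2, sd2, r, U0):
    """M_J^{db} / M_J^{d} of Eq. (31), up to 1/c^2 terms."""
    q = r * sd2 / sb2
    pn_d = 1 - 3 * (sd2 / C**2) * (U0 / sd2 + 4)
    pn_b = 1 - 3 * (sd2 / C**2) * (U0 / sd2 + 4 * sb2 / sd2)
    return (1 + r) * (1 + q) ** (-2.5) * (pn_d + q * pn_b)

sb2, sd2, r = 1.8e8, 8.5e6, 1 / 5.5   # assumed, for illustration only
print(kappa_star_pn(sb2, sd2, 0.0, sd2))   # rho_b -> 0 limit, Eq. (29)
print(kappa_star_pn(sb2, sd2, r, sd2))     # full two-component result
print(jeans_mass_ratio(sb2, sd2, r, sd2))  # mass ratio of Eq. (31)
```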
## V Comparison with observational data

To assess the physical viability of the present model, it is interesting to confront it with observations of regions in the Universe that can experience star formation. Here, we compare the theoretical prediction for the Jeans mass (31) with the observational stability data of Bok globules. The latter are nearby isolated clouds of interstellar gas and dust with simple shapes. They have characteristic temperatures of the order of \(10K\) and masses of \(\sim 10M_{\odot}\), which are close to their corresponding Jeans masses. This last characteristic places them among the most interesting astrophysical objects to test a deviation from the standard Jeans criterion, since a small modification of their Jeans mass leads to a different prediction for their stability. We are mainly interested in the data of [28] (see also [29]), consisting of a set of 11 Bok globules, reproduced in Table 1, together with their kinetic temperature, density, mass, Jeans mass (assuming Newtonian gravity and in the absence of a dark matter background), and their observed stability. One may observe from Table 1 that 7 out of the 11 considered Bok globules are predicted to be stable while observation reveals that they exhibit star formation. As the criterion (31) allows for critical masses smaller than the usual Jeans mass, it may potentially account for this discrepancy.

For that, we impose that the critical mass of the model is equal to the mass of the Bok globule; this provides the maximal value of the critical mass that can account for the data. We set \(U_{0}=\sigma^{2}\) (Virial theorem), and we assume that the ratio \(\rho_{0}^{d}/\rho_{0}^{b}\) corresponds to the ratio of the density parameters \(\Omega_{d}/\Omega_{b}\approx 5.5\) today, as it has not changed much during the evolution of the Universe (see e.g., [14; 17]). With these assumptions, we are left with a single free parameter, namely \(\sigma_{d}^{2}\). The lowest values of \(\sigma_{d}^{2}\) that allow matching the observational data are given in Table 2, for each Bok globule. One may see that the observational stability is correctly accounted for, with \(\sigma_{d}^{2}\sim 10^{6}m^{2}/s^{2}\). One may ask whether such values of \(\sigma_{d}^{2}\) are physically well motivated and whether they match what is known about the properties of dark matter. To see that, one may use the observational evidence that there are no dark matter halos with a radius smaller than \(R\sim 1kpc\) and a mass smaller than \(M\sim 10^{8}M_{\odot}\) [30]. These ultracompact dark matter halos correspond typically to dwarf spheroidal galaxies like _Fornax_. If we assume that Fornax is the smallest halo observed in the Universe and associate its mass and radius with the Jeans mass and Jeans length, we find \(\sigma_{d}^{2}\sim 0.13\times 10^{6}m^{2}/s^{2}\), which is comparable to the values of \(\sigma_{d}^{2}\) given in Table 2 required to account for the stability of Bok globules.

## VI Conclusions

In this work, we have analysed the Jeans-type gravitational instability for a mixture of baryonic and dark matter particles in the post-Newtonian approximation. We have laid out a kinetic model consisting of two post-Newtonian collisionless Boltzmann equations, one for each particle species, and the post-Newtonian Poisson equations. The relativistic Maxwell-Juttner distribution function was used to evaluate the components of the energy-momentum tensor in the Poisson equations. By considering perturbations around the background state in the form of plane waves, we have established the post-Newtonian dispersion relation for a mixture of baryonic and dark matter particles. This leads to a Jeans mass smaller than the standard Jeans mass, meaning that smaller masses are required to initiate the gravitational collapse. We have used this to study to what extent the model can account for the observational stability of Bok globules. In particular, a set of Bok globules are theoretically predicted to be stable (using the standard Jeans mass), yet they are observed to exhibit star formation [28]. We have shown that the present model can correctly account for this discrepancy, with physically reasonable values of the dark matter velocity dispersion.
\begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline Bok Globule & \(\sigma_{b}^{2}[10^{7}m^{2}/s^{2}]\) & \(n_{H_{2}}[\mathrm{cm^{-3}}]\) & \(M[M_{\odot}]\) & \(M_{J}[M_{\odot}]\) & \(\sigma_{d}^{2}[10^{6}m^{2}/s^{2}]\) \\ \hline CB 110 & 17.98 & \((1.5\pm 0.6)\times 10^{7}\) & \(7.21\pm 1.64\) & 8.5 & 8.53 \\ \hline CB 131 & 20.70 & \((2.5\pm 1.3)\times 10^{7}\) & \(7.83\pm 2.35\) & 8.1 & 49.81 \\ \hline CB 161 & 10.31 & \((7.0\pm 1.6)\times 10^{7}\) & \(2.79\pm 0.72\) & 5.4 & 1.02 \\ \hline CB 184 & 12.78 & \((3.0\pm 0.4)\times 10^{4}\) & \(4.70\pm 1.76\) & 11.4 & 0.87 \\ \hline CB 188 & 15.67 & \((1.2\pm 0.2)\times 10^{7}\) & \(7.19\pm 2.28\) & 7.7 & 18.44 \\ \hline Fest 1-457 & 8.99 & \((6.5\pm 1.7)\times 10^{7}\) & \(11.29\pm 0.23\) & 1.4 & 3.08 \\ \hline Lynds 495 & 10.39 & \((4.8\pm 1.4)\times 10^{4}\) & \(2.95\pm 0.77\) & 6.6 & 0.80 \\ \hline \end{tabular} \end{table}

Table 2: \(\sigma_{b}^{2}\), particle number density, mass, and Jeans mass for 7 of the Bok globules of Table 1, whose predicted stability is contradicted by observation, together with the saturation bounds for \(\sigma_{d}^{2}\) obtained with Eq. (31).

\begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline Bok Globule & \(T[\mathrm{K}]\) & \(n_{H_{2}}[\mathrm{cm^{-3}}]\) & \(M[M_{\odot}]\) & \(M_{J}[M_{\odot}]\) & Stability \\ \hline CB 87 & 11.4 & \((1.7\pm 0.2)\times 10^{3}\) & \(2.73\pm 0.24\) & 9.6 & stable \\ \hline CB 110 & 21.8 & \((1.5\pm 0.6)\times 10^{7}\) & \(7.21\pm 1.64\) & 8.5 & unstable \\ \hline CB 131 & 25.1 & \((2.5\pm 1.3)\times 10^{9}\) & \(7.83\pm 2.35\) & 8.1 & unstable \\ \hline CB 134 & 13.2 & \((7.5\pm 3.3)\times 10^{7}\) & \(1.91\pm 0.52\) & 1.8 & unstable \\ \hline CB 161 & 12.5 & \((7.0\pm 1.6)\times 10^{4}\) & \(2.79\pm 0.72\) & 5.4 & unstable \\ \hline CB 184 & 15.5 & \((3.0\pm 0.4)\times 10^{4}\) & \(4.70\pm 1.76\) & 11.4 & unstable \\ \hline CB 188 & 19.0 & \((1.2\pm 0.2)\times 10^{7}\) & \(7.19\pm 2.28\) & 7.7 & unstable \\ \hline Fest 1-457 & 10.9 & \((5.5\pm 1.7)\times 10^{7}\) & \(11.29\pm 0.23\) & 1.4 & unstable \\ \hline Lynds 495 & 12.6 & \((4.8\pm 1.4)\times 10^{2}\) & \(2.95\pm 0.77\) & 6.6 & unstable \\ \hline Lynds 498 & 11.0 & \((4.3\pm 0.5)\times 10^{4}\) & \(1.42\pm 0.16\) & 5.7 & stable \\ \hline Coalsack & 15 & \((5.4\pm 1.4)\times 10^{4}\) & \(4.50\) & 8.1 & stable \\ \hline \end{tabular} \end{table}

Table 1: Kinetic temperature, particle number density, mass, Jeans mass, and observed stability for several Bok globules [28; 29].

###### Acknowledgements.

(GMK) was supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), grant No. 304054/2019-4.
2310.03209
Planar Hall effect in Weyl semimetals induced by pseudoelectromagnetic fields
The planar Hall effect (PHE), the appearance of an in-plane transverse voltage in the presence of coplanar electric and magnetic fields, has been ascribed to the chiral anomaly and Berry curvature effects in Weyl semimetals. In the presence of position- and time-dependent perturbations, such as strain, Weyl semimetals react as if they were subjected to emergent electromagnetic fields, known as pseudo-fields. In this paper we investigate the possibility of inducing nonlinear phenomena, including the PHE, in strained Weyl semimetals. Using the chiral kinetic theory in the presence of pseudo-fields, we derive general expressions for the magnetoconductivity tensor by considering the simultaneous effects of the Berry curvature and orbital magnetic moment of carriers, which are indeed of the same order of magnitude. Since pseudo-fields couple to the Weyl fermions of opposite chirality with opposite signs, we study chirality-dependent phenomena, including the longitudinal magnetoconductivity and the planar Hall effect. We discuss our results in terms of the chiral anomaly with pseudo-fields. These may open new possibilities in chiralitytronics.
L. Medel Onofre, A. Martín-Ruiz
2023-10-04T23:28:21Z
http://arxiv.org/abs/2310.03209v1
# Planar Hall effect in Weyl semimetals induced by pseudoelectromagnetic fields

###### Abstract

The planar Hall effect (PHE), the appearance of an in-plane transverse voltage in the presence of coplanar electric and magnetic fields, has been ascribed to the chiral anomaly and Berry curvature effects in Weyl semimetals. In the presence of position- and time-dependent perturbations, such as strain, Weyl semimetals react as if they were subjected to emergent electromagnetic fields, known as pseudo-fields. In this paper we investigate the possibility of inducing nonlinear phenomena, including the PHE, in strained Weyl semimetals. Using the chiral kinetic theory in the presence of pseudo-fields, we derive general expressions for the magnetoconductivity tensor by considering the simultaneous effects of the Berry curvature and orbital magnetic moment of carriers, which are indeed of the same order of magnitude. Since pseudo-fields couple to the Weyl fermions of opposite chirality with opposite signs, we study chirality-dependent phenomena, including the longitudinal magnetoconductivity and the planar Hall effect. We discuss our results in terms of the chiral anomaly with pseudo-fields. These may open new possibilities in chiralitytronics.

## I Introduction

Weyl semimetals (WSMs) are topologically nontrivial conductors in which the non-degenerate valence and conduction bands touch at isolated points (the so-called Weyl nodes) in the Brillouin zone [1]. Near these touching points, the electron spectrum can be described by the Weyl equation, originally introduced in the particle physics context. The Weyl nodes occur in pairs of opposite chirality which act as a source and a sink of Berry curvature in reciprocal space, and the WSM phase is topologically protected by a nonzero Berry flux across the Fermi surface. A distinguishing transport property of WSMs with broken time-reversal symmetry is the anomalous Hall effect, which arises when the conduction (valence) band is completely empty (filled) [2]. Another intriguing property of WSMs is the chiral anomaly, i.e., the nonconservation of the chiral current in the presence of parallel electric and magnetic fields: \(\partial_{\mu}J^{\mu}_{5}=\frac{e^{2}}{2\pi^{2}\hbar^{2}}\mathbf{E}\cdot\mathbf{B}\). The appearance of a positive longitudinal magnetoconductance has been regarded as a manifestation of the chiral anomaly [3; 4]. However, the magnetoconductivity tensor receives additional contributions, not related to the chiral anomaly, which indeed reverse the overall sign of the magnetoconductance. The planar Hall effect (PHE), the appearance of an in-plane transverse voltage in the presence of coplanar electric and magnetic fields, has been ascribed to the chiral anomaly in WSMs as well as to Berry curvature effects [5; 6; 7; 8; 9; 10; 11]. However, as in the case of the longitudinal magnetoconductance, within the semiclassical Boltzmann transport theory the PHE is not described solely by the Berry curvature. In fact, as we show in this paper, the orbital magnetic moment (OMM) of charge carriers also contributes to the PHE, with much the same order of magnitude as the Berry curvature contribution, and therefore it cannot be disregarded. On a similar footing, the electrochemical transport in WSMs has been associated with the chiral anomaly, with the orbital magnetic moment playing a fundamental role, since the statistical transport is sensitive to the spatial gradients of the distribution function [12].
In conventional transport experiments, Weyl quasiparticles are coupled to the electromagnetic fields \(\mathbf{E}\) and \(\mathbf{B}\), which cannot be used as a probe of chirality (\(\chi=\pm 1\)) since they do not differentiate the nodes. However, an interesting phenomenon arising in Dirac matter is that elastic deformations of the lattice couple to the electronic Hamiltonian as pseudo-electromagnetic gauge potentials \(\mathbf{A}^{\text{el}}_{\chi}\) and \(\Phi^{\text{el}}_{\chi}\), which define pseudo-electromagnetic fields in the usual way, \(\mathbf{E}^{\text{el}}_{\chi}=-\nabla\Phi^{\text{el}}_{\chi}-\partial_{t}\mathbf{A}^{\text{el}}_{\chi}\) and \(\mathbf{B}^{\text{el}}_{\chi}=\nabla\times\mathbf{A}^{\text{el}}_{\chi}\), known as pseudo-fields or elastic fields [13]. These pseudo-fields can be expressed as the sum of two terms: \(\mathbf{E}^{\text{el}}_{\chi}=\mathbf{\mathcal{E}}+\chi\mathbf{E}_{5}\) and \(\mathbf{B}^{\text{el}}_{\chi}=\mathbf{\mathcal{B}}+\chi\mathbf{B}_{5}\). While \(\mathbf{\mathcal{E}}\) and \(\mathbf{\mathcal{B}}\) couple to the Weyl nodes in a similar fashion as electromagnetic fields do (with the same sign), \(\mathbf{E}_{5}\) and \(\mathbf{B}_{5}\) couple to the nodes in an axial fashion (i.e. they couple to opposite chiral fermions with opposite signs). The notation for \(\mathbf{E}_{5}\) and \(\mathbf{B}_{5}\) is inherited from the high-energy physics literature, where axial fields couple to the Dirac matrix \(\gamma_{5}\). In a strained material, the hopping parameters between atomic orbitals and the on-site energies are both changed, and the modifications are driven by the components of the strain tensor \(u_{ij}\). Therefore, the axial fields become determined by the position- and/or time-dependence of the strain tensor. For example, in the case of strained graphene, the induced pseudo-gauge fields couple to the Dirac fermions oppositely in the two valleys \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\) [14]. This gave rise to a new line of research called straintronics [15; 16]. More recently, the study of strain-induced gauge fields in Weyl semimetals has attracted great attention [17], since it could open a pathway for a prolific industry associated with straintronics and chiraltronics. In the case of WSMs, mechanical strain shifts the position of the Weyl nodes in momentum and/or energy, which can be effectively described in terms of pseudo-electromagnetic fields [17]. This affects the low-energy description of Weyl fermions, which now has to include the pseudo-gauge potentials [18; 19; 20]. Interestingly, the axial pseudo-fields \(\mathbf{E}_{5}\) and \(\mathbf{B}_{5}\) couple opposite chiral fermions with opposite signs, and hence the chirality can be tested by using conventional experimental probes such as electrical transport. It is worth mentioning that whereas electromagnetic potentials are gauge dependent and hence are not observables, the axial gauge potentials are quantum expectation values and thus produce gauge-invariant and observable effects. Indeed, strain-induced pseudo-magnetic fields were recently observed in strained crystals of Re-doped MoTe\({}_{2}\) [21]. In this paper we also aim to explore nonlinear transport phenomena induced by pseudo-fields, in particular the longitudinal magnetoconductivity and the planar Hall effect. We clearly differentiate the contributions arising from the Berry curvature from those arising from the OMM of charge carriers.
The coupling of Weyl fermions with pseudo-gauge fields not only produces new interesting transport phenomena in Weyl semimetals, but also affects the well-known chiral anomaly. The inclusion of pseudo-fields in the semiclassical derivation of the chiral anomaly produces an interesting generalization of the anomaly equation, known in high-energy physics as the covariant anomaly [3]. In the presence of genuine and pseudo-electromagnetic fields, the covariant anomaly equations read

\[\partial_{\mu}J_{5}^{\mu}=\frac{e^{2}}{2\pi^{2}\hbar^{2}}(\mathbf{E}\cdot\mathbf{B}+\mathbf{E}_{5}\cdot\mathbf{B}_{5}), \tag{1}\]
\[\partial_{\mu}J^{\mu}=\frac{e^{2}}{2\pi^{2}\hbar^{2}}(\mathbf{E}\cdot\mathbf{B}_{5}+\mathbf{E}_{5}\cdot\mathbf{B}), \tag{2}\]

where the axial current \(J_{5}^{\mu}\) measures the difference between the currents of opposite chiralities and \(J^{\mu}\) is the total current. The breaking of charge conservation indicated by Eq. (2) implies that additional currents must exist in the system to restore local charge conservation. In fact, in the case of WSMs, this problem is cured by including the conventional anomalous Hall current. The nonlinear transport phenomena considered in this work also provide a testing ground for chiral-anomaly-induced transport. In fact, we interpret our findings in terms of the chiral anomaly with pseudo-fields.
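A minimal numerical illustration of Eqs. (1)-(2) (a Python sketch, with the fields taken as plain 3-vectors in arbitrary units) makes the chirality bookkeeping explicit: genuine parallel \(\mathbf{E}\) and \(\mathbf{B}\) pump only axial charge, while crossing a genuine field with a pseudo-field sources the covariant total current instead:

```python
import numpy as np

def anomaly_sources(E, B, E5, B5):
    """Right-hand sides of the covariant anomaly equations (1)-(2),
    in units of e^2 / (2 pi^2 hbar^2)."""
    E, B, E5, B5 = (np.asarray(v, dtype=float) for v in (E, B, E5, B5))
    dJ5 = E @ B + E5 @ B5   # axial-charge pumping, Eq. (1)
    dJ = E @ B5 + E5 @ B    # apparent total-charge nonconservation, Eq. (2)
    return dJ5, dJ

# Parallel genuine E and B pump axial charge only:
print(anomaly_sources([1, 0, 0], [1, 0, 0], [0, 0, 0], [0, 0, 0]))  # (1.0, 0.0)
# A genuine E parallel to a pseudo-field B5 sources the total current, which in
# a real WSM is compensated by the anomalous Hall current:
print(anomaly_sources([1, 0, 0], [0, 0, 0], [0, 0, 0], [1, 0, 0]))  # (0.0, 1.0)
```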
This paper is organized as follows. In Sec. II we use chiral kinetic theory to investigate the planar Hall effect in Weyl semimetals in the presence of pseudo-fields. We obtain a general expression for the magnetoconductivity tensor, separating in a clear fashion the contributions arising from the Berry curvature and the orbital magnetic moment. In Sec. III we evaluate such contributions for a simple linearly dispersing model of a WSM. We discuss the total and axial conductivities and interpret them in terms of the chiral anomaly with pseudo-fields. Section IV is devoted to applications of our results to strained Weyl semimetals. We conclude in Sec. V. All technical calculations are relegated to the Appendices.

## II Kinetic theory approach

We will now investigate the PHE in Weyl semimetals by using the chiral kinetic theory, which is a topologically modified semiclassical Boltzmann formalism describing the behavior of Weyl fermions at finite chemical potential. Within this approach, the semiclassical equations of motion are extended to include an anomalous velocity term arising from the Berry curvature, which acts as a magnetic field in reciprocal space [22]. In the presence of electromagnetic fields (\(\mathbf{E}\) and \(\mathbf{B}\)) and axial pseudo-fields (\(\mathbf{E}_{5}\) and \(\mathbf{B}_{5}\)), the semiclassical equations of motion for an electron wavepacket in a metal can be cast in the standard form [23]

\[\dot{\mathbf{r}}_{\alpha}=\frac{1}{\hbar}\nabla_{\mathbf{k}}\mathcal{E}_{\alpha}(\mathbf{k})-\dot{\mathbf{k}}_{\alpha}\times\mathbf{\Omega}_{\alpha}(\mathbf{k}), \tag{3}\]
\[\hbar\dot{\mathbf{k}}_{\alpha}=-e\mathbf{E}_{\chi}-e\dot{\mathbf{r}}_{\alpha}\times\mathbf{B}_{\chi}, \tag{4}\]

where \(\mathbf{E}_{\chi}=\mathbf{E}+\chi\mathbf{E}_{5}\) and \(\mathbf{B}_{\chi}=\mathbf{B}+\chi\mathbf{B}_{5}\) are effective fields. Elastic gauge fields \(\mathbf{\mathcal{E}}\) and \(\mathbf{\mathcal{B}}\), which couple to the Weyl fermions in a similar manner as genuine electromagnetic fields, are accounted for by promoting \(\mathbf{E}\rightarrow\mathbf{E}+\mathbf{\mathcal{E}}\) and \(\mathbf{B}\rightarrow\mathbf{B}+\mathbf{\mathcal{B}}\). The presence of \(\chi\) in the definitions of the effective fields accounts for the fact that pseudo-fields couple opposite chiral fermions with opposite signs. Here, \(\mathbf{\Omega}_{\alpha}(\mathbf{k})=i\left<\nabla_{\mathbf{k}}u_{\alpha}(\mathbf{k})\right|\times\left|\nabla_{\mathbf{k}}u_{\alpha}(\mathbf{k})\right>\) is the Berry curvature and \(\mathcal{E}_{\alpha}(\mathbf{k})=\mathcal{E}_{\alpha}^{(0)}-\mathbf{m}_{\alpha}\cdot\mathbf{B}_{\chi}\) is the energy dispersion, which includes a Zeeman-like correction due to the orbital magnetic moment \(\mathbf{m}_{\alpha}(\mathbf{k})=-i\frac{e}{2\hbar}\left<\nabla_{\mathbf{k}}u_{\alpha}(\mathbf{k})\right|\times\left[\hat{H}(\mathbf{k})-\mathcal{E}_{\alpha}^{(0)}\right]\left|\nabla_{\mathbf{k}}u_{\alpha}(\mathbf{k})\right>\) [24; 25]. Here, the Bloch states \(\left|u_{\alpha}(\mathbf{k})\right>\) are defined by \(\hat{H}(\mathbf{k})\left|u_{\alpha}(\mathbf{k})\right>=\mathcal{E}_{\alpha}^{(0)}(\mathbf{k})\left|u_{\alpha}(\mathbf{k})\right>\) with \(B_{\chi}=0\). The subindex \(\alpha\) stands collectively for the band index \(s\) and the chirality index \(\chi\). As they stand, the equations of motion (3) and (4) are reminiscent of the standard semiclassical equations with the electromagnetic fields \(\mathbf{E}\) and \(\mathbf{B}\) simply replaced by the effective fields \(\mathbf{E}_{\chi}\) and \(\mathbf{B}_{\chi}\); however, they do not immediately follow from the semiclassical theory. The wave-packet dynamics of electrons in crystals subject to perturbations varying slowly in space and time yields generalized equations of motion that contain corrections accounted for by a generalized Berry curvature, defined in terms of two derivatives of the Bloch functions with respect to momentum (as the one defined above), position and time [24; 25]. Starting from the generalized equations of motion and assuming that the momentum of the wavepacket is close to a Weyl node, a change of coordinate frame produces the equations of motion (3) and (4) [23]. In the presence of impurity scattering the phenomenological transport equation can be written as [26]

\[\left(\frac{\partial}{\partial t}+\dot{\mathbf{r}}_{\alpha}\cdot\nabla_{\mathbf{r}}+\dot{\mathbf{k}}_{\alpha}\cdot\nabla_{\mathbf{k}}\right)f_{\alpha}(\mathbf{r},\mathbf{k},t)=I_{\text{coll}}\left[f_{\alpha}(\mathbf{r},\mathbf{k},t)\right], \tag{5}\]

where \(f_{\alpha}(\mathbf{r},\mathbf{k},t)\) is the electron distribution function. The collision integral \(I_{\text{coll}}\) accounts for the scattering mechanisms of the conduction electrons (such as impurity scattering effects, electron correlations, or scattering effects due to thermal vibrations of lattice ions). In the relaxation time approximation, the collision integral takes the simple form \(I_{\text{coll}}[f_{\alpha}]=-\frac{f_{\alpha}-f_{\alpha}^{\text{eq}}}{\tau(\mathbf{k})}\), where \(\tau(\mathbf{k})\) is the scattering time of quasiparticles and \(f_{\alpha}^{\text{eq}}\) is the equilibrium Fermi-Dirac distribution evaluated at the modified dispersion \(\mathcal{E}_{\alpha}(\mathbf{k})\). Although the momentum dependence of \(\tau\) covers a wide range of scattering processes, taking it as a constant parameter is still a good approximation that unveils interesting physics, and in the following we do so. Here we are interested in stationary and homogeneous solutions to the Boltzmann equation (5).
Using the equations of motion (3) and (4) we have

\[-\tau\,\dot{\mathbf{k}}_{\alpha}\cdot\nabla_{\mathbf{k}}f_{\alpha}(\mathbf{k})=f_{\alpha}(\mathbf{k})-f_{\alpha}^{\text{eq}}(\mathbf{k}). \tag{6}\]

Next, we expand the distribution function in powers of the electromagnetic fields. Keeping only the linear-order dependence on the electric field, the nonequilibrium distribution function becomes

\[f_{\alpha}=f_{\alpha}^{\text{eq}}+e\tau D_{\alpha}\left[\mathbf{v}_{\alpha}\cdot\mathbf{E}_{\chi}+\frac{e}{\hbar}(\mathbf{E}_{\chi}\cdot\mathbf{B}_{\chi})(\mathbf{v}_{\alpha}\cdot\mathbf{\Omega}_{\alpha})\right]\frac{\partial f_{\alpha}^{\text{eq}}}{\partial\mathcal{E}_{\alpha}}, \tag{7}\]

where \(\mathbf{v}_{\alpha}(\mathbf{k})=\frac{1}{\hbar}\nabla_{\mathbf{k}}\mathcal{E}_{\alpha}(\mathbf{k})\) is the band velocity of Bloch electrons and \(D_{\alpha}(\mathbf{k})=[1+\frac{e}{\hbar}(\mathbf{B}_{\chi}\cdot\mathbf{\Omega}_{\alpha})]^{-1}\) is the modification factor of the phase space volume element [27; 28]. In these expressions we have omitted all momentum dependencies for simplicity. In the absence of any thermal and chemical potential gradients, the charge density current for a single \(\alpha\) (i.e. band, chirality, etc.) can be written as \(\mathbf{J}_{\alpha}=-e\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}D_{\alpha}^{-1}\dot{\mathbf{r}}_{\alpha}(\mathbf{k})f_{\alpha}(\mathbf{k})\), accounting for the modified density of states through the phase space factor \(D_{\alpha}\). Substituting the nonequilibrium distribution function (7) into this equation and keeping the linear term in the electric field, but neglecting the chiral magnetic and anomalous Hall effect contributions, we arrive at the expression for the magnetoconductivity tensor for a single \(\alpha\):

\[\sigma_{ij}^{(\alpha)}(\mathbf{B}_{\chi})=-e^{2}\tau\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}D_{\alpha}\left[v_{\alpha i}+\frac{e}{\hbar}(\mathbf{v}_{\alpha}\cdot\mathbf{\Omega}_{\alpha})B_{\chi i}\right]\left[v_{\alpha j}+\frac{e}{\hbar}(\mathbf{v}_{\alpha}\cdot\mathbf{\Omega}_{\alpha})B_{\chi j}\right]\frac{\partial f_{\alpha}^{\text{eq}}(\mathcal{E}_{\alpha})}{\partial\mathcal{E}_{\alpha}}, \tag{8}\]

which includes the effects of the Berry curvature and the orbital magnetic moment. Our main goal in this paper is to evince that the orbital magnetic moment contributes in much the same fashion as the Berry curvature to the PHE. To distinguish these contributions we write the conductivity tensor (8), in the weak-field limit, as the sum of three terms:

\[\sigma_{ij}^{(\alpha)}(\mathbf{B}_{\chi})=\sigma_{ij}^{(0,\alpha)}+\sigma_{ij}^{(\Omega,\alpha)}(\mathbf{B}_{\chi})+\sigma_{ij}^{(m,\alpha)}(\mathbf{B}_{\chi}), \tag{9}\]

where the first term,

\[\sigma_{ij}^{(0,\alpha)}=-e^{2}\tau\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}v_{\alpha i}^{(0)}v_{\alpha j}^{(0)}\,\frac{\partial f_{\alpha}^{\text{eq}}(\mathcal{E}_{\alpha}^{(0)})}{\partial\mathcal{E}_{\alpha}^{(0)}}, \tag{10}\]

is the conductivity in the absence of the magnetic field (i.e. for \(\mathbf{B}=\mathbf{B}_{5}=\mathbf{0}\)) and the second term,

\[\sigma_{ij}^{(\Omega,\alpha)}(\mathbf{B}_{\chi})=-\frac{e^{4}\tau}{\hbar^{2}}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}Q_{\alpha i}\,Q_{\alpha j}\,\frac{\partial f_{\alpha}^{\text{eq}}(\mathcal{E}_{\alpha}^{(0)})}{\partial\mathcal{E}_{\alpha}^{(0)}}, \tag{11}\]

with \(\mathbf{Q}_{\alpha}=\mathbf{\Omega}_{\alpha}\times(\mathbf{v}_{\alpha}^{(0)}\times\mathbf{B}_{\chi})\), is the contribution arising solely from the Berry curvature. In Eqs.
(10) and (11), \(\mathcal{E}_{\alpha}^{(0)}(\mathbf{k})\) is the band energy without the Zeeman-like correction and \(\mathbf{v}_{\alpha}^{(0)}(\mathbf{k})=\frac{1}{\hbar}\nabla_{\mathbf{k}}\mathcal{E}_{\alpha}^{(0)}(\mathbf{k})\) is the corresponding band velocity. In deriving Eq. (11) we have expanded the phase space volume factor \(D_{\alpha}(\mathbf{k})\) up to second order in the magnetic field. The contribution from the orbital magnetic moment, \(\sigma_{ij}^{(m,\alpha)}(\mathbf{B}_{\chi})\), appears in various ways. In the presence of a magnetic field, on the one hand, the energy dispersion is corrected by \(\mathcal{E}_{\alpha}^{(m)}(\mathbf{k})=-\mathbf{m}_{\alpha}(\mathbf{k})\cdot\mathbf{B}_{\chi}\) and consequently the band velocity is corrected as \(\mathbf{v}_{\alpha}^{(m)}(\mathbf{k})=\frac{1}{\hbar}\nabla_{\mathbf{k}}\mathcal{E}_{\alpha}^{(m)}(\mathbf{k})\). On the other hand, for a weak magnetic field, the equilibrium distribution function \(f_{\alpha}^{\text{eq}}(\mathcal{E}_{\alpha})\) and the phase space volume factor \(D_{\alpha}(\mathbf{k})\) can be Taylor expanded up to the second power of the magnetic field (see Appendix A). Taking into account these terms in Eq. (8) and subtracting the contributions \(\sigma_{ij}^{(0,\alpha)}\) and \(\sigma_{ij}^{(\Omega,\alpha)}(\mathbf{B}_{\chi})\), one gets

\[\sigma_{ij}^{(m,\alpha)}(\mathbf{B}_{\chi})=\frac{2e^{3}\tau}{\hbar}\!\int\!\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\left[Q_{\alpha i}v_{\alpha j}^{(m)}+\frac{1}{e}\mathcal{E}_{\alpha}^{(m)}\nabla_{\mathbf{k}}\cdot\mathbf{T}_{\alpha ij}+\frac{1}{2}\mathcal{E}_{\alpha}^{(m)}\mathbf{B}_{\chi}\cdot\mathbf{V}_{\alpha ij}\frac{\partial}{\partial\mathcal{E}_{\alpha}^{(0)}}\right]\!\frac{\partial f_{\alpha}^{\text{eq}}(\mathcal{E}_{\alpha}^{(0)})}{\partial\mathcal{E}_{\alpha}^{(0)}}, \tag{12}\]

where we have defined the tensors

\[\mathbf{T}_{\alpha ij}=\frac{e}{\hbar}\mathbf{\Omega}_{\alpha}B_{\chi i}v_{\alpha j}^{(0)}+\frac{1}{2}\hat{\mathbf{e}}_{i}v_{\alpha j}^{(m)},\qquad\mathbf{V}_{\alpha ij}=\mathbf{\Omega}_{\alpha}v_{\alpha i}^{(0)}v_{\alpha j}^{(0)}-\frac{1}{2e}\mathbf{m}_{\alpha}\partial_{k_{i}}v_{\alpha j}^{(0)}. \tag{13}\]

A detailed derivation of the formula (12) is presented in Appendix A.

## III Planar Hall effect with pseudo-fields in Weyl semimetals

In this section we investigate the PHE in Weyl semimetals by using the semiclassical approach developed in the previous section. To this end we consider a simple model of a WSM consisting of two Weyl nodes of opposite chiralities separated in momentum and energy, ignoring the nonuniversal corrections due to band bending far away from the nodes. The low-energy Hamiltonian for each Weyl node can be expressed as

\[\hat{H}_{\chi}(\mathbf{k})=\chi\hbar v_{F}\mathbf{\sigma}\cdot\mathbf{k}+b_{0\chi}, \tag{14}\]

where \(v_{F}\) is the Fermi velocity, \(\chi=\pm 1\) specifies the chirality, \(\mathbf{\sigma}\) is the vector of the Pauli matrices, \(\mathbf{k}\) is the momentum measured relative to the Weyl point and \(b_{0\chi}\) denotes the energy shift of the node with chirality \(\chi\). The corresponding energy dispersion is \(\mathcal{E}_{\alpha}^{(0)}(\mathbf{k})=b_{0\chi}+s\hbar v_{F}k\), where \(s=\pm 1\) is the band index. As a result, the band velocity becomes \(\mathbf{v}_{s}^{(0)}(\mathbf{k})=sv_{F}\hat{\mathbf{k}}\), where \(\hat{\mathbf{k}}\) is the unit vector along \(\mathbf{k}\).
Using the Bloch states, it is straightforward to obtain the Berry curvature and the orbital magnetic moment:

\[\mathbf{\Omega}_{\alpha}(\mathbf{k})=-s\chi\frac{\hat{\mathbf{k}}}{2k^{2}},\qquad\mathbf{m}_{\chi}(\mathbf{k})=-\chi ev_{F}\frac{\hat{\mathbf{k}}}{2k}, \tag{15}\]

respectively. In the presence of an effective magnetic field \(\mathbf{B}_{\chi}\), the energy dispersion is corrected by \(\mathcal{E}_{\chi}^{(m)}(\mathbf{k})=\frac{\chi ev_{F}}{2k}\hat{\mathbf{k}}\cdot\mathbf{B}_{\chi}\), which implies a correction to the band velocity,

\[\mathbf{v}_{\chi}^{(m)}(\mathbf{k})=\frac{\chi ev_{F}}{2\hbar}\frac{\mathbf{B}_{\chi}-2\hat{\mathbf{k}}(\hat{\mathbf{k}}\cdot\mathbf{B}_{\chi})}{k^{2}}. \tag{16}\]

For a given node of chirality \(\chi\), the band index is determined by the sign of the difference between the chemical potential and the energy shift of the node, i.e. \(s=\text{sgn}(\mu-b_{0\chi})\). This is so since \(\mu>b_{0\chi}\) (\(\mu<b_{0\chi}\)) implies \(s=1\) (\(s=-1\)), as depicted in Fig. 1. So, we define \(\mu_{\chi}=s\mu_{0\chi}\), with \(\mu_{0\chi}=|\mu-b_{0\chi}|>0\). Here we work at finite temperature \(T\). With the above information we are able to apply the semiclassical formulas (10)-(12) to compute the different contributions to the planar Hall effect in Weyl semimetals. Details of the technical computations are relegated to Appendix B, and here we present only the final results. Interestingly, all of them become independent of the band index \(s\), so we make the replacement \(\alpha\to\chi\) in the following expressions. The \(\mathbf{B}_{\chi}\)-independent conductivity (10) takes the simple form

\[\sigma_{ij}^{(0,\chi)}(T)=\frac{e^{2}\mu_{0\chi}^{2}\tau}{6\pi^{2}\hbar^{3}v_{F}}\delta_{ij}\,f_{2}(\Lambda_{\chi}), \tag{17}\]

with \(\Lambda_{\chi}\equiv k_{B}T/\mu_{0\chi}\) and where we have introduced the function

\[f_{n}(y)\equiv\frac{1}{y}\int_{0}^{\infty}dx\,x^{n}\,\frac{e^{(x-1)/y}}{\left[1+e^{(x-1)/y}\right]^{2}}. \tag{18}\]

Interestingly, the conductivity (17) vanishes at the neutrality point (\(\mu_{0\chi}=0\)). Furthermore, at zero temperature one can verify that \(\lim_{T\to 0}f_{n}(\Lambda_{\chi})=1\). The Berry curvature contribution (11) becomes

\[\sigma_{ij}^{(\Omega,\chi)}(\mathbf{B}_{\chi},T)=\frac{e^{4}v_{F}^{3}\tau}{120\pi^{2}\hbar\mu_{0\chi}^{2}}\left(\delta_{ij}B_{\chi}^{2}+7B_{\chi i}B_{\chi j}\right)f_{-2}(\Lambda_{\chi}), \tag{19}\]

while the contribution from the orbital magnetic moment is

\[\sigma_{ij}^{(m,\chi)}(\mathbf{B}_{\chi},T)=\frac{-e^{4}v_{F}^{3}\tau}{120\pi^{2}\hbar\mu_{0\chi}^{2}}\left(3\delta_{ij}B_{\chi}^{2}+B_{\chi i}B_{\chi j}\right)f_{-2}(\Lambda_{\chi}), \tag{20}\]

where \(B_{\chi}^{2}=\mathbf{B}_{\chi}\cdot\mathbf{B}_{\chi}\). The conductivities of opposite chiralities are related by symmetry properties dictated by the definition of the effective magnetic field \(\mathbf{B}_{\chi}\), namely, \(\sigma_{ij}^{(\xi,\chi)}(\mathbf{B},\mathbf{B}_{5},T)=\sigma_{ij}^{(\xi,-\chi)}(-\mathbf{B},\mathbf{B}_{5},T)\) and \(\sigma_{ij}^{(\xi,\chi)}(\mathbf{B},\mathbf{B}_{5},T)=\sigma_{ij}^{(\xi,-\chi)}(\mathbf{B},-\mathbf{B}_{5},T)\), where \(\xi=\Omega,m\). If either \(\mathbf{B}\) or \(\mathbf{B}_{5}\) vanishes, the contributions from both chiralities are the same.
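The monopole structure encoded in the Berry curvature of Eq. (15) can be checked numerically. The sketch below (a Python illustration using the gauge-invariant Fukui-Hatsugai link method on a sphere surrounding the node; the overall sign depends on the orientation convention) integrates the Berry flux of the lower band and recovers quantized monopole charges of opposite sign for the two chiralities:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_state(k, chi):
    """Eigenstate of H = chi * sigma.k (units hbar*vF = 1) with the lower energy."""
    H = chi * (k[0] * SX + k[1] * SY + k[2] * SZ)
    return np.linalg.eigh(H)[1][:, 0]

def berry_flux_over_2pi(chi, N=50):
    """Berry flux / (2 pi) of the lower band through a sphere around the node,
    computed with the gauge-invariant Fukui-Hatsugai link method."""
    th, ph = np.linspace(1e-3, np.pi - 1e-3, N), np.linspace(0, 2 * np.pi, N)
    U = [[lower_state([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)], chi)
          for p in ph] for t in th]
    flux = 0.0
    for i in range(N - 1):
        for j in range(N - 1):
            link = (np.vdot(U[i][j], U[i + 1][j]) * np.vdot(U[i + 1][j], U[i + 1][j + 1])
                    * np.vdot(U[i + 1][j + 1], U[i][j + 1]) * np.vdot(U[i][j + 1], U[i][j]))
            flux += np.angle(link)
    return flux / (2 * np.pi)

# Approximately +/-1 with opposite signs for the two chiralities:
# each Weyl node acts as a Berry monopole.
print(berry_flux_over_2pi(+1), berry_flux_over_2pi(-1))
```

Note also that Eq. (15) implies \(\mathbf{m}_{\chi}=s\,ev_{F}k\,\mathbf{\Omega}_{\alpha}\), which makes explicit why the OMM and Berry curvature contributions enter the conductivity at the same order of magnitude.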
Besides, we observe that the transverse conductivities do not satisfy the usual antisymmetry relation (\(\sigma_{xy}=-\sigma_{yx}\)) displayed by Hall effect systems, since in this case the transverse conductivity does not stem from the Lorentz force, but from the chiral anomaly. The Berry curvature induced conductivity (19) has been discussed recently in a variety of papers and it has been regarded as a direct consequence of the chiral anomaly [5; 6; 7; 8; 9].

Figure 1: Low-energy spectrum of a Weyl semimetal with two bulk Weyl nodes of different chiralities separated in momentum space by \(2\mathbf{b}\) and shifted in energy by \(2b_{0}\).

Figure 2: Prototypical experimental setup employed to measure the longitudinal and planar Hall voltages generated by in-plane electromagnetic fields and pseudo-fields.

Our formula (19) generalizes the previously reported results, where only some components were computed. Besides, here we report a general formula (20) for the orbital magnetic moment induced conductivity, which, as far as we know, has not been reported before. In order to elucidate the importance of the orbital magnetic moment contribution, we next explore the angular dependence of both the longitudinal magnetoconductivity and the planar Hall conductivity for the different chiralities. To this end, we introduce the normalized conductivity tensor for the Weyl node with chirality \(\chi\):

\[\Sigma_{ij}^{(\chi)}(\mathbf{B}_{\chi},T)=\frac{\sigma_{ij}^{(\chi)}(\mathbf{B}_{\chi},T)-\sigma_{0}^{(\chi)}(T)\delta_{ij}}{\sigma_{0}^{(\chi)}(T)}, \tag{21}\]

where \(\sigma_{ij}^{(\chi)}(\mathbf{B}_{\chi},T)\) is the total conductivity given by Eq. (9) and \(\sigma_{0}^{(\chi)}(T)\equiv\frac{e^{2}\mu_{0\chi}^{2}\tau}{6\pi^{2}\hbar^{3}v_{F}}f_{2}(\Lambda_{\chi})\) is the longitudinal conductivity. Note that \(\Sigma_{ij}^{(\chi)}\) isolates the joint contributions from the Berry curvature and the orbital magnetic moment of charge carriers. The tensor (21) inherits the symmetries of \(\sigma_{ij}^{(\Omega,\chi)}\) and \(\sigma_{ij}^{(m,\chi)}\). Now we assume an electric field pointing along the \(x\)-axis, i.e. \(\mathbf{E}=E\hat{\mathbf{e}}_{x}\), and restrict the magnetic field to the \(xy\)-plane, i.e. \(\mathbf{B}=B(\cos\theta\hat{\mathbf{e}}_{x}+\sin\theta\hat{\mathbf{e}}_{y})\). We also assume that the applied strain induces a pseudomagnetic field lying in the \(xy\)-plane, i.e. \(\mathbf{B}_{5}=B_{5}(\cos\theta_{5}\hat{\mathbf{e}}_{x}+\sin\theta_{5}\hat{\mathbf{e}}_{y})\), and a vanishing axial electric field, \(\mathbf{E}_{5}=\mathbf{0}\), as depicted in Fig. 2. Later we will discuss the impact of a nonzero \(\mathbf{E}_{5}\) upon the longitudinal and planar Hall currents. To discuss the longitudinal and planar Hall responses in a realistic WSM, it is convenient to consider precise values of the parameters appearing in our expressions and to verify first the validity of the chiral kinetic theory. We take \(B=B_{5}=0.5\)T and use typical parameters for a Weyl semimetal such as TaAs: \(v_{F}=3\times 10^{5}\)m/s, \(b_{0\chi}=0\), \(\mu=20\)meV [29; 30], and \(\tau\sim 10^{-13}\)s [31; 32]. Therefore, the Boltzmann formalism is valid because \(\omega_{\text{C}}\tau\sim 0.08\ll 1\), where \(\omega_{\text{C}}=eB/m^{*}\) is the cyclotron frequency and we have used \(m^{*}\sim 0.11m_{e}\) [29; 30] and \(B\sim 0.5\)T.
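The temperature dependence enters only through the functions \(f_{n}\) of Eq. (18), which are easy to evaluate numerically. A small sketch (SciPy-based, for illustration, using the TaAs-like parameters just quoted) confirms the low-temperature behavior invoked below:

```python
import numpy as np
from scipy.integrate import quad

def f_n(n, y):
    """f_n(y) of Eq. (18). The thermal factor e^{(x-1)/y}/[1+e^{(x-1)/y}]^2
    equals 1/(4 cosh^2((x-1)/(2y))), which confines the integrand to a narrow
    window around x = 1 at low temperature (y << 1)."""
    g = lambda x: x**n / (4.0 * y * np.cosh((x - 1.0) / (2.0 * y)) ** 2)
    return quad(g, 1.0 - 50.0 * y, 1.0 + 50.0 * y)[0]

y = 0.0086  # Lambda_chi for T = 2 K and mu = 20 meV
print(f_n(2, y), f_n(-2, y))
# Both are close to 1; f_{-2} ~ 1.00073, in line with the Sommerfeld
# expansion f_n(y) ~ 1 + n(n-1) pi^2 y^2 / 6.
```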
One can further verify that near the nodes the corrections induced by the orbital magnetic moment to the energy satisfy \(\mathcal{E}_{\chi}^{(m)}\ll\mathcal{E}_{\alpha}^{(0)}\) and \((e/\hbar)B\Omega\ll 1\), thus validating the expansions performed in Section II. Here we take \(T=2\)K, which corresponds to the temperature at which the angular dependence of the planar Hall conductivity was observed in topological insulators [33]. This value, together with \(\mu=20\)meV for TaAs, implies that \(\Lambda_{\chi}=0.0086\) and hence \(f_{-2}(\Lambda_{\chi})=1.00073\approx 1\). Therefore, in the following we safely take \(f_{-2}(\Lambda_{\chi})=1\) for definiteness, which is appropriate at low temperatures. In Fig. 3 we plot the normalized longitudinal (upper panel) and planar Hall (lower panel) conductivities, given by Eq. (21), as a function of the angle \(\theta\) for fixed values \(\theta_{5}=0\) (left panel) and \(\theta_{5}=\pi/2\) (right panel). In these plots, the dashed orange (continuous red) line shows the Berry curvature contribution (19) normalized by the longitudinal conductivity \(\sigma_{0}^{(\chi)}\), i.e. \(\sigma_{ij}^{(\Omega,\chi)}/\sigma_{0}^{(\chi)}\), for the chirality \(\chi=+1\) (\(\chi=-1\)). The purple (blue) line with square (circle) markers shows the normalized conductivity \(\Sigma_{ij}^{(\chi)}\) as defined in Eq. (21) for the chirality \(\chi=+1\) (\(\chi=-1\)). These plots evince the importance of the orbital magnetic moment for the longitudinal and planar Hall conductivities. In fact, as we can see in Fig. 3, the longitudinal conductivity is more sensitive to the OMM than the planar Hall conductivity. However, this does not mean that such a contribution vanishes in the planar Hall conductivity. Clearly, for \(\theta_{5}=0\), the OMM contribution is negligible near \(\theta=0,\pi\), but becomes important near some specific angles, namely, at \(\theta^{*}=\pi/3\), \(5\pi/3\) for \(\chi=+1\) and at \(\theta^{*}=2\pi/3,4\pi/3\) for \(\chi=-1\). For \(\theta_{5}=\pi/2\), however, the OMM contribution approaches zero near \(\theta=\pi/2,\ 3\pi/2\), and becomes appreciable around \(\theta^{*}=7\pi/6\), \(11\pi/6\) for \(\chi=+1\) and around \(\theta^{*}=\pi/6\), \(5\pi/6\) for \(\chi=-1\). The position of these critical angles \(\theta^{*}\) varies according to the ratio \(B_{5}/B\) as well as the direction of \(\mathbf{B}_{5}\). In fact, they are determined by the equation

\[\cos(2\theta^{*})+\chi(B_{5}/B)\cos(\theta^{*}+\theta_{5})=0. \tag{22}\]

Following the definition of the current for fermions with chirality \(\chi\) in the presence of effective electromagnetic fields, \(J_{i}^{(\chi)}=\sigma_{ij}^{(\chi)}E_{\chi j}\), with the conductivity given by Eq. (8), we now define the total and axial currents by \(\mathbf{J}=\sum_{\chi=\pm 1}\mathbf{J}^{(\chi)}\) and \(\mathbf{J}_{5}=\sum_{\chi=\pm 1}\chi\mathbf{J}^{(\chi)}\), respectively.

Figure 3: Angular dependence of the longitudinal (upper panel) and planar Hall (lower panel) conductivities for \(\theta_{5}=0\) (left panel) and \(\theta_{5}=\pi/2\) (right panel). The dashed orange (continuous red) line shows the Berry curvature contribution (normalized by the longitudinal conductivity) for the chirality \(\chi=+1\) (\(\chi=-1\)), while the purple (blue) line with square (circle) markers shows the full conductivities including the orbital magnetic moment contribution.
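Equation (22) is transcendental, but its roots are easily located numerically. The following sketch (illustrative, bracketing sign changes on a uniform grid and refining with brentq) reproduces the critical angles quoted above for \(B_{5}=B\) and \(\theta_{5}=0\):

```python
import numpy as np
from scipy.optimize import brentq

def critical_angles(chi, B5_over_B, theta5, n=2000):
    """Roots theta* of Eq. (22) in (0, 2 pi), in degrees."""
    g = lambda t: np.cos(2 * t) + chi * B5_over_B * np.cos(t + theta5)
    ts = np.linspace(0.0, 2 * np.pi, n)
    return np.degrees([brentq(g, a, b) for a, b in zip(ts[:-1], ts[1:])
                       if g(a) * g(b) < 0])

# B5 = B and theta5 = 0, as in the left panels of Fig. 3:
print(np.round(critical_angles(+1, 1.0, 0.0)))
# -> 60, 180, 300 deg: includes pi/3 and 5pi/3 (at theta = pi the OMM
#    contribution is negligible anyway)
print(np.round(critical_angles(-1, 1.0, 0.0)))
# -> 120, 240 deg: the angles 2pi/3 and 4pi/3 quoted in the text
```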
These suggest the definition of the total and the chiral conductivities as follows:

\[\sigma_{ij}=\sum_{\chi=\pm 1}\sigma_{ij}^{(\chi)},\qquad\sigma_{5ij}=\sum_{\chi=\pm 1}\chi\sigma_{ij}^{(\chi)}, \tag{23}\]

respectively, such that the total and axial currents take the simple form

\[J_{i}=\sigma_{ij}(\mathbf{B},\mathbf{B}_{5})\,E_{j}+\sigma_{5ij}(\mathbf{B},\mathbf{B}_{5})\,E_{5j}, \tag{24}\]
\[J_{5i}=\sigma_{5ij}(\mathbf{B},\mathbf{B}_{5})\,E_{j}+\sigma_{ij}(\mathbf{B},\mathbf{B}_{5})\,E_{5j}, \tag{25}\]

respectively. In the problem at hand, using the conductivity tensor (8), together with the three contributions (17), (19) and (20), one can obtain general expressions for the total and chiral conductivity tensors. Assuming \(b_{0\chi}=0\) for simplicity, and taking \(T=0\)K in view of the above discussion, the total conductivity becomes

\[\sigma_{ij}(\mathbf{B},\mathbf{B}_{5})=\sigma_{0}\delta_{ij}+\frac{\sigma_{0}}{10B_{0}^{2}}\Big{[}-\delta_{ij}(B^{2}+B_{5}^{2})+3(B_{i}B_{j}+B_{5i}B_{5j})\Big{]}, \tag{26}\]

where \(\sigma_{0}\equiv\frac{e^{2}\mu^{2}\tau}{3\pi^{2}\hbar^{3}v_{F}}\) is the field-independent longitudinal conductivity and \(B_{0}\equiv(e\hbar)^{-1}(\mu/v_{F})^{2}\) is a characteristic magnetic field. Similarly, for the chiral conductivity we obtain

\[\sigma_{5ij}(\mathbf{B},\mathbf{B}_{5})=\frac{\sigma_{0}}{10B_{0}^{2}}\left[-2\delta_{ij}\mathbf{B}\cdot\mathbf{B}_{5}+3(B_{i}B_{5j}+B_{5i}B_{j})\right]. \tag{27}\]

Clearly, the chiral conductivity vanishes if either \(\mathbf{B}\) or \(\mathbf{B}_{5}\) is zero. Therefore, in such a case, to probe the chiral current a pseudo-electric field is required. This is so because the axial gauge fields couple opposite chiral fermions with opposite signs. To illustrate the angular dependence of these conductivities we introduce the normalized total conductivity

\[\Sigma_{ij}(\mathbf{B},\mathbf{B}_{5})=\frac{\sigma_{ij}(\mathbf{B},\mathbf{B}_{5})-\sigma_{0}\delta_{ij}}{\sigma_{0}}, \tag{28}\]

and similarly one can define the normalized axial conductivity as \(\Sigma_{5ij}(\mathbf{B},\mathbf{B}_{5})=\sigma_{5ij}(\mathbf{B},\mathbf{B}_{5})/\sigma_{0}\). Note that expressions (27) and (28) are valid for arbitrary orientations of the fields \(\mathbf{B}\) and \(\mathbf{B}_{5}\). Now we take the same planar configuration as before, depicted in Fig. 2. To elucidate the interplay of the genuine and axial magnetic fields, we fix the direction of the effective electric field to be along \(+\hat{\mathbf{e}}_{x}\), i.e. \(\mathbf{E}_{\chi}=E_{\chi}\hat{\mathbf{e}}_{x}\), and rotate the magnetic field in the \(xy\)-plane such that it makes an angle \(\theta\) with respect to the direction of the electric field, i.e. \(\mathbf{B}=B(\cos\theta\hat{\mathbf{e}}_{x}+\sin\theta\hat{\mathbf{e}}_{y})\). We also take the pseudo-magnetic field in the same plane, i.e. \(\mathbf{B}_{5}=B_{5}(\cos\theta_{5}\hat{\mathbf{e}}_{x}+\sin\theta_{5}\hat{\mathbf{e}}_{y})\). In this case, the normalized total and axial longitudinal conductivities become

\[\Sigma_{xx}(\mathbf{B},\mathbf{B}_{5})=\frac{B^{2}(3\cos^{2}\theta-1)+B_{5}^{2}(3\cos^{2}\theta_{5}-1)}{10B_{0}^{2}}, \tag{29}\]
\[\Sigma_{5xx}(\mathbf{B},\mathbf{B}_{5})=\frac{BB_{5}}{10B_{0}^{2}}\left[3\cos(\theta+\theta_{5})+\cos(\theta-\theta_{5})\right], \tag{30}\]

respectively.
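Before specializing further, note that the closed-form tensors (26)-(27), from which all the angular expressions here follow by inserting the coplanar fields, translate directly into code. This sketch (normalized to \(\sigma_{0}\), with the TaAs-like value \(B_{0}=6.75\) T used in the text assumed) builds both and illustrates the symmetric, non-Lorentzian character of the off-diagonal response:

```python
import numpy as np

def sigma_tensors(B, B5, B0=6.75, sigma0=1.0):
    """Total and chiral conductivity tensors of Eqs. (26)-(27), in units of
    sigma0, for arbitrary 3-vectors B and B5 (in teslas)."""
    B, B5, I = np.asarray(B, float), np.asarray(B5, float), np.eye(3)
    total = sigma0 * (I + (-(B @ B + B5 @ B5) * I
                           + 3 * (np.outer(B, B) + np.outer(B5, B5))) / (10 * B0**2))
    chiral = sigma0 * (-2 * (B @ B5) * I
                       + 3 * (np.outer(B, B5) + np.outer(B5, B))) / (10 * B0**2)
    return total, chiral

# Coplanar setup of Fig. 2 with B along x and B5 along y:
tot, ax = sigma_tensors([0.5, 0, 0], [0, 0.5, 0])
print(ax[0, 1] == ax[1, 0])  # True: the off-diagonal response is symmetric,
                             # unlike a Lorentz-force Hall conductivity
```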
The corresponding expressions for the normalized total and axial planar Hall conductivities are

\[\Sigma_{xy}(\mathbf{B},\mathbf{B}_{5})=\frac{3}{20B_{0}^{2}}\left[B^{2}\sin(2\theta)+B_{5}^{2}\sin(2\theta_{5})\right], \tag{31}\]
\[\Sigma_{5xy}(\mathbf{B},\mathbf{B}_{5})=\frac{3BB_{5}}{10B_{0}^{2}}\sin(\theta+\theta_{5}), \tag{32}\]

respectively. We now plot these expressions. To this end, we take \(B=B_{5}=0.5\) T and use typical parameters for TaAs (\(v_{F}=3\times 10^{5}\) m/s and \(\mu=20\) meV), which yield \(B_{0}=6.75\) T. In Fig. 4 we show the longitudinal (upper panel) and planar Hall (lower panel) conductivities as a function of the angle \(\theta\) for fixed values \(\theta_{5}=0\) (at left) and \(\theta_{5}=\pi/2\) (at right). Each panel displays both the normalized total and axial conductivities. The dashed orange (continuous red) line corresponds solely to the Berry curvature contribution to the total (axial) conductivities. The purple (blue) line with square (circle) markers shows the full contribution (i.e. including both Berry curvature and OMM) to the total (axial) conductivities. The importance of the OMM is quite evident in the case of the total and axial longitudinal conductivities: the unmarked lines significantly differ from those with markers. However, the total and axial planar Hall conductivities exhibit slight changes, which can be appreciated only near the critical angles.

Figure 4: Longitudinal (upper panel) and planar Hall (lower panel) conductivities for \(\theta_{5}=0\) (left panel) and \(\theta_{5}=\pi/2\) (right panel). The dashed orange (continuous red) line shows the Berry curvature contribution to the total (axial) conductivities, while the purple (blue) line with square (circle) markers shows the full contribution (including the orbital magnetic moment contribution) to the total (axial) conductivities.

For example, in the case of the total planar Hall conductivity \(\Sigma_{yx}\), the differences are important near \(\theta^{*}=n\pi/4\), where \(n=1,3,5,7\), in both situations \(\theta_{5}=0\) and \(\theta_{5}=\pi/2\). For an arbitrary value of \(\theta_{5}\) the position of these critical angles does not change, but the value of the normalized conductivity is shifted by \(\frac{3}{20}(B_{5}/B_{0})^{2}\sin(2\theta_{5})\). In the case of the axial conductivity, the main differences are near \(\theta^{*}=\pi/2,\,3\pi/2\) for \(\theta_{5}=0\) and near \(\theta^{*}=0,\,\pi\) for \(\theta_{5}=\pi/2\). The behavior of the axial conductivity is quite the same for any value of \(\theta_{5}\); the only difference is that the critical angles are shifted by \(\theta_{5}\), i.e., they are given by \(\theta^{*}=\theta_{5}+n\pi/2\). Apart from the angular dependence of the longitudinal and planar Hall conductivities shown in Fig. 4, the dependence on the magnitude of the magnetic field would also be relevant for an experimental detection of our results. In fact, as we read off from Eqs. (29)-(32), the total longitudinal \(\Sigma_{xx}\) and transverse \(\Sigma_{xy}\) conductivities show a \(B^{2}\) dependence, while the axial conductivities \(\Sigma_{5xx}\) and \(\Sigma_{5xy}\) exhibit a linear dependence on \(B\). In Figs. 5(a) and 5(c) we plot the total and axial planar Hall conductivities, \(\Sigma_{xy}(\mathbf{B},\mathbf{B}_{5})\) and \(\Sigma_{5xy}(\mathbf{B},\mathbf{B}_{5})\), respectively, as a function of the (normalized) magnetic field \(B/B_{0}\), for \(\theta_{5}=0\), \(B_{5}=0.5\) T and different values of the angle \(\theta\).
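For completeness, the four angular responses (29)-(32) in the coplanar geometry are collected in the short sketch below (again with the assumed values \(B=B_{5}=0.5\) T and \(B_{0}=6.75\) T):

```python
import numpy as np

B0 = 6.75  # characteristic field for TaAs-like parameters [T]

def planar_conductivities(B, B5, th, th5):
    """Normalized longitudinal and planar Hall responses, Eqs. (29)-(32)."""
    Sxx = (B**2 * (3 * np.cos(th) ** 2 - 1)
           + B5**2 * (3 * np.cos(th5) ** 2 - 1)) / (10 * B0**2)
    S5xx = B * B5 / (10 * B0**2) * (3 * np.cos(th + th5) + np.cos(th - th5))
    Sxy = 3.0 / (20 * B0**2) * (B**2 * np.sin(2 * th) + B5**2 * np.sin(2 * th5))
    S5xy = 3.0 * B * B5 / (10 * B0**2) * np.sin(th + th5)
    return Sxx, S5xx, Sxy, S5xy

th = np.linspace(0, 2 * np.pi, 361)
Sxx, S5xx, Sxy, S5xy = planar_conductivities(0.5, 0.5, th, 0.0)
# Sigma_xy ~ sin(2 theta): maximal near 45 deg, one of the critical angles n*pi/4
print(Sxy.max(), np.degrees(th[np.argmax(Sxy)]))
```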
In Fig. 5(a) the curves correspond to angles ranging from \(\theta=\pi/6\) to \(\theta=\pi/2\), and they close progressively in that order, i.e. the focal length decreases as the angle \(\theta\) increases. The vertex of the parabola is located at \(\Sigma_{xy}(\mathbf{0},\mathbf{B}_{5})=\frac{3}{20B_{0}^{2}}B_{5}^{2}\sin(2\theta_{5})\), which for the chosen parameter values is zero. Different values of the angle \(\theta_{5}\) only shift the parabola upwards for \(\theta_{5}\in[0,\pi/2]\cup[\pi,3\pi/2]\) or downwards for \(\theta_{5}\in[\pi/2,\pi]\cup[3\pi/2,2\pi]\). In Fig. 5(c) the straight lines cross the origin and the slope is given by \(\frac{3B_{5}}{10B_{0}^{2}}\sin(\theta+\theta_{5})\). The curves correspond to angles ranging from \(\theta=\pi/6\) to \(\theta=\pi/2\), for which the slope increases. Figures 5(b) and 5(d) show the total and axial planar Hall conductivities, \(\Sigma_{xy}\) and \(\Sigma_{5xy}\), as a function of the angles \(\theta\) and \(\theta_{5}\), for \(B=B_{5}=0.5\) T. Red (blue) shaded regions display the maximum (minimum) of the corresponding functions.

Everything discussed thus far corresponds, in a separate fashion, to the total and axial conductivities appearing in the general formulas (24)-(25). In the presence of a genuine electric field and vanishing axial electric field, the plots presented in Figs. 4 and 5 correctly describe the behaviour of the longitudinal and planar Hall conductivities. In the opposite case, for a vanishing genuine electric field and nonzero axial electric field, the roles of the total and axial conductivities become interchanged, as Eqs. (24) and (25) suggest. However, in the presence of both genuine and axial electric fields, the effects of the total \(\Sigma_{ij}\) and axial \(\Sigma_{5ij}\) conductivities become intertwined. To elucidate the interplay of the electric fields, we now consider the configuration shown in Fig. 2 in the presence of an electric field \(\mathbf{E}=E_{x}\hat{\mathbf{e}}_{x}\) and an in-plane axial electric field \(\mathbf{E}_{5}=E_{5}(\cos\varphi\hat{\mathbf{e}}_{x}+\sin\varphi\hat{\mathbf{e}}_{y})\). In this case, the total and axial currents become

\[J_{i}/E_{x}=\sigma_{ix}+\epsilon_{5}(\cos\varphi\,\sigma_{5ix}+\sin\varphi\,\sigma_{5iy}), \tag{33}\]
\[J_{5i}/E_{x}=\sigma_{5ix}+\epsilon_{5}(\cos\varphi\,\sigma_{ix}+\sin\varphi\,\sigma_{iy}), \tag{34}\]

where \(\epsilon_{5}\equiv E_{5}/E_{x}\) is the ratio between the axial and genuine electric fields, and \(\sigma_{ij}\) and \(\sigma_{5ij}\) are the conductivity tensors given by Eqs. (26) and (27), respectively. To plot these functions we take appropriate values for the genuine and axial fields. Genuine electromagnetic fields are extensively controlled in experiments; however, strain-induced fields are more subtle. For example, for a WSM in the presence of torsion, the maximum attainable axial magnetic field is \(B_{5}\approx 0.5\)T [20]. For simplicity we also take \(B=0.5\)T. An estimate for the axial electric field can be obtained from the upper limit \(E_{5}\leq v_{F}B_{5}\), which is required to circumvent the collapse of Landau levels in WSMs [34]. In TaAs we find \(E_{5}\leq 1.5\times 10^{5}\)V/m, and here we take \(E_{5}=2\times 10^{4}\)V/m, which comfortably fulfils the condition. In the next section we elaborate on the origin of strain-induced pseudo-fields. If we take a moderate electric field of strength \(E_{x}=2\times 10^{4}\)V/m we obtain \(\epsilon_{5}=0.5\).

In the following, to better understand the interplay of the axial electric field, we fix the axial magnetic field pointing parallel to the electric field, i.e. \(\mathbf{B}_{5}=B_{5}\hat{\mathbf{e}}_{x}\), and explore the angular dependence of the full currents (Berry curvature + OMM contributions) as a function of the angles \(\theta\) (defined by the magnetic field) and \(\varphi\). In the upper panel of Fig. 6 we plot the longitudinal total \(J_{x}\) and axial \(J_{5x}\) currents (in units of \(E_{x}\)) as a function of the angle \(\theta\) for \(\theta_{5}=0\) and two different values of \(\varphi\), namely, \(\varphi=0\) (left panel) and \(\varphi=\pi/2\) (right panel).

Figure 5: (a) and (c) show the total and axial planar Hall conductivities, respectively, as a function of the magnetic field for \(\theta_{5}=0\), \(B_{5}=0.5\) T and different values of the angle \(\theta\). (b) and (d) show 3D plots of the planar and axial Hall conductivities, respectively, as a function of the angles \(\theta\) and \(\theta_{5}\) for \(B=B_{5}=0.5\) T.
In the following, to better understand the interplay of the axial electric field, we fix the axial magnetic field parallel to the electric field, i.e. \(\mathbf{B}_{5}=B_{5}\hat{\mathbf{e}}_{x}\) (so that \(\theta_{5}=0\)), and explore the angular dependence of the full currents (Berry curvature + OMM contributions) as a function of the angles \(\theta\) (defined by the magnetic field) and \(\varphi\). In the upper panel of Fig. 6 we plot the longitudinal total \(J_{x}\) and axial \(J_{5x}\) currents (in units of \(E_{x}\)) as a function of the angle \(\theta\) for \(\theta_{5}=0\) and two different values of \(\varphi\), namely, \(\varphi=0\) (left panel) and \(\varphi=\pi/2\) (right panel). The lower panel displays the total \(J_{y}\) and axial \(J_{5y}\) planar Hall currents (in units of \(E_{x}\)). The dashed orange and continuous red lines correspond to the currents in the absence of the axial electric field, while the purple and blue lines with square and circle markers correspond to the currents in the presence of \(E_{5}\). We observe that for an axial electric field in the parallel configuration (i.e. with \(\varphi=0\)), the longitudinal total and axial currents behave in a manner similar to the case without \(E_{5}\), changing sign approximately around the same angles. However, in the perpendicular configuration (i.e. with \(\varphi=\pi/2\)), the angular dependence of the current is slightly different, with the positions of the extrema shifted (indeed, the shifts can be directly calculated from Eqs. (33)-(34)). We highlight the fact that the total longitudinal current (purple line with square markers) reverses its sign near \(\theta=3\pi/2\), and this is a direct consequence of the nonzero axial electric field. Therefore, this is a direct signature of the effects of \(E_{5}\) upon the longitudinal conductivities. The case of the planar Hall currents is similar. In the parallel configuration (\(\varphi=0\)), the planar Hall currents both in the presence and in the absence of the axial electric field display the same angular dependence, flipping sign around the same angles, with the extrema slightly shifted and with the amplitude controlled by the field strength \(E_{5}\). In the perpendicular configuration, two aspects are worth highlighting. On the one hand, in the case \(E_{5}=0\) there are three angles at which the current vanishes, i.e. \(\theta=0,\pi,2\pi\), since \(J_{y}\sim 3\sigma_{0}E_{x}\sin\theta\cos\theta\); however, in the presence of an axial electric field the planar Hall current is of the form \(J_{y}\sim\sigma_{0}E_{x}(3\sin\theta-2\epsilon_{5})\cos\theta\), thus implying that there is only one critical angle at which the current vanishes, given by \(\theta^{*}=\arcsin(2\epsilon_{5}/3)\), while the current takes the constant value \(J_{y}\sim-2\sigma_{0}\epsilon_{5}E_{x}\) at \(\theta=0,2\pi\). On the other hand, as the lower-right panel of Fig. 6 shows, in the presence of the axial electric field the periodicity of the current in the angle \(\theta\) is broken, which becomes clear from the previous equations for the currents with and without \(E_{5}\). These are distinguishing features of the presence of the axial electric field.

Figure 5: (a) and (c) show the total and axial planar Hall conductivities, respectively, as a function of the magnetic field for \(\theta_{5}=0\), \(B_{5}=0.5\) T and different values of the angle \(\theta\). (b) and (d) show 3D plots of the planar and axial Hall conductivities, respectively, as a function of the angles \(\theta\) and \(\theta_{5}\) for \(B=B_{5}=0.5\) T.
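A short numerical sketch of the schematic forms quoted above for the perpendicular configuration makes the critical-angle structure explicit (\(\sigma_{0}\) and \(E_{x}\) are set to unity, since only the angular dependence matters here).

```python
import numpy as np

eps5 = 0.5                                 # E5/Ex, as estimated above

def Jy(th, eps5=0.0, sigma0=1.0, Ex=1.0):
    # Schematic planar Hall current in the perpendicular configuration:
    # J_y ~ sigma0*Ex*(3*sin(th) - 2*eps5)*cos(th);
    # eps5 = 0 recovers J_y ~ 3*sigma0*Ex*sin(th)*cos(th)
    return sigma0 * Ex * (3*np.sin(th) - 2*eps5) * np.cos(th)

th_star = np.arcsin(2*eps5/3)              # critical angle theta* = arcsin(2*eps5/3)
print(np.degrees(th_star))                 # ~19.5 degrees for eps5 = 0.5
print(Jy(th_star, eps5))                   # -> 0: the current vanishes at theta*
print(Jy(0.0, eps5), -2*eps5)              # -> -2*sigma0*eps5*Ex at theta = 0, 2*pi
```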
All these conclusions are also supported by the 3D plots presented in Fig. 7, which display the total and axial planar Hall currents (in units of \(E_{x}\)) as a function of the angles \(\theta\) and \(\varphi\). Red (blue) shaded regions display the maximum (minimum) of the corresponding functions. As pointed out, anomaly-related transport phenomena have attracted great attention in condensed matter physics. For example, in graphene, fermionic excitations near the Dirac cones are described by a (2+1)-dimensional quantum field theory exhibiting the parity anomaly. This gives rise to the valley Hall effect, observed experimentally [35; 36] and extensively explored due to possible applications in valleytronics [37]. On the same footing, Weyl semimetals provide an electronic route for realizing the chiral anomaly in condensed matter. The chiral anomaly induces a number of novel phenomena in WSMs, including the chiral magnetic effect [38] that manifests itself through the negative longitudinal magnetoresistance [3; 4], and it was confirmed in transport experiments in the TaAs family [39; 30]. The planar Hall effect has also been ascribed to the chiral anomaly; however, as shown in this paper, there are other contributions that affect the PHE at a similar order of magnitude to the Berry curvature contribution, namely, the orbital magnetic moment of the charge carriers.

Figure 6: Angular dependence of the longitudinal (upper panel) and planar Hall (lower panel) currents for \(\mathbf{E}=E_{x}\hat{\mathbf{e}}_{x}\), \(\theta_{5}=0\), \(\varphi=0\) (left panel) and \(\varphi=\pi/2\) (right panel). The dashed orange (continuous red) line shows the currents for \(E_{5}=0\), while the purple (blue) line with square (circle) markers shows the full currents including the axial electric field.

Figure 7: 3D plots of the total and axial planar Hall current (in units of \(E_{x}\)) as a function of the angles \(\theta\) (defined by the magnetic field) and \(\varphi\) (defined by the axial electric field).

WSMs in the presence of pseudofields give unique opportunities to probe the different contributions to the covariant anomaly equations (1)-(2). Along this line, our results provide a testing ground for the chiral and charge anomalies by means of the planar Hall effect. Experimentally, pseudofields are induced by applying strain to the crystal. For example, a pseudomagnetic field \(\mathbf{B}_{5}\) can be created by applying a static torsion or bending the sample. A nonzero pseudoelectric field \(\mathbf{E}_{5}\) is generated, for instance, by dynamically stretching or compressing the sample. Therefore, by properly combining genuine and pseudo-electromagnetic fields, we are able to test the four terms on the right-hand side of Eqs. (1) and (2). On the one hand, the two terms in the chiral anomaly equation (1), \(\mathbf{E}\cdot\mathbf{B}\) and \(\mathbf{E}_{5}\cdot\mathbf{B}_{5}\), can be tested by using nonorthogonal electromagnetic fields or pseudo-electromagnetic fields, respectively. The latter can be achieved by a simultaneous application of torsion and time-dependent unidirectional strain. Physically, in both cases, this can be understood as pumping of charge from one node to the other. On the other hand, the two terms in the charge anomaly equation (2), \(\mathbf{E}\cdot\mathbf{B}_{5}\) and \(\mathbf{E}_{5}\cdot\mathbf{B}\), can be tested by the combined application of electromagnetic and pseudo-electromagnetic fields.
These charge-nonconserving terms should be interpreted with caution, since in a real solid charge is strictly conserved. Indeed, they can be understood as pumping of charge between the bulk and the boundary of the system [18; 19; 20]. This implies that additional currents must exist in the system to restore charge conservation. This problem is solved by adding the Bardeen polynomials [40; 41; 42], which in the present case correspond to the topological Chern-Simons charge and current densities, given by \(\rho_{\text{CS}}=-\frac{e^{2}}{2\pi^{2}\hbar^{2}}\mathbf{b}\cdot\mathbf{B}\) and \(\mathbf{J}_{\text{CS}}=\frac{e^{2}}{2\pi^{2}\hbar^{2}}(-b_{0}\mathbf{B}+\mathbf{b}\times\mathbf{E})\), where \(2\mathbf{b}\) and \(2b_{0}\) are the separations between the Weyl nodes in momentum and energy, respectively, as depicted in Fig. 1. To understand this result physically, we must recall that pseudofields are induced by inhomogeneous strain, which results in a position- and time-dependent separation between the nodes in momentum and energy, i.e. \(\mathbf{b}(\mathbf{r},t)\) and \(b_{0}(\mathbf{r},t)\), such that the emerging pseudo-fields are determined by \(\mathbf{B}_{5}=\nabla\times\mathbf{b}\) and \(\mathbf{E}_{5}=-\nabla b_{0}-\partial_{t}\mathbf{b}\). It is now straightforward to check that the Chern-Simons 4-current \(J^{\mu}_{\text{CS}}=(c\rho_{\text{CS}},\mathbf{J}_{\text{CS}})\) satisfies \(\partial_{\mu}J^{\mu}_{\text{CS}}=\frac{e^{2}}{2\pi^{2}\hbar^{2}}(\mathbf{E}\cdot\mathbf{B}_{5}+\mathbf{E}_{5}\cdot\mathbf{B})\). This fact suggests the definition of consistent total and axial currents, \(J^{\mu}_{\text{cons}}=J^{\mu}+J^{\mu}_{\text{CS}}\) and \(J^{\mu}_{\text{5cons}}=J^{\mu}_{5}+J^{\mu}_{\text{5CS}}\) respectively, which restore charge conservation. In this manner, the consistent versions of the anomaly equations (1) and (2) are \[\partial_{\mu}J^{\mu}_{\text{5cons}}=\frac{e^{2}}{2\pi^{2}\hbar^{2}}(\mathbf{E}\cdot\mathbf{B}+\frac{1}{3}\mathbf{E}_{5}\cdot\mathbf{B}_{5}), \tag{35}\] \[\partial_{\mu}J^{\mu}_{\text{cons}}=0, \tag{36}\] respectively. Therefore, the charge nonconservation stated in the anomaly equations (1) and (2) need not worry us. In the next Section we will come back to this point in particular examples.
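The check that the Chern-Simons 4-current reproduces the charge-anomaly source is mechanical, and a symbolic sketch makes it concrete. Below, the prefactor \(e^{2}/2\pi^{2}\hbar^{2}\) and the factor of \(c\) are set to one, the genuine fields \(\mathbf{E},\mathbf{B}\) are taken uniform (so the homogeneous Maxwell equations hold trivially), and the node-separation fields \(\mathbf{b}(\mathbf{r},t)\), \(b_{0}(\mathbf{r},t)\) are arbitrary illustrative choices.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
coords = (x, y, z)

# Uniform genuine fields, so curl E = 0 and div B = 0 hold trivially
E = sp.Matrix(sp.symbols('E_x E_y E_z'))
B = sp.Matrix(sp.symbols('B_x B_y B_z'))

# Illustrative (arbitrary) node-separation fields b(r, t) and b0(r, t)
b = sp.Matrix([x*y + t*z, t*sp.sin(z), x**2 - y*t])
b0 = x*y*z + t*x

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in coords])
div = lambda F: sum(sp.diff(F[i], v) for i, v in enumerate(coords))
curl = lambda F: sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                            sp.diff(F[0], z) - sp.diff(F[2], x),
                            sp.diff(F[1], x) - sp.diff(F[0], y)])

# Pseudo-fields generated by the inhomogeneous node separation
B5 = curl(b)
E5 = -grad(b0) - sp.diff(b, t)

# Chern-Simons densities, with e^2/(2 pi^2 hbar^2) and c set to 1
rho_CS = -b.dot(B)
J_CS = -b0*B + b.cross(E)

anomaly = sp.diff(rho_CS, t) + div(J_CS) - (E.dot(B5) + E5.dot(B))
print(sp.simplify(anomaly))   # -> 0, i.e. d_mu J^mu_CS = E.B5 + E5.B
```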
## IV Strain-induced nonlinear transport phenomena in WSMs

So far we have discussed in general the effects of pseudo-fields upon nonlinear transport phenomena in Weyl semimetals and remarked in passing that they emerge due to position- and time-dependent deformations. In this Section we elaborate on the physical origin of the pseudo-fields and apply our results to particular strain configurations. In a strained material, the modifications in the hopping parameters (between atomic orbitals) and in the on-site energies are determined by the components of the strain tensor \(u_{ij}=\frac{1}{2}\left(\partial u_{i}/\partial x^{j}+\partial u_{j}/\partial x^{i}\right)\), where \(u_{i}\) is the local displacement vector of the strained lattice. In Weyl semimetals the deformation of the crystal lattice shifts the Weyl nodes in momentum and energy, as shown in Fig. 8, and such node shifts can be described in terms of pseudo-gauge potentials \(\mathbf{\tilde{A}}^{\text{el}}_{\chi}(\mathbf{r},t)\) and \(\tilde{\Phi}^{\text{el}}_{\chi}(\mathbf{r},t)\) [17]. These couple to the electronic degrees of freedom as the electromagnetic vector and scalar potentials do. Therefore, the effects of strain on a single Weyl node of chirality \(\chi\), as depicted in Fig. 8, are captured by the Hamiltonian \[\hat{H}_{\chi}(\mathbf{k})=\chi v_{F}\mathbf{\sigma}\cdot\left(\hbar\mathbf{k}+\mathbf{\tilde{A}}^{\text{el}}_{\chi}\right)+\tilde{\Phi}^{\text{el}}_{\chi}+b_{0\chi}, \tag{37}\] where the pseudo-gauge potentials are expressed in terms of the strain tensor as \(\mathbf{\tilde{A}}^{\text{el}}_{\chi i}=h_{ijk}u_{jk}\) and \(\tilde{\Phi}^{\text{el}}_{\chi}=g_{ij}u_{ij}\). Here, \(h_{ijk}\) is the Weyl node shift per unit strain and \(g_{ij}\) is the energy shift per unit strain, which have to be computed with a microscopic model (e.g. tight-binding, _ab initio_) or determined by experiments [43]. These tensors contain all material details, such as anisotropy and elastic parameters. The corresponding pseudo-fields \(\mathbf{\tilde{E}}^{\text{el}}_{\chi}=-\nabla\tilde{\Phi}^{\text{el}}_{\chi}-\partial_{t}\mathbf{\tilde{A}}^{\text{el}}_{\chi}\) and \(\mathbf{\tilde{B}}^{\text{el}}_{\chi}=\nabla\times\mathbf{\tilde{A}}^{\text{el}}_{\chi}\) may couple fermions of opposite chirality with opposite signs, in much the same fashion as in strained graphene, where pseudo-fields couple to the Dirac fermions with opposite signs in the two valleys. To account for the two possibilities, namely that pseudo-fields couple axially or not to the Weyl nodes, we introduce the notation \(\tilde{\mathbf{E}}_{\chi}^{\rm el}=\mathbf{\mathcal{E}}+\chi\mathbf{E}_{5}\) and \(\tilde{\mathbf{B}}_{\chi}^{\rm el}=\mathbf{\mathcal{B}}+\chi\mathbf{B}_{5}\), where \(\mathbf{\mathcal{E}}\) and \(\mathbf{\mathcal{B}}\) are strain-induced pseudo-fields which couple to the Weyl nodes in the same manner as electromagnetic fields do, while \(\mathbf{E}_{5}\) and \(\mathbf{B}_{5}\) are the axial parts of the pseudo-fields, which couple fermions of opposite chirality with opposite signs. All these fields are fully determined by the strain tensor. Note that our results of the previous Section hold in this case; however, the elastic (non-axial) electric and magnetic fields should be treated as genuine electromagnetic fields. In order to probe strain-induced nonlinear phenomena in Weyl semimetals, we have to consider suitable non-uniform strain tensors. In the following we shall discuss two interesting cases with experimental prospects. To be precise, we study the effects of position-dependent strain tensors: (i) bending the WSM into a circular arc and (ii) twisting a wire-shaped WSM. These configurations produce uniform pseudo-fields, such that the pseudo-magnetic field couples axially to the Weyl fermions, while the pseudo-electric field is non-axial and therefore couples to the Weyl fermions in the usual manner [43]. With the help of an additional magnetic field, these can be used to test the covariant anomaly equations.

### Bending of WSM thin films

Film and wire realizations of Weyl semimetals are excellent probes to test strain-induced phenomena. In fact, the best experimentally accessible geometry of applied strain is obtained by bending thin films of WSMs, which is a 3D generalization of the configuration suggested for graphene sheets in Ref. [44]. As we shall see, bending of WSM thin films is an excellent test bed for the covariant anomaly equations. Let us consider a rectangular lattice model of a WSM, as depicted in the upper panel of Fig. 9, with two nodes separated by a distance \(b\) in the \(k_{x}\) direction. Bending the system into a circular arc in the \(x\)-\(y\) plane, as sketched in the lower panel of Fig. 9, is described by the deformation [45]:
\[u_{x}=u_{0}(2xy+Cx), \tag{38}\] \[u_{y}=u_{0}\left[-x^{2}-Dy(y+C)\right], \tag{39}\] \[u_{z}=0, \tag{40}\] where \(u_{0}\), \(C\) and \(D\) are constants that depend on the material. This yields the axial vector potential \(A_{5x}=u_{xx}b=u_{0}(2y+C)b\), i.e. it couples to the Weyl nodes with opposite signs. The corresponding axial pseudo-magnetic field is then \(B_{5z}=-2u_{0}b\). On the other hand, this strain configuration also produces a scalar (deformation) potential \(\tilde{\Phi}_{\chi}=u_{xx}+u_{yy}=u_{0}(1-D)(2y+C)\), which generates the elastic (i.e. non-axial) pseudo-electric field \(\mathcal{E}_{y}=-2u_{0}(1-D)g\), perpendicular to the pseudo-magnetic field, as shown in Fig. 9. The orthogonality between the strain-induced pseudo-fields makes this configuration appropriate to investigate the planar Hall effect in the presence of an external magnetic field in the \(y\)-\(z\) plane, i.e. \(\mathbf{B}=B(\cos\theta\hat{\mathbf{e}}_{y}+\sin\theta\hat{\mathbf{e}}_{z})\), coplanar with the pseudo-fields. Using equations (24) and (25) one directly determines the total and axial currents. Normalizing the currents by \(\sigma_{0}\mathcal{E}_{y}\), one finds the following expressions for the total \(\mathbf{j}\) and axial \(\mathbf{j}_{5}\) currents: \[\mathbf{j}(\theta)=\frac{1}{10}\left(\begin{array}{c}0\\ \tilde{B}^{2}(3\cos^{2}\theta-1)-\tilde{B}_{5}^{2}\\ 3\tilde{B}^{2}\sin\theta\cos\theta\end{array}\right), \tag{41}\] \[\mathbf{j}_{5}(\theta)=\frac{\tilde{B}_{5}\tilde{B}}{10}\left(\begin{array}{c}0\\ -2\sin\theta\\ 3\cos\theta\end{array}\right), \tag{42}\] where we have subtracted the field-independent current from the total current \(\mathbf{j}\). Here, \(\tilde{B}=B/B_{0}\) and \(\tilde{B}_{5}=B_{5}/B_{0}\). Interestingly, these are planar Hall currents, i.e. they lie in the \(y\)-\(z\) plane, where the pseudo-fields and the magnetic field lie. The angular dependences of the components of \(\mathbf{j}\) and \(\mathbf{j}_{5}\) are plotted in Fig. 10. We have taken \(B=1\) T, \(B_{5}=0.5\) T, \(B_{0}=6.75\) T (characteristic of TaAs) and normalized the currents by their maximum values \(j_{\text{max}}=j_{y}(\pi)\) and \(j_{5\text{max}}=j_{5z}(\pi/2)\). The left panel shows the total current components: the blue dashed line corresponds to \(j_{y}(\theta)/j_{\text{max}}\) and the red continuous line is \(j_{z}(\theta)/j_{\text{max}}\). The right panel shows the axial current components: the blue dashed line corresponds to \(j_{5y}(\theta)/j_{5\text{max}}\) and the red continuous line is \(j_{5z}(\theta)/j_{5\text{max}}\). The effects of pseudo-fields can be enhanced by increasing the strain. In the numerical calculations we have used \(B_{5}=0.5\) T; however, strain-induced pseudo-magnetic fields on the order of 3 T were recently observed in strained crystals of Re-doped MoTe\({}_{2}\) [21]. The analysis of this subsection can be extended to situations involving other deformation profiles. In particular, it would be interesting to analyse the effect of an axial pseudo-electric field, which, unlike the deformation potential considered above, couples with opposite signs to the nodes of opposite chirality.

Figure 9: Sketch of the bending geometry that generates pseudo-fields. The applied strain bends the original configuration (upper panel) into a circular arc in the \(x\)-\(y\) plane (lower panel).
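A minimal sympy sketch, assuming an isotropic energy-shift coupling \(g_{ij}=g\,\delta_{ij}\) (our reading of the scalar \(g\) appearing in \(\mathcal{E}_{y}\)), reproduces the pseudo-fields quoted for the bending deformation:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u0, C, D, b, g = sp.symbols('u_0 C D b g')
coords = (x, y, z)

# Bending deformation, Eqs. (38)-(40)
u = sp.Matrix([u0*(2*x*y + C*x), u0*(-x**2 - D*y*(y + C)), 0])

# Symmetric strain tensor u_ij = (d_i u_j + d_j u_i)/2
strain = sp.Matrix(3, 3, lambda i, j:
                   sp.Rational(1, 2)*(sp.diff(u[j], coords[i]) + sp.diff(u[i], coords[j])))

# Axial vector potential for nodes separated along k_x: A5 = (b*u_xx, 0, 0)
A5 = sp.Matrix([b*strain[0, 0], 0, 0])
B5z = sp.diff(A5[1], x) - sp.diff(A5[0], y)   # z-component of curl(A5)
print(sp.simplify(B5z))                       # -> -2*b*u_0, as quoted

# Deformation potential with isotropic coupling g_ij = g*delta_ij
Phi = g*strain.trace()
Efield = -sp.Matrix([sp.diff(Phi, v) for v in coords])
print(sp.simplify(Efield.T))                  # -> (0, -2*g*u_0*(1 - D), 0)
```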
### Rotational strain

In three dimensions, the antisymmetric part of the strain tensor, \(\omega_{ij}=\frac{1}{2}\left(\partial u_{j}/\partial x^{i}-\partial u_{i}/\partial x^{j}\right)\), also gives rise to pseudo-fields in Weyl semimetals. Physically, it is related to infinitesimal rotations by a vector \(\mathbf{\Omega}\) with components given by \(\Omega_{k}=\frac{1}{2}\epsilon_{ijk}\omega_{ij}\). One can further see that this vector is related to the displacement vector \(\mathbf{u}\) by \[\mathbf{\Omega}(\mathbf{r})=\frac{1}{2}\,\nabla\times\mathbf{u}(\mathbf{r}). \tag{43}\] A full discussion of the effects of rotational strain in Dirac matter is presented in Ref. [46]. There, the authors derive the low-energy effective Hamiltonians for the electron-strain interactions around Weyl nodes associated with the antisymmetric part of the strain tensor. A representative physical example, which we consider here, is a wire-shaped Weyl semimetal of length \(L\) with its axis along the \(z\)-direction and the nodes separated by a distance \(b\) along the axis. The displacement vector \(\mathbf{u}\) that results from twisting the sample by an angle \(\theta\) is given by \[\mathbf{u}(\mathbf{r})=\theta\frac{z}{L}(\mathbf{r}\times\hat{\mathbf{e}}_{z}), \tag{44}\] where \(\mathbf{r}\) is the position relative to the origin located on the axis of the wire. In Fig. 11 we show the effect of the deformation (44). The strain tensor associated with the deformation (44) is traceless, and therefore the deformation potential generated from \(u_{ij}\) is zero. The corresponding axial vector potential is given by \(\mathbf{A}_{5}(\mathbf{r})=\theta\frac{b}{2L}(y\hat{\mathbf{e}}_{x}-x\hat{\mathbf{e}}_{y})\), producing the uniform pseudo-magnetic field \(\mathbf{B}_{5}=-\theta\frac{b}{L}\hat{\mathbf{e}}_{z}\). According to Eq. (43), the rotation vector becomes \(\mathbf{\Omega}(\mathbf{r})=\frac{\theta}{2L}(\mathbf{r}-3z\hat{\mathbf{e}}_{z})\). This produces a deformation potential \(\tilde{\Phi}(\mathbf{r})=\mathbf{b}\cdot\mathbf{\Omega}(\mathbf{r})=-\frac{\theta}{L}bz\), which is non-axial. The corresponding pseudo-electric field is \(\mathbf{\mathcal{E}}=-\nabla\tilde{\Phi}(\mathbf{r})=\frac{\theta}{L}b\hat{\mathbf{e}}_{z}\), which is antiparallel to the pseudo-magnetic field \(\mathbf{B}_{5}\), as depicted in Fig. 11. It is interesting that these pseudo-fields induce a longitudinal current along the direction of the pseudo-fields: \[J_{z}=\sigma_{0}\left(1+\frac{B_{5}^{2}}{5B_{0}^{2}}\right)\mathcal{E}. \tag{45}\] Note that the direction of the torsion-induced current can be controlled by the direction of the rotation: it is positive (negative) when the twisting is clockwise (anticlockwise). This configuration, supplemented by an external magnetic field, could also be used to test the covariant anomaly equations.
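The same kind of symbolic check works for the twisted wire: starting from the displacement (44) and the axial vector potential quoted above, one recovers the rotation vector, the (uniform) pseudo-magnetic field and the non-axial pseudo-electric field.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
theta, L, b = sp.symbols('theta L b')
coords = (x, y, z)

curl = lambda F: sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                            sp.diff(F[0], z) - sp.diff(F[2], x),
                            sp.diff(F[1], x) - sp.diff(F[0], y)])

# Twist displacement, Eq. (44): u = theta*(z/L)*(r x e_z)
u = theta*z/L * sp.Matrix([y, -x, 0])

# Rotation vector, Eq. (43): Omega = (1/2) curl u
Omega = curl(u)/2
print(sp.simplify(Omega.T))      # -> (theta/(2L))*(x, y, -2z)

# Pseudo-magnetic field from the quoted axial vector potential
A5 = theta*b/(2*L) * sp.Matrix([y, -x, 0])
print(curl(A5).T)                # -> (0, 0, -theta*b/L)

# Non-axial deformation potential Phi = b.Omega and its pseudo-electric field
Phi = sp.Matrix([0, 0, b]).dot(Omega)
E = -sp.Matrix([sp.diff(Phi, v) for v in coords])
print(sp.simplify(E.T))          # -> (0, 0, theta*b/L)
```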
## V Discussion

In recent years, the study of strain-induced transport has attracted great attention due to its potential applications in the development of straintronic devices. The most salient example is perhaps graphene, in which strain induces pseudo-fields that couple to the Dirac fermions with opposite signs in the two valleys. Other well-known examples are carbon nanotubes [47], bilayer graphene [48; 49] and transition metal dichalcogenides [50; 51]. More recently, the study of strain-induced pseudo-fields in Dirac materials has garnered a lot of attention since it could open a pathway to the development of strain-induced chiralitytronics. In Weyl semimetals, for example, these chirality-selective pseudo-fields have led to interesting strain-induced phenomena, such as quantum oscillations [45], the chiral magnetic effect and the negative resistivity [52; 53], the chiral torsional effect [54], and the acoustogalvanic effect [55], among others. In this paper we used the chiral kinetic theory approach, at finite temperature, to investigate nonlinear transport phenomena in Weyl semimetals induced by electromagnetic fields and strain-induced pseudo-fields. Our main focus was the study of the planar Hall effect, i.e. the appearance of an in-plane transverse voltage in the presence of coplanar electromagnetic fields. Using the relaxation-time approximation for the collision integral, we first derived general expressions for the nonlinear conductivity tensor and clearly differentiated the contributions arising from the Berry curvature and from the orbital magnetic moment of the charge carriers, since the latter has frequently been disregarded in the analysis of the PHE, and we showed that it is as important as the Berry curvature. Next, using the simplest linearly dispersing model for a Weyl semimetal with two nodes of opposite chirality separated in momentum and energy, we obtained analytical expressions for the contributions to the magnetoconductivity tensor at finite temperature. A rough estimate of the finite-temperature effect reveals that, at low temperatures (\(T\sim 2\) K, which is the temperature used to detect the PHE in topological insulators), the amplitude of the currents differs from the \(T=0\) result by a relative factor of order \(10^{-4}\). Our numerical calculations, presented in Figs. 3-7, reveal that the OMM contribution is more pronounced in the longitudinal magnetoconductivity than in the planar Hall conductivity. Our general expressions for the total \(\mathbf{J}\) and axial \(\mathbf{J}_{5}\) currents, (24) and (25) respectively, include the effects of the usual electromagnetic fields \(\mathbf{E}\) and \(\mathbf{B}\), besides the contributions arising from the pseudo-electric \(\mathbf{E}_{5}\) and pseudo-magnetic \(\mathbf{B}_{5}\) fields. In the absence of the axial electric field the longitudinal and planar Hall currents exhibit periodicity in the angle \(\theta\), defined by the magnetic field as \(\cos\theta=\hat{\mathbf{B}}\cdot\hat{\mathbf{E}}\); however, when the axial electric field is turned on, such periodicity is broken and the positions of the extrema of the conductivities change. Interestingly, these expressions show the possibility of inducing nonlinear transport phenomena by using only strain-induced pseudo-fields, without the need for real electromagnetic fields, which opens up new avenues for manipulating and controlling this effect. Finally, we applied our results to two strained configurations with experimental prospects: (i) bending a WSM thin film into a circular arc and (ii) applying torsion to a WSM wire. These configurations could also be useful in the investigation of the covariant anomaly equations (1) and (2) of Weyl fermions.

Figure 10: Angle dependence of the (normalized) total \(\mathbf{j}\) (at left) and axial \(\mathbf{j}_{5}\) (at right) currents for a WSM strained into a circular arc in the \(x\)-\(y\) plane, as depicted in Fig. 9. The blue dashed lines show the \(y\)-components, and the red continuous lines show the \(z\)-components.

Figure 11: Schematic representation of a cross-section of the wire-shaped WSM twisted by an angle \(\theta\).
The planar Hall effect in multilayer structures of NiFe/IrMn and NiFe/Cu/NiFe has been widely used in the development of high-performance magnetic field sensors, including those based on spin valves, giant magnetoresistance and tunneling magnetoresistance [56; 57; 58]. On the other hand, in Dirac materials, for example, recent magneto-transport measurements on single crystals of the magnetic Weyl semimetal Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) reported the observation of the PHE [59]. Along these lines, our results create opportunities for the design of novel devices or sensors that exploit pseudo-fields to detect the PHE in Weyl semimetals. Finally, we note that the study of strain-induced PHE can be extended straightforwardly to include dynamical local deformations of the crystal, giving rise to an acoustic-induced PHE, thus enlarging the possibilities for the design of devices for chiralitytronics. ###### Acknowledgements. L.M.O. was supported by the CONACyT PhD fellowship No. 834773. A.M.-R. has been partially supported by DGAPA-UNAM Project No. IA102722 and by Project CONACyT (Mexico) No. 428214. We are also indebted to the reviewers for their valuable comments and suggestions to improve the quality of the paper. ## Appendix A Derivation of the orbital magnetic moment contribution In this Section we derive the formula for the orbital magnetic moment contribution \(\sigma_{ij}^{(m,\alpha)}\) to the magnetoconductivity tensor \(\sigma_{ij}^{(\alpha)}\). To this end, we start with the general expression for the magnetoconductivity tensor, given by Eq. (8) and expand the factor \(D_{\alpha}(\mathbf{k})\) and the equilibrium distribution function \(f_{\alpha}^{\text{eq}}(\mathcal{E}_{\alpha})\) in powers of the pseudo-magnetic field. Keeping terms up to quadratic order in the pseudo-magnetic field we obtain \[D_{\alpha}(\mathbf{k}) \approx 1-\frac{e}{\hbar}\left[\mathbf{B}_{\chi}\cdot\mathbf{\Omega}_{ \alpha}(\mathbf{k})\right]+\frac{e^{2}}{\hbar^{2}}\left[\mathbf{B}_{\chi}\cdot\mathbf{ \Omega}_{\alpha}(\mathbf{k})\right]^{2}\] \[\equiv 1+D_{\alpha}^{(1)}(\mathbf{k})+D_{\alpha}^{(2)}(\mathbf{k}) \tag{10}\] and \[f_{\alpha}^{\text{eq}}(\mathcal{E}_{\alpha})\approx f_{\alpha}^{\text{eq}}( \mathcal{E}_{\alpha}^{(0)})+\mathcal{E}_{\alpha}^{(m)}f_{\alpha}^{\text{eq} \,\prime}(\mathcal{E}_{\alpha}^{(0)})+\frac{1}{2}\mathcal{E}_{\alpha}^{(m)2} f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)}), \tag{11}\] where \(f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})=\frac{\partial f_{ \alpha}^{\text{eq}}(\mathcal{E}_{\alpha}^{(0)})}{\partial\mathcal{E}_{\alpha }^{(0)}}\) and \(f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})=\frac{ \partial^{2}f_{\alpha}^{\text{eq}}(\mathcal{E}_{\alpha}^{(0)})}{\partial \mathcal{E}_{\alpha}^{(0)2}}\). Now we substitute these expansions into Eq. (8) and separate the velocity as \(\mathbf{v}_{\alpha}=\mathbf{v}_{\alpha}^{(0)}+\mathbf{v}_{\alpha}^{(m)}\). 
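Both expansions can be checked symbolically. The form of \(D_{\alpha}\) assumed below, \(D_{\alpha}=[1+(e/\hbar)\mathbf{B}_{\chi}\cdot\mathbf{\Omega}_{\alpha}]^{-1}\), is our inference from the geometric series displayed above rather than a definition quoted in this section.

```python
import sympy as sp

xvar = sp.symbols('x')           # shorthand for (e/hbar) * B_chi . Omega_alpha
D = 1/(1 + xvar)                 # assumed phase-space measure factor D_alpha
print(sp.series(D, xvar, 0, 3))  # -> 1 - x + x**2 + O(x**3), matching the expansion

# Taylor expansion of the equilibrium distribution around E^(0), matching
# f(E^(0) + E^(m)) ~ f + E^(m) f' + (1/2) E^(m)^2 f''
e0, em = sp.symbols('varepsilon_0 varepsilon_m')
f = sp.Function('f')
print(sp.series(f(e0 + em), em, 0, 3))
```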
Keeping the terms quadratic in the pseudo-magnetic field we obtain: \[\begin{split}\sigma_{ij}^{(m,\alpha)}&=-e^{2}\tau\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\Bigg\{\Big[v_{\alpha i}^{(m)}v_{\alpha j}^{(m)}f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})+(v_{\alpha i}^{(0)}v_{\alpha j}^{(m)}+v_{\alpha i}^{(m)}v_{\alpha j}^{(0)})\mathcal{E}_{\alpha}^{(m)}f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})+\tfrac{1}{2}v_{\alpha i}^{(0)}v_{\alpha j}^{(0)}\mathcal{E}_{\alpha}^{(m)2}f_{\alpha}^{\text{eq}\,\prime\prime\prime}(\mathcal{E}_{\alpha}^{(0)})\Big]\\ &\quad+\frac{e}{\hbar}\Big[\Omega_{\alpha k}\Big(B_{\chi i}(v_{\alpha j}^{(0)}v_{\alpha k}^{(m)}+v_{\alpha j}^{(m)}v_{\alpha k}^{(0)})+B_{\chi j}(v_{\alpha i}^{(0)}v_{\alpha k}^{(m)}+v_{\alpha i}^{(m)}v_{\alpha k}^{(0)})-B_{\chi k}(v_{\alpha i}^{(0)}v_{\alpha j}^{(m)}+v_{\alpha i}^{(m)}v_{\alpha j}^{(0)})\Big)f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})\\ &\quad+\Omega_{\alpha k}\Big(B_{\chi i}v_{\alpha j}^{(0)}v_{\alpha k}^{(0)}+B_{\chi j}v_{\alpha i}^{(0)}v_{\alpha k}^{(0)}-B_{\chi k}v_{\alpha i}^{(0)}v_{\alpha j}^{(0)}\Big)\mathcal{E}_{\alpha}^{(m)}f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})\Big]\Bigg\}\,,\end{split}\tag{12}\] where we have subtracted the field-independent and Berry curvature contributions, given by equations (10) and (11) respectively. To simplify this expression, on the one hand, we observe that \[\begin{split}\partial_{k_{i}}\partial_{k_{j}}\left(\frac{1}{2\hbar^{2}}\mathcal{E}_{\alpha}^{(m)2}f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})\right)&=\frac{1}{\hbar}\left(\partial_{k_{j}}v_{\alpha i}^{(m)}\right)\mathcal{E}_{\alpha}^{(m)}f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})+\frac{1}{2\hbar}\left(\partial_{k_{j}}v_{\alpha i}^{(0)}\right)\mathcal{E}_{\alpha}^{(m)2}f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})\\ &\quad+\Big[v_{\alpha i}^{(m)}v_{\alpha j}^{(m)}f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})+(v_{\alpha i}^{(0)}v_{\alpha j}^{(m)}+v_{\alpha i}^{(m)}v_{\alpha j}^{(0)})\mathcal{E}_{\alpha}^{(m)}f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})+\tfrac{1}{2}v_{\alpha i}^{(0)}v_{\alpha j}^{(0)}\mathcal{E}_{\alpha}^{(m)2}f_{\alpha}^{\text{eq}\,\prime\prime\prime}(\mathcal{E}_{\alpha}^{(0)})\Big],\end{split}\tag{10}\] where we have used that \(\nabla_{\mathbf{k}}f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})=\hbar\mathbf{v}_{\alpha}^{(0)}f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})\). The term in square brackets is exactly the same as the one appearing in the first line of Eq. (12). Besides, upon integration of Eq. (10) over the Brillouin zone, the left-hand side vanishes due to the periodic boundary conditions. Hence, the first line of Eq. (12) can be replaced by (minus) the first two terms on the right-hand side of Eq. (10). On the other hand, the second term in square brackets in Eq. (12) can be further simplified in terms of the vector \(\mathbf{Q}_{\alpha}\equiv\mathbf{\Omega}_{\alpha}\times(\mathbf{v}_{\alpha}^{(0)}\times\mathbf{B}_{\chi})\).
We obtain \[\begin{split}\sigma_{ij}^{(m,\alpha)}&=-e^{2}\tau\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\Bigg\{-\left[\frac{1}{\hbar}\left(\partial_{k_{j}}v_{\alpha i}^{(m)}\right)\mathcal{E}_{\alpha}^{(m)}f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})+\frac{1}{2\hbar}\left(\partial_{k_{j}}v_{\alpha i}^{(0)}\right)\mathcal{E}_{\alpha}^{(m)2}f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})\right]\\ &\quad+\frac{e}{\hbar}\left[-\left(Q_{\alpha i}v_{\alpha j}^{(m)}+Q_{\alpha j}v_{\alpha i}^{(m)}\right)f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})+\left(B_{\chi i}v_{\alpha j}^{(0)}+B_{\chi j}v_{\alpha i}^{(0)}\right)\mathbf{\Omega}_{\alpha}\cdot\left(\mathbf{v}_{\alpha}^{(m)}f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})+\mathbf{v}_{\alpha}^{(0)}\mathcal{E}_{\alpha}^{(m)}f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})\right)\right]\\ &\quad-\frac{e}{\hbar}(\mathbf{\Omega}_{\alpha}\cdot\mathbf{B}_{\chi})v_{\alpha i}^{(0)}v_{\alpha j}^{(0)}\mathcal{E}_{\alpha}^{(m)}f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})\Bigg\}.\end{split}\tag{11}\] Now, using the fact that \(\frac{1}{\hbar}\nabla_{\mathbf{k}}\left[\mathcal{E}_{\alpha}^{(m)}f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})\right]=\mathbf{v}_{\alpha}^{(m)}f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})+\mathbf{v}_{\alpha}^{(0)}\mathcal{E}_{\alpha}^{(m)}f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})\) and integrating by parts we get \[\begin{split}\sigma_{ij}^{(m,\alpha)}&=\frac{e^{3}\tau}{\hbar}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\Bigg\{\Big(Q_{\alpha i}v_{\alpha j}^{(m)}+Q_{\alpha j}v_{\alpha i}^{(m)}\Big)f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})+\mathcal{E}_{\alpha}^{(m)}\left[(\mathbf{\Omega}_{\alpha}\cdot\mathbf{B}_{\chi})v_{\alpha i}^{(0)}v_{\alpha j}^{(0)}+\frac{1}{2e}\mathcal{E}_{\alpha}^{(m)}\left(\partial_{k_{j}}v_{\alpha i}^{(0)}\right)\right]f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})\\ &\quad+\frac{1}{e}\mathcal{E}_{\alpha}^{(m)}\left[\left(\partial_{k_{j}}v_{\alpha i}^{(m)}\right)+\frac{e}{\hbar}\nabla_{\mathbf{k}}\cdot\left[\mathbf{\Omega}_{\alpha}(B_{\chi i}v_{\alpha j}^{(0)}+B_{\chi j}v_{\alpha i}^{(0)})\right]\right]f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})\Bigg\}\,,\end{split}\tag{12}\] where we have used the periodicity of the Brillouin zone. Finally, using the symmetry of the tensor under the interchange \(i\leftrightarrow j\) and the expression \(\mathcal{E}_{\alpha}^{(m)}=-\mathbf{m}_{\alpha}\cdot\mathbf{B}_{\chi}\) we obtain \[\begin{split}\sigma_{ij}^{(m,\alpha)}&=\frac{e^{3}\tau}{\hbar}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\Bigg\{Q_{\alpha i}v_{\alpha j}^{(m)}f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})+\frac{1}{2}\mathcal{E}_{\alpha}^{(m)}\mathbf{B}_{\chi}\cdot\left[\mathbf{\Omega}_{\alpha}v_{\alpha i}^{(0)}v_{\alpha j}^{(0)}-\mathbf{m}_{\alpha}\frac{1}{2e}\left(\partial_{k_{j}}v_{\alpha i}^{(0)}\right)\right]f_{\alpha}^{\text{eq}\,\prime\prime}(\mathcal{E}_{\alpha}^{(0)})\\ &\quad+\frac{1}{e}\,\mathcal{E}_{\alpha}^{(m)}\left[\frac{e}{\hbar}\nabla_{\mathbf{k}}\cdot(\mathbf{\Omega}_{\alpha}B_{\chi i}v_{\alpha j}^{(0)})+\frac{1}{2}\left(\partial_{k_{j}}v_{\alpha i}^{(m)}\right)\right]f_{\alpha}^{\text{eq}\,\prime}(\mathcal{E}_{\alpha}^{(0)})\Bigg\}\;+(i\leftrightarrow j).\end{split}\tag{13}\] This result suggests the definition of the tensors in Eq. (13) and yields the final expression for the orbital magnetic moment contribution, given by Eq. (12).
## Appendix B Computation of the conductivity tensors

Here we evaluate in detail the different contributions to the magnetoconductivity tensor defined in Section II for a Weyl semimetal. Owing to the rotational symmetry of the problem, the required integrals can be performed straightforwardly by using the spherical coordinate system \(\mathbf{k}=k(\sin\theta\cos\phi\hat{\mathbf{e}}_{x}+\sin\theta\sin\phi\hat{\mathbf{e}}_{y}+\cos\theta\hat{\mathbf{e}}_{z})\) and the volume element \(d^{3}\mathbf{k}=k^{2}dkd\Omega\), where \(d\Omega\) is the differential solid angle. Note that the difference between the chemical potential and the energy shift of the node determines the band index, i.e. \(s=\text{sgn}(\mu-b_{0\chi})\), as evinced in figure 1. This is so since \(\mu>b_{0\chi}\) (\(\mu<b_{0\chi}\)) implies \(s=1\) (\(s=-1\)). Therefore we take \(\mu_{\chi}=s\mu_{0\chi}\), with \(\mu_{0\chi}=|\mu-b_{0\chi}|>0\). Besides, we work at finite temperature. Let us first consider the \(B\)-independent conductivity, given by Eq. (10). Substituting the required components of the band velocity \(\mathbf{v}_{s}^{(0)}=sv_{F}\hat{\mathbf{k}}\) one gets \[\sigma_{ij}^{(0,\alpha)}(T)=-\frac{e^{2}v_{F}^{2}\tau}{8\pi^{3}}\int d^{3}\mathbf{k}\,\hat{k}_{i}\hat{k}_{j}\,\frac{\partial f_{\alpha}^{\text{eq}}(\mathcal{E}_{\alpha}^{(0)})}{\partial\mathcal{E}_{\alpha}^{(0)}}. \tag{14}\] Owing to the rotational symmetry, the angular integral becomes \(\int d\Omega\,\hat{k}_{i}\hat{k}_{j}=\int d\Omega\,\frac{1}{3}\delta_{ij}=\frac{4\pi}{3}\delta_{ij}\). Therefore we obtain \[\sigma^{(0,\alpha)}_{ij}(T)=\frac{e^{2}v_{F}^{2}\tau}{6\pi^{2}}\delta_{ij}\int_{0}^{\infty}dk\,k^{2}\,\frac{1}{k_{B}T}\frac{e^{\frac{\mathcal{E}^{(0)}-\mu_{\chi}}{k_{B}T}}}{\left(1+e^{\frac{\mathcal{E}^{(0)}-\mu_{\chi}}{k_{B}T}}\right)^{2}}, \tag{10}\] where we have used the Fermi-Dirac distribution. Using the fact that \(\mathcal{E}^{(0)}_{\alpha}(\mathbf{k})=b_{0\chi}+s\hbar v_{F}k\) and \(\mu_{\chi}=s\mu_{0\chi}\), with \(\mu_{0\chi}=|\mu-b_{0\chi}|>0\), this result can be written in the simple form \[\sigma^{(\chi,0)}_{ij}(T)=\frac{e^{2}\tau\mu_{0\chi}^{2}}{6\pi^{2}\hbar^{3}v_{F}}\delta_{ij}\,f_{2}(\Lambda_{\chi}), \tag{11}\] with \(\Lambda_{\chi}\equiv k_{B}T/\mu_{0\chi}\) and where we have introduced the function \[f_{n}(y)\equiv\frac{1}{y}\int_{0}^{\infty}dx\,x^{n}\,\frac{e^{(x-1)/y}}{\left[1+e^{(x-1)/y}\right]^{2}}. \tag{12}\] It is interesting that \(\lim_{y\to 0}f_{n}(y)=1\), which applied to our result of Eq. (11) corresponds to the zero temperature limit.
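The limit \(\lim_{y\to 0}f_{n}(y)=1\) and the size of the finite-temperature correction are easy to confirm numerically. Substituting \(u=(x-1)/y\) in the definition (12) turns the thermal weight into \(1/[4\cosh^{2}(u/2)]\); a Sommerfeld-type estimate (our own, not quoted in the text) then gives \(f_{2}(y)\approx 1+(\pi^{2}/3)y^{2}\). For \(T\sim 2\) K and \(\mu_{0\chi}=20\) meV one has \(y\approx 0.009\), giving a relative correction of order \(10^{-4}\), consistent with the estimate quoted in the Discussion.

```python
import numpy as np
from scipy.integrate import quad

def f_n(n, y):
    # u = (x-1)/y maps the weight e^u/(1+e^u)^2 = 1/(4 cosh^2(u/2)) onto a
    # unit-area peak at u = 0; written with exp(-|u|) to avoid overflow
    g = lambda u: np.exp(-abs(u)) / (1 + np.exp(-abs(u)))**2
    val, _ = quad(lambda u: (1 + y*u)**n * g(u), -1.0/y, np.inf)
    return val

for y in (0.5, 0.1, 0.01):
    print(y, f_n(2, y), 1 + (np.pi**2/3)*y**2)  # f_2(y) ~ 1 + (pi^2/3) y^2 -> 1
```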
We now turn to the Berry curvature contribution, given by Eq. (11). Using the Berry curvature \(\mathbf{\Omega}_{\alpha}\) and band velocity \(\mathbf{v}^{(0)}_{\alpha}\), given in Eq. (15), one finds \(\mathbf{Q}_{\alpha}=-\chi\frac{v_{F}}{2k^{2}}\hat{\mathbf{k}}\times(\hat{\mathbf{k}}\times\mathbf{B}_{\chi})\). Therefore, Eq. (11) yields \[\sigma^{(\alpha,\Omega)}_{ij}(\mathbf{B}_{\chi},T)=-\frac{e^{4}v_{F}^{2}\tau}{32\pi^{3}\hbar^{2}}\int d^{3}\mathbf{k}\,\frac{1}{k^{4}}\left[\hat{\mathbf{k}}\times(\hat{\mathbf{k}}\times\mathbf{B}_{\chi})\right]_{i}\left[\hat{\mathbf{k}}\times(\hat{\mathbf{k}}\times\mathbf{B}_{\chi})\right]_{j}\,\frac{\partial f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)}_{\alpha}}. \tag{13}\] Algebraic manipulation of the integrand produces \[\sigma^{(\alpha,\Omega)}_{ij}(\mathbf{B}_{\chi},T)=-\frac{e^{4}v_{F}^{2}\tau}{32\pi^{3}\hbar^{2}}\int\!\frac{dk}{k^{2}}\,\frac{\partial f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)}_{\alpha}}\int d\Omega\Big{[}B_{\chi i}B_{\chi j}-(\hat{\mathbf{k}}\cdot\mathbf{B}_{\chi})(\hat{k}_{i}B_{\chi j}+\hat{k}_{j}B_{\chi i})+(\hat{\mathbf{k}}\cdot\mathbf{B}_{\chi})^{2}\hat{k}_{i}\hat{k}_{j}\Big{]}. \tag{14}\] These integrals are elementary but somewhat tedious. The necessary angular integral over products of rectangular components of \(\hat{\mathbf{k}}\) is readily found to be \[\int d\Omega\,\hat{k}_{i}\hat{k}_{j}\hat{k}_{l}\hat{k}_{m}=\frac{4\pi}{15}\left(\delta_{ij}\delta_{lm}+\delta_{il}\delta_{jm}+\delta_{im}\delta_{jl}\right). \tag{15}\] On the other hand, the required radial integral can be expressed in terms of the function \(f_{n}(\Lambda_{\chi})\), defined by Eq. (12), with \(n=-2\), i.e. \[\int_{0}^{\infty}dk\,\frac{1}{k^{2}}\,\frac{\partial f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)}_{\alpha}}=-\frac{\hbar v_{F}}{\mu_{0\chi}^{2}}f_{-2}(\Lambda_{\chi}). \tag{16}\] Substituting these results into Eq. (14) we establish Eq. (19). We now evaluate the orbital magnetic moment contribution \(\sigma^{(\chi,m)}_{ij}(\mathbf{B}_{\chi},T)\), given by Eq. (12). To this end, we require the corrections to the energy \(\mathcal{E}^{(m)}_{\alpha}\) and band velocity \(\mathbf{v}^{(m)}_{\alpha}=\frac{1}{\hbar}\nabla_{\mathbf{k}}\mathcal{E}^{(m)}_{\alpha}\) due to the orbital magnetic moment. Using the OMM \(\mathbf{m}_{\alpha}\), given in Eq. (15), one finds \(\mathcal{E}^{(m)}_{\alpha}=\frac{\chi ev_{F}}{2k}\hat{\mathbf{k}}\cdot\mathbf{B}_{\chi}\), and \[\mathbf{v}^{(m)}_{\alpha}(\mathbf{k})=\frac{\chi ev_{F}}{2\hbar}\frac{\mathbf{B}_{\chi}-2\hat{\mathbf{k}}(\hat{\mathbf{k}}\cdot\mathbf{B}_{\chi})}{k^{2}}. \tag{17}\] Now we compute separately the three integrals in Eq. (12) in the order they appear, i.e. \(\sigma^{(\chi,m)}_{ij}=\sigma^{(\chi,m)}_{1ij}+\sigma^{(\chi,m)}_{2ij}+\sigma^{(\chi,m)}_{3ij}\). Let us consider first \[\sigma^{(\chi,m)}_{1ij}(\mathbf{B}_{\chi},T)=\frac{2e^{3}\tau}{\hbar}\!\int\!\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}Q_{\alpha i}v^{(m)}_{\alpha j}\frac{\partial f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)}_{\alpha}}. \tag{18}\] Substituting the function \(Q_{\alpha i}\) and the velocity (17) we have \[\sigma^{(\chi,m)}_{1ij}(\mathbf{B}_{\chi},T)=-\frac{e^{4}v_{F}^{2}\tau}{16\pi^{3}\hbar^{2}}\int d^{3}\mathbf{k}\,\frac{1}{k^{4}}\,[\hat{\mathbf{k}}\times(\hat{\mathbf{k}}\times\mathbf{B}_{\chi})]_{i}[\mathbf{B}_{\chi}-2\hat{\mathbf{k}}(\hat{\mathbf{k}}\cdot\mathbf{B}_{\chi})]_{j}\,\frac{\partial f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)}_{\alpha}}. \tag{19}\] Manipulating the integrand we obtain \[\sigma^{(\chi,m)}_{1ij}=\frac{e^{4}v_{F}^{2}\tau}{16\pi^{3}\hbar^{2}}\int\frac{dk}{k^{2}}\,\frac{\partial f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)}_{\alpha}}\int d\Omega\,\Big{[}B_{\chi i}B_{\chi j}+2\hat{k}_{i}\hat{k}_{j}(\hat{\mathbf{k}}\cdot\mathbf{B}_{\chi})^{2}-(\hat{\mathbf{k}}\cdot\mathbf{B}_{\chi})(\hat{k}_{i}B_{\chi j}+2\hat{k}_{j}B_{\chi i})\Big{]}. \tag{20}\] The angular integration can be performed by using the formula (15), and the radial integration is given by Eq. (16).
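The angular formula (15), used repeatedly above, can be confirmed directly in spherical coordinates:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
khat = [sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)]

def I4(i, j, l, m):
    # int dOmega k_i k_j k_l k_m over the unit sphere
    f = khat[i]*khat[j]*khat[l]*khat[m] * sp.sin(th)
    return sp.integrate(f, (th, 0, sp.pi), (ph, 0, 2*sp.pi))

# Eq. (15) predicts 4*pi/15 for (x,x,y,y) and 3*(4*pi/15) = 4*pi/5 for (x,x,x,x)
print(I4(0, 0, 1, 1))   # -> 4*pi/15
print(I4(0, 0, 0, 0))   # -> 4*pi/5
print(I4(0, 0, 0, 1))   # -> 0, as the delta-function structure requires
```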
These results imply \[\sigma^{(\chi,m)}_{1ij}(\mathbf{B}_{\chi},T)=-\frac{e^{4}v_{F}^{3}\tau}{30\pi^{2}\hbar\mu_{0\chi}^{2}}(\delta_{ij}B_{\chi}^{2}+2B_{\chi i}B_{\chi j})f_{-2}(\Lambda_{\chi}). \tag{21}\] We now consider the second term in Eq. (12), namely \[\sigma^{(\chi,m)}_{2ij}(\mathbf{B}_{\chi})=\frac{2e^{3}\tau}{\hbar}\!\int\!\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\,\frac{1}{\mathcal{E}^{(m)}_{\alpha}}\nabla_{\mathbf{k}}\cdot\mathbf{T}_{\alpha ij}\frac{\partial f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)}_{\alpha}}, \tag{22}\] where \(\mathbf{T}_{\alpha ij}\) is defined in Eq. (13). Using the Berry curvature \(\mathbf{\Omega}_{\alpha}\) and the contributions to the band velocity \(\mathbf{v}^{(0)}_{\alpha}\) and \(\mathbf{v}^{(m)}_{\alpha}\), one finds \[\mathbf{T}_{\alpha ij}=\frac{\chi ev_{F}}{4\hbar k^{2}}\left[\hat{\mathbf{e}}_{i}B_{\chi j}-2\hat{k}_{j}\left(B_{\chi}\cdots\right)\right].\] Inserting this result into Eq. (114) and manipulating the integral we have \[\sigma^{(\chi,m)}_{2ij}=-\frac{e^{4}v_{F}^{2}\tau}{16\pi^{3}\hbar^{2}}\int\frac{dk}{k^{2}}\frac{\partial f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)}_{\alpha}}\int d\Omega\,\hat{\mathbf{k}}\cdot\mathbf{B}_{\chi}\left[(\delta_{ij}-4\hat{k}_{i}\hat{k}_{j})\hat{\mathbf{k}}\cdot\mathbf{B}_{\chi}+\hat{k}_{i}B_{\chi j}+\hat{k}_{j}B_{\chi i}\right]. \tag{117}\] Finally, using the integrals (116) and (117) one gets \[\sigma^{(\chi,m)}_{2ij}(\mathbf{B}_{\chi},T)=\frac{e^{4}v_{F}^{3}\tau}{60\pi^{2}\hbar\mu_{0\chi}^{2}}(\delta_{ij}B_{\chi}^{2}+2B_{\chi i}B_{\chi j})f_{-2}(\Lambda_{\chi}). \tag{118}\] The last term we have to evaluate is \[\sigma^{(\chi,m)}_{3ij}(\mathbf{B}_{\chi},T)=\frac{e^{3}\tau}{\hbar}\int\!\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\mathcal{E}^{(m)}_{\alpha}\mathbf{B}_{\chi}\cdot\mathbf{V}_{\alpha ij}\frac{\partial^{2}f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)2}_{\alpha}}, \tag{119}\] where \(\mathbf{V}_{\alpha ij}\) is defined in Eq. (13). Using the Berry curvature \(\mathbf{\Omega}_{\alpha}\) and the orbital magnetic moment \(\mathbf{m}_{\alpha}\), given by Eq. (15), we find \[\mathbf{V}_{\alpha ij}=s\chi v_{F}^{2}\frac{\hat{\mathbf{k}}}{4k^{2}}(\delta_{ij}-3\hat{k}_{i}\hat{k}_{j}), \tag{120}\] and the integral to be solved is \[\sigma^{(\chi,m)}_{3ij}(\mathbf{B}_{\chi},T)=\frac{e^{4}v_{F}^{3}\tau}{64\pi^{3}\hbar}\int d^{3}\mathbf{k}\,\frac{s}{k^{3}}(\mathbf{B}_{\chi}\cdot\hat{\mathbf{k}})^{2}(\delta_{ij}-3\hat{k}_{i}\hat{k}_{j})\frac{\partial^{2}f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)2}_{\alpha}}. \tag{121}\] Further algebraic manipulations yield \[\sigma^{(\chi,m)}_{3ij}(\mathbf{B}_{\chi},T)=\frac{e^{4}v_{F}^{3}\tau}{64\pi^{3}\hbar}\int d\Omega\,(\mathbf{B}_{\chi}\cdot\hat{\mathbf{k}})^{2}(\delta_{ij}-3\hat{k}_{i}\hat{k}_{j})\left[s\int\frac{dk}{k}\frac{\partial^{2}f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)2}_{\alpha}}\right]. \tag{122}\] The angular integration is directly evaluated with the help of the result (116). For the radial integration, starting from the function \(f_{n}(y)\) given by Eq. (118), one can spot the identity \[s\int\frac{dk}{k}\frac{\partial^{2}f^{\text{eq}}_{\alpha}(\mathcal{E}^{(0)}_{\alpha})}{\partial\mathcal{E}^{(0)2}_{\alpha}}=-\frac{1}{\mu_{0\chi}^{2}}f_{-2}(\Lambda_{\chi}). \tag{123}\]
All in all, the final result is \[\sigma^{(\chi,m)}_{3ij}(\mathbf{B}_{\chi},T)=\frac{e^{4}v_{F}^{3}\tau}{120\pi^{2}\hbar\mu_{0\chi}^{2}}\left(3B_{\chi i}B_{\chi j}-\delta_{ij}B_{\chi}^{2}\right)f_{-2}(\Lambda_{\chi}). \tag{124}\] Summing up the three contributions, (21), (118) and (124), we establish the result of Eq. (20) for the orbital magnetic moment contribution \(\sigma^{(\chi,m)}_{ij}(\mathbf{B}_{\chi},T)\).
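Although the final expression quoted as Eq. (20) is not reproduced here, the combination of the three pieces above can be checked symbolically; in units of \(e^{4}v_{F}^{3}\tau f_{-2}(\Lambda_{\chi})/(\pi^{2}\hbar\mu_{0\chi}^{2})\) they sum to \(-(3\delta_{ij}B_{\chi}^{2}+B_{\chi i}B_{\chi j})/120\).

```python
import sympy as sp

# Stand-ins for the two tensor structures delta_ij*B^2 and B_i*B_j
dB2, BB = sp.symbols('deltaB2 BiBj')

# Coefficients read off from the three contributions derived above,
# in units of e^4 vF^3 tau f_{-2}(Lambda) / (pi^2 hbar mu0^2)
s1 = -sp.Rational(1, 30)  * (dB2 + 2*BB)    # sigma_1, Eq. (21)
s2 =  sp.Rational(1, 60)  * (dB2 + 2*BB)    # sigma_2, Eq. (118)
s3 =  sp.Rational(1, 120) * (3*BB - dB2)    # sigma_3, Eq. (124)

print(sp.factor(s1 + s2 + s3))              # -> -(3*deltaB2 + BiBj)/120
```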
2306.06497
Applications of P-functions to Fully Nonlinear Elliptic equations: Gradient Estimates and Rigidity Results
We introduce the notion of $P$-functions for fully nonlinear equations and establish a general criterion for obtaining such quantities for this class of equations. Some applications are gradient bounds, De Giorgi-type properties of entire solutions and rigidity results. Particularly, we establish a gradient bound and a rigidity result for Pucci's equations. Furthermore, we prove Harnack-type inequalities and local pointwise estimates for the gradient of solutions to fully nonlinear elliptic equations. In addition, we consider such quantities for higher order nonlinear equations and for equations of order greater than two we obtain Liouville-type theorems and pointwise estimates for the Laplacian.
Dimitrios Gazoulis
2023-06-10T17:42:51Z
http://arxiv.org/abs/2306.06497v3
Applications of P-functions to quasi-linear equations: gradient bounds and Liouville-type properties

###### Abstract

We introduce the notion of \(P-\)functions for fully-nonlinear equations and obtain some abstract consequences. We study \(P-\)functions for a class of quasi-linear equations and establish a general criterion for obtaining such quantities. Some applications are gradient bounds, De Giorgi-type properties of entire solutions and Liouville-type theorems. As a special case we obtain a gradient bound that differs from the Modica inequality. In addition, we provide examples of such quantities for the Monge-Ampere equation and for higher order nonlinear equations. One application for equations of order greater than two is pointwise estimates for the Laplacian.

## 1. Introduction

\(P-\)_functions_ can be thought of as quantities depending on a function \(u\) and its higher order derivatives that are related to an elliptic (or more general) differential equation or to a differential inequality and have the property that they satisfy the maximum principle. Perhaps the most well-known example is \(P(u,x)=\frac{1}{2}|\nabla u|^{2}-W(u)\), which is related to the Allen-Cahn equation \[\Delta u=W^{\prime}(u)\ \,\ u:\Omega\subset\mathbb{R}^{n}\to\mathbb{R} \tag{1.1}\] and Modica in [14] proved the well-known gradient bound \[\frac{1}{2}|\nabla u|^{2}\leq W(u) \tag{1.2}\] for every bounded entire solution of (1.1). Later, Caffarelli et al in [6] generalized this gradient bound to a class of variational quasi-linear equations and proved Liouville-type and De Giorgi-type properties for a particular choice of \(P-\)_function_ related to the equation \(div(\Phi^{\prime}(|\nabla u|^{2})\nabla u)=F^{\prime}(u)\). This bound was generalized to anisotropic partial differential equations in [9]. Furthermore, \(P-\)_functions_ had already been studied by Sperb in [18] and by Payne and Philippin in [15] and [16], who studied other types of quasilinear equations of the form \(div\)\((A(u,|\nabla u|^{2})\nabla u)=B(u,|\nabla u|^{2})\), which are not necessarily Euler-Lagrange equations of an elliptic integrand. They derived maximum principles for some appropriate \(P-\)_functions_. Due to the greater generality, however, the relevant \(P\) and the conditions under which it satisfies an elliptic differential inequality are given rather implicitly, while in [6] and [7] they are given explicitly. Nevertheless, one advantage of the implicit rather than explicit formulation is that one can extract many examples of \(P-\)_functions_ that satisfy such conditions. There are many other applications of \(P-\)_functions_, which can be found in [18] among others, such as lower bounds for eigenvalue problems. One additional important application is in [1], where it is shown that the monotonicity assumption \(u_{x_{n}}>0\), which is also stated in De Giorgi's conjecture, does in fact imply the local minimality of \(u\). Such an implication is by no means trivial, and it is based on the construction of a so-called _calibration_ associated to the energy functional. This notion is intimately connected to the theory of null-Lagrangians, see [11], chapter 1 and chapter 4, section 2.4. In Theorem 4.4 in [1], the authors carry out the construction of the appropriate calibration for general integrands of the calculus of variations, and this construction relies explicitly on the \(P-\)_function_. Last but not least, there are applications such as gradient bounds similar to (1.2) and Liouville-type properties for vector equations.
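As a concrete illustration of (1.1)-(1.2), the following sympy sketch (with the standard double-well \(W(u)=\frac{1}{4}(1-u^{2})^{2}\), a choice made here only for illustration) checks that the one-dimensional kink solves the Allen-Cahn equation and saturates Modica's bound, i.e. the \(P-\)_function_ \(\frac{1}{2}|\nabla u|^{2}-W(u)\) vanishes identically along it.

```python
import sympy as sp

x, s = sp.symbols('x s')

# Illustrative double-well potential W(s) = (1 - s^2)^2/4; the text treats general W
W = (1 - s**2)**2 / 4
Wp = sp.diff(W, s)

g = sp.tanh(x / sp.sqrt(2))     # the one-dimensional Allen-Cahn kink

# g solves (1.1): g'' - W'(g) = 0
print(sp.simplify(sp.diff(g, x, 2) - Wp.subs(s, g)))   # -> 0

# The P-function (1/2)|g'|^2 - W(g) vanishes identically, so the kink
# saturates Modica's gradient bound (1.2)
P = sp.diff(g, x)**2 / 2 - W.subs(s, g)
print(sp.simplify(P))                                  # -> 0
```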
To be more precise, in Theorem 3.5 in [17], there is a gradient bound for the Ginzburg-Landau system of equations. In this paper, we introduce the notion of a \(P-\)_function_ for general fully-nonlinear differential equations or differential inequalities and we incorporate the "\(P-\)_function_ technique" into a general setting. Assuming that we have a \(P-\)_function_ of a particular form, associated to a general type of fully nonlinear equation, that attains its supremum at a point, we can determine the smooth entire solutions without excluding a priori some potential singularities. Moreover, we extract some implicit conditions for obtaining \(P-\)_functions_ for a general class of quasi-linear equations. Additionally, we prove some gradient bounds, Liouville-type properties and De Giorgi-type properties for smooth entire solutions, utilizing techniques from [6], and we also obtain as a special case an a priori gradient bound for the Allen-Cahn equation for general potentials that is different from the Modica inequality. These bounds hold for any \(P-\)function that satisfies some implicit conditions, so in fact, for any explicit example we have a different gradient bound. This method allows us to obtain many different types of gradient bounds, one for each explicit example of \(P-\)_function_ that satisfies the implicit conditions mentioned above. We also illustrate this claim more generally by proving gradient bounds from the examples of \(P-\)_functions_ in [15]. One such bound generalizes the one in [6] to a more general class of quasi-linear equations. Another consequence of determining many types of \(P-\)_functions_ is that we can obtain a Liouville-type theorem for the equation in [6], when \(F^{\prime\prime}\geq 0\), in which case we have stability of solutions. For a different class of equations, such Liouville-type properties imply nonexistence of solutions, and we give some examples. We also provide a class of such quantities for the Monge-Ampere equation and for higher order nonlinear equations, together with some applications. For instance, we establish a mean value-type theorem for the Monge-Ampere equation and for nonlinear equations of order greater than two. One additional application for higher order equations is an a priori bound for the Laplacian and pointwise estimates through the mean value properties. Finally, some Liouville-type properties can be extended to nonlinear equations of order greater than two. In this setting, we believe that one can obtain many other types of bounds for any order of derivatives, assuming a \(C^{k,\alpha}\) a priori estimate and that we have an appropriate \(P-\)_function_ related to the respective equation.
## 2. The notion of \(P-\)function and abstract consequences

We begin by defining the notion of a \(P-\)function.

**Definition 2.1**.: _Let \(u:\Omega\subset\mathbb{R}^{n}\to\mathbb{R}^{d}\) be a smooth solution or subsolution of_ \[F(u,\nabla u,...,\nabla^{m}u)=0 \tag{2.1}\] _where \(F\) is a continuous function._

_We say that \(P=P(u,\nabla u,...,\nabla^{m-1}u)\) is a \(P-\)function of (2.1) if there exist an elliptic operator \(L\) and a non negative function \(\mu=\mu(x)\geq 0\),_ \[\begin{array}{c}L=-\sum_{i,j=1}^{n}a_{ij}\partial_{x_{i}x_{j}}+\sum_{i=1}^{n}b_{i}\partial_{x_{i}}+c\;\;,\;\mbox{with}\;\;c\geq 0\\ \mbox{such that}\;\;\mu L\;P\leq 0\;\;,\;\mbox{in}\;\;\Omega.\end{array} \tag{2.2}\]

An immediate corollary is that any \(P-\)function related to an equation or to a differential inequality attains its maximum at the boundary \(\partial\Omega\) or at a point \(x\in\Omega\) such that \(\mu(x)=0\). The independence from the \(x-\)variables is needed in order for the equation (2.1) to be translation invariant. This ingredient is necessary for the gradient bounds and similar applications. We initially state as a direct consequence a strong maximum principle that holds in general (see Theorem 2.2 in [6] or Theorem 4.7 in [7]).

**Theorem 2.2**.: _Let \(u\) be a smooth solution or subsolution of_ \[\begin{array}{c}F(u,\nabla u,...,\nabla^{m}u)=0\;\;\;,\;\;u:\Omega\to\mathbb{R}^{d}\\ \mbox{where}\;\;\Omega\;\;\mbox{is a connected, bounded subset of}\;\;\mathbb{R}^{n}\end{array} \tag{2.3}\] _such that \(\inf_{\overline{\Omega}}g(\nabla^{k}u)>0\) for some \(g:\mathbb{R}^{n^{k}\times d}\to[0,+\infty)\;,\;k\in\{1,...,m-1\}\), and suppose that \(P=P(u,\nabla u,...,\nabla^{m-1}u)\) is a \(P-\)function of (2.3) with \(\mu=\mu(g(\nabla^{k}u))\;,\;\mu(t)>0\;,\;\forall\;t>0\)._

_If there exists \(x_{0}\in\Omega\) such that_ \[P(u(x_{0}),...,\nabla^{m-1}u(x_{0}))=\sup_{\Omega}P(u,...,\nabla^{m-1}u) \tag{2.4}\] _then \(P(u,\nabla u,...,\nabla^{m-1}u)\) is constant in \(\Omega\)._

Proof.: The proof is an immediate consequence of the strong maximum principle, since \(\mu(g(\nabla^{k}u))>0\) in \(\Omega\).

The most common choice of \(g\) in Theorem 2.2 above is the Euclidean norm. For example, if \(k=1\), \(g(\nabla u)=|\nabla u|\). If \(\mu>0\), \(\forall\,t\geq 0\), then the assumption \(\inf_{\overline{\Omega}}g(\nabla^{k}u)>0\) can be dropped.

**Remark 2.3**.: _The constancy of \(P-\)functions with a particular form hides geometric information on the level sets \(\{x\in\mathbb{R}^{n}\:|\:u(x)=t\}\) of the solution \(u\), such as the property of being surfaces of zero mean curvature (see Proposition 4.11 in [7])._

Next, we have a De Giorgi-type result as in Theorem 5.1 in [6], without excluding a priori potential singularities. We denote by \(\mathcal{H}^{1}\) the \(1\)-dimensional Hausdorff measure in \(\mathbb{R}^{n}\).

**Theorem 2.4**.: _Let \(u\) be a smooth entire solution (or subsolution) of_ \[F(u,\nabla u,\nabla^{2}u)=0\ \,\ \ u:\mathbb{R}^{n}\setminus S\to\mathbb{R} \tag{2.5}\] _with \(u_{x_{n}}>0\), except perhaps on a closed set \(S\) of potential singularities such that \(\mathcal{H}^{1}(S)=0\) and \(\mathbb{R}^{n}\setminus S\) is connected._
_Assume that \(P=P(u,|\nabla u|)\) is a \(P-\)function of (2.5) with \(\mu=\mu(|\nabla u|)\:,\:\mu(t)>0\:,\:\forall\:t>0\), such that \(P_{t}>0\) for \(t>0\:(P=P(s,t))\)._

_If there exists \(x_{0}\in\mathbb{R}^{n}\setminus S\) such that_ \[P(u(x_{0}),|\nabla u(x_{0})|)=\sup_{\mathbb{R}^{n}\setminus S}P(u,|\nabla u|)<+\infty \tag{2.6}\] _then there exist a function \(g:\mathbb{R}\to\mathbb{R}\:,\) a vector \(a\in\mathbb{R}^{n}\) with \(|a|=1\) and \(b\in\mathbb{R}\), such that_ \[u(x)=g(a\cdot x+b)\ \,\ x\in\mathbb{R}^{n}\:\text{ and }\:S=\emptyset. \tag{2.7}\]

Proof.: Let \(c_{0}=\sup_{\mathbb{R}^{n}\setminus S}P(u,|\nabla u|)\) and consider the set \[A=\{x\in\mathbb{R}^{n}\setminus S\::\:P(u,|\nabla u|)=c_{0}\} \tag{2.8}\] \(A\) is closed and, by assumption, \(A\neq\emptyset\). We are going to prove that \(A\) is open. Let \(x_{1}\in A\), and take \(\delta>0\) such that \(B_{\delta}(x_{1})\subset\mathbb{R}^{n}\setminus S\). Since \(u_{x_{n}}>0\), we have \(\inf_{\overline{B}_{\delta}(x_{1})}|\nabla u|>0\), and by Theorem 2.2 we conclude that \(P(u,|\nabla u|)\equiv c_{0}\) in \(B_{\delta}(x_{1})\); therefore \(A\) is open. Since \(\mathbb{R}^{n}\setminus S\) is connected, we have that \(A=\mathbb{R}^{n}\setminus S\), that is, \[P(u,|\nabla u|)\equiv c_{0}\ \,\ \forall\:x\in\mathbb{R}^{n}\setminus S \tag{2.9}\] and \(P_{t}>0\), thus \[|\nabla u|=Q(u)\ \,\ \ \text{in}\:\:\mathbb{R}^{n}\setminus S\ \,\ \text{for some function}\:\:Q:\mathbb{R}\to\mathbb{R} \tag{2.10}\] Now, if there exists \(x_{2}\in\mathbb{R}^{n}\setminus S\) such that \(Q(u(x_{2}))=0\), then \(|\nabla u(x_{2})|=0\), which contradicts the fact that \(u_{x_{n}}>0\). So \(Q(u)>0\;\;,\;\forall\;x\in\mathbb{R}^{n}\setminus S\), and we set \[v=G(u)\;\;\;,\;\;\mbox{where}\;\;G^{\prime}(s)=\frac{1}{Q(s)} \tag{2.11}\] \[\mbox{so that}\;\;|\nabla v|^{2}=1\;\;\;\mbox{in}\;\;\mathbb{R}^{n}\setminus S.\] Therefore, by the result in [5], we have that \[\begin{split}\mbox{either}&\;v(x)=a\cdot x+b\;\;,\;a\in\mathbb{R}^{n}\;\;\mbox{with}\;\;|a|=1\;\;\mbox{and}\;\;b\in\mathbb{R}\\ &\mbox{or}\;\;v(x)=|x-z_{0}|+c\;\;,\;z_{0}\in\mathbb{R}^{n}\;\;\mbox{and}\;\;c\in\mathbb{R}\end{split} \tag{2.12}\] and we conclude \[u(x)=g(a\cdot x+b)\;\;,\;a\in\mathbb{R}^{n}\;\;\mbox{with}\;\;|a|=1\;,\;\;b\in\mathbb{R}\;\;\mbox{and}\;\;g(s)=G^{-1}(s) \tag{2.13}\] The radially symmetric solutions with respect to a point \(z_{0}\in\mathbb{R}^{n}\) are excluded since \(u\) is monotone in \(x_{n}\).

## 3. General Criterion for obtaining \(P-\)functions

We now provide a general criterion for obtaining \(P-\)functions for a class of nonlinear elliptic equations of the form \(\Delta u=F(u,|\nabla u|^{2})\) that do not in general arise as variational problems. A special case, for \(F(u,|\nabla u|^{2})=W^{\prime}(u)\), is the Allen-Cahn equation.
For any functions \(P=P(s,t)\) and \(F=F(s,t)\), we define the quantity

\[\begin{split} I(s,t)=P_{t}(s,t^{2})P_{s}(s,t^{2})F(s,t^{2})+\frac{P_{s}^{2}(s,t^{2})}{2}+2t^{2}P_{t}^{2}(s,t^{2})F_{s}(s,t^{2})\\ -2t^{2}P_{t}(s,t^{2})P_{s}(s,t^{2})F_{t}(s,t^{2})\end{split} \tag{3.1}\]

**Theorem 3.1**.: _Let \(u:\Omega\subset\mathbb{R}^{n}\to\mathbb{R}\) be a smooth solution of_

\[\Delta u=F(u,|\nabla u|^{2}) \tag{3.2}\]

_and let \(P=P(s,t):\mathbb{R}^{2}\to\mathbb{R}\) be such that \(P_{t}>0\) for \(t>0\) and either_

\[\begin{cases} Hes_{(s,t)}\,P\,\,\mbox{is positive semidefinite and}\\ I(s,t)\geq 0\;\;\;,\;\;\forall\;\;(s,t)\in\mathbb{R}\times[0,+\infty)\\ \mbox{where}\;\;I=I(s,t)\;\,\mbox{is defined in}\;\;(3.1)\end{cases} \tag{3.3}\]

_or_

\[\begin{cases}P_{st}=0\;,\;P_{tt}\geq 0\;\;\mbox{and}\\ t^{2}P_{ss}(s,t^{2})P_{t}(s,t^{2})+I(s,t)\geq 0\;\;\;,\;\;\forall\;\;(s,t)\in\mathbb{R}\times[0,+\infty)\end{cases} \tag{3.4}\]

_hold._

_Then \(P=P(u,|\nabla u|^{2})\) is a \(P-\)function of (3.2)._

Proof.: We have

\[P_{x_{i}}=P_{s}u_{x_{i}}+2P_{t}\sum_{j=1}^{n}u_{x_{j}}u_{x_{j}x_{i}} \tag{3.5}\]

\[\Rightarrow\sum_{i=1}^{n}(P_{x_{i}}-P_{s}u_{x_{i}})^{2}=\sum_{i=1}^{n}(2P_{t}\sum_{j=1}^{n}u_{x_{j}}u_{x_{j}x_{i}})^{2}\leq 4P_{t}^{2}|\nabla u|^{2}\sum_{i,j=1}^{n}u_{x_{i}x_{j}}^{2}\]

\[\Rightarrow|\nabla P|^{2}-2P_{s}\nabla P\nabla u+P_{s}^{2}|\nabla u|^{2}\leq 4P_{t}^{2}|\nabla u|^{2}|Hes\;u|^{2} \tag{3.6}\]

and in addition,

\[P_{x_{i}}u_{x_{i}}=P_{s}u_{x_{i}}^{2}+2P_{t}\sum_{j=1}^{n}u_{x_{i}}u_{x_{j}}u_{x_{i}x_{j}}\]

\[\Rightarrow 2P_{t}\sum_{i,j=1}^{n}u_{x_{i}}u_{x_{j}}u_{x_{i}x_{j}}=\nabla P\nabla u-P_{s}|\nabla u|^{2} \tag{3.7}\]

Also, by (3.2) it holds that

\[\Delta u_{x_{j}}=F_{s}u_{x_{j}}+2F_{t}\sum_{k=1}^{n}u_{x_{k}}u_{x_{k}x_{j}} \tag{3.8}\]

Now, by (3.5) we have

\[P_{x_{i}x_{i}}=P_{ss}u_{x_{i}}^{2}+4P_{st}u_{x_{i}}\sum_{j=1}^{n}u_{x_{j}}u_{x_{j}x_{i}}+4P_{tt}(\sum_{j=1}^{n}u_{x_{j}}u_{x_{j}x_{i}})^{2}+P_{s}u_{x_{i}x_{i}}+2P_{t}[\sum_{j=1}^{n}(u_{x_{i}x_{j}}^{2}+u_{x_{j}}u_{x_{j}x_{i}x_{i}})] \tag{3.9}\]

So, if we assume (3.3), then since the hessian of \(P\) is positive semidefinite, utilizing (3.6) we have

\[\Delta P\geq P_{s}\Delta u+2P_{t}|Hes\;u|^{2}+2P_{t}\sum_{j=1}^{n}u_{x_{j}}\Delta u_{x_{j}} \tag{3.10}\]

\[P_{t}|\nabla u|^{2}\Delta P\geq P_{t}P_{s}|\nabla u|^{2}\Delta u+\frac{1}{2}|\nabla P|^{2}-P_{s}\nabla P\nabla u+\frac{P_{s}^{2}}{2}|\nabla u|^{2}+2P_{t}^{2}|\nabla u|^{2}\sum_{j=1}^{n}u_{x_{j}}\Delta u_{x_{j}}\]

and by (3.2) and (3.8) we obtain

\[\begin{split} P_{t}|\nabla u|^{2}\Delta P\geq P_{t}P_{s}|\nabla u|^{2}F(u,|\nabla u|^{2})+\frac{1}{2}|\nabla P|^{2}-P_{s}\nabla P\nabla u+\frac{P_{s}^{2}}{2}|\nabla u|^{2}\\ +2P_{t}^{2}|\nabla u|^{2}[F_{s}|\nabla u|^{2}+2F_{t}\sum_{j,k=1}^{n}u_{x_{k}}u_{x_{j}}u_{x_{j}x_{k}}]\end{split} \tag{3.11}\]

In addition, by (3.7) we have

\[\begin{split} P_{t}|\nabla u|^{2}\Delta P\geq P_{t}P_{s}|\nabla u|^{2}F(u,|\nabla u|^{2})+\frac{1}{2}|\nabla P|^{2}-P_{s}\nabla P\nabla u+\frac{P_{s}^{2}}{2}|\nabla u|^{2}\\ +2P_{t}^{2}F_{s}|\nabla u|^{4}+2P_{t}|\nabla u|^{2}F_{t}\nabla P\nabla u-2P_{t}P_{s}F_{t}|\nabla u|^{4}\\ \Rightarrow P_{t}|\nabla u|^{2}\Delta P\geq\frac{1}{2}|\nabla P|^{2}+(2P_{t}|\nabla u|^{2}F_{t}-P_{s})\nabla P\nabla u+|\nabla u|^{2}I(u,|\nabla u|)\end{split} \tag{3.12}\]

Therefore, since by (3.3) it holds that \(I(u,|\nabla u|)\geq 0\), we conclude

\[P_{t}|\nabla u|^{2}\Delta P-(2P_{t}|\nabla u|^{2}F_{t}-P_{s})\nabla P\nabla u\geq\frac{1}{2}|\nabla P|^{2}\geq 0 \tag{3.13}\]

Finally, if we assume (3.4)
instead of (3.3), equation (3.9) similarly becomes

\[\begin{split} P_{t}|\nabla u|^{2}\Delta P-(2P_{t}|\nabla u|^{2}F_{t}-P_{s})\nabla P\nabla u\geq\frac{1}{2}|\nabla P|^{2}\\ +P_{ss}P_{t}|\nabla u|^{4}+|\nabla u|^{2}I(u,|\nabla u|)\geq 0\end{split} \tag{3.14}\]

**Corollary 3.2**.: _Let \(u:\Omega\subset\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a smooth solution of_

\[\Delta u=f(u) \tag{3.15}\]

_and let \(P=P(s,t):\mathbb{R}^{2}\rightarrow\mathbb{R}\) be such that \(P_{t}>0\) for \(t>0\) and either_

\[\begin{cases}Hes_{(s,t)}\,P\,\,\,\text{is positive semidefinite and}\\ I(s,t):=P_{s}(s,t^{2})f(s)+\frac{P_{s}^{2}(s,t^{2})}{2P_{t}(s,t^{2})}+2P_{t}(s,t^{2})t^{2}f^{\prime}(s)\geq 0\,\,\,\,,\,\,\forall\,\,(s,t)\in\mathbb{R}\times[0,+\infty)\end{cases} \tag{3.16}\]

_or_

\[\begin{cases}P_{st}=0\,\,,\,\,P_{tt}\geq 0\,\,\,\text{and}\\ t^{2}P_{ss}(s,t^{2})+P_{s}(s,t^{2})f(s)+\frac{P_{s}^{2}(s,t^{2})}{2P_{t}(s,t^{2})}+2P_{t}(s,t^{2})t^{2}f^{\prime}(s)\geq 0\end{cases} \tag{3.17}\]

_hold._

_Then \(P=P(u,|\nabla u|^{2})\) is a \(P-\)function of (3.15)._

### Examples of \(P-\)functions

**(1)** The well known \(P-\)function of (3.15) is

\[P(u,|\nabla u|^{2})=\frac{|\nabla u|^{2}}{2}-F(u)\quad,\;\;\text{where}\;\;F^{\prime}(u)=f(u) \tag{3.18}\]

(see [14] or Chapter 5 in [18]). It is easy to see that (3.18) satisfies (3.17) in Corollary 3.2. The major application is the well known gradient bound

\[|\nabla u|^{2}\leq 2F(u) \tag{3.19}\]

that holds for every smooth and bounded entire solution of (3.15) with \(F\geq 0\) (see [14]). However, we cannot obtain gradient bounds from \(P-\)functions in general. Consider for example

\[\Delta u=e^{u}\;\;\text{and}\;\;P(s,t)=\frac{t}{2}-e^{s}+e^{-s}\;\;\text{(i.e. }\;P(u,|\nabla u|^{2})=\frac{|\nabla u|^{2}}{2}-e^{u}+e^{-u})\]

It is easy to see that \(P=\frac{|\nabla u|^{2}}{2}-e^{u}+e^{-u}\) satisfies (3.17), but the gradient bound \(\frac{|\nabla u|^{2}}{2}\leq e^{u}-e^{-u}\) does not hold: if we take solutions of \(\Delta u=e^{u}\) such that \(|\nabla u|^{2}=2e^{u}\) (i.e. that satisfy the equipartition of the energy), then we have a contradiction.

**(2)** Another general example of a \(P-\)function of (3.15) is

\[P(u,|\nabla u|^{2})=\frac{|\nabla u|^{4}}{2}+2\int_{0}^{u}(\int_{0}^{y}\sqrt{f(z)f^{\prime}(z)}dz)^{2}dy\quad,\;\;\text{if}\;\;f(t)f^{\prime}(t)\geq 0\;,\;\forall\;t\in\mathbb{R}\]

\[P(u,|\nabla u|^{2})=\frac{|\nabla u|^{4}}{2}-2\int_{0}^{u}(\int_{0}^{y}\sqrt{-f(z)f^{\prime}(z)}dz)^{2}dy\quad,\;\;\text{if}\;\;f(t)f^{\prime}(t)\leq 0 \tag{3.20}\]

and it satisfies condition (3.17) of Corollary 3.2. Note that the above example is not of the form \(P=g(u)|\nabla u|^{2}+h(u)\) that we see in [18] as the general form for \(P\) related to equation (3.15).

**(3)** The next example can be found in [15]. Let \(u\) be a solution of

\[\Delta u=u(k|\nabla u|^{2}+\lambda e^{-cu^{2}})\]

\[\text{and let}\;\;P(s,t)=\begin{cases}te^{-ks^{2}}+\frac{\lambda}{k+c}e^{-s^{2}(k+c)}&,\;k\neq-c\\ te^{cs^{2}}-\lambda s^{2}&,\;k=-c\end{cases} \tag{3.21}\]

Then \(P=P(u,|\nabla u|^{2})\) is a \(P-\)function of (3.21).

**(4)** Let \(u\) be a solution of

\[\Delta u=G(|\nabla u|^{2}-u)\;\;,\;\mbox{where}\;\;G(z)\leq\frac{1}{2}\;,\;\forall\;z\in\mathbb{R} \tag{3.22}\]

Then \(P=P(u,|\nabla u|^{2})=|\nabla u|^{2}-u\) is a \(P-\)function of (3.22). It is easy to see that \(P\) satisfies condition (3.4) of Theorem 3.1.

**(5)** The following example is in [16] (see Theorem 1).
Let \(u\) be a solution of

\[div(\Phi^{\prime}(|\nabla u|^{2})\nabla u)=\rho(|\nabla u|^{2})F^{\prime}(u) \tag{3.23}\]

with \(\Phi^{\prime}(t),\rho(t)>0\) and \(\Phi^{\prime}(t)+2t\Phi^{\prime\prime}(t)>0\;,\;\forall\,t\geq 0\). Consider the function

\[P(s,t)=\int_{0}^{t}\frac{\Phi^{\prime}(y)+2y\Phi^{\prime\prime}(y)}{\rho(y)}dy-2F(s) \tag{3.24}\]

Then \(P=P(u,|\nabla u|^{2})\) is a \(P-\)function of (3.23). Note that for \(\rho\equiv 1\), the above example generalizes the one in [6].

## 4. Gradient Bounds and properties of entire solutions of quasi-linear equations

In this section we will see that, utilizing the techniques of [6], we can obtain gradient bounds for solutions of equations of the form (3.2) and for a more general class of quasi-linear equations. To be more precise, for any explicit example of a \(P-\)function with a specific property, we obtain a particular gradient bound. Also, an analogue of Theorem 2.4 holds that gives De Giorgi-type results and, additionally, if \(P\) has a more specific form, we can drop the assumption \(u_{x_{n}}>0\).

Some of the regularity assumptions can be lessened in some cases, i.e. by assuming a priori that \(u\in W^{1,p}_{loc}(\mathbb{R}^{n})\cap L^{\infty}(\mathbb{R}^{n})\), as in assumption (i) in Theorem 1.6 in [6], and utilizing regularity results in [19] afterwards. However, our main goal is not the optimal regularity assumptions since we state the results in an abstract form. Therefore, we will assume that the solutions are smooth and satisfy an analog of assumption (ii) in Theorem 1.6 in [6].

**Assumption**.

\[\begin{array}{l}u\in C^{2}(\mathbb{R}^{n})\cap L^{\infty}(\mathbb{R}^{n})\;,\;\nabla u\in C^{\alpha}(\mathbb{R}^{n};\mathbb{R}^{n})\;\;\mbox{for some}\;\;\alpha\in(0,1)\\ \mbox{and}\;\;\mbox{there exists}\;\;C=C(||u||_{L^{\infty}(\mathbb{R}^{n})})>0\;\;\mbox{such that}\;\;|\nabla u(x)|\leq C\;,\;\mbox{for any}\;\;x\in\mathbb{R}^{n}\end{array} \tag{4.1}\]

The next theorem provides an a priori pointwise estimate for solutions of (3.2). In contrast to the gradient bounds in [6] and [7], the theorem below holds for any \(P-\)function that satisfies \(P(u,0)\leq 0\). When \(P\) is of the form \(P=P(u,\nabla u)\) we use the notation \(P(u,0)\) instead of \(P(u,0,...,0)\), and also we sometimes write \(P=P(u;x)\) for simplicity.

**Theorem 4.1**.: _Let \(u\) be an entire solution of_

\[F(u,\nabla u,\nabla^{2}u)=0 \tag{4.2}\]

_that satisfies assumption (4.1). If \(P=P(u,\nabla u)\) is a \(P-\)function of (4.2), with \(\mu=\mu(|\nabla u|)\), \(\mu(t)>0\;,\;\forall\;t>0\), such that \(P(s,0)\leq 0\)._

_Then_

\[P(u(x),\nabla u(x))\leq 0\;\;,\;x\in\mathbb{R}^{n} \tag{4.3}\]

Proof.: Let \(u\) be a solution of (4.2) that satisfies assumption (4.1) and consider the set

\[\mathcal{F}=\{v\;\;\mbox{is a solution of (4.2) that satisfies (4.1)}\;:\;|v(x)|\leq||u||_{L^{\infty}(\mathbb{R}^{n})}\;\;,\;\forall\;x\in\mathbb{R}^{n}\} \tag{4.4}\]

\(\mathcal{F}\) is non empty since \(u\in\mathcal{F}\). Let \(P\) be a \(P-\)function of (4.2), with \(\mu=\mu(|\nabla u|)\), \(\mu(t)>0\;,\forall t>0\), such that \(P(u,0)\leq 0\). For simplicity, we denote \(P=P(u;x)\) instead of \(P=P(u(x),\nabla u(x))\). Consider now

\[P_{0}=\sup\{P(v;x)\;|\;v\in\mathcal{F}\;,\;\;x\in\mathbb{R}^{n}\} \tag{4.5}\]

We claim that \(P_{0}\leq 0\) and from this we conclude. We argue by contradiction.
Suppose that \(P_{0}>0\). By (4.5) there exist two sequences, \((v_{k})_{k\in\mathbb{N}}\) in \(\mathcal{F}\) and \((x_{k})_{k\in\mathbb{N}}\) in \(\mathbb{R}^{n}\), such that

\[P_{0}-\frac{1}{k}\leq P(v_{k};x_{k})\leq P_{0}\quad,\;\;k\in\mathbb{N} \tag{4.6}\]

Let \(\tilde{v}_{k}(x)=v_{k}(x+x_{k})\). Since the equation (4.2) is translation invariant, we have that \(\tilde{v}_{k}\in\mathcal{F}\) and \(P(\tilde{v}_{k};0)=P(v_{k};x_{k})\), so that (4.6) can be rewritten as

\[P_{0}-\frac{1}{k}\leq P(\tilde{v}_{k};0)\leq P_{0}\quad,\;\;k\in\mathbb{N} \tag{4.7}\]

Since \(\nabla\tilde{v}_{k}\in C^{\alpha}(\mathbb{R}^{n})\), we have equicontinuity of \(\nabla\tilde{v}_{k}\) on any compact subset of \(\mathbb{R}^{n}\), so by the Ascoli-Arzela theorem together with a diagonal argument, we can extract from \((\tilde{v}_{k})_{k\in\mathbb{N}}\) a subsequence, denoted by \((\tilde{v}_{k}^{(k)})_{k\in\mathbb{N}}\), that converges with its first-order derivatives, uniformly on compact subsets of \(\mathbb{R}^{n}\). Denote by \(\tilde{v}\) the limit function. Then \(\tilde{v}\in\mathcal{F}\) and \(P(\tilde{v}_{k}^{(k)};0)\to P(\tilde{v};0)\) as \(k\to\infty\). From (4.7) we have \(P(\tilde{v};0)=P_{0}\).

Consider now the set

\[U=\{x\in\mathbb{R}^{n}\:|\:P(\tilde{v};x)=P_{0}\} \tag{4.8}\]

\(U\) is closed since \(P\) is continuous on \(\mathbb{R}^{n}\), and non empty since \(0\in U\). We will prove that \(U\) is also open. Let \(x_{0}\in U\); we observe that \(|\nabla\tilde{v}(x_{0})|\neq 0\), otherwise we would have

\[P_{0}=P(\tilde{v};x_{0})=P(\tilde{v}(x_{0}),\nabla\tilde{v}(x_{0}))=P(\tilde{v}(x_{0}),0)\leq 0\]

contradicting the fact that \(P_{0}>0\). By continuity, there exists \(\delta>0\) such that

\[\inf_{\overline{B}_{\delta}(x_{0})}|\nabla\tilde{v}|>0\;\;\text{and thus}\;\;\inf_{\overline{B}_{\delta}(x_{0})}\mu(|\nabla\tilde{v}|)>0 \tag{4.9}\]

and we conclude that \(P(\tilde{v};x)\equiv P_{0}\) in \(B_{\delta}(x_{0})\) by Theorem 2.2. So \(U\) is open, and it follows that \(U=\mathbb{R}^{n}\) by connectedness.

On the other hand, since \(|\tilde{v}|\leq||u||_{L^{\infty}(\mathbb{R}^{n})}\), it holds that \(\inf_{\mathbb{R}^{n}}|\nabla\tilde{v}|=0\) (otherwise, following the gradient flow, \(\tilde{v}\) would grow without bound). Let \((y_{j})_{j\in\mathbb{N}}\) be a sequence in \(\mathbb{R}^{n}\) such that \(|\nabla\tilde{v}(y_{j})|\to 0\) as \(j\to+\infty\). By the boundedness of \(\tilde{v}\) we also have \(\tilde{v}(y_{j})\to v_{0}\) up to a subsequence, which we still denote by \(y_{j}\), and so we obtain

\[0<P_{0}=\lim_{j\to\infty}P(\tilde{v}(y_{j}),\nabla\tilde{v}(y_{j}))=P(v_{0},0)\]

which contradicts the assumption \(P(s,0)\leq 0\). Therefore \(P_{0}\leq 0\) and we conclude.
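To make the hypotheses of Theorem 4.1 concrete, the following short sympy sketch (again an illustrative script of ours, not part of the original proof) checks symbolically that the classical choice (3.18), \(P(s,t)=\frac{t}{2}-F(s)\) with \(F^{\prime}=f\), satisfies condition (3.17) of Corollary 3.2 with equality. Since moreover \(P(s,0)=-F(s)\leq 0\) whenever \(F\geq 0\), Theorem 4.1 applies and recovers the gradient bound (3.19).

```python
import sympy as sp

s, t, y = sp.symbols('s t y', real=True)
f = sp.Function('f')

# Classical P-function P(s,t) = t/2 - F(s) with F(s) = int_0^s f (Example (1))
P = t / 2 - sp.Integral(f(y), (y, 0, s))

Ps  = sp.diff(P, s).doit()      # P_s = -f(s)
Pt  = sp.diff(P, t)             # P_t = 1/2 > 0 for t > 0
Pss = sp.diff(P, s, 2).doit()   # P_ss = -f'(s)

# P is linear in t and P_st = 0, so evaluating at (s, t^2) changes nothing;
# the quantity below is the left-hand side of condition (3.17)
expr = t**2 * Pss + Ps * f(s) + Ps**2 / (2 * Pt) + 2 * Pt * t**2 * sp.diff(f(s), s)
print(sp.simplify(expr))        # -> 0, so (3.17) holds (with equality)

# Hypothesis of Theorem 4.1: P(s,0) = -F(s), which is <= 0 whenever F >= 0
print(P.subs(t, 0))             # -> -Integral(f(y), (y, 0, s))
```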
Next, we explore some additional consequences when the \(P-\)function related to (4.2) is of the form

\[\begin{split} P(s,t)=B(t)-\Gamma(s)\\ \text{such that}\;\;B^{\prime}(t)>0\:,\;B^{\prime\prime}(t)\geq 0\:,\;\text{for}\;\;t>0\;,\;B(0)=0\;\;\text{and}\;\;\Gamma(s)\geq 0\\ \text{and}\;\;\mu=\mu(|\nabla u|)\end{split} \tag{4.10}\]

Then the condition (3.4) becomes

\[\begin{split}-t^{2}\Gamma^{\prime\prime}(s)B^{\prime}(t^{2})-B^{\prime}(t^{2})\Gamma^{\prime}(s)F(s,t^{2})+\frac{(\Gamma^{\prime}(s))^{2}}{2}+2t^{2}(B^{\prime}(t^{2}))^{2}F_{s}(s,t^{2})\\ +2t^{2}B^{\prime}(t^{2})\Gamma^{\prime}(s)F_{t}(s,t^{2})\geq 0\end{split} \tag{4.11}\]

For \(P\) as in (4.10), the gradient bound in Theorem 4.1 becomes

\[|\nabla u|^{2}\leq\Psi(u)\;\;\;,\;\;\text{where}\;\;\Psi(u)=B^{-1}(\Gamma(u)) \tag{4.12}\]

and we observe that in Example (2) above, for solutions of (3.15),

\[P(u,|\nabla u|^{2})=\frac{|\nabla u|^{4}}{2}-2\int_{0}^{u}(\int_{0}^{y}\sqrt{-f(z)f^{\prime}(z)}dz)^{2}dy\quad,\ \ \text{if}\,\ f(t)f^{\prime}(t)\leq 0 \tag{4.13}\]

satisfies \(P(u,0)\leq 0\). Therefore, we have

**Corollary 4.2**.: _Let \(u\) be a smooth and bounded entire solution to_

\[\begin{split}\Delta u=f(u)\\ \text{where}\,\ f\in C^{1,\alpha}(\mathbb{R})\,\ \text{and}\,\ f(t)f^{\prime}(t)\leq 0.\end{split} \tag{4.14}\]

_Then_

\[\frac{|\nabla u|^{4}}{4}\leq\int_{0}^{u}(\int_{0}^{y}\sqrt{-f(z)f^{\prime}(z)}dz)^{2}dy \tag{4.15}\]

Proof.: By elliptic regularity theory we have that \(u\in C^{2,\alpha}(\mathbb{R}^{n})\) and that \(|\nabla u|\) is bounded in \(\mathbb{R}^{n}\). It suffices to prove that \(P\) defined in (4.13) is a \(P-\)function of (4.14), and then the conclusion is a direct application of Theorem 4.1. We have that \(P\) satisfies (3.17): writing \(P(s,t)=\frac{t^{2}}{2}-2\int_{0}^{s}(\int_{0}^{y}\sqrt{-f(z)f^{\prime}(z)}dz)^{2}dy=\frac{t^{2}}{2}+q(s)\), we get \(P_{t}=t>0\) for \(t>0\), \(P_{tt}\geq 0\), \(P_{st}=0\) and \(\mu=P_{t}(u,|\nabla u|^{2})|\nabla u|^{2}=\frac{1}{2}|\nabla u|^{4}\). Finally,

\[t^{2}P_{ss}(s,t^{2})+P_{s}(s,t^{2})f(s)+2P_{t}(s,t^{2})t^{2}f^{\prime}(s)=2t^{4}f^{\prime}(s)+t^{2}q^{\prime\prime}(s)+q^{\prime}(s)f(s)\geq 0\]

since the above polynomial has zero discriminant.

**Remark 4.3**.: _(1) Note that in Example (1) above, \(P=\frac{|\nabla u|^{2}}{2}-e^{u}+e^{-u}\), which does not satisfy the gradient bound, fails to satisfy the condition \(P(s,0)\leq 0\) for \(s<0\)._

_(2) We also observe that there are cases where the bound (4.15) is sharper than (1.2). Consider, for example, the potential \(W(u)=au^{k}\), \(a>0\) and \(k\in(0,\frac{3-\sqrt{3}}{6})\cup(\frac{3+\sqrt{3}}{6},1)\), and consider positive solutions of (1.1) with \(f(u)=W^{\prime}(u)\)._
_Then we have_

\[\int_{0}^{u}(\int_{0}^{y}\sqrt{-f(z)f^{\prime}(z)}dz)^{2}dy=\frac{a^{2}k(1-k)}{2(k-\frac{1}{2})^{2}}u^{2k}<a^{2}u^{2k}=W^{2}(u)\]

_that is,_

\[\frac{1}{2}|\nabla u|^{2}\leq\frac{a\sqrt{k(1-k)}}{\sqrt{2}\,|k-\frac{1}{2}|}u^{k}<au^{k}=W(u)\ \ \,\ \ \text{for}\,\ k\in(0,\frac{3-\sqrt{3}}{6})\cup(\frac{3+\sqrt{3}}{6},1)\]

In addition, we have the following gradient bounds for Examples (3) and (4).

**Corollary 4.4**.: _Let \(u\) be an entire solution of_

\[\Delta u=u(k|\nabla u|^{2}+\lambda e^{-cu^{2}}) \tag{4.16}\]

_that satisfies (4.1)._

_Then_

\[|\nabla u|^{2}\leq\begin{cases}-\frac{\lambda}{k+c}e^{-cu^{2}}&,\text{ if }\,\lambda(k+c)<0\\ \lambda u^{2}e^{-cu^{2}}&,\text{ if }\,k=-c\,\text{ and }\,\lambda\geq 0\end{cases} \tag{4.17}\]

Proof.: By [15], we have that

\[P(s,t)=\begin{cases}te^{-ks^{2}}+\frac{\lambda}{k+c}e^{-s^{2}(k+c)}&,\,k\neq-c\\ te^{cs^{2}}-\lambda s^{2}&,\,k=-c\end{cases} \tag{4.18}\]

is a \(P-\)function of (4.16) with \(\mu(t)>0\), \(\forall\,t\geq 0\), and \(P(s,0)\leq 0\) in both cases, since either \(\lambda(k+c)<0\) or \(k=-c\) and \(\lambda\geq 0\). Therefore by Theorem 4.1 we conclude that \(P(u,|\nabla u|^{2})\leq 0\,\,\,\forall x\in\mathbb{R}^{n}\), and we obtain the gradient bound (4.17).

**Remark 4.5**.: _For \(\lambda=0\), Corollary 4.4 says that \(|\nabla u|\equiv 0\) and thus \(u\) is a constant. That is a Liouville-type result and can also be obtained from Liouville's theorem by setting \(v=g(u)\,,\) where \(\,g(y)=\int_{0}^{y}e^{-kz^{2}/2}dz\); then \(\Delta v=0\) and \(v\) is bounded since \(u\) is bounded._

**Corollary 4.6**.: _Let \(u\) be a non negative entire solution of_

\[\Delta u=G(|\nabla u|^{2}-u)\]
\[\text{where }\,G:\mathbb{R}\to\mathbb{R}\,\text{ is such that }\,G(z)\leq\frac{1}{2}\,,\,\,z\in\mathbb{R}, \tag{4.19}\]

_that satisfies (4.1)._

_Then_

\[|\nabla u|^{2}\leq u \tag{4.20}\]

Proof.: We have that the function

\[P(s,t)=t-s\]

satisfies the condition (3.4) with \(\mu=P_{t}(u,|\nabla u|^{2})|\nabla u|^{2}=|\nabla u|^{2}\), and also \(P(u,0)=-u\leq 0\), since \(u\) is non negative by assumption. Therefore we conclude by Theorem 4.1.

Another important application of Theorem 4.1, utilizing Example (5), is the following gradient bound.

**Corollary 4.7**.: _Let \(u\) be an entire solution of_

\[div(\Phi^{\prime}(|\nabla u|^{2})\nabla u)=\rho(|\nabla u|^{2})F^{\prime}(u)\;\;,\;F\geq 0 \tag{4.21}\]

_that satisfies assumption (4.1), with \(\Phi^{\prime}(t),\rho(t)>0\) and \(\Phi^{\prime}(t)+2t\Phi^{\prime\prime}(t)>0\;,\;\forall\;t\geq 0\)._

_Then_

\[\begin{split}|\nabla u|^{2}\leq\Psi(u)\;\;,\;\text{where}\;\;\Psi(u)=Q^{-1}(2F(u))\\ \text{and}\;\;Q(t)=\int_{0}^{t}\frac{\Phi^{\prime}(y)+2y\Phi^{\prime\prime}(y)}{\rho(y)}dy\end{split} \tag{4.22}\]

Proof.: By Theorem 1 in [16], we have that \(P(u,|\nabla u|^{2})=Q(|\nabla u|^{2})-2F(u)\) is a \(P-\)function of (4.21) with \(\mu(t)>0\;,\;\forall\;t\geq 0\), and it satisfies \(P(u,0)\leq 0\) since \(F\geq 0\). Thus we apply Theorem 4.1 and we conclude.

**Remark 4.8**.: _Note that the gradient bound in Corollary 4.7 is a considerably more general form of the gradient bound in [6]._

For \(P-\)functions of the form (4.10), we have a Liouville-type result.

**Theorem 4.9**.: _Let \(u\) be an entire solution of (4.2) that satisfies assumption (4.1) and let \(P\) be a \(P-\)function of the form (4.10). If there exists \(x_{0}\in\mathbb{R}^{n}\) such that \(\Gamma(u(x_{0}))=0\), then \(u\equiv\text{const. in }\mathbb{R}^{n}\)._

Proof.: We argue as in the proof of Theorem 1.8 in [6] with slight modifications.
For the convenience of the reader we provide the details. Suppose that \(\Gamma(u(x_{0}))=0\), let \(u_{0}=u(x_{0})\) and consider the set

\[V=\{x\in\mathbb{R}^{n}\;|\;u(x)=u_{0}\} \tag{4.23}\]

\(V\) is a closed set and, by the assumption, non empty. Let \(x_{1}\in V\) and consider the function \(\phi(t)=u(x_{1}+t\omega)-u_{0}\), where \(|\omega|=1\) is arbitrarily fixed. We have \(|\phi^{\prime}(t)|=|\nabla u(x_{1}+t\omega)|\). By the gradient bound in Theorem 4.1 we have

\[|\nabla u|^{2}\leq\Psi(u)\;\;,\;\text{where}\;\;\Psi(s)=B^{-1}(\Gamma(s)) \tag{4.24}\]

Since \(\Psi\in C^{2}(\mathbb{R})\) and \(\Psi(u_{0})=0\), we have \(\Psi(u)=O(|u-u_{0}|^{2})\), as \(|u-u_{0}|\to 0\). So, we conclude from (4.24) that \(|\phi^{\prime}(t)|\leq C|\phi(t)|\) for \(t\) small enough. Since \(\phi(0)=0\), we must have \(\phi\equiv 0\) on \([-\delta,\delta]\), for some \(\delta>0\). Thus \(V\) is open, which gives that \(V=\mathbb{R}^{n}\), that is, \(u\equiv u_{0}\) in \(\mathbb{R}^{n}\).

Finally, we now prove a different version of Theorem 2.4 for the equation (3.2), assuming that \(P\) is of the form (4.10). In this case we can drop the assumption that \(u\) is monotone with respect to \(x_{n}\), but we do not allow any a priori singularities.

**Theorem 4.10**.: _Let \(u\) be an entire solution of_

\[F(u,\nabla u,\nabla^{2}u)=0 \tag{4.25}\]

_that satisfies assumption (4.1) and let \(P=P(u,|\nabla u|^{2})\) be a \(P-\)function of (4.25) of the form (4.10). If there exists \(x_{0}\in\mathbb{R}^{n}\) such that_

\[P(u(x_{0}),|\nabla u(x_{0})|^{2})=0 \tag{4.26}\]

_then there exists a function \(g:\mathbb{R}\to\mathbb{R}\) such that_

\[\begin{array}{ll}\mbox{either}&u(x)=g(a\cdot x+b)\,\ a\in\mathbb{R}^{n}\ \mbox{with}\ \,|a|=1,\,\ b\in\mathbb{R}\\ \mbox{or}&u(x)=g(|x-z_{0}|+c)\,\ z_{0}\in\mathbb{R}^{n}\ \ \mbox{and}\ \,c\in\mathbb{R}\end{array} \tag{4.27}\]

Proof.: By Theorem 4.1, we have that \(P(u,|\nabla u|^{2})\leq 0\). As in the proof of Theorem 2.4, we begin by considering the set

\[A=\{x\in\mathbb{R}^{n}\,:\,P(u,|\nabla u|^{2})=0\} \tag{4.28}\]

\(A\) is closed and by the assumption \(A\neq\emptyset\). We are going to prove that \(A\) is open. Let \(x_{1}\in A\). If \(\nabla u(x_{1})=0\), we obtain by the form \(P(s,t)=B(t)-\Gamma(s)\) that \(P(u(x_{1}),0)=-\Gamma(u(x_{1}))=0\). By Theorem 4.9, we conclude that \(u\equiv u(x_{1})\) and \(\nabla u\equiv 0\), and hence \(P\equiv 0\). On the other hand, if \(\nabla u(x_{1})\neq 0\), we have \(\inf_{\overline{B}_{\delta_{1}}(x_{1})}|\nabla u|>0\) for some \(\delta_{1}>0\), and by Theorem 2.2 we conclude that \(P(u,|\nabla u|^{2})\equiv 0\) in \(B_{\delta_{1}}(x_{1})\), and therefore \(A\) is open. By connectedness, we have that \(A=\mathbb{R}^{n}\), that is,

\[P(u,|\nabla u|^{2})\equiv 0\ \,\ \forall\,x\in\mathbb{R}^{n} \tag{4.29}\]

and \(P_{t}=B^{\prime}(t)>0\), thus

\[|\nabla u|^{2}=\Psi(u)\ \ \,\ \ \mbox{in}\ \,\ \mathbb{R}^{n}\ \,\ \mbox{where}\ \,\ \Psi(u)=B^{-1}(\Gamma(u)) \tag{4.30}\]

Now, if there exists \(x_{2}\in\mathbb{R}^{n}\) such that \(\Psi(u(x_{2}))=0\), so that \(|\nabla u(x_{2})|=0\), again by Theorem 4.9 we have that \(u\equiv u(x_{2})\).
If, on the other hand, \(\Psi(u(x))>0\ \,\ \forall\,x\in\mathbb{R}^{n}\), we set

\[\begin{array}{ll}v=G(u)&,\ \ \mbox{where}\ \,G^{\prime}(s)=\frac{1}{\sqrt{\Psi(s)}}\\ &\mbox{and}\ \,|\nabla v|^{2}=1\ \ \mbox{in}\ \,\ \mathbb{R}^{n}\end{array} \tag{4.31}\]

Therefore, by the result in [5], we have that

\[\begin{array}{ll}\mbox{either}&v(x)=a\cdot x+b\ \,\ a\in\mathbb{R}^{n}\ \ \mbox{with}\ \,|a|=1\,\ \mbox{and}\ \,b\in\mathbb{R}\\ &\mbox{or}\ \,v(x)=|x-z_{0}|+c\ \,\ z_{0}\in\mathbb{R}^{n}\ \ \mbox{and}\ \,c\in\mathbb{R}\end{array} \tag{4.32}\]

So we conclude that

\[\begin{array}{ll}\mbox{either}&u(x)=g(a\cdot x+b)\ \,\ a\in\mathbb{R}^{n}\ \ \mbox{with}\ \,|a|=1\ \,\ b\in\mathbb{R}\ \ \mbox{where}\ \,g(s)=G^{-1}(s)\\ &\mbox{or}\ \ \ \ u(x)=g(|x-z_{0}|+c)\ \,\ z_{0}\in\mathbb{R}^{n}\ \ \mbox{and}\ \,c\in\mathbb{R}\end{array} \tag{4.33}\]

## 5. A Liouville-type property and nonexistence results

In this section we will see that if we are able to find a \(P-\)function, related to any equation (4.2), of the form \(P=g(\nabla u)\), where \(g\geq 0\) and \(g(t_{1},...,t_{n})=0\) iff \((t_{1},...,t_{n})=(0,...,0)\), then either the solutions are constant or there are no solutions that satisfy assumption (4.1). The most common example is when \(P=|\nabla u|^{2}\). In particular we have

**Theorem 5.1**.: _Let \(u\) be an entire solution of_

\[F(u,\nabla u,\nabla^{2}u)=0 \tag{5.1}\]

_that satisfies assumption (4.1). If \(P=P(u,\nabla u)\) is a \(P-\)function of (5.1), with \(\mu=\mu(|\nabla u|)\;,\;\mu(t)>0\;,\;\forall\;t>0\), and \(P\) is such that_

\[P=g(\nabla u)\;\;\;,\;\mbox{where}\;\,g:\mathbb{R}^{n}\to[0,+\infty)\;\;\mbox{and}\;\;\{x\in\mathbb{R}^{n}\;:\;g(x)=0\}=\{(0,...,0)\} \tag{5.2}\]

_Then \(u\) is constant._

Proof.: The result of Theorem 5.1 is an immediate application of Theorem 4.1: since \(P(u,0)=g(0,...,0)=0\leq 0\), Theorem 4.1 gives \(g(\nabla u)\leq 0\), and thus \(g(\nabla u)\equiv 0\), which gives that \(u\) is constant.

**Corollary 5.2**.: _Let \(u\) be an entire solution of_

\[\Delta u=f(u)\;\;\;,\;\mbox{where}\;\;f^{\prime}(u)\geq 0 \tag{5.3}\]

_that satisfies (4.1)._

_Then \(u\) is constant._

Proof.: Consider \(P(u,|\nabla u|^{2})=|\nabla u|^{2}\) (i.e. \(P(s,t)=t\)) and observe that by the proof of Theorem 3.1, in particular by (3.14), we have

\[|\nabla u|^{2}\Delta P\geq\frac{1}{2}|\nabla P|^{2}+|\nabla u|^{4}f^{\prime}(u)\geq 0 \tag{5.4}\]

since \(I(u,|\nabla u|)=2|\nabla u|^{2}f^{\prime}(u)\), and so \(\mu=|\nabla u|^{2}\). Therefore by Theorem 5.1 we conclude that \(u\) is constant.

**Remark 5.3**.: _The assumption \(f^{\prime}(u)\geq 0\) implies stability for any solution, since_

\[\int_{\Omega}|\nabla\phi|^{2}+f^{\prime}(u)\phi^{2}\geq 0\;\;,\;\mbox{for any open}\;\;\Omega\subset\mathbb{R}^{n}\;\;\mbox{and}\;\;\phi\in C^{1}_{c}(\Omega).\]

_So this assumption makes the problem quite simple. When the condition \(f^{\prime}\geq 0\) is not satisfied, the solutions of (5.3) are not necessarily stable, and the study of stable solutions in such cases is very important. Stable solutions have been thoroughly studied for semilinear elliptic equations in [3], [4] and [8] among others._

Note that if \(u\) is a solution of (5.3) in a domain \(\Omega\), by (5.4) we have a mean value inequality for the gradient of \(u\) whenever the gradient does not vanish in the domain. Corollary 5.2 can be generalized to equations of the form \(div(\Phi^{\prime}(|\nabla u|^{2})\nabla u)=F^{\prime}(u)\), studied in [6], assuming that \(F^{\prime\prime}(u)\geq 0\); in fact even fewer assumptions than (4.1) are needed.
We can just assume (i) or (ii) from Theorem 1.6 in [6] and the Liouville-type theorem below still holds. In particular, let \(\Phi\) be as in [6]; we have the following

**Corollary 5.4**.: _Let \(u\) be an entire solution of_

\[div(\Phi^{\prime}(|\nabla u|^{2})\nabla u)=F^{\prime}(u)\ \,\ \mbox{where}\,\ F^{\prime\prime}(u)\geq 0 \tag{5.5}\]

_that satisfies (4.1)._

_Then \(u\) is constant._

Proof.: Consider \(P=|\nabla u|^{2}\). Arguing as in the proof of Theorem 2.2 in [6], we obtain

\[\sum_{i,j=1}^{n}(a_{ij}(\nabla u)P_{x_{i}})_{x_{j}}\geq 2F^{\prime\prime}(u)|\nabla u|^{2}+\frac{2|\nabla u|^{2}\Phi^{\prime\prime}(|\nabla u|^{2})+\Phi^{\prime}(|\nabla u|^{2})}{4|\nabla u|^{2}}|\nabla P|^{2} \tag{5.6}\]

where \(a_{ij}(\sigma)=2\Phi^{\prime\prime}(|\sigma|^{2})\sigma_{i}\sigma_{j}+\Phi^{\prime}(|\sigma|^{2})\delta_{ij}\). Thus, \(P=|\nabla u|^{2}\) is a \(P-\)function of (5.5), since \(F^{\prime\prime}\geq 0\), and by Theorem 5.1 we conclude.

In [6], an analogous result to that of Corollary 5.4 has been proved using a monotonicity formula for the energy, for positive solutions of (5.5) that vanish at infinity (see Theorem 4.5 in [6]).

We will now see a nonexistence result. If (5.1) does not admit constant solutions, then Theorem 5.1 states that the equation (5.1) does not admit any entire solution that satisfies assumption (4.1).

**Corollary 5.5**.: _Consider the equation_

\[\begin{split}&\Delta u=G(|\nabla u|^{2})\ \,\ \mbox{where}\,\ u:\mathbb{R}^{n}\to\mathbb{R}\\ &\mbox{and}\,\ G:\mathbb{R}\to\mathbb{R}\,\ \mbox{is such that}\,\ G(0)\neq 0\end{split} \tag{5.7}\]

_Then there does not exist any solution of (5.7) that satisfies assumption (4.1)._

Proof.: Suppose that there exists a solution of (5.7) that satisfies assumption (4.1). Let \(P(u,|\nabla u|^{2})=|\nabla u|^{2}\), i.e. \(P(s,t)=t\); then \(P\) satisfies condition (3.4) since \(I(u,|\nabla u|)=0\). Therefore by Theorem 5.1 we conclude that \(u\) is a constant, and we have a contradiction since \(G(0)\neq 0\).

## 6. \(P-\)functions for the Monge-Ampere equation

We will now prove that for any function such that the determinant of its hessian is positive, we can obtain a class of functions that are \(P-\)functions of this differential inequality. Then we obtain a Mean Value type property for the Monge-Ampere equation.

**Proposition 6.1**.: _Let \(u:\Omega\subset\mathbb{R}^{n}\to\mathbb{R}\) be a smooth function that satisfies_

\[det(\nabla^{2}u)>0\ \,\ \text{in}\ \,\Omega \tag{6.1}\]

_and let \(g=g(t_{1},...,t_{n})\) be such that its hessian \(Hes_{t}g\) is positive semidefinite._

_Then \(g=g(u_{x_{1}},...,u_{x_{n}})\) is a \(P-\)function of (6.1)._

Proof.: We have

\[\begin{split} g_{x_{i}}&=\sum_{j=1}^{n}g_{t_{j}}u_{x_{j}x_{i}}\ \,\ i=1,...,n.\\ &\Leftrightarrow G_{x}=(\nabla^{2}u)G_{t}\\ \text{where}\ \,G_{x}=(g_{x_{1}},...,g_{x_{n}})^{T}\,,\ G_{t}=(g_{t_{1}},...,g_{t_{n}})^{T}\ \,\text{and}\ \,\nabla^{2}u=Hes\,u\end{split} \tag{6.2}\]

By (6.1), \((\nabla^{2}u)\) is invertible, and thus

\[\begin{split} G_{t}&=(\nabla^{2}u)^{-1}G_{x}\\ g_{t_{i}}&=\sum_{k=1}^{n}A_{k}^{i}g_{x_{k}}\ \,\ A_{k}^{i}=A_{k}^{i}(\nabla^{2}u)\end{split} \tag{6.3}\]

Also,

\[g_{x_{i}x_{i}}=\sum_{j=1}^{n}\sum_{k=1}^{n}g_{t_{j}t_{k}}u_{x_{j}x_{i}}u_{x_{k}x_{i}}+\sum_{j=1}^{n}g_{t_{j}}u_{x_{j}x_{i}x_{i}}\geq\sum_{j=1}^{n}g_{t_{j}}u_{x_{j}x_{i}x_{i}} \tag{6.4}\]

since \(Hes_{t}g\) is positive semidefinite.
So, by (6.3) we obtain

\[\begin{split}\Delta g&\geq\sum_{j=1}^{n}g_{t_{j}}\Delta u_{x_{j}}=\sum_{j=1}^{n}\sum_{k=1}^{n}A_{k}^{j}g_{x_{k}}\Delta u_{x_{j}}=\sum_{k=1}^{n}g_{x_{k}}B_{k}\\ \text{where}\ \,B_{k}&=\sum_{j=1}^{n}A_{k}^{j}\Delta u_{x_{j}}\end{split} \tag{6.5}\]

and thus,

\[L\,g\leq 0\ \ \,\ \ \text{where}\ \,\,L=-\Delta+\sum_{i=1}^{n}B_{i}\partial_{x_{i}} \tag{6.6}\]

**Corollary 6.2**.: _Let \(u:\Omega\subset\mathbb{R}^{n}\to\mathbb{R}\) be a smooth solution of_

\[det(\nabla^{2}u)=f \tag{6.7}\]

_where \(f:\mathbb{R}\to(0,+\infty)\), and let \(g:\mathbb{R}^{n}\to\mathbb{R}\) be such that its Hessian is positive semidefinite._

_Then_

\[\max_{\overline{\Omega}}g(\nabla u)\leq\max_{\partial\Omega}g(\nabla u) \tag{6.8}\]

_In addition, for any \(x_{0}\in\Omega\) there exists an increasing family \(D_{R}(x_{0})\) which satisfies_

\[B_{cR}(x_{0})\subset D_{R}(x_{0})\subset B_{CR}(x_{0})\;,\;\text{with}\,\;c,C\,\;\text{depending only on}\,\;n\]

_and for \(R<S\), we have_

\[|\nabla u(x_{0})|^{2}\geq\frac{1}{|D_{R}(x_{0})|}\int_{D_{R}(x_{0})}|\nabla u|^{2}\geq\frac{1}{|D_{S}(x_{0})|}\int_{D_{S}(x_{0})}|\nabla u|^{2} \tag{6.9}\]

Proof.: By Proposition 6.1 and the maximum principle we conclude. For the mean value type inequality, we set \(g(t_{1},...,t_{n})=t_{1}^{2}+...+t_{n}^{2}\) and apply Theorem 6.3 in [2].

**Remark 6.3**.: _(1) Such mean value type inequalities hold in general for any \(P-\)function related to any equation._

_(2) If \(\Omega=\mathbb{R}^{n}\) and we consider \(g(t_{1},...,t_{n})=t_{1}^{2}+...+t_{n}^{2}\) in Proposition 6.1, then, as mentioned in the previous section, by Theorem 5.1 we have that there is no solution of (6.7) that satisfies the assumption (4.1). Indeed, if we assume that \(u\) is an entire solution of (6.7) that satisfies (4.1), then by Theorem 5.1 we have that \(u\) is constant in \(\mathbb{R}^{n}\), which contradicts the fact that its Hessian has positive determinant. We can also see this as follows: if \(|\nabla u|\) is bounded in \(\mathbb{R}^{n}\), then \(\nabla u\) cannot be a global diffeomorphism and thus \(det(\nabla^{2}u)\) cannot be strictly positive in \(\mathbb{R}^{n}\)._

_(3) Note that Theorem 2.2 holds for equation (6.7) without assuming that \(\inf_{\overline{\Omega}}|\nabla u|>0\). The conclusion says that \(u\) will be a solution of the Eikonal equation \(|\nabla u|^{2}=c_{0}\). If in addition \(u_{x_{n}}>0\) and \(F_{i}=\frac{u_{x_{i}}}{u_{x_{n}}}\), by Proposition 2.1 in [10], the function \(F=(F_{1},...,F_{n-1})\) will satisfy the Isobar Euler equation._

## 7. Higher order nonlinear equations

In this last section, we will provide examples of \(P-\)functions for higher order nonlinear equations and their applications. In particular, analogous versions of Theorems 4.1 and 5.1 allow us to obtain properties and pointwise estimates of entire solutions even in this case. Moreover, we establish a method of extracting pointwise estimates for nonlinear equations of order greater than two, through the mean value properties of the \(P-\)functions or by utilizing a bound analogous to that of Theorem 4.1 for higher order equations. This method can be applied to many other classes of higher order nonlinear equations. We begin by stating the analog of Theorem 4.1 for equations of general order.
**Assumption**.

\[\begin{array}{l}u\in C^{m}(\mathbb{R}^{n})\cap L^{\infty}(\mathbb{R}^{n})\ \,\ \nabla^{m-1}u\in C^{\alpha}(\mathbb{R}^{n})\ \ \mbox{for some}\ \ \alpha\in(0,1)\\ \mbox{and there exists}\ \ C>0\ \ \mbox{such that}\ \left|\nabla^{l}u\right|\leq C\ \,\ l=1,...,m-1.\end{array} \tag{7.1}\]

**Theorem 7.1**.: _Let \(u\) be an entire solution or subsolution of_

\[F(u,\nabla u,...,\nabla^{m}u)=0 \tag{7.2}\]

_that satisfies assumption (7.1) and let \(P=P(u,...,\nabla^{m-1}u)=P(u;x)\) be a \(P-\)function of (7.2) such that one of the following holds:_

_(i) \(\mu=\mu(g(\nabla^{k}u))\) for some \(g:\mathbb{R}^{n^{k}}\rightarrow\mathbb{R}\) with \(g(z)>0\ \forall\ z\neq 0\), \(g((0,...,0))=0\), \(\mu(t)>0\ \forall\ t>0\), and \(P(u;x)\leq 0\) when \(\nabla^{k}u=(0,...,0)\), \(k\in\{1,...,m-1\}\);_

_(ii) \(\mu=\mu(g(\nabla^{k}u))\) for some \(g:\mathbb{R}^{n^{k}}\rightarrow\mathbb{R}\) with \(g(z)>0\ \forall\ z\neq 0\), \(g((0,...,0))=0\), \(\mu(t)>0\ \forall\ t>0\), \(P(u;x)\leq 0\) when \(\nabla^{l}u=(0,...,0)\), \(k\neq l\), \(k,l\in\{1,...,m-1\}\), and \(g(\nabla^{k}u)>0\ \forall\ x\in\mathbb{R}^{n}\)._

_Then \(P(u,...,\nabla^{m-1}u)\leq 0\ \ \forall\ x\in\mathbb{R}^{n}.\)_

Proof.: The proof is similar to that of Theorem 4.1 with minor modifications.

Note that Theorem 4.1 is a special case of Theorem 7.1, assuming (i), with \(m=2\), \(k=1\) and \(g(z)=|z|\). The additional assumption \(g(\nabla^{k}u)>0\ \forall\ x\in\mathbb{R}^{n}\) in (ii) is necessary in order to utilize Theorem 2.2.

We now provide the analog of Theorem 5.1 in the higher order case.

**Theorem 7.2**.: _Let \(u\) be an entire solution of_

\[F(u,\nabla u,...,\nabla^{m}u)=0 \tag{7.3}\]

_and let \(P=P(u,...,\nabla^{m-1}u)=P(u;x)\) be a \(P-\)function of (7.3) such that \(\mu=\mu(g(\nabla^{k}u))\) for some \(g:\mathbb{R}^{n^{k}}\to\mathbb{R}\:,\:g(z)>0\:,\forall\:z\neq 0\:,\:g((0,...,0))=0\:,\:\mu(t)>0\:,\:\forall\:t>0\) and_

\[\begin{gathered} P=H(\nabla^{k}u)\:\:,\:\text{where}\:\:H\::\mathbb{R}^{n^{k}}\to[0,+\infty)\\ \text{and}\:\:\{H=0\}=\{0\in\mathbb{R}^{n^{k}}\}\:\:,\:k\in\{1,...,m-1\}\end{gathered} \tag{7.4}\]

_Then \(\nabla^{k-1}u\) is a constant._

Proof.: The proof is a direct consequence of Theorem 7.1: since \(P=H(\nabla^{k}u)\geq 0\) and, by Theorem 7.1, \(P\leq 0\), we get \(H(\nabla^{k}u)\equiv 0\) in \(\mathbb{R}^{n}\), that is, \(\nabla^{k}u\equiv 0\), and so \(\nabla^{k-1}u\) is constant.

Furthermore, we give some examples of \(P-\)functions of the form \(P=P(u,|\nabla u|,\Delta u)\) related to fourth order nonlinear equations.

**Proposition 7.3**.: _Let \(u\) be a smooth solution of_

\[\begin{gathered} a(\Delta u)[|\nabla u|^{2}\Delta^{2}u-\Delta u(\nabla u\cdot\nabla\Delta u)]=b(u)|\nabla u|^{4}\\ \text{where}\:\:a,b\::\mathbb{R}\to\mathbb{R}\:\:\text{and}\:\:a>0\:,\:a^{\prime}\geq 0\end{gathered} \tag{7.5}\]

_and set \(P(s,t)=A(t)-B(s)\) such that \(A^{\prime}=a\) and \(B^{\prime\prime}=b\)._

_Then \(P=P(u,\Delta u)=A(\Delta u)-B(u)\) is a \(P-\)function of (7.5)._

_In addition, if \(u\) satisfies (7.1) with \(m=4\:,\:B(u)\geq 0\) and \(u_{x_{n}}>0\), then_

\[\Delta u\leq\Gamma(u)\:\:\:\forall\:x\in\mathbb{R}^{n},\:\:\text{where}\:\:\Gamma(u)=A^{-1}(B(u)). \tag{7.6}\]
Proof.: We have

\[P_{x_{i}}=P_{s}u_{x_{i}}+P_{t}\Delta u_{x_{i}} \tag{7.7}\]

and so,

\[\begin{gathered}\Delta u(\nabla P\cdot\nabla u)=P_{s}|\nabla u|^{2}\Delta u+P_{t}\Delta u\sum_{i=1}^{n}u_{x_{i}}\Delta u_{x_{i}}\\ \Leftrightarrow-B^{\prime}(u)|\nabla u|^{2}\Delta u=\Delta u(\nabla P\cdot\nabla u)-A^{\prime}(\Delta u)\Delta u\sum_{i=1}^{n}u_{x_{i}}\Delta u_{x_{i}}\end{gathered} \tag{7.8}\]

On the other hand we have

\[\begin{gathered} P_{x_{i}x_{i}}=P_{ss}u_{x_{i}}^{2}+2P_{st}u_{x_{i}}\Delta u_{x_{i}}+P_{tt}(\Delta u_{x_{i}})^{2}+P_{s}u_{x_{i}x_{i}}+P_{t}\Delta u_{x_{i}x_{i}}\\ \Rightarrow\Delta P=(-B^{\prime\prime}(u))|\nabla u|^{2}+A^{\prime\prime}(\Delta u)\sum_{i=1}^{n}(\Delta u_{x_{i}})^{2}-B^{\prime}(u)\Delta u+A^{\prime}(\Delta u)\Delta^{2}u\end{gathered} \tag{7.9}\]

and by (7.8) and the assumptions on \(A\) and \(B\), (7.9) becomes

\[|\nabla u|^{2}\Delta P-\Delta u(\nabla P\cdot\nabla u)\geq a(\Delta u)[|\nabla u|^{2}\Delta^{2}u-\Delta u(\nabla u\cdot\nabla\Delta u)]-b(u)|\nabla u|^{4}=0 \tag{7.10}\]

For the bound of the Laplacian, we have \(P(u,0)=-B(u)\leq 0\) and \(\mu=|\nabla u|^{2}>0\ \forall x\in\mathbb{R}^{n}\) since \(u_{x_{n}}>0\), so assumption (ii) in Theorem 7.1 is satisfied and we conclude.

**Proposition 7.4**.: _Let \(u\) be a smooth solution of_

\[\begin{split}&|Hes\,u|^{2}=F(u,|\nabla u|^{2},\Delta u)+\frac{u}{2}\Delta^{2}u\\ &\text{where}\,\ F:\mathbb{R}^{3}\to\mathbb{R}\text{ is such that }\,\ F(s,t,w)\geq\frac{1}{2}w^{2}.\end{split} \tag{7.11}\]

_Then \(P=P(u,|\nabla u|^{2},\Delta u)=|\nabla u|^{2}-u\Delta u\) is a \(P-\)function of (7.11)._

_In addition, if \(u\) is a non negative, convex solution of (7.11) that satisfies assumption (7.1), then_

\[|\nabla u|^{2}\leq u\Delta u\ \,\,\,\forall\,\,x\in\mathbb{R}^{n}. \tag{7.12}\]

Proof.: We have that

\[P_{x_{i}}=2\sum_{j=1}^{n}u_{x_{j}}u_{x_{j}x_{i}}-u_{x_{i}}\Delta u-u\Delta u_{x_{i}}\]

and

\[\Delta P=2|Hes\,u|^{2}+2\nabla u\nabla\Delta u-(\Delta u)^{2}-2\nabla u\nabla\Delta u-u\Delta^{2}u\]

so by (7.11),

\[\Delta P=2F(u,|\nabla u|^{2},\Delta u)-(\Delta u)^{2}\geq 0\]

For the gradient bound we see that \(P(u,0,\Delta u)=-u\Delta u\leq 0\), since \(u\) is non negative and convex, so assumption (i) of Theorem 7.1 is satisfied and we conclude.

As a result, we have the following pointwise estimate.

**Corollary 7.5**.: _Let \(u:B_{2}\subset\mathbb{R}^{n}\to\mathbb{R}\) be a smooth solution of_

\[\begin{split}&|Hes\,u|^{2}=F(u,|\nabla u|^{2},\Delta u)+\frac{u}{2}\Delta^{2}u\\ &\text{where}\,\ F:\mathbb{R}^{3}\to\mathbb{R}\text{ is such that }\,\ F(s,t,w)\geq\frac{1}{2}w^{2}.\end{split} \tag{7.13}\]

_Then_

\[\begin{split}&|\nabla u(x)|^{2}-u(x)\Delta u(x)\leq C(||u||^{2}_{H^{1}(B_{2})}+||\Delta u||^{2}_{L^{2}(B_{2})})\;,\\ &\forall\,x\in B_{1}=\{y\in\mathbb{R}^{n}\,:\,|y|<1\}\;,\text{ and }\,\,C\,\text{ depends only on }\,n.\end{split} \tag{7.14}\]

Proof.: By Proposition 7.4, we have that \(P=|\nabla u|^{2}-u\Delta u=P(u;x)\) is subharmonic. Therefore we have

\[P(u;x)\leq\frac{1}{|B(x,r)|}\int_{B(x,r)}P(u;y)dy\;\;,\;\forall\;B(x,r)\subset B_{2} \tag{7.15}\]

Also, \(P\leq|\nabla u|^{2}+\frac{1}{2}(u^{2}+(\Delta u)^{2})\).
So,

\[\int_{B(x,r)}P(u;y)dy\leq||u||^{2}_{H^{1}(B_{2})}+||\Delta u||^{2}_{L^{2}(B_{2})}\;,\;\forall\;B(x,r)\subset B_{2} \tag{7.16}\]

Thus, for any \(x\in B_{1}\) (since \(B(x,1)\subset B_{2}\)), we have

\[P(u;x)\leq\frac{1}{|B_{1}|}(||u||^{2}_{H^{1}(B_{2})}+||\Delta u||^{2}_{L^{2}(B_{2})}) \tag{7.17}\]

**Remark 7.6**.: _Note that if \(F(u,|\nabla u|^{2},\Delta u)=\frac{1}{2}(\Delta u)^{2}\), we have a reduction of order result, that is, if \(u\) is a smooth and bounded entire solution of_

\[2|Hes\;u|^{2}=(\Delta u)^{2}+u\Delta^{2}u \tag{7.18}\]

_such that \(\nabla u,\Delta u\in L^{\infty}(\mathbb{R}^{n})\), then \(u\) satisfies \(u\Delta u=|\nabla u|^{2}+c\) for some \(c\in\mathbb{R}\). We can see this from the proof of Proposition 7.4, where \(P=|\nabla u|^{2}-u\Delta u\) will be harmonic for this particular equation. Also, \(|P|\leq M\) for some \(M=M(||u||_{L^{\infty}(\mathbb{R}^{n})},||\nabla u||_{L^{\infty}(\mathbb{R}^{n})},||\Delta u||_{L^{\infty}(\mathbb{R}^{n})})>0\) and thus \(P\equiv\) constant._

We also provide a consequence of Theorem 7.2.

**Corollary 7.7**.: _Let \(u\) be a convex subsolution of_

\[c|Hes\;u|^{2}-\Delta^{2}u=0\;\;,\;\text{with}\;\;c\geq 0 \tag{7.19}\]

_that satisfies assumption (7.1) with \(m=4\)._

_Then \(u\) is constant._

Proof.: Consider \(P=P(u,\nabla u,\Delta u)=(\Delta u)^{2}\), so

\[\begin{split}& P_{x_{i}}=2\Delta u\Delta u_{x_{i}}\\ & P_{x_{i}x_{i}}=2(\Delta u_{x_{i}})^{2}+2\Delta u\Delta u_{x_{i}x_{i}}\\ &\Rightarrow\Delta P=2|\nabla\Delta u|^{2}+2\Delta u\Delta^{2}u\geq 2|\nabla\Delta u|^{2}+2c|Hes\;u|^{2}\Delta u\geq 0\end{split} \tag{7.20}\]

and \(P(u,\nabla u,0)=0\) with \(\mu=1\), so by Theorem 7.2 we obtain \(\Delta u\equiv 0\) in \(\mathbb{R}^{n}\); and since \(u\) is bounded by (7.1), \(u\) is constant.

Finally, we have the following De Giorgi-type property.

**Proposition 7.8**.: _Let \(u:\mathbb{R}^{2}\to\mathbb{R}\) be a smooth and bounded solution of_

\[F(u,\nabla u,\nabla^{2}u,\nabla^{3}u,\nabla^{4}u)=0 \tag{7.21}\]

_such that \(u_{y}>0\), and assume \(P=P(u,\Delta u)\) is a \(P-\)function of (7.21), such that \(P_{t}>0\)\((P=P(s,t))\) with \(\mu=\mu(|\nabla u|)\;,\;\mu(t)>0\;,\;\forall\;t>0\)._

_If there exists \(x_{0}\in\mathbb{R}^{2}\) such that_

\[P(u(x_{0}),\Delta u(x_{0}))=\sup_{\mathbb{R}^{2}}P(u,\Delta u)<+\infty \tag{7.22}\]

_then there exists a function \(g:\mathbb{R}\to\mathbb{R}\) such that_

\[u(x)=g(ax+by)\;\;\;,\;\;\mbox{for}\;\;a,b\in\mathbb{R} \tag{7.23}\]

Proof.: Arguing as in the proof of Theorem 2.4 we obtain that

\[P(u,\Delta u)\equiv c_{0}\;\;\;,\;\;\mbox{where}\;\;c_{0}=\sup_{\mathbb{R}^{2}}P(u,\Delta u) \tag{7.24}\]

Since \(P_{t}>0\), we have

\[\Delta u=f(u)\;\;\;,\;\;\mbox{for some}\;\;f:\mathbb{R}\to\mathbb{R} \tag{7.25}\]

and \(u\) is a bounded entire solution of (7.25) such that \(u_{y}>0\). Therefore, by Theorem 1.1 in [12], we conclude that

\[u(x)=g(ax+by)\;\;\;,\;\;\mbox{for some}\;\;g:\mathbb{R}\to\mathbb{R} \tag{7.26}\]

**Acknowledgments:** I would like to thank my advisors, Professors N. Alikakos and C. Makridakis, for their support. Also, I would like to thank Professor N. Alikakos for his useful suggestions that improved this work.
2308.09208
A hybrid PML formulation for the 2D three-field dynamic poroelastic equations
Simulation of wave propagation in poroelastic half-spaces presents a common challenge in fields like geomechanics and biomechanics, requiring Absorbing Boundary Conditions (ABCs) at the semi-infinite space boundaries. Perfectly Matched Layers (PML) are a popular choice due to their excellent wave absorption properties. However, PML implementation can lead to problems with unknown stresses or strains, time convolutions, or PDE systems with Auxiliary Differential Equations (ADEs), which increases computational complexity and resource consumption. This article presents two new PML formulations for arbitrary poroelastic domains. The first formulation is a fully-mixed form that employs time-history variables instead of ADEs, reducing the number of unknowns and mathematical operations. The second formulation is a hybrid form that restricts the fully-mixed formulation to the PML domain, resulting in smaller matrices for the solver while preserving governing equations in the interior domain. The fully-mixed formulation introduces three scalar variables over the whole domain, whereas the hybrid form confines them to the PML domain. The proposed formulations were tested in three numerical experiments in geophysics using realistic parameters for soft sites with free surfaces. The results were compared with numerical solutions from extended domains and simpler ABCs, such as paraxial approximation, demonstrating the accuracy, efficiency, and precision of the proposed methods. The article also discusses the applicability of these methods to complex media and their extension to the Multiaxial PML formulation. The codes for the simulations are available for download from \url{https://github.com/hmella/POROUS-HYBRID-PML}.
Hernán Mella, Esteban Sáez, Joaquín Mura
2023-08-17T23:27:49Z
http://arxiv.org/abs/2308.09208v1
# A mixed and hybrid PML formulation for the 2D three-field dynamic poroelastic equations

###### Abstract

Simulation of wave propagation in poroelastic half-spaces presents a common challenge in fields like geomechanics and biomechanics, requiring Absorbing Boundary Conditions (ABCs) at the semi-infinite space boundaries. Perfectly Matched Layers (PML) are a popular choice due to their excellent wave absorption properties. However, PML implementation can lead to problems with unknown stresses or strains, time convolutions, or PDE systems with Auxiliary Differential Equations (ADEs), which increases computational complexity and resource consumption. This article presents two new PML formulations for arbitrary poroelastic domains. The first formulation is a fully-mixed form that employs time-history variables instead of ADEs, reducing the number of unknowns and mathematical operations. The second formulation is a hybrid form that restricts the fully-mixed formulation to the PML domain, resulting in smaller matrices for the solver while preserving governing equations in the interior domain. The fully-mixed formulation introduces three scalar variables over the whole domain, whereas the hybrid form confines them to the PML domain. The proposed formulations were tested in three numerical experiments in geophysics using realistic parameters for soft sites with free surfaces. The results were compared with numerical solutions from extended domains and simpler ABCs, such as paraxial approximation, demonstrating the accuracy, efficiency, and precision of the proposed methods. The article also discusses the applicability of these methods to complex media and their extension to the Multiaxial PML formulation. The codes for the simulations are available for download from [https://github.com/hmella/POROUS-HYBRID-PML](https://github.com/hmella/POROUS-HYBRID-PML).

keywords: Perfectly Matched Layers, Poroelastic Wave Propagation, Absorbing Boundary Condition, Three-field Biot's Equations

## 1 Introduction

Fluid-saturated porous media are a common occurrence in nature. For instance, soils and rocks are often saturated with water in practical cases, while living tissues are saturated with blood and air. In both cases, if the solid skeleton's displacements and strains are relatively small, linear elasticity provides an accurate representation of the underlying dynamics. Additionally, when loads are applied quickly and inertial forces play a significant role, a proper modeling strategy for wave propagation in poroelastic media is necessary. As noted by Zienkiewicz et al. [41], Biot's poroelastic theory can be employed to describe wave propagation in poroelastic media, such as in problems of traffic-induced vibrations or geophysical applications involving seismic wave propagation.

The main challenge in these types of problems is properly handling outgoing waves. In the directions where outgoing waves travel, finite energy considerations lead to the so-called "radiation conditions" towards infinity, which are used by Integral Equations or Boundary-Element methods to determine Green kernels and solve the problem rigorously. However, these conditions are often difficult to calculate and are limited to homogeneous and isotropic material properties at infinity [20]. An alternative solution is to use a foam-like subdomain to confine the region of interest, creating virtual windows in space to focus computational efforts on a specific area of the problem.
The subdomain must effectively absorb the outgoing waves from the virtual window. Numerous numerical methods have been proposed as Absorbing Boundary Conditions (ABCs). Local ABCs are often used for dry elastic problems or single-phase media due to their ease of implementation and local character in both time and space [24]. However, fluid-saturated porous media, i.e., two-phase media, present a different challenge due to the interaction between the solid skeleton and the fluid flow, which depends on the loading rate. According to Biot's theory, high-frequency loading generates two dilatational waves and one shear wave. When the porous media has low permeability and the loading is within the low frequency range, the fluid's relative motion with respect to the soil is negligible, and viscous coupling dampens out the second dilatational wave [9; 16]. In this case, the fluid-saturated porous media behaves like a single-phase medium, where only one dilatational and one shear wave propagate.

In recent decades, the Perfectly Matched Layer (PML) has gained popularity as an ABC due to its excellent energy-absorbing properties. PML was first developed by Berenger [8] in the context of electromagnetism. Although the initial development was for Maxwell's equations, its use as an ABC was later extended to acoustic [35], elastic [11], and poroelastic [38] domains. Since then, the technique has been widely used to simulate the propagation of elastic [28; 37; 7; 13; 26; 27; 18; 40] and poroelastic waves [36; 30; 22; 21], and novel formulations have been introduced. These forms can be divided into split-field and unsplit-field approaches, both of which have drawbacks. Split-field formulations often result in mixed problems where stresses or strains are unknowns, increasing the computational cost of solving the problem [15; 38; 39; 17]. Unsplit-field formulations typically require the estimation of convolutions or solving Auxiliary Differential Equations (ADEs) [30; 22], which can also be expensive due to the increased number of mathematical operations or the introduction of auxiliary variables. Additionally, little attention has been paid to simulating poroelastic waves in arbitrary domains with realistic subsoil properties.

In this article, we propose two new formulations of the Perfectly Matched Layer (PML) method for the second order three-field Biot's equations to address the previously mentioned limitations. Our fully-mixed and hybrid formulations maintain the second-order in time structure of the original equations, which makes them compatible with most time integration schemes. Furthermore, both methods only introduce three additional scalar variables, which is at least 50% less computationally expensive than previous developments [22]. The hybrid formulation modifies Biot's equations only in the PML region, resulting in significant computational cost savings. Our proposed methods perform well under challenging conditions, such as free-surface wave propagation, transitions between water and air-filled soft media, and complex geometries, making them suitable for simulating realistic media.
## 2 Poro-elastodynamic equations

The three-field model proposed by Biot [10; 41; 42] considers a domain \(\Omega\subseteq\mathbb{R}^{2}\) where the solid displacement \(\mathbf{u}(\mathbf{x},t)\), the displacement of the fluid phase relative to the solid \(\mathbf{w}(\mathbf{x},t)\), and the pore pressure in the fluid \(p(\mathbf{x},t)\) interact at any position \(\mathbf{x}\) and time \(t\) such that \((\mathbf{x},t)\in\Omega\times T\) with \(T=(0,\infty)\), according to:

\[\rho\,\ddot{\mathbf{u}}+\rho_{f}\,\ddot{\mathbf{w}} =\nabla\cdot\mathbf{\sigma} \text{in }\Omega\times T \tag{1a}\]
\[\rho_{f}\,\ddot{\mathbf{u}}+\rho_{w}\,\ddot{\mathbf{w}}+\frac{\eta}{\kappa}\dot{\mathbf{w}} =-\nabla p \text{in }\Omega\times T \tag{1b}\]
\[-\dot{p} =\nabla\cdot\{M(\alpha\dot{\mathbf{u}}+\dot{\mathbf{w}})\} \text{in }\Omega\times T \tag{1c}\]

where the effective density \(\rho=\rho_{s}(1-\phi)+\rho_{f}\phi\) depends on the solid (\(\rho_{s}\)) and fluid (\(\rho_{f}\)) densities, as well as the porosity \(\phi\). The apparent fluid density \(\rho_{w}=\tau\rho_{f}/\phi\) depends on the tortuosity \(\tau\), while \(\eta\) denotes the dynamic viscosity of the fluid and \(\kappa\) the saturated permeability of the porous media. The parameter \(\alpha\) represents the Biot-Willis coefficient and \(M\) the fluid-solid coupling bulk modulus, defined as

\[\alpha=1-\frac{K_{b}}{K_{s}},\quad M=\left(\frac{\phi}{K_{f}}+\frac{\alpha-\phi}{K_{s}}\right)^{-1} \tag{2}\]

where \(K_{b}\), \(K_{s}\), \(K_{f}\) are the bulk moduli of the dry porous skeleton, solid, and fluid, respectively. The constitutive law for the poroelastic media in (1) is given by

\[\mathbf{\sigma}(\mathbf{u},p) =C\mathbf{e}(\mathbf{u})-\alpha p\mathbf{I} \tag{3a}\]
\[\mathbf{e}(\mathbf{u}) =\frac{1}{2}\left\{\nabla\mathbf{u}+(\nabla\mathbf{u})^{T}\right\} \tag{3b}\]

where \(C\) is the fourth-order elastic tensor with components \(C_{ijkl}=\lambda_{b}\delta_{ij}\delta_{kl}+\mu_{b}(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})\) (with \(\delta_{ij}\) the Kronecker delta), \(\mathbf{e}\) the linear strain tensor, and \(\mathbf{I}\) the identity tensor.

The domain \(\Omega\) is intended to be an unbounded semi-space, with the top boundary consisting of two parts: \(\Gamma=\Gamma_{N}\cup\Gamma_{g}\). The free surface, denoted by \(\Gamma_{N}\), is the region where tractions must vanish [33]. On the other hand, \(\Gamma_{g}\) is the part of the top boundary where an external load \(\mathbf{g}\) is being applied. Therefore, the boundary conditions of the problem are:

\[\mathbf{\sigma}(\mathbf{u},p)\cdot\mathbf{n} =\mathbf{g} \text{in }\Gamma_{g}\times T \tag{4a}\]
\[\mathbf{\sigma}(\mathbf{u},p)\cdot\mathbf{n} =\mathbf{0} \text{in }\Gamma_{N}\times T \tag{4b}\]
\[p =0 \text{in }\Gamma_{N}\times T \tag{4c}\]

The variables \(\mathbf{u}\), \(\mathbf{w}\), and \(p\) must vanish as the distance from the source tends to infinity.

## 3 Derivation of PML formulas

The region of interest where boundary-reflected waves are not desired (namely the Regular Domain) \(\Omega^{\mathrm{RD}}\) is surrounded and truncated by a thin Perfectly Matched Layer \(\Omega^{\mathrm{PML}}\) such that \(\Omega=\Omega^{\mathrm{RD}}\cup\Omega^{\mathrm{PML}}\) (see Figure 1(b)). Thus, \(\Omega^{\mathrm{PML}}\) is defined as an extension of \(\Omega^{\mathrm{RD}}\) where the outgoing waves are attenuated by a complex coordinate stretching (see Figure 1).

### Complex-coordinate stretching

A complex-coordinate stretching applied to Biot's equations (1) leads to a modified set of equations within \(\Omega\).
Thus, the complex-coordinate stretching follows:

\[r\longmapsto\int_{0}^{r}\varepsilon_{r}(r^{\prime},s)\ dr^{\prime} \tag{5}\]

where \(r\) denotes the spatial coordinate being transformed (namely \(x\) or \(y\) in the two-dimensional case), \(s\) the dual variable of the time in the Laplace domain, and \(\varepsilon_{r}\) a complex-coordinate stretching function along the coordinate \(r\). The coordinate transformation given in (5) implies:

\[\frac{\partial}{\partial r}\longmapsto\frac{1}{\varepsilon_{r}(r)}\frac{\partial}{\partial r} \tag{6}\]

which is the fundamental relation used to transform the governing equations.

The stretching function \(\varepsilon_{r}\) can adopt diverse forms to achieve different absorption properties. For instance, Kuzuoglu and Mitra [28] introduced the Convolutional Frequency Shifted (CFS) PML method by defining a frequency-dependent stretching function to improve the absorption efficiency at different frequencies and also to improve the time stability. Later, Correia and Jin [13] introduced the higher-order PML with a better absorption rate than CFS-PML. However, both approaches make the real and imaginary parts of \(\varepsilon_{r}\) frequency-dependent, leading to convolution terms in the PML formulation. Later, Meza-Fajardo and Papageorgiou developed the Multiaxial PML (M-PML) method [32] and introduced multi-directional attenuation functions with better time stability properties. Recently, Francois et al. [18] proposed a non-convolutional version of the CFS-PML method in elastodynamics by introducing auxiliary variables.

Figure 1: (a) Semi-infinite domain \(\Omega\) and (b) truncated regular and PML domains \(\Omega^{\mathrm{RD}}\) and \(\Omega^{\mathrm{PML}}\), respectively.

A standard selection for \(\varepsilon_{r}\) would be:

\[\varepsilon_{r}(r,s)=\alpha_{r}(r)+\frac{\beta_{r}(r)}{s}, \tag{7}\]

with \(\alpha_{r}\) and \(\beta_{r}\) denoting real-valued scaling and attenuation functions, respectively. The real part of \(\varepsilon_{r}\) scales the spatial coordinate \(r\), while the imaginary part is responsible for the amplitude decay of the waves entering the PML region (see Figure 1). To avoid modifying the propagating waves inside \(\Omega^{\mathrm{RD}}\) and to ensure the wave attenuation inside \(\Omega^{\mathrm{PML}}\), the following conditions must be fulfilled: \(\alpha_{r}(r)\) and \(\beta_{r}(r)\) are constant inside \(\Omega^{\mathrm{RD}}\), where they take the values \(1\) and \(0\) respectively, and both functions increase monotonically with \(r\) within \(\Omega^{\mathrm{PML}}\). Several ways of defining \(\alpha_{r}\) and \(\beta_{r}\) have been proposed in the literature in different contexts [8; 12; 26; 22], but in this investigation, polynomial profiles were chosen according to:

\[\alpha_{r}(r) =\left\{\begin{array}{rl}1&\text{if }0\leq r\leq r_{0}\\ 1+\alpha_{0}\left\{\frac{(r-r_{0})n_{r}}{L_{\mathrm{PML}}}\right\}^{m}&\text{if }r_{0}\leq r\leq r_{t}\end{array}\right. \tag{8a}\]
\[\beta_{r}(r) =\left\{\begin{array}{rl}0&\text{if }0\leq r\leq r_{0}\\ \beta_{0}\left\{\frac{(r-r_{0})n_{r}}{L_{\mathrm{PML}}}\right\}^{m}&\text{if }r_{0}\leq r\leq r_{t}\end{array}\right. \tag{8b}\]

where \(n_{r}\) denotes the \(r\)-th component of the outward normal to the interface between \(\Omega^{\mathrm{RD}}\) and \(\Omega^{\mathrm{PML}}\), \(L_{\mathrm{PML}}\) the width of the PML layer, \(m\) the order of the attenuation profiles, and \(r_{0}\) and \(r_{t}\) the start and end of the absorbing layer (see Figure 1).
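For concreteness, a minimal NumPy sketch of the profiles (8a)-(8b) is given below, using the constants \(\alpha_{0}\) and \(\beta_{0}\) defined next in (9). The parameter values, the assumption \(n_{r}=+1\) (a layer on the right/top side), and the function name are illustrative choices of ours, not the reference implementation; the codes accompanying the paper are available in the linked repository.

```python
import numpy as np

def pml_profiles(r, r0, L_pml, m=2, b=1.0, c_p=1500.0, R=1e-4):
    """Polynomial PML scaling/attenuation profiles alpha_r, beta_r along one
    coordinate, following Eqs. (8a)-(8b) with the constants of Eq. (9).
    Inside the regular domain (r <= r0) they reduce to alpha = 1, beta = 0."""
    alpha0 = (m + 1) * b / (2 * L_pml) * np.log(1 / abs(R))
    beta0 = (m + 1) * c_p / (2 * L_pml) * np.log(1 / abs(R))
    # Normalized penetration depth into the layer; n_r = +1 is assumed here
    xi = np.clip((r - r0) / L_pml, 0.0, 1.0)
    return 1 + alpha0 * xi**m, beta0 * xi**m

# Example: profiles across a 10 m thick layer starting at r0 = 100 m
r = np.linspace(90.0, 110.0, 201)
alpha_r, beta_r = pml_profiles(r, r0=100.0, L_pml=10.0)
```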
The constants \(\alpha_{0}\) and \(\beta_{0}\) define the absorption rate in \(\Omega^{\mathrm{PML}}\), and can be chosen as [26]: \[\alpha_{0}=\frac{(m+1)b}{2L_{\mathrm{PML}}}\log\left(\frac{1}{|R|}\right),\qquad\beta_{0}=\frac{(m+1)c_{p}}{2L_{\mathrm{PML}}}\log\left(\frac{1}{|R|}\right) \tag{9}\] where \(b\) denotes a characteristic length (e.g., the width of the distributed load applied on the Neumann boundary or the cell size of the finite element mesh), \(c_{p}\) the propagation velocity of the fastest wave, and \(R\) a reflection coefficient. ### PML derivation in the Laplace domain The complex-coordinate stretching is enforced by introducing the relation (6) into the Laplace-transformed motion equations. Applying the Laplace transform to Biot's equations leads to: \[s^{2}\rho\hat{\mathbf{u}}+s^{2}\rho_{f}\hat{\mathbf{w}} =\nabla\cdot\hat{\mathbf{\sigma}} \text{(linear momentum conservation)} \tag{10a}\] \[s^{2}\rho_{f}\hat{\mathbf{u}}+s^{2}\rho_{w}\hat{\mathbf{w}}+s\frac{\eta}{\kappa}\hat{\mathbf{w}} =-\nabla\hat{p} \text{(Darcy law)} \tag{10b}\] \[-s\hat{p} =\nabla\cdot\{M(\alpha s\hat{\mathbf{u}}+s\hat{\mathbf{w}})\} \tag{10c}\] \[\hat{\mathbf{\sigma}} =C\hat{\mathbf{e}}-\alpha\hat{p}\mathbf{I} \text{(constitutive relations)} \tag{10d}\] \[\hat{\mathbf{e}} =\frac{1}{2}\left\{\nabla\hat{\mathbf{u}}+(\nabla\hat{\mathbf{u}})^{T}\right\} \tag{10e}\] where \(\hat{f}\) denotes the variable \(f\) in the Laplace domain. #### 3.2.1 Linear momentum conservation Replacing (6) into (10a) for the plane strain case gives: \[s^{2}\rho\hat{u}_{x}+s^{2}\rho_{f}\hat{w}_{x} =\frac{1}{\varepsilon_{x}}\frac{\partial\hat{\sigma}_{xx}}{\partial x}+\frac{1}{\varepsilon_{y}}\frac{\partial\hat{\sigma}_{xy}}{\partial y}-\frac{1}{\varepsilon_{x}}\frac{\partial\alpha\hat{p}}{\partial x} \tag{11a}\] \[s^{2}\rho\hat{u}_{y}+s^{2}\rho_{f}\hat{w}_{y} =\frac{1}{\varepsilon_{x}}\frac{\partial\hat{\sigma}_{yx}}{\partial x}+\frac{1}{\varepsilon_{y}}\frac{\partial\hat{\sigma}_{yy}}{\partial y}-\frac{1}{\varepsilon_{y}}\frac{\partial\alpha\hat{p}}{\partial y} \tag{11b}\] which, after multiplying both equations by \(\varepsilon_{x}\varepsilon_{y}\), introducing the variables \(a=\alpha_{x}\alpha_{y}\), \(b=\alpha_{x}\beta_{y}+\alpha_{y}\beta_{x}\), \(c=\beta_{x}\beta_{y}\) (not to be confused with the characteristic length \(b\) and velocity \(c_{p}\) in (9)), and rearranging terms, can be rewritten as: \[(s^{2}a+sb+c)(\rho\hat{\mathbf{u}}+\rho_{f}\hat{\mathbf{w}})=\nabla\cdot\left\{\hat{\mathbf{\sigma}}(\hat{\mathbf{u}},\hat{p})\left(\mathbf{\Lambda}_{e}+\frac{1}{s}\mathbf{\Lambda}_{p}\right)\right\} \tag{12}\] where the tensors \(\mathbf{\Lambda}_{e}\) and \(\mathbf{\Lambda}_{p}\) are defined as \[\left[\begin{array}{cc}\alpha_{y}&0\\ 0&\alpha_{x}\end{array}\right]+\frac{1}{s}\left[\begin{array}{cc}\beta_{y}&0\\ 0&\beta_{x}\end{array}\right]=\mathbf{\Lambda}_{e}+\frac{1}{s}\mathbf{\Lambda}_{p} \tag{13}\] #### 3.2.2 Darcy equation Proceeding similarly with (10b) results in: \[s^{2}\rho_{f}\hat{u}_{x}+s^{2}\rho_{w}\hat{w}_{x}+s\frac{\eta}{\kappa}\hat{w}_{x} =-\frac{1}{\varepsilon_{x}}\frac{\partial\hat{p}}{\partial x} \tag{14a}\] \[s^{2}\rho_{f}\hat{u}_{y}+s^{2}\rho_{w}\hat{w}_{y}+s\frac{\eta}{\kappa}\hat{w}_{y} =-\frac{1}{\varepsilon_{y}}\frac{\partial\hat{p}}{\partial y} \tag{14b}\] which, after multiplying the first equation by \(\varepsilon_{x}\) and the second by \(\varepsilon_{y}\), leads to the following modified equation: \[(\tilde{\mathbf{\Lambda}}_{e}s+\tilde{\mathbf{\Lambda}}_{p})\left(s\rho_{f}\hat{\mathbf{u}}+s\rho_{w}\hat{\mathbf{w}}+\frac{\eta}{\kappa}\hat{\mathbf{w}}\right)=-\nabla\hat{p}, \tag{15}\] where the tensors
\(\tilde{\mathbf{\Lambda}}_{e}\) and \(\tilde{\mathbf{\Lambda}}_{p}\) are defined as: \[\left[\begin{array}{cc}\alpha_{x}&0\\ 0&\alpha_{y}\end{array}\right]+\frac{1}{s}\left[\begin{array}{cc}\beta_{x}&0\\ 0&\beta_{y}\end{array}\right]=\tilde{\mathbf{\Lambda}}_{e}+\frac{1}{s}\tilde{\mathbf{\Lambda}}_{p}, \tag{16}\] The tensors \(\tilde{\mathbf{\Lambda}}_{e}\) and \(\tilde{\mathbf{\Lambda}}_{p}\) are analogous to \(\mathbf{\Lambda}_{e}\) and \(\mathbf{\Lambda}_{p}\) (respectively) but with their diagonal entries swapped. #### 3.2.3 Constitutive relations Finally, multiplying equation (10c) by \(\varepsilon_{x}\varepsilon_{y}\) and (10e) by \(s\varepsilon_{x}\varepsilon_{y}\), and rearranging terms, we obtain the following PML equations in the Laplace domain: \[sa\hat{\mathbf{e}}+b\hat{\mathbf{e}}+\frac{1}{s}c\hat{\mathbf{e}} =\frac{1}{2}s\left\{\nabla\hat{\mathbf{u}}\mathbf{\Lambda}_{e}+(\nabla\hat{\mathbf{u}}\mathbf{\Lambda}_{e})^{T}\right\}+\frac{1}{2}\left\{\nabla\hat{\mathbf{u}}\mathbf{\Lambda}_{p}+(\nabla\hat{\mathbf{u}}\mathbf{\Lambda}_{p})^{T}\right\} \tag{17a}\] \[-\left(sa\hat{p}+b\hat{p}+\frac{1}{s}c\hat{p}\right) =\nabla\cdot\left\{M(\mathbf{\Lambda}_{e}s+\mathbf{\Lambda}_{p})(\alpha\hat{\mathbf{u}}+\hat{\mathbf{w}})\right\} \tag{17b}\] ### Fully-mixed time-domain formulation of the PML equations Applying the inverse Laplace transform to Equations (12), (15), (17a), and (17b), we obtain the time-domain PML formulation of Biot's equations: \[\rho(a\ddot{\mathbf{u}}+b\dot{\mathbf{u}}+c\mathbf{u})+\rho_{f}(a\ddot{\mathbf{w}}+b\dot{\mathbf{w}}+c\mathbf{w}) =\nabla\cdot\left(\mathbf{\sigma}\mathbf{\Lambda}_{e}+\left(\int_{0}^{t}\mathbf{\sigma}\,d\tau\right)\mathbf{\Lambda}_{p}\right) \tag{18a}\] \[a\dot{\mathbf{e}}+b\mathbf{e}+c\int_{0}^{t}\mathbf{e}\ d\tau =\frac{1}{2}\left\{\nabla\dot{\mathbf{u}}\mathbf{\Lambda}_{e}+(\nabla\dot{\mathbf{u}}\mathbf{\Lambda}_{e})^{T}+\nabla\mathbf{u}\mathbf{\Lambda}_{p}+(\nabla\mathbf{u}\mathbf{\Lambda}_{p})^{T}\right\} \tag{18b}\] \[-\nabla p =\left(\tilde{\mathbf{\Lambda}}_{e}\frac{\partial}{\partial t}+\tilde{\mathbf{\Lambda}}_{p}\right)\left(\rho_{f}\dot{\mathbf{u}}+\rho_{w}\dot{\mathbf{w}}+\frac{\eta}{\kappa}\mathbf{w}\right) \tag{18c}\] \[-\left(a\dot{p}+bp+c\int_{0}^{t}p\ d\tau\right) =\nabla\cdot\left\{M\left(\mathbf{\Lambda}_{e}\frac{\partial}{\partial t}+\mathbf{\Lambda}_{p}\right)(\alpha\mathbf{u}+\mathbf{w})\right\} \tag{18d}\] To avoid using auxiliary differential equations or the discrete evaluation of time integrals, we introduce the auxiliary memory variables \(\mathbf{S}(\mathbf{x},t)\), \(\mathbf{E}(\mathbf{x},t)\), and \(\pi(\mathbf{x},t)\) for the stress, strain, and pressure (respectively), defined as: \[\mathbf{S}(\mathbf{x},t)=\int_{0}^{t}C\mathbf{e}(\mathbf{x},\tau)\ d\tau,\quad\mathbf{E}(\mathbf{x},t)=\int_{0}^{t}\mathbf{e}(\mathbf{x},\tau)\ d\tau,\quad\pi(\mathbf{x},t)=\int_{0}^{t}p(\mathbf{x},\tau)\ d\tau, \tag{19}\] Consequently \[\dot{\mathbf{S}}(\mathbf{x},t) =C\mathbf{e}(\mathbf{x},t),\quad\ddot{\mathbf{S}}(\mathbf{x},t) =C\dot{\mathbf{e}}(\mathbf{x},t), \tag{20a}\] \[\dot{\mathbf{E}}(\mathbf{x},t) =\mathbf{e}(\mathbf{x},t),\quad\ddot{\mathbf{E}}(\mathbf{x},t) =\dot{\mathbf{e}}(\mathbf{x},t), \tag{20b}\] \[\dot{\pi}(\mathbf{x},t) =p(\mathbf{x},t),\quad\ddot{\pi}(\mathbf{x},t) =\dot{p}(\mathbf{x},t). \tag{20c}\]
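Before proceeding, it may help to verify the algebra underlying (12) and (17): with \(\varepsilon_{r}=\alpha_{r}+\beta_{r}/s\), multiplying by \(\varepsilon_{x}\varepsilon_{y}\) turns \(s^{2}\) into the polynomial \(s^{2}a+sb+c\). A quick symbolic check (a sketch, using SymPy) is:

```python
import sympy as sp

s, ax, ay, bx, by = sp.symbols('s alpha_x alpha_y beta_x beta_y', positive=True)
eps_x = ax + bx / s
eps_y = ay + by / s

a = ax * ay
b = ax * by + ay * bx
c = bx * by

# s^2 * eps_x * eps_y equals a*s^2 + b*s + c, the operator appearing in (12) and (17).
assert sp.simplify(s**2 * eps_x * eps_y - (a * s**2 + b * s + c)) == 0
```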
For the sake of simplicity, we introduce the following definitions: \[\mathcal{J}f =a\ddot{f}+b\dot{f}+cf \tag{21a}\] \[\mathbf{\sigma}^{\mathrm{PML}}(\mathbf{S},\pi) =(\dot{\mathbf{S}}-\alpha\dot{\pi}\mathbf{I})\mathbf{\Lambda}_{e}+(\mathbf{S}-\alpha\pi\mathbf{I})\mathbf{\Lambda}_{p} \tag{21b}\] where \(\mathcal{J}\) denotes an operator that acts on any scalar, vector, or tensor function \(f\), and \(\mathbf{\sigma}^{\mathrm{PML}}\) is the PML stress tensor. Thus, substituting the relations given in (20) into (18) and introducing the previous definitions, the fully-mixed PML formulation becomes: find \(\mathbf{u}\), \(\mathbf{w}\), \(\pi\), and \(\mathbf{S}\) satisfying: \[\rho\mathcal{J}\mathbf{u}+\rho_{f}\mathcal{J}\mathbf{w} =\nabla\cdot\mathbf{\sigma}^{\mathrm{PML}}(\mathbf{S},\pi) \text{in }\Omega\times T \tag{22a}\] \[\mathcal{D}(\mathcal{J}\mathbf{S}) =\frac{1}{2}\left\{\nabla\mathbf{u}\mathbf{\Lambda}_{p}+\mathbf{\Lambda}_{p}(\nabla\mathbf{u})^{T}+\nabla\dot{\mathbf{u}}\mathbf{\Lambda}_{e}+\mathbf{\Lambda}_{e}(\nabla\dot{\mathbf{u}})^{T}\right\} \text{in }\Omega\times T \tag{22b}\] \[-\nabla\dot{\pi} =\left(\tilde{\mathbf{\Lambda}}_{e}\frac{\partial}{\partial t}+\tilde{\mathbf{\Lambda}}_{p}\right)\left(\rho_{f}\dot{\mathbf{u}}+\rho_{w}\dot{\mathbf{w}}+\frac{\eta}{\kappa}\mathbf{w}\right) \text{in }\Omega\times T \tag{22c}\] \[-\mathcal{J}\pi =\nabla\cdot\left\{M\left(\mathbf{\Lambda}_{e}\frac{\partial}{\partial t}+\mathbf{\Lambda}_{p}\right)(\alpha\mathbf{u}+\mathbf{w})\right\} \text{in }\Omega\times T \tag{22d}\] where \(\mathcal{D}\) denotes the compliance operator, which takes the stress tensor as argument and returns the strain tensor (\(\mathbf{e}=\mathcal{D}(\mathbf{\sigma})\)). **Remark:** the tensor \(\mathbf{S}\) is symmetric and, therefore, only three additional scalar functions are introduced as unknowns by the mixed PML formulation. ## 4 Hybrid formulation With the fully-mixed formulation given in (22), the vector functions \(\mathbf{u},\mathbf{w}\), the scalars \(p,\pi\), and the symmetric tensor field \(\mathbf{S}\) must be solved simultaneously on the whole domain \(\Omega=\Omega^{\mathrm{RD}}\cup\Omega^{\mathrm{PML}}\subset\mathbb{R}^{2}\), which may lead to computationally expensive problems. Therefore, we define a hybrid formulation where the problem given in (22) is split into two sub-problems defined separately on \(\Omega^{\mathrm{RD}}\) and \(\Omega^{\mathrm{PML}}\) but coupled through boundary conditions on \(\Gamma_{I}\) (see Figure 1). Let \(\{\mathbf{u}_{1},\mathbf{w}_{1}\}\) and \(\{\mathbf{u}_{2},\mathbf{w}_{2}\}\) be the solid and relative fluid displacements defined separately on \(\Omega^{\mathrm{RD}}\) and \(\Omega^{\mathrm{PML}}\), respectively.
The hybrid PML formulation for the solid and fluid displacements, pore pressure, stress history, and pore pressure history reads: find \(\{\mathbf{u}_{1},\mathbf{w}_{1},p\}\) and \(\{\mathbf{u}_{2},\mathbf{w}_{2},\pi,\mathbf{S}\}\) satisfying: \[\rho\ddot{\mathbf{u}}_{1}+\rho_{f}\ddot{\mathbf{w}}_{1} =\nabla\cdot\mathbf{\sigma}(\mathbf{u}_{1},p) \text{in }\Omega^{\mathrm{RD}}\times T \tag{23a}\] \[-\nabla p =\rho_{f}\ddot{\mathbf{u}}_{1}+\rho_{w}\ddot{\mathbf{w}}_{1}+\frac{\eta}{\kappa}\dot{\mathbf{w}}_{1} \text{in }\Omega^{\mathrm{RD}}\times T \tag{23b}\] \[-\dot{p} =\nabla\cdot\{M(\alpha\dot{\mathbf{u}}_{1}+\dot{\mathbf{w}}_{1})\} \text{in }\Omega^{\mathrm{RD}}\times T \tag{23c}\] \[\rho\mathcal{J}\mathbf{u}_{2}+\rho_{f}\mathcal{J}\mathbf{w}_{2} =\nabla\cdot\mathbf{\sigma}^{\mathrm{PML}}(\mathbf{S},\pi) \text{in }\Omega^{\mathrm{PML}}\times T \tag{23d}\] \[\mathcal{D}(\mathcal{J}\mathbf{S}) =\frac{1}{2}\left\{\nabla\mathbf{u}_{2}\mathbf{\Lambda}_{p}+\mathbf{\Lambda}_{p}(\nabla\mathbf{u}_{2})^{T}+\nabla\dot{\mathbf{u}}_{2}\mathbf{\Lambda}_{e}+\mathbf{\Lambda}_{e}(\nabla\dot{\mathbf{u}}_{2})^{T}\right\} \text{in }\Omega^{\mathrm{PML}}\times T \tag{23e}\] \[-\nabla\dot{\pi} =\left(\tilde{\mathbf{\Lambda}}_{e}\frac{\partial}{\partial t}+\tilde{\mathbf{\Lambda}}_{p}\right)\left(\rho_{f}\dot{\mathbf{u}}_{2}+\rho_{w}\dot{\mathbf{w}}_{2}+\frac{\eta}{\kappa}\mathbf{w}_{2}\right) \text{in }\Omega^{\mathrm{PML}}\times T \tag{23f}\] \[-\mathcal{J}\pi =\nabla\cdot\left\{M\left(\mathbf{\Lambda}_{e}\frac{\partial}{\partial t}+\mathbf{\Lambda}_{p}\right)(\alpha\mathbf{u}_{2}+\mathbf{w}_{2})\right\} \text{in }\Omega^{\mathrm{PML}}\times T \tag{23g}\] subject to zero initial values and the Dirichlet and Neumann boundary conditions listed below (see Figure 1(b)). \[\mathbf{\sigma}(\mathbf{u}_{1},p)\cdot\mathbf{n}_{1} =\mathbf{g} \text{in }\Gamma_{g}\times T \tag{24a}\] \[\mathbf{\sigma}(\mathbf{u}_{1},p)\cdot\mathbf{n}_{1} =p\mathbf{I}\cdot\mathbf{n}_{1} =\mathbf{0} \text{in }\Gamma_{N}^{\mathrm{RD}}\times T \tag{24b}\] \[\mathbf{\sigma}^{\mathrm{PML}}(\mathbf{S},\pi)\cdot\mathbf{n}_{2} =\dot{\pi}\mathbf{I}\cdot\mathbf{n}_{2} =\mathbf{0} \text{in }\Gamma_{N}^{\mathrm{PML}}\times T \tag{24c}\] \[\mathbf{u}_{2} =\mathbf{0} \text{in }\Gamma_{D}^{\mathrm{PML}}\times T \tag{24d}\] \[\mathbf{w}_{2} =\mathbf{0} \text{in }\Gamma_{D}^{\mathrm{PML}}\times T \tag{24e}\] \[\pi =0 \text{in }\Gamma_{D}^{\mathrm{PML}}\times T \tag{24f}\] where \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) are outward-pointing normal vectors to \(\Omega^{\mathrm{RD}}\) and \(\Omega^{\mathrm{PML}}\) (\(\mathbf{n}_{1}=-\mathbf{n}_{2}\) on \(\Gamma_{I}\)), respectively (see Figure 1(a)). Finally, to couple both sets of equations, the continuity of displacements, tractions, and pressures must be imposed on the interface as follows: \[\mathbf{\sigma}^{\mathrm{PML}}(\mathbf{S},\pi)\mathbf{n}_{1}+\mathbf{\sigma}(\mathbf{u}_{1},p)\mathbf{n}_{2} =0 \text{in }\Gamma_{I}\times T \tag{25a}\] \[\mathbf{u}_{1} =\mathbf{u}_{2} \text{in }\Gamma_{I}\times T \tag{25b}\] \[\mathbf{w}_{1} =\mathbf{w}_{2} \text{in }\Gamma_{I}\times T \tag{25c}\] \[p =\dot{\pi} \text{in }\Gamma_{I}\times T \tag{25d}\] ### Extension to M-PML The previous formulations are prone to time instabilities depending on the media properties and the form of the stretching functions.
To transform the PML layer in the fully-mixed and hybrid problems to the multiaxial case, the functions \(\alpha\) and \(\beta\) (see Equation (7)) must be redefined as: \[\alpha_{x}(x) =\alpha_{y}(y)=1 \tag{26a}\] \[\beta_{x}^{*}(x,y) =\beta_{x}(x)+p^{(y/x)}\beta_{y}(y) \tag{26b}\] \[\beta_{y}^{*}(x,y) =\beta_{y}(y)+p^{(x/y)}\beta_{x}(x) \tag{26c}\] where \(p^{(y/x)}\) and \(p^{(x/y)}\) are constant parameters that allow the fine-tuning of the M-PML layer. This modified definition of the scaling and attenuation functions does not introduce changes to the previous formulation. ### Variational formulation The interface conditions (25a) and (25b) are fulfilled by using a single continuous function space for \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\). Similarly, (25c) is fulfilled by using a single continuous function space for \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\). A Lagrange multiplier is used for the condition (25d). Thus, using the following function spaces \[V =\{\mathbf{u}\in[H^{1}(\Omega)]^{2}\text{ s.t. }\mathbf{u}=\mathbf{0}\text{ on }\Gamma_{D}^{\mathrm{PML}}\times T\} \tag{27a}\] \[Q =\{p\in L^{2}(\Omega^{\mathrm{RD}})\} \tag{27b}\] \[Q_{0} =\{p\in L^{2}(\Omega^{\mathrm{PML}})\text{ s.t. }p=0\text{ on }\Gamma_{D}^{\mathrm{PML}}\times T\} \tag{27c}\] \[T =\{S\in[L^{2}(\Omega^{\mathrm{PML}})]^{2\times 2}\} \tag{27d}\] \[L =\{l\in L^{2}(\Gamma_{I})\} \tag{27e}\] the weak form of the system of PDEs given in (23) reads: find \((\mathbf{u},\mathbf{w},p,\pi,\mathbf{S},\lambda_{p})\in V\times V\times Q\times Q_{0}\times T\times L\) such that, for all \((\tilde{\mathbf{u}},\tilde{\mathbf{w}},\tilde{p},\tilde{\pi},\tilde{\mathbf{S}},\tilde{\lambda}_{p})\in V\times V\times Q\times Q_{0}\times T\times L\): \[\int_{\Omega^{\text{RD}}}(\rho\ddot{\mathbf{u}}+\rho_{f}\ddot{\mathbf{w}})\cdot\tilde{\mathbf{u}}\ d\Omega+\int_{\Omega^{\text{RD}}}\mathbf{\sigma}(\mathbf{u},p):\nabla\tilde{\mathbf{u}}\ d\Omega=\int_{\Gamma_{g}}\mathbf{g}\cdot\tilde{\mathbf{u}}\ d\Gamma \tag{28a}\] \[\int_{\Omega^{\text{PML}}}(\rho\mathcal{J}\mathbf{u}+\rho_{f}\mathcal{J}\mathbf{w})\cdot\tilde{\mathbf{u}}\ d\Omega+\int_{\Omega^{\text{PML}}}\mathbf{\sigma}^{\text{PML}}(\mathbf{S},\pi):\nabla\tilde{\mathbf{u}}\ d\Omega=0 \tag{28b}\] \[\int_{\Omega^{\text{RD}}}\left(\rho_{f}\ddot{\mathbf{u}}+\rho_{w}\ddot{\mathbf{w}}+\frac{\eta}{\kappa}\dot{\mathbf{w}}\right)\cdot\tilde{\mathbf{w}}\ d\Omega-\int_{\Omega^{\text{RD}}}p\ \nabla\cdot\tilde{\mathbf{w}}\ d\Omega+\int_{\Gamma_{g}}p\mathbf{n}\cdot\tilde{\mathbf{w}}\ d\Gamma=0 \tag{28c}\] \[\int_{\Omega^{\text{PML}}}\left(\tilde{\mathbf{\Lambda}}_{e}\frac{\partial}{\partial t}+\tilde{\mathbf{\Lambda}}_{p}\right)\left(\rho_{f}\dot{\mathbf{u}}+\rho_{w}\dot{\mathbf{w}}+\frac{\eta}{\kappa}\mathbf{w}\right)\cdot\tilde{\mathbf{w}}\ d\Omega-\int_{\Omega^{\text{PML}}}\dot{\pi}\ \nabla\cdot\tilde{\mathbf{w}}\ d\Omega=0 \tag{28d}\] \[\int_{\Omega^{\text{RD}}}\dot{p}\ \tilde{p}\ d\Omega+\int_{\Omega^{\text{RD}}}\nabla\cdot\left\{M(\alpha\dot{\mathbf{u}}+\dot{\mathbf{w}})\right\}\ \tilde{p}\ d\Omega=0 \tag{28e}\] \[\int_{\Omega^{\text{PML}}}\mathcal{J}\pi\ \tilde{\pi}\ d\Omega+\int_{\Omega^{\text{PML}}}\nabla\cdot\left\{M\left(\mathbf{\Lambda}_{e}\frac{\partial}{\partial t}+\mathbf{\Lambda}_{p}\right)(\alpha\mathbf{u}+\mathbf{w})\right\}\ \tilde{\pi}\ d\Omega=0 \tag{28f}\] \[\int_{\Omega^{\text{PML}}}\mathcal{D}(\mathcal{J}\mathbf{S}):\tilde{\mathbf{S}}\ d\Omega-\frac{1}{2}\int_{\Omega^{\text{PML}}}\{\nabla\mathbf{u}\mathbf{\Lambda}_{p}+\mathbf{\Lambda}_{p}(\nabla\mathbf{u})^{T}+\nabla\dot{\mathbf{u}}\mathbf{\Lambda}_{e}+\mathbf{\Lambda}_{e}(\nabla\dot{\mathbf{u}})^{T}\}:\tilde{\mathbf{S}}\ d\Omega=0 \tag{28g}\]
\[\int_{\Gamma_{I}}\tilde{\lambda}_{p}\ (p-\dot{\pi})\ d\Gamma+\int_{\Gamma_{I}}(\tilde{p}-\tilde{\pi})\ \lambda_{p}\ d\Gamma=0 \tag{28h}\] where \(\lambda_{p}\) is a Lagrange multiplier used to impose the coupling condition (25d).

Figure 2: Poroelastic domains used for the numerical experiments in (a) homogeneous, (b) horizontally layered, and (c) horizontally layered with realistic interface to bedrock. The water table is marked with an inverted blue triangle. The red dots denote the locations where measurements of solid displacement (\(\mathbf{u}\)), relative fluid displacement (\(\mathbf{w}\)), and pressure (\(p\)) were taken. In Figure (d), a generic representation of the extended domains used in layered media is shown.

## 5 Numerical experiments Three experiments were developed to evaluate the performance and accuracy of the proposed fully-mixed and hybrid PML formulations. The first experiment (referred to as Experiment 1) considers a homogeneous poroelastic half-space where the water table is located at the free surface. In the second (Experiment 2), a medium with three horizontal layers over a half-space, with the water table below the second layer, was considered (Figure 2(b)). The third (Experiment 3) adds a more realistic stratification including an outcropping (see Figure 2(c)). The material parameters used in the three cases are listed in Table 1. Set 1 corresponds approximately to a soft rock, while sets 2 to 5 represent standard soil parameters, from loose to dense sands. To obtain a reference solution, we solved Biot's equations given in (1) on an extended domain with dimensions large enough to prevent waves reflected at the exterior boundaries from reaching \(\Omega^{\mathrm{RD}}\) during the simulated time window. A zero-displacement Dirichlet condition was imposed on both \(\mathbf{u}\) and \(\mathbf{w}\) on the exterior boundary \(\Gamma_{D}\). A simulation time of 2.0 seconds (half of the runtime used for the PML experiments) was enough for comparison purposes in the extended domain simulations. For layered media, the regular domain was embedded in the extended domain, and the layers were extended to the exterior boundary (see Figure 2(d)). Additionally, a first-order paraxial absorbing boundary condition (ABC) [1] was implemented for comparison purposes; in the paraxial simulations, the PML domain shown in Figure 2 was removed and replaced by this ABC at the same boundary. A vertical load defined by a Ricker wavelet was applied on a 0.3 m wide strip of the free surface in the three experiments (see Figure 2(a)).
\begin{table} \begin{tabular}{l c c c c c} \hline & Set 1 & Set 2 & Set 3 & Set 4 & Set 5 \\ \hline \(\rho_{s}\) (kg/m\({}^{3}\)) & 2650 & 2600 & 2600 & 2600 & 2600 \\ \(\rho_{f}\) (kg/m\({}^{3}\)) & 900 & 1.29 & 1000 & 1000 & 1.29 \\ \(K_{s}\) (N/m\({}^{2}\)) & \(12\times 10^{9}\) & \(2.3\times 10^{8}\) & \(2.3\times 10^{8}\) & \(2.5\times 10^{8}\) & \(4.6\times 10^{8}\) \\ \(K_{f}\) (N/m\({}^{2}\)) & \(2\times 10^{9}\) & \(1.4\times 10^{5}\) & \(2\times 10^{9}\) & \(2\times 10^{9}\) & \(1.4\times 10^{5}\) \\ \(K_{b}\) (N/m\({}^{2}\)) & \(10\times 10^{9}\) & \(1.5\times 10^{8}\) & \(1.5\times 10^{8}\) & \(1.7\times 10^{8}\) & \(4.2\times 10^{8}\) \\ \(\mu_{b}\) (N/m\({}^{2}\)) & \(5\times 10^{9}\) & \(1.33\times 10^{8}\) & \(1.33\times 10^{8}\) & \(2.84\times 10^{8}\) & \(4.44\times 10^{8}\) \\ \(\tau\) & 1.2 & 1 & 1 & 1 & 1 \\ \(\kappa\) (m\({}^{2}\)) & \(1\times 10^{-12}\) & \(8\times 10^{-9}\) & \(8\times 10^{-9}\) & \(1\times 10^{-10}\) & \(1\times 10^{-11}\) \\ \(\phi\) & 0.3 & 0.2 & 0.2 & 0.2 & 0.2 \\ \(\eta\) (Pa s) & \(1\times 10^{-3}\) & \(2\times 10^{-5}\) & \(1\times 10^{-3}\) & \(1\times 10^{-3}\) & \(2\times 10^{-5}\) \\ \hline \(f_{c}\) (Hz) & 44210 & 62 & 4 & 318 & 49350 \\ \(c_{s}\) (m/s) & 960 & 300 & 300 & 400 & 500 \\ \(c_{1p}\) (m/s) & 2366 & 470 & 574 & 636 & 755 \\ \(c_{2p}\) (m/s) & 775 & 329 & 425 & 513 & 330 \\ \hline \multicolumn{5}{l}{\(f_{c}\): characteristic frequency of the medium, \(c_{1p}\): fast primary wave velocity,} \\ \multicolumn{5}{l}{\(c_{2p}\): slow primary wave velocity, \(c_{s}\): shear wave velocity} \\ \end{tabular} \end{table} Table 1: Material parameters, characteristic frequency of the medium, and wave propagation velocities of the porous media considered in the experiments. Set 1 was taken from Tables 1 and 2 in [16], whereas sets 2 to 5 were defined by the authors to obtain soil-like wave velocities.

The expression defining the source is as follows: \[\mathbf{g}(t)=A\left[\begin{array}{c}0\\ S(t)\end{array}\right]\quad\text{with }S(t)=\frac{(0.25u^{2}-0.5)e^{-0.25u^{2}}-13e^{-13.5}}{0.5+13e^{-13.5}}\quad\text{and }0\leq t\leq\frac{6\sqrt{6}}{\omega_{r}} \tag{29}\] where \(A\) denotes the pulse amplitude and \(u=\omega_{r}t-3\sqrt{6}\). In the previous expression, \(\omega_{r}=2\pi f_{r}\) is the characteristic central angular frequency of the pulse. In all the experiments, \(A=10^{4}\) N/m and \(f_{r}=15\) Hz were used. The frequency of this source was adjusted to obtain a frequency spectrum similar to those obtained from geophysical seismic surveys, which makes it suitable for the simulations.
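A minimal NumPy sketch of the source pulse (29) follows; it assumes the corrected exponents \(e^{-13.5}\) shown above, which make \(S\) vanish at both ends of the pulse and normalize its peak magnitude to one:

```python
import numpy as np

def source_pulse(t, f_r=15.0, A=1.0e4):
    """Normalized Ricker-like pulse of Eq. (29); returns the vertical load g_y(t)."""
    w_r = 2.0 * np.pi * f_r
    u = w_r * t - 3.0 * np.sqrt(6.0)
    S = ((0.25 * u**2 - 0.5) * np.exp(-0.25 * u**2)
         - 13.0 * np.exp(-13.5)) / (0.5 + 13.0 * np.exp(-13.5))
    # The pulse acts only for 0 <= t <= 6*sqrt(6)/w_r; zero elsewhere.
    S = np.where((t >= 0.0) & (t <= 6.0 * np.sqrt(6.0) / w_r), S, 0.0)
    return A * S

t = np.linspace(0.0, 0.2, 1000)
g_y = source_pulse(t)  # peak magnitude ~A at the pulse center
```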
The porous media utilized in the simulations exhibit primarily dispersive behavior, indicated by the characteristic frequency of the media (\(f_{c}\)) satisfying the inequality \(f_{c}=(\eta\phi)/(2\pi\rho_{f}\tau\kappa)>f_{r}=15\) Hz, in which regime the slow primary wave does not propagate [9; 16]. Although the medium generated by the physical parameters in Set 3 (see Table 1) does not display dispersive behavior, its shear wave velocity is slower than its slowest volumetric wave. Therefore, the discretization parameters were chosen by considering only the fastest primary wave and the shear wave velocities for all media. The domain was discretized using triangular cells, and the element size (\(\Delta x\)) was adjusted to guarantee a minimum of 12 elements per shortest wavelength, choosing the largest element size satisfying this criterion. In all simulations, elements in the vicinity of the surface load were refined to a size of \(\Delta x=0.15\) m. A third-order version of the scaling and attenuation profiles given in (8) (with \(m=3\)) was used for the simulations. The width of the PML, denoted by \(L_{\rm PML}\), was chosen to be ten times the element size once \(\Delta x\) was fixed (see Table 2). The constant \(\beta_{0}\) was calculated using the expression (9) with \(R=10^{-4}\), while \(\alpha_{0}\) was fixed at 5. For all the experiments considering M-PML stretching functions, \(p^{(y/x)}\) and \(p^{(x/y)}\) were set to 0.01. A summary of the discretization and PML parameters is presented in Table 2. The Newmark-\(\beta\) method with \(\beta=1/4\) and \(\gamma=1/2\) (i.e., without numerical damping) was used for time discretization [34]. The time-step \(\Delta t\) was calculated using the Courant-Friedrichs-Lewy criterion: \[\Delta t<{\rm CFL}\frac{\Delta x}{c_{1p}} \tag{30}\] where \(c_{1p}\) represents the velocity of the fast primary wave and \({\rm CFL}=0.75\) is the Courant-Friedrichs-Lewy number.
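The parameter choices above can be summarized in a short Python sketch; the sample values below (element size, fast-wave velocity) mimic Experiment 1 and are illustrative only. Note that in the experiments \(\alpha_{0}\) was fixed at 5 rather than computed from (9):

```python
import numpy as np

def pml_constants(m, L_pml, c_p, R, b):
    """Absorption constants of Eq. (9)."""
    alpha0 = (m + 1) * b / (2 * L_pml) * np.log(1.0 / abs(R))
    beta0 = (m + 1) * c_p / (2 * L_pml) * np.log(1.0 / abs(R))
    return alpha0, beta0

def stable_dt(dx, c1p, cfl=0.75):
    """Time step bounded by the CFL criterion of Eq. (30)."""
    return cfl * dx / c1p

dx = 7.8                                   # global element size, Experiment 1
_, beta0 = pml_constants(m=3, L_pml=10 * dx, c_p=2366.0, R=1e-4, b=0.3)
dt = stable_dt(dx, c1p=2366.0)             # ~2.5e-3 s; the experiments use 1e-3 s
```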
\begin{table} \begin{tabular}{l c c c|c c c} \hline \multicolumn{4}{c}{General parameters} & \multicolumn{3}{c}{PML parameters} \\ \hline Experiment & \(\Delta x_{\rm global}\) (m) & \(\Delta x_{\rm source}\) (m) & \(\Delta t\) (s) & \(R\) & \(L_{\rm PML}\) (m) & \(\alpha_{0}\) \\ \hline 1 & 7.8 & 0.15 & \(10^{-3}\) & \(10^{-4}\) & 78 & 5 \\ 2 and 3 & 1.4 & 0.15 & \(10^{-3}\) & \(10^{-4}\) & 14 & 5 \\ \hline \multicolumn{7}{l}{\(\Delta x_{\rm global}\): global element size; \(\Delta x_{\rm source}\): element size refinement near the external source} \\ \multicolumn{7}{l}{\(\Delta t\): time step; \(R\): reflection coefficient; \(L_{\rm PML}\): width of the PML layer} \\ \end{tabular} \end{table} Table 2: Element sizes and PML parameters used in the three experiments with homogeneous and layered media.

### Metrics for performance evaluation To evaluate the performance of the fully-mixed PML, hybrid PML, and paraxial methods, the poroelastic energy on \(\Omega^{\text{RD}}\) was estimated and compared against the reference solution using the following expression: \[E(t_{k}) =\frac{1}{2}\int_{\Omega^{\text{RD}}}\rho\dot{\mathbf{u}}(\mathbf{x},t_{k})\cdot\dot{\mathbf{u}}(\mathbf{x},t_{k})\ d\Omega+\frac{1}{2}\int_{\Omega^{\text{RD}}}C\mathbf{e}(\mathbf{u}(\mathbf{x},t_{k})):\mathbf{e}(\mathbf{u}(\mathbf{x},t_{k}))\ d\Omega\] \[+\frac{1}{2}\int_{\Omega^{\text{RD}}}\rho_{w}\dot{\mathbf{w}}(\mathbf{x},t_{k})\cdot\dot{\mathbf{w}}(\mathbf{x},t_{k})\ d\Omega+\frac{1}{2}\int_{\Omega^{\text{RD}}}\frac{1}{M}p(\mathbf{x},t_{k})^{2}\ d\Omega \tag{31}\] \[+\int_{\Omega^{\text{RD}}}\rho_{f}\dot{\mathbf{u}}(\mathbf{x},t_{k})\cdot\dot{\mathbf{w}}(\mathbf{x},t_{k})\ d\Omega\] Finally, we obtained traces of \(\mathbf{u}\), \(\mathbf{w}\), and \(p\) at different locations \(\mathbf{x}_{i}\) in \(\Omega^{\text{RD}}\) (see Figure 2). Normalized error metrics on these traces are defined as: \[e_{\mathbf{u}}(\mathbf{x}_{i},t_{k}) =\frac{\|\mathbf{u}_{\text{ref}}(\mathbf{x}_{i},t_{k})-\mathbf{u}(\mathbf{x}_{i},t_{k})\|_{2}}{\max_{t_{k}}\|\mathbf{u}_{\text{ref}}(\mathbf{x}_{i},t_{k})\|_{2}} \tag{32a}\] \[e_{\mathbf{w}}(\mathbf{x}_{i},t_{k}) =\frac{\|\mathbf{w}_{\text{ref}}(\mathbf{x}_{i},t_{k})-\mathbf{w}(\mathbf{x}_{i},t_{k})\|_{2}}{\max_{t_{k}}\|\mathbf{w}_{\text{ref}}(\mathbf{x}_{i},t_{k})\|_{2}} \tag{32b}\] \[e_{p}(\mathbf{x}_{i},t_{k}) =\frac{|p_{\text{ref}}(\mathbf{x}_{i},t_{k})-p(\mathbf{x}_{i},t_{k})|}{\max_{t_{k}}|p_{\text{ref}}(\mathbf{x}_{i},t_{k})|} \tag{32c}\] where \(|\cdot|\) is the absolute value and \(\|\mathbf{f}\|_{2}=\sqrt{\sum_{i}f_{i}^{2}}\) the vector 2-norm. The sub-index \((\cdot)_{\text{ref}}\) denotes the reference solution obtained in the extended domain simulations.
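For reference, a minimal NumPy sketch of how the error metrics (32) can be evaluated on recorded traces is given below (array layouts are our own convention; \(e_{\mathbf{w}}\) is computed exactly as \(e_{\mathbf{u}}\)):

```python
import numpy as np

def normalized_errors(u_ref, u, p_ref, p):
    """Normalized error traces of Eq. (32).
    u_ref, u: arrays of shape (n_times, 2); p_ref, p: arrays of shape (n_times,)."""
    e_u = np.linalg.norm(u_ref - u, axis=1) / np.max(np.linalg.norm(u_ref, axis=1))
    e_p = np.abs(p_ref - p) / np.max(np.abs(p_ref))
    return e_u, e_p
```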
### Implementation All experiments were solved using the open-source computing platform FEniCS [2; 29]. To implement the hybrid PML problem, Multiphenics [6] was used as a complementary tool. The finite element meshes were generated using the Frontal-Delaunay algorithm in Gmsh [19]. Discretization of the displacements (\(\mathbf{u}\) and \(\mathbf{w}\)) and pressures (\(p\) and \(\pi\)) was carried out using continuous Lagrange polynomials of second and first order, respectively. The stress history \(\mathbf{S}\) was discretized using discontinuous Lagrange polynomials of first order. The MUltifrontal Massively Parallel sparse direct Solver (MUMPS) and the Generalized Minimal RESidual method (GMRES) were used to solve the linear systems obtained after assembling the discrete weak forms of the problems [4; 5; 3]. FEniCS is built with PETSc as its linear algebra backend [4; 5; 3] and supports both solvers by default. The iterative solver was used only for the extended domain simulations, with relative and absolute tolerances of \(10^{-7}\) and \(10^{-9}\), respectively [4]. To accelerate the convergence, the linear system of the extended domain simulation was right-preconditioned using the parallel ILU preconditioner HYPRE-Euclid [23]. The direct solver was used with default parameters [2]. ## 6 Results In the upcoming sections, we present and analyze energy graphs, traces, error traces, and snapshots of the propagating waves for all three experiments. Through these analyses, we aim to provide a comprehensive understanding of the simulations and their outcomes. ### Experiment 1: homogeneous half-space Figure 3(a) shows the poroelastic energy estimated using (31) for the extended, paraxial, and PML simulations in a homogeneous half-space domain. The results demonstrate good agreement between the PML and reference solutions in the first 2 seconds of simulation time, although small differences are observed. The energy decay rate of the PML simulations is greater than that of the paraxial case, which decays consistently but at a lower rate. The energy obtained from the hybrid and fully-mixed PML formulations showed no observable differences, indicating that both methods provide equivalent solutions (see Figure 3(a)). Additionally, the number of degrees of freedom (DOFs) in the hybrid case is approximately 1.7 times smaller than in the fully-mixed problem (as shown in Table 3) because the tensor \(\mathbf{S}\) does not need to be solved in \(\Omega^{\mathrm{RD}}\). Consequently, the hybrid formulation is significantly less computationally expensive than the fully-mixed form while maintaining the same properties. Figure 3(b) compares the energies obtained from the hybrid PML and M-PML formulations. During the first second of the simulation, the results are similar, and only small differences are observed. As the simulation progresses, the energy obtained with M-PML stretching functions is slightly larger than that obtained with uniaxial functions, indicating slightly worse performance. However, during the last second of simulation, the energy of the uniaxial case stops decaying and even shows a slight increase, while the energy of the M-PML case continues to decay. The reduced performance of M-PML (compared to PML) during the first seconds of simulation is because the stretching functions are not perfectly matched at \(\Gamma_{I}\), generating spurious reflections. In fact, an M-PML absorbing layer can be interpreted as a sponge rather than a PML [31]. This is because the coupling of two damping directions causes the loss of the perfectly matched layer characteristic of Berenger's technique [8]. Thus, the theoretical reflection coefficient for an infinite M-PML is no longer zero even before discretization.

\begin{table} \begin{tabular}{l r r r} \hline \hline & Experiment 1 & Experiment 2 & Experiment 3 \\ \hline Extended & 8,550,901 & 13,043,765 & 12,759,950 \\ Paraxial & 425,518 & 462,513 & 480,229 \\ Fully-mixed PML & 1,054,006 & 1,137,430 & 1,181,065 \\ Hybrid PML & 607,424 & 651,758 & 676,810 \\ \hline \hline \end{tabular} \end{table} Table 3: Number of degrees of freedom (DOFs) solved for extended, paraxial, fully-mixed PML, and hybrid PML formulations.

Figure 3: Energies estimated on \(\Omega^{\mathrm{RD}}\) using Equation (31) for the homogeneous half-space experiment. (a) Results show the extended, paraxial, fully-mixed, and hybrid PML results, and (b) a comparison between the hybrid PML and hybrid M-PML simulations. Both fully-mixed and hybrid formulations give identical results.

Traces of the solutions and errors calculated using (32) for the two locations shown in Figure 2(a) are presented in Figures 4 and 5. There is a good match between the hybrid PML and the extended domain simulations at both locations, in contrast to the paraxial case where reflections are observed. Looking at the errors, the superior performance of the hybrid PML method is evident, showing an improvement of at least three orders of magnitude compared to the paraxial case. In the same figure, the error obtained by using M-PML stretching functions in the hybrid simulation is depicted.

Figure 4: Traces of \(\mathbf{u}\), \(\mathbf{w}\), and \(p\) for Experiment 1 at the points highlighted in Figure 2(a). Results obtained using the hybrid PML formulation show good agreement with the reference solution and no spurious reflections are observed.

Figure 5: Errors in the traces of \(\mathbf{u}\), \(\mathbf{w}\), and \(p\) estimated using (32) for Experiment 1 at the highlighted locations in Figure 2(a). The vertical axis is logarithmic to facilitate visualization of differences. At the beginning of the simulation, all formulations yielded errors close to zero, which are omitted in the plot. As the simulation progressed, errors obtained using the hybrid PML and M-PML are smaller than those obtained using the paraxial case, although M-PML showed slightly worse performance between 0.5 and 1.5 s, approximately, because the stretching functions are not perfectly matched in this case.

As mentioned previously, results
obtained using M-PML compared to PML are slightly worse between \(0.5\) and \(1.5\) s, approximately, because the stretching functions are not perfectly matched in this case. However, the results are still better than those obtained with the paraxial boundary conditions and more stable in time compared to the hybrid PML simulation (see Figure 3). Finally, screenshots of the solutions at different times can be found in Figure 6. ### Experiment 2: horizontally-layered domain Figure 7(a) shows the poroelastic energies for extended, paraxial, and PML simulations in the horizontally-layered domain. Similarly, the results indicate good agreement between the PML and reference solutions within the first 2 seconds of simulation, although some small differences are observed. The energy decay rate of the PML simulations is greater than that of the paraxial case, which decays slowly but consistently. In this experiment, it is not evident from the plots that outgoing waves are reaching the boundaries of \(\Omega^{\rm RD}\), as the plateaus observed in Figure 3 are absent. This is due to reflections at the interfaces between layers (see Figures 2(b) and 10).

Figure 6: Screenshots of the hybrid PML simulations at different time steps. The first row in the figure shows the solid displacement (\(\mathbf{u}\)), the second the relative fluid displacement (\(\mathbf{w}\)), and the third row the fluid pressure (\(p\)). No evident spurious reflections are observed at the interface between \(\Omega^{\rm RD}\) and \(\Omega^{\rm PML}\).

Figure 7: Energies estimated on \(\Omega^{\rm RD}\) using Equation (31) for the horizontally layered experiment. (a) Results show the extended, paraxial, fully-mixed, and hybrid PML results, and (b) a comparison between the hybrid PML and hybrid M-PML simulations. Both fully-mixed and hybrid formulations give identical results.

The energies obtained using the hybrid and fully-mixed PML formulations produced almost the same results, illustrating that both methods provide similar solutions (see Figure 7(a)). Additionally, the number of DOFs in the hybrid case is approximately 1.8 times smaller than in the fully-mixed problem (as shown in Table 3). Therefore, the hybrid formulation is less computationally expensive than the fully-mixed form while providing equivalent solutions. Regarding Figure 7(b), in terms of decay rate, during the first 2.5 seconds of the simulation, the results obtained using PML and M-PML in the hybrid formulation are similar, and slightly greater errors are observed with M-PML between 1.5 and 2.5 s. The worse performance of M-PML in this time window is because the stretching functions are not perfectly matched at \(\Gamma_{I}\), as mentioned in previous paragraphs, which generates spurious reflections [31]. However, during the last second, the energy of the uniaxial case decays more slowly than the M-PML case, which keeps a constant rate. Figures 8 and 9 present traces of the solutions and errors calculated using (32) for the two locations shown in Figure 2(b). The results show a good match between the hybrid PML and extended domain simulations at both locations, in contrast to the paraxial case where reflections are observed. The error analysis confirms the marginally superior performance of the hybrid PML method with respect to the M-PML case, but shows a considerable improvement compared to the paraxial case. The difference between the PML and M-PML results is due to the imperfect matching of stretching functions in the latter case.
Nevertheless, the results obtained with M-PML are still superior to those obtained with paraxial boundary conditions and more stable over time than the hybrid PML simulation (see Figure 7). Finally, Figure 10 presents screenshots that depict the propagation of waves at different times. These snapshots clearly show the transitions between different layers and the behavior of waves within each medium. For example, the layer with lower permeability (as shown in Table 1 and Figure 2(b)) exhibited smaller relative fluid displacements (\(\mathbf{w}\)) due to the small hydraulic conductivity. Additionally, media filled with air had lower pressures. Despite the complex behavior of waves in different media, the PML effectively absorbed and attenuated the waves during the simulation.

Figure 8: Traces of \(\mathbf{u}\), \(\mathbf{w}\), and \(p\) for Experiment 2 at the locations highlighted in Figure 2(b). Results obtained using the hybrid PML formulation show good agreement with the extended domain solution.

### Experiment 3: layered domain with outcropping As observed in the previous experiments, both hybrid and fully-mixed PML formulations produced superior results compared to the paraxial case. The energy plots for this experiment also show an excellent agreement between the PML and reference solutions within the initial 2 seconds of simulation, with negligible differences, as illustrated in Figure 11(a). Although the decay of energy was consistent over time in all methods, PML exhibited faster energy decay compared to the paraxial case. The energy plots indicate that the solutions obtained using the hybrid and fully-mixed PML formulations are almost indistinguishable, indicating that both methods produce nearly identical results. This observation is consistent with the results of the previous experiments. Additionally, the hybrid case had a reduction in the number of DOFs by almost 1.8 times compared to the fully-mixed case (as shown in Table 3).

Figure 10: Screenshots of the hybrid PML simulations at different time steps. The first row of the figure shows the solid displacement (\(\mathbf{u}\)), the second the relative fluid displacement (\(\mathbf{w}\)), and the third row the fluid pressure (\(p\)). No spurious reflections are observed at the interface between \(\Omega^{\text{RD}}\) and \(\Omega^{\text{PML}}\).

Figure 9: Errors of the traces of \(\mathbf{u}\), \(\mathbf{w}\), and \(p\) estimated using (32) for Experiment 2 at the locations highlighted in Figure 2(b). The vertical axis is in logarithmic scale to improve the visualization of differences. At the beginning of the simulation, errors obtained with both methods are close to zero and therefore fall outside the vertical axis range. Results obtained with the hybrid PML formulation do not show observable differences compared to the reference solution.

Figure 11(b) compares the energies obtained using the hybrid PML and M-PML formulations. The results show that the two methods provide almost identical solutions up to 2 seconds of the simulation, while some differences can be observed between 2 and 3 seconds, where PML slightly surpasses the multiaxial solution. Interestingly, after 3 seconds of the simulation, the energy decay rate using M-PML is better than that of the uniaxial case, highlighting the improved time-stability of M-PML stretching functions. Traces of the solutions and their corresponding errors, calculated using (32), for the two locations illustrated in Figure 2(c), are presented in Figures 12 and 13, respectively.
These figures confirm the excellent agreement between the hybrid PML and the extended domain simulation at both locations, with no observable reflections, unlike the paraxial case. The error analysis shows a considerable improvement compared to the paraxial case and slightly superior performance of the hybrid PML method with respect to the M-PML case. However, it is worth noting that the results obtained with M-PML are still better than those obtained with paraxial boundary conditions and exhibit better stability over time than the hybrid PML simulation.

Figure 11: Energy estimated on \(\Omega^{\text{RD}}\) using (31) for Experiment 3. Results show the extended, paraxial, fully-mixed, and hybrid PML cases. Both fully-mixed and hybrid formulations give identical results.

Figure 12: Traces of \(\mathbf{u}\), \(\mathbf{w}\), and \(p\) for the third experiment at the locations highlighted in Figure 2(c). Results obtained using the hybrid PML formulation show good agreement with the reference solution.

Figure 14 shows snapshots of the solutions at different times. Due to the complex interfaces between materials in the medium, the waves interact in intricate ways, generating complex patterns of reflected waves at interfaces, as can be seen in the figure. Despite this complexity, the PML effectively absorbed the waves at the boundary of \(\Omega^{\text{RD}}\), and no unwanted reflections were observed.

Figure 14: Screenshots of the hybrid PML simulations at different time steps. The first row shows the solid displacement (\(\mathbf{u}\)), the second the relative fluid displacement (\(\mathbf{w}\)), and the third row the fluid pressure (\(p\)). No spurious reflections are observed at the interface between \(\Omega^{\text{RD}}\) and \(\Omega^{\text{PML}}\).

Figure 13: Errors of the traces of \(\mathbf{u}\), \(\mathbf{w}\), and \(p\) estimated using (32) for Experiment 3 at the locations highlighted in Figure 2(c). Results obtained with the hybrid PML formulation do not show differences compared to the reference solution at Point A, while the paraxial approximation also performs reasonably well at this location. However, the advantages of the proposed method are much more evident at Point B.

## 7 Conclusions We proposed fully-mixed and hybrid formulations of the PML method for the simulation of poroelastic waves in truncated domains. Compared to other methods, both introduce only three additional scalar unknowns, i.e., the components of the symmetric stress-history tensor, reducing the number of unknowns with respect to current split- and unsplit-field formulations. Moreover, the hybrid formulation considerably reduced the degrees of freedom required to solve the propagation problem when compared to the fully-mixed and extended domain simulations, because the new unknowns are only defined in the boundary layer. On average, the hybrid PML formulation reduced the number of DOFs by approximately 1.8 times compared to the fully-mixed form and by 18 times compared to the extended domain simulations. In comparison to the paraxial case, the hybrid PML formulation increased the number of DOFs by almost 41%. Nevertheless, although effective for some applications, paraxial boundary conditions are not ideal for absorbing surface waves; therefore, they are not recommended in problems where surface waves are predominant. The proposed formulations of the PML method are prone to the same issues as other PML formulations, and they may suffer instability over time under certain circumstances.
However, a significant advantage of the proposed methods is that they enable the scaling and attenuation functions to be redefined using stretching functions with superior absorbing properties, such as M-PML, without modifying the underlying partial differential equations. Moreover, the time-integration scheme used for the simulations did not consider numerical damping, making the problem conditions even more demanding compared to other methods [21; 22]. Only small time instabilities were observed, and they were fixed using M-PML. In terms of discretization, the element size was selected as large as possible to reduce the number of DOFs. Additionally, only P2-P2-P1 finite element triplets were utilized to represent the solutions to the problems. This approach considerably reduced the computational effort required to solve the problem, unlike in previous studies. As a result, the hybrid and fully-mixed PML formulations proposed in this article have demonstrated robustness for demanding discretization and physical media conditions. The next steps in this investigation are: (1) to extend the fully-mixed and hybrid formulations to the 3D case to simulate more realistic scenarios, for example, complex seismic geophysical applications; (2) to consider spatially variable and discontinuous porosities, which would introduce discontinuities in the relative fluid displacement; (3) to apply the 2D and 3D formulations to solve inverse problems in porous media; and (4) to simulate and understand the propagation of poroelastic waves in the human body, which is particularly interesting in the fields of Magnetic Resonance Imaging and Ultrasound and is closely related to the elastography problem [14; 25]. ## 8 Acknowledgments HM and JM acknowledge the financial support given by ANID through the projects ANID-FONDECYT Postdoctorado #3220266 and ANID-FONDECYT Regular #1230864, respectively. ES was partially funded by a grant from the Research Center for Integrated Disaster Risk Management CIGIDEN Project 1522A0005 FONDAP 2022.
2303.13122
Exploring Visual Prompts for Whole Slide Image Classification with Multiple Instance Learning
Multiple instance learning (MIL) has emerged as a popular method for classifying histopathology whole slide images (WSIs). However, existing approaches typically rely on pre-trained models from large natural image datasets, such as ImageNet, to generate instance features, which can be sub-optimal due to the significant differences between natural images and histopathology images that lead to a domain shift. In this paper, we present a novel, simple yet effective method for learning domain-specific knowledge transformation from pre-trained models to histopathology images. Our approach entails using a prompt component to assist the pre-trained model in discerning differences between the pre-trained dataset and the target histopathology dataset, resulting in improved performance of MIL models. We validate our method on two publicly available datasets, Camelyon16 and TCGA-NSCLC. Extensive experimental results demonstrate the significant performance improvement of our method for different MIL models and backbones. Upon publication of this paper, we will release the source code for our method.
Yi Lin, Zhongchen Zhao, Zhengjie ZHU, Lisheng Wang, Kwang-Ting Cheng, Hao Chen
2023-03-23T09:23:52Z
http://arxiv.org/abs/2303.13122v1
# Exploring Visual Prompts for Whole Slide Image Classification with Multiple Instance Learning ###### Abstract Multiple instance learning (MIL) has emerged as a popular method for classifying histopathology whole slide images (WSIs). However, existing approaches typically rely on pre-trained models from large natural image datasets, such as ImageNet, to generate instance features, which can be sub-optimal due to the significant differences between natural images and histopathology images that lead to a domain shift. In this paper, we present a novel, simple yet effective method for learning domain-specific knowledge transformation from pre-trained models to histopathology images. Our approach entails using a prompt component to assist the pre-trained model in discerning differences between the pre-trained dataset and the target histopathology dataset, resulting in improved performance of MIL models. We validate our method on two publicly available datasets, Camelyon16 and TCGA-NSCLC. Extensive experimental results demonstrate the significant performance improvement of our method for different MIL models and backbones. Upon publication of this paper, we will release the source code for our method. Keywords: Visual prompt, Multiple instance learning, Whole slide image, Deep learning. ## 1 Introduction Whole slide images (WSIs) play a vital role in histopathology image analysis and clinical disease diagnosis [6, 23, 26]. With the advent of deep-learning-based techniques, histopathology image analysis has undergone a significant transformation [9, 24]. However, there are still challenges when it comes to classifying WSIs. Due to their massive size, WSIs cannot be directly fed into typical deep-learning models. Therefore, WSIs are often divided into patches for processing [11]. Unfortunately, annotating patch-level labels is labor-intensive and time-consuming, which limits the applicability of conventional supervised learning methods [16, 18]. To address this issue, multiple instance learning (MIL) has emerged as the dominant technique for WSI analysis. In this approach, each WSI is considered as a bag containing multiple patches (instances), and a WSI bag is labeled negative only if all patches (instances) of this bag are negative. Conversely, the bag's label is positive if at least one of its instances is positive. Downsampling and feature extraction are necessary due to the large number of patches in a WSI. The quality of the extracted patch features greatly influences the performance of the subsequent MIL classification. Most existing methods [13, 17, 29, 19] extract patch features with a frozen feature extractor pre-trained on large natural image datasets, such as ImageNet [7], and then train the MIL classifier for the WSI prediction, as shown in Fig. 1\((a)\). However, such a MIL training scheme overlooks the domain shift between natural and pathological images. To narrow the domain shift, some researchers [28] propose to use self-supervised pre-training methods such as SimCLR [5] to train the feature extractor. However, these self-supervised learning methods do not take full advantage of the bag labels, resulting in limited performance. Another naive solution is to use partial patches instead of all patches to fine-tune the pre-trained feature extractor, as illustrated in Fig. 1\((b)\).
However, fine-tuning all the parameters of the feature extractor using limited patches without patch-level labels may impair the benefits of pre-training on large-scale datasets like ImageNet, which increases the risk of overfitting in downstream tasks. Inspired by the breakthrough of prompt learning in natural language processing (NLP), we introduce visual prompts to adapt the pre-trained feature extractor to pathological images, addressing the aforementioned issue. Our prompt learning framework for MIL-based WSI classification is shown in Fig. 1\((c)\). Limited by memory capacity, we propose a feasible solution that selects representative pathological images, instead of all patches, to fine-tune the pre-trained feature extractor. Based on the selected images, we design a prompt component added to the feature extractor to learn visual prompts, and freeze the backbone while only training the prompt component with the lightweight MIL classifier. In this way, our method can improve the performance of the ImageNet pre-trained feature extractor and achieve domain transformation to pathological image data. Our method also makes the entire training process highly efficient and lightweight. To the best of our knowledge, this is the first work to explore prompt learning for WSI classification. In summary, our contributions are three-fold: * We, for the first time, introduce visual prompts into WSI classification, which enables data domain transformation by learning prompt components. * We propose an intuitive but effective method for end-to-end prompt training, which involves representative patch selection to reduce the number of instances in a WSI bag. * We conduct extensive experiments to validate the effectiveness of the proposed method on two public datasets, _i.e._, Camelyon16 and TCGA-NSCLC. Experimental results demonstrate consistent improvements across different MIL methods and backbones.

Figure 1: An illustration of three adapting schemes for WSI classification with MIL: (a) freezing the pretrained feature extractor while only training the MIL classifier; (b) fine-tuning the feature extractor and training the MIL classifier; (c) only training the prompt blocks and the MIL classifier.

## 2 Related Work **MIL for WSI classification.** MIL-based methods [13, 17, 29, 19] have gained popularity in WSI classification due to their high effectiveness. These methods typically involve using a feature extractor to extract features from image patches of a WSI, followed by an aggregation step to obtain a feature representation at the WSI level. A lightweight classifier is then employed to predict the WSI category [13]. In WSI classification, an effective feature extractor that generates representative features is crucial for accurate classification results. However, existing MIL-based methods for WSI classification mainly adopt a pre-trained feature extractor without fine-tuning, which results in sub-optimal performance [12]. This is due to the domain shift and task discrepancies between the pre-training task (_e.g._, ImageNet) and the downstream task (_e.g._, histopathology) [27]. To address this issue, we introduce visual prompts for WSI classification, enabling smooth feature modulation from the upstream dataset to the downstream WSI classification. **Prompt Learning.** Prompt learning has recently emerged as a lightweight and efficient transfer learning paradigm in NLP and has achieved remarkable success [14].
The fundamental idea behind prompt learning is to freeze large-scale NLP models, such as BERT [8] and GPT-3 [3], that have been pre-trained on vast datasets and use task-specific prompts to adapt them to diverse downstream tasks without updating any parameters [1]. Building on the NLP prompt learning paradigm, several studies [20, 22, 4] have proposed to extend prompt learning to natural images in computer vision. For example, Luddecke et al. [20] used text and image prompts to adapt the frozen pre-trained CLIP model [25] to new image segmentation tasks. However, the effectiveness of prompt learning in the field of histopathology analysis is under-investigated. ## 3 Method Fig. 2 illustrates the proposed prompt learning framework, which consists of three primary steps: (I) MIL classifier training, (II) representative patch selection, and (III) prompt fine-tuning. First, an ImageNet pre-trained ResNet [10] is used to extract patch features, which are then used to train the MIL classifier and obtain the attention score of each patch. Second, representative patches in each WSI bag are chosen based on their attention scores to form a new bag. Third, the representative patches are used to fine-tune the prompt blocks plugged into the feature extractor and the MIL classifier in an end-to-end manner. We will elaborate on each step in the following sections.

Figure 2: Overview of our method. (I) MIL classifier training with all patch features. (II) Representative patch selection. (III) Prompt learning with representative patches.

### Attention-based MIL Classifier with Frozen Feature Extractor In attention-based MIL for WSI classification, the standard training process first uses a frozen feature extractor \(f(\cdot)\) pre-trained on ImageNet to extract all patch features. Then all patch features in a WSI bag are aggregated to form the WSI feature using the attention mechanism, which learns an attention score \(\alpha_{k}\) for each patch \(k\) through the MIL classifier. The WSI feature \(\mathbf{F}\) is obtained by computing the attention-weighted average of all patch features in a WSI as [13]: \[\mathbf{F}=\sum_{k=1}^{K}\alpha_{k}f(\mathbf{x}_{k}), \tag{1}\] where \[\alpha_{k}=\frac{\exp\left\{\mathbf{w}^{\mathrm{T}}\left(\tanh\left(\mathbf{V}_{1}f(\mathbf{x}_{k})\right)\odot\mathrm{sigmoid}\left(\mathbf{V}_{2}f(\mathbf{x}_{k})\right)\right)\right\}}{\sum\limits_{j=1}^{K}\exp\left\{\mathbf{w}^{\mathrm{T}}\left(\tanh\left(\mathbf{V}_{1}f(\mathbf{x}_{j})\right)\odot\mathrm{sigmoid}\left(\mathbf{V}_{2}f(\mathbf{x}_{j})\right)\right)\right\}}, \tag{2}\] where \(\mathbf{w}\), \(\mathbf{V}_{1}\) and \(\mathbf{V}_{2}\) are learnable parameters in the MIL classifier, \(\odot\) is the element-wise multiplication, and \(\tanh(\cdot)\) and \(\mathrm{sigmoid}(\cdot)\) denote the \(\tanh\) and sigmoid activation function, respectively. Finally, the MIL classifier head \(h(\cdot)\) predicts the label of the WSI from the WSI feature \(\mathbf{F}\), represented as: \[\tilde{\mathbf{y}}=h(\mathbf{F}), \tag{3}\] where \(\tilde{\mathbf{y}}\) denotes the prediction of the WSI label. During training, we minimize the prediction error using the cross-entropy (CE) loss. ### Representative Patch Selection. In WSI classification, it is common for only a small number of patches within a WSI to be associated with the disease of interest. For example, in the positive slides of Camelyon16 [2], on average, less than 10% of the patches in a WSI are tumor patches. Thus, only a few patches are sufficient to represent the entire WSI bag.
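To make the aggregation concrete, the following is a minimal PyTorch sketch of the gated attention pooling in Eqs. (1)-(2); the feature and hidden dimensions are our own illustrative choices, and bias terms (absent from Eq. (2)) are kept for simplicity:

```python
import torch
import torch.nn as nn

class GatedAttentionPooling(nn.Module):
    """Gated attention-based MIL pooling, following Eqs. (1)-(2)."""
    def __init__(self, feat_dim=512, hidden_dim=128):
        super().__init__()
        self.V1 = nn.Linear(feat_dim, hidden_dim)   # tanh branch (V1)
        self.V2 = nn.Linear(feat_dim, hidden_dim)   # sigmoid gate branch (V2)
        self.w = nn.Linear(hidden_dim, 1)           # attention score head (w)

    def forward(self, feats):                       # feats: (K, feat_dim)
        scores = self.w(torch.tanh(self.V1(feats)) * torch.sigmoid(self.V2(feats)))
        alpha = torch.softmax(scores, dim=0)        # Eq. (2): weights over K patches
        bag_feat = (alpha * feats).sum(dim=0)       # Eq. (1): weighted average
        return bag_feat, alpha.squeeze(-1)
```

The returned `bag_feat` would then be passed to the classifier head \(h(\cdot)\) of Eq. (3), and `alpha` provides the per-patch attention scores used in the selection step below.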
Based on this observation, we propose a feasible solution that selects representative pathological images to fine-tune the pre-trained feature extractor for WSI classification. In attention-based MIL, a patch with a higher attention score \(\alpha_{k}\) in a WSI bag is more likely to have the same category semantics as the WSI. Thus, based on the patches' attention scores \(\alpha_{k}\) calculated by the MIL classifier, we select the top-\(K\) patches with the highest attention scores in each WSI as a new bag. Here, \(K\) is set to 200, which will be discussed in Section 4.2. The new bag is assigned the original WSI bag label for prompt fine-tuning. In this way, we reduce the vast quantity of patches in each WSI to a small subset, enabling end-to-end training of both the feature extractor and the MIL classifier. ### Prompt Fine-tuning. With the selected representative patches, we design a prompt learning framework to adapt the feature extractor from the natural image domain to the pathological image domain, while retaining the advantage of pre-training on the large and diverse ImageNet dataset. Specifically, we design a prompt block that sequentially consists of a global average pooling (GAP), a multi-layer perceptron (MLP) with two layers, and a sigmoid activation, as shown in Fig. 2. Given the intermediate feature map \(\mathbf{f}_{i}\) from the \(i\)-th block of the feature extractor, the prompt block is added in parallel to the basic ResNet block \(g_{i}(\cdot)\) to generate the visual prompt \(\mathbf{p}_{i}\in\mathbb{R}^{D}\), where \(D\) denotes the dimension of the prompt vector. Subsequently, the generated prompts \(\mathbf{p}_{i}\) are channel-wise multiplied with the feature maps \(\mathbf{f}_{i+1}\) of the next block, represented as: \[\mathbf{p}_{i}=\mathrm{sigmoid}(\mathbf{W}_{i2}\mathrm{ReLU}(\mathbf{W}_{i1}\mathrm{GAP}(\mathbf{f}_{i}))), \tag{4}\] \[\mathbf{f}_{i+1}=g_{i}(\mathbf{f}_{i})\odot\mathbf{p}_{i}, \tag{5}\] where \(\mathrm{ReLU}(\cdot)\) denotes the rectified linear unit, \(\mathbf{W}_{i1}\) and \(\mathbf{W}_{i2}\) are the learnable parameters to be fine-tuned, and the parameters of \(g_{i}(\cdot)\) remain frozen during training. During the training process, only the parameters of the prompt blocks and the lightweight MIL classifier are updated in an end-to-end manner, while the original pre-trained feature extractor is frozen. ## 4 Experiments ### Datasets **CAMELYON16.** The Camelyon16 [2] dataset consists of 399 H&E stained slides from breast cancer screening, with two classes: normal and tumor. We employ the official test set of 129 slides, and the official training set of 270 slides is further divided randomly into training and validation sets, with a ratio of \(9\colon 1\). After preprocessing, a total of 4,610,687 patches with the size of \(256\times 256\) are obtained at \(20\times\) magnification, with an average of 11,556 patches per slide. **TCGA-NSCLC.** The Cancer Genome Atlas\({}^{\dagger}\) (TCGA) non-small cell lung cancer (NSCLC) [21] dataset is comprised of two subtypes of lung cancer: Lung Adenocarcinoma (TCGA-LUAD) and Lung Squamous Cell Carcinoma (TCGA-LUSC). It contains a total of 1,053 WSIs, with 541 LUAD slides and 512 LUSC slides. We split the dataset into training, validation, and testing sets at the slide level, with a distribution ratio of 65:10:25. In our study, we extract \(256\times 256\) patches at \(20\times\) magnification from each WSI. After preprocessing, a total of 3,252,431 patches are obtained, with an average of 3,089 patches per WSI. Footnote \({}^{\dagger}\): https://www.cancer.gov/tcga
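Before the implementation details, the following is a minimal PyTorch sketch of the prompt block in Eqs. (4)-(5). The hidden-layer reduction ratio and the assumption that \(g_{i}\) preserves the channel count are ours, not details stated by the paper:

```python
import torch
import torch.nn as nn

class PromptBlock(nn.Module):
    """Prompt block of Eqs. (4)-(5): GAP -> two-layer MLP -> sigmoid,
    producing a channel-wise prompt that rescales the parallel block's output."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, f_i, g_i):
        # f_i: (B, C, H, W); g_i: the frozen ResNet block running in parallel.
        # Assumes g_i preserves the channel count C, so the prompt matches D = C.
        p = self.mlp(f_i.mean(dim=(2, 3)))      # Eq. (4): GAP, then MLP
        return g_i(f_i) * p[:, :, None, None]   # Eq. (5): channel-wise product
```

During training, only the `mlp` parameters (and the MIL classifier) would receive gradients; the backbone block `g_i` stays frozen.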
Footnote †: [https://www.cancer.gov/tcga](https://www.cancer.gov/tcga)

### Implementation Details

In our approach, we adopt the ResNet model as the feature extractor, which is pre-trained on the ImageNet dataset. We remove the last layer of the ResNet following [19] and add prompt blocks to the third layer. For ResNet-18 and ResNet-50, we set the number of prompt blocks to 2 and 6, respectively. For representative patch selection, we select the top 200 patches for each WSI bag by default. For prompt fine-tuning, we freeze the backbone of the ResNet and only train the prompt blocks and the MIL classifier using the Adam optimizer [15] with a learning rate of 1e-4 and weight decay of 1e-4 for 100 epochs. All the experiments are conducted with an NVIDIA GeForce RTX 3090 GPU.

### Comparison Results

In this study, we comprehensively evaluate the effectiveness of our proposed framework with various experiment settings, including two datasets: Camelyon16 [2] and TCGA-NSCLC [21]; two backbone networks: ResNet-18 and ResNet-50; and two MIL classifiers: a state-of-the-art model, DTFD [29], and a common model, ABMIL [13]. We report the area under the curve (AUC), accuracy (Acc), and F1-score as evaluation metrics for the WSI classification task. Table 1 shows the eight different settings of experiments. The first row in each setting represents the baseline approach with the frozen feature extractor. The second row (RPS-FT) represents using the representative patches to fine-tune both the feature extractor and the MIL classifier. The third row (RPS-PT) represents our prompt fine-tuning method. In Table 1, we can observe that our method achieves a consistent improvement compared to both the baseline approach and the fine-tuning approach. Notably, when using ResNet-18 as the feature extractor, our method achieves a higher AUC than the DTFD baseline, with an improvement of 3.83% on the Camelyon16 dataset and 1.28% on the TCGA-NSCLC dataset. Moreover, our method with prompt blocks significantly outperforms the RPS-FT scheme, demonstrating the advantage of our visual prompts. Besides, our method shows particularly superior results on the Camelyon16 dataset compared to the results on the TCGA-NSCLC dataset. This can be attributed to the fact that the Camelyon16 dataset has fewer WSIs and a smaller proportion of tumor regions. This further confirms the advantage of our approach in handling challenging datasets. Overall, the experiment results indicate that our method effectively improves the performance of attention-based MIL methods on WSI classification tasks.

### Ablation Study

**Effectiveness of the prompt block number.** To study the impact of the number of prompt blocks, we conduct experiments on the Camelyon16 dataset using DTFD as the MIL classifier with different numbers of prompt blocks added to ResNet-50. The results presented in Fig.
3(a) show that the AUC values with different prompt quantities are consistently improved by approximately 1% compared to the baseline approach when using more than one prompt block. These results suggest that our prompt block component is robust and effectively improves the performance of the feature extractor.

Table 1: Results (%) of comparative experiments on the Camelyon16 and TCGA-NSCLC datasets. The best results are in bold. "RPS": representative patch selection, "FT": fine-tuning, "PT": prompt fine-tuning. R18 = ResNet-18, R50 = ResNet-50.

| Dataset | Method | AUC (R18) | F1 (R18) | Acc (R18) | AUC (R50) | F1 (R50) | Acc (R50) |
|---|---|---|---|---|---|---|---|
| Camelyon16 | DTFD [29] | 87.11 | 78.05 | 86.05 | 90.07 | 81.63 | 86.05 |
| Camelyon16 | DTFD-RPS-FT | 81.40 | 69.23 | 81.58 | 89.34 | 81.32 | 86.82 |
| Camelyon16 | DTFD-RPS-PT | **90.94** | **78.72** | **84.50** | **91.40** | **83.52** | **88.37** |
| Camelyon16 | ABMIL [13] | 84.08 | 74.07 | **83.72** | 85.96 | 77.11 | 85.27 |
| Camelyon16 | ABMIL-RPS-FT | 83.98 | 76.19 | 82.95 | 83.78 | **80.90** | **86.82** |
| Camelyon16 | ABMIL-RPS-PT | **87.45** | **76.40** | **83.72** | **86.73** | 78.65 | 85.27 |
| TCGA-NSCLC | DTFD [29] | 92.91 | **87.22** | 87.12 | 93.47 | 89.38 | 89.01 |
| TCGA-NSCLC | DTFD-RPS-FT | 92.97 | 85.28 | 85.22 | 92.27 | 86.55 | 85.99 |
| TCGA-NSCLC | DTFD-RPS-PT | **94.19** | 87.16 | **87.50** | **93.80** | **89.71** | **89.39** |
| TCGA-NSCLC | ABMIL [13] | 93.56 | 86.05 | 86.36 | 93.34 | 88.39 | 88.26 |
| TCGA-NSCLC | ABMIL-RPS-FT | 89.76 | 83.22 | 81.06 | 90.85 | 85.29 | 84.85 |
| TCGA-NSCLC | ABMIL-RPS-PT | **93.84** | **87.46** | **86.74** | **94.21** | **89.55** | **89.39** |

**Influence of the number of representative patches.** We investigate the effect of different top-\(K\) values in the RPS procedure on the performance of our method using ResNet-50 and DTFD on the Camelyon16 dataset. Fig. 3(b) shows the AUC values under two different settings: the fine-tuning scheme (RPS-FT) and our prompt fine-tuning scheme (RPS-PT). Our method consistently outperforms the fine-tuning scheme across all \(K\) values while using less than 50% of the GPU resources, demonstrating the efficiency and effectiveness of our method.

### Analysis on the Effectiveness of Visual Prompts

During the experiments, we found that fine-tuning produced unsatisfactory performance, which was even inferior to the baselines where the feature extractor was frozen. This subpar performance can be attributed to the specific nature of the WSI classification task. It requires processing all the patches in a given WSI during each iteration of model updating, which is computationally expensive. As a result, only a small portion of the instances are used for training, leading to a high risk of overfitting. In contrast, our proposed method of using visual prompts allows for quickly learning the policy of the current task based on the previously learned representation, enabling efficient task and domain adaptation, thereby overcoming the limitations of fine-tuning.

## 5 Conclusion

In this paper, we propose a novel prompt learning method to learn domain-specific knowledge transformation from ImageNet pre-trained models to pathological images.
Our innovation is based on the observation that there is a large domain shift and task discrepancy between the upstream datasets and pathological tasks, resulting in sub-optimal feature representations. To relieve this issue, we introduce a prompt component and a representative patch selection strategy to fine-tune the prompt blocks while freezing the feature extractor backbone. In this way, the extracted patch features can be adapted to pathological images and boost WSI classification with MIL models. Experiments on two public datasets (_i.e._, Camelyon16 and TCGA-NSCLC) with two MIL classifiers (_i.e._, DTFD and ABMIL) demonstrate the effectiveness and efficiency of our method.

Figure 3: Results of ablation studies on the CAMELYON16 dataset: (a) effectiveness of the number of prompt blocks; (b) influence of the number of representative patches.
2303.08264
Neuro-symbolic Commonsense Social Reasoning
Social norms underlie all human social interactions, yet formalizing and reasoning with them remains a major challenge for AI systems. We present a novel system for taking social rules of thumb (ROTs) in natural language from the Social Chemistry 101 dataset and converting them to first-order logic where reasoning is performed using a neuro-symbolic theorem prover. We accomplish this in several steps. First, ROTs are converted into Abstract Meaning Representation (AMR), which is a graphical representation of the concepts in a sentence, and aligned with RoBERTa embeddings. We then generate alternate simplified versions of the AMR via a novel algorithm, recombining and merging embeddings for added robustness against different wordings of text and incorrect AMR parses. The AMR is then converted into first-order logic, and is queried with a neuro-symbolic theorem prover. The goal of this paper is to develop and evaluate a neuro-symbolic method which performs explicit reasoning about social situations in a logical form.
David Chanin, Anthony Hunter
2023-03-14T22:37:33Z
http://arxiv.org/abs/2303.08264v1
# Neuro-symbolic Commonsense Social Reasoning

###### Abstract

Social norms underlie all human social interactions, yet formalizing and reasoning with them remains a major challenge for AI systems. We present a novel system for taking social rules of thumb (ROTs) in natural language from the Social Chemistry 101 dataset and converting them to first-order logic where reasoning is performed using a neuro-symbolic theorem prover. We accomplish this in several steps. First, ROTs are converted into Abstract Meaning Representation (AMR), which is a graphical representation of the concepts in a sentence, and aligned with RoBERTa embeddings. We then generate alternate simplified versions of the AMR via a novel algorithm, recombining and merging embeddings for added robustness against different wordings of text and incorrect AMR parses. The AMR is then converted into first-order logic, and is queried with a neuro-symbolic theorem prover. The goal of this paper is to develop and evaluate a neuro-symbolic method which performs explicit reasoning about social situations in a logical form.

## 1 Introduction

Deep learning approaches have seen success on a wide range of natural language understanding (NLU) and natural language processing (NLP) tasks [23]. However, these models are largely black boxes, and it is difficult to understand how they arrive at conclusions. This is especially problematic when dealing with controversial social norms, where current deep learning models have been shown to learn a range of problematic racial and gender biases [1, 1, 13]. It is not hard to envision how this bias will lead to social harm as we rely more and more on language models to generate content and power systems that make decisions that affect people's lives. While interpretability methods exist that can be applied to deep learning models [1, 16, 17], these methods at best provide hints about what sorts of features influence a model's predictions, but cannot directly explain the reasoning process the model took internally, and do not provide a way to directly change problematic model behavior. Symbolic logic approaches, on the other hand, reason with explicitly defined rules, which gives a number of benefits for interpretability, multi-hop reasoning, and directly changing model behavior. It is possible to trace through inference steps in a symbolic logic system to determine how a conclusion was derived, and to tweak any problematic logical formulae as necessary. Ideally, we should be able to vet the logical rules that a model is using during inference and delete or modify any undesirable rules as needed. Symbolic logic, however, tends to be brittle and struggles with small inconsistencies in input data, as symbols in logical formulae cannot represent semantically similar concepts like "dad" and "father" without every possible alternate symbol and relation being explicitly added to the system. This brittleness is not a problem for deep learning, however, as the entire system learns to simply maximize output over noisy inputs. These contrasting strengths motivate neuro-symbolic approaches which can combine the strengths of both symbolic logic and deep learning. The goal of this work is to develop a neuro-symbolic social reasoning system to do logical reasoning using social rules of thumb described in natural language. We accomplish this in several steps.
First, we leverage Abstract Meaning Representation (AMR) [1] by converting natural language social rules-of-thumb into AMR and then aligning them with RoBERTa embeddings. We then generate alternate simplified versions of the AMR for robustness against different wordings of "semantically equivalent" sentences and AMR parse errors. The AMR is then converted into first-order logic, and is queried with a neuro-symbolic theorem prover [1, 16]. We use natural language social rules-of-thumb from the Social Chemistry 101 dataset [16] as a source of rules for the reasoning system. Social Chemistry 101 is a dataset containing over 290,000 rules of thumb (ROTs) for evaluating people's behavior in everyday social situations. Social Chemistry ROTs are drawn from Reddit's r/confessions and r/AmItheAsshole subreddits, the ROCStories corpus [14], and from the Dear Abby advice column web archives1. These rules of thumb cover a wide range of common situations, with ROTs like "You should be there for your child's birthday" or "It is wrong to damage someone else's property on purpose". ROTs consist of a short sentence containing a judgement like "It's wrong to..." or "It's reasonable to..." followed by a description of the situation being judged. In addition, the dataset contains a social situation text (SST) for each ROT which gives an example that the ROT applies to. A sample of Social Chemistry 101 data is shown in Table 1. These ROTs have the potential to be a great source of commonsense knowledge for a reasoning system, if they can be parsed into a logical form that makes it possible to do reasoning. This paper attempts to do exactly this, by turning Social Chemistry ROTs into first-order logic, as will be described in more detail in Section 4. We evaluate the performance of our method on Social Chemistry 101 by measuring the degree to which the method is able to match each ROT in the dataset to its corresponding SST via logical inference, and verify as well that we should not be able to conclude a match via logical inference when applying each ROT to a different, randomly chosen SST from the dataset. For example, we would expect the ROT "It is wrong to be jealous of your partner" to apply to the SST "being jealous of my girlfriend", but it should not apply to the SST "Not buying my roommates' kids Christmas presents". We proceed as follows: In Section 2, we give an overview of AMR and converting AMR into logical formulae; in Section 3, we give a background overview of neuro-symbolic theorem proving; in Section 4, we detail our social reasoning system; in Section 5, we evaluate our technique against the Social Chemistry dataset; in Section 6, we discuss related work; in Section 7, we discuss future directions for this work. The code for this paper is available online2.

Footnote 2: [https://github.com/chanind/amr-social-chemistry-reasoner](https://github.com/chanind/amr-social-chemistry-reasoner)

## 2 Abstract meaning representation

Abstract meaning representation (AMR) Banarescu et al. (2013) is a powerful abstraction which can help simplify and standardize sentences for semantic meaning. AMR represents text as a rooted, directed acyclic graph drawing predicate senses and semantic roles from the OntoNotes project Hovy et al. (2006). An example is shown in Figure 1. AMR has first-class support for negation, which is particularly valuable when translating AMR into first-order logic. This is demonstrated below for the sentence "The boy does not go", where negation is represented via the :polarity relation.
(g / go-02 :polarity - :ARG0 (b / boy))

The numbers after the instance name in AMR (e.g. go-02 above) refer to a specific OntoNotes or PropBank Kingsbury and Palmer (2002) semantic frame. These frames can take different parameters, but in general :ARG0 refers to the subject and :ARG1 refers to the object. In our work, however, we find that these frame numbers (the -02 in go-02) tend to get mixed up by existing AMR parsers, so we discard these frame numbers when they appear in AMR. For the rest of this paper, we will remove frame numbers in AMR. AMR makes a number of simplifying assumptions in order to represent semantically similar sentences using identical structure. AMR does not differentiate between verbs and nouns, and cannot represent verb tenses. It does not represent singular and plural, quotations, or articles. For instance, the AMR in Figure 1 could refer to any of the following English sentences, among many others: "The boy desires the girl to believe him.", "The boy will desire the girl to believe him.", "A boy desires a girl to believe him.", "The boy has a desire to be believed by the girl.", etc. For this research, the simplification of English that AMR performs is generally beneficial, since it removes information from English sentences which is less relevant to the task at hand. AMR was also chosen due to the availability of high-quality off-the-shelf AMR parsers Lee et al. (2021); Drozdov et al. (2022) and tooling like the Python Penman library Goodman (2020). AMR can be represented as either a tree or a graph, with the tree directly mapping to the AMR text format Goodman (2020). An example showing the same AMR represented as text, a tree, and a graph is given in Figure 2. It is always possible to generate the graph representation from the tree representation, but the reverse is not always possible, as there are multiple possible trees that map to the same graph. In the rest of this paper, we always use the tree representation rather than the graph representation when working with AMR.

AMR also benefits from existing work on converting AMR to first-order logic. Bos (2016) describes an algorithm for converting an AMR graph into existentially-quantified first-order logic, joining all predicates by an implicit conjunction. An example of this is shown below, where the AMR for "The boy does not go" from above is converted into logical formulae as follows: \[\neg\exists G.(\text{go}(G)\land\exists B.(:\text{ARG0}(G,B)\land\text{boy}(B)))\] Another example of AMR and its converted logical formulae from [1] is shown in Figure 3. There is not space to reproduce the full algorithm of [1] here, but we refer the reader to the original paper for details.

Table 1: Sample Social Chemistry ROTs and corresponding SSTs

| Rule of Thumb | Social Situation Text |
|---|---|
| You shouldn't put dog poop in front of someone's door | putting my neighbor's dog's poop in front of their door |
| It is wrong to be jealous of your partner | being jealous of my gf |
| It's expected that you'll buy Christmas presents for the kids in your life | not buying my roommate's kids Christmas presents this year |
| It's bad to be an enabler of harmful behavior | not wanting to enable my grandparents hurting themselves |

Figure 1: AMR for the sentence "The boy wants the girl to believe him"
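As a concrete illustration of these text and graph views, the Penman library mentioned above can decode an AMR string directly (a minimal sketch; the values noted in comments are indicative of the library's output, and the example AMR is the negation example from this section):

```python
import penman

# AMR for "The boy does not go", with negation as a ':polarity -' attribute.
amr = "(g / go :polarity - :ARG0 (b / boy))"

graph = penman.decode(amr)   # graph view: a set of triples
print(graph.top)             # e.g. 'g'
print(graph.instances())     # e.g. instance triples for 'g / go' and 'b / boy'
print(graph.attributes())    # e.g. the (g, :polarity, -) attribute triple

tree = penman.parse(amr)     # tree view: maps one-to-one onto the text format
print(tree.node)             # nested (variable, branches) structure
```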
In addition, we implemented this algorithm in an open-source Python library called "Amr Logic Converter"3, which is used in this paper.

Footnote 3: [https://github.com/chanind/amr-logic-converter](https://github.com/chanind/amr-logic-converter)

## 3 Resolution with non-binary unification

Traditional theorem provers unify predicates using an exact match on the name of the predicate [1]. For instance, the statement father(Homer, Bart) would unify against father(Homer, Y) since they both have the same predicate father/2, and all constants match when the variable Y is grounded to the constant Bart. However, dad(Homer, Y) would not unify since dad/2 is not an exact string match with father/2, despite them having the same semantic meaning. Issues where semantically similar predicates have different wording are common in natural language, so we require a theorem prover which can work with a non-binary unification function based on a similarity score rather than exact string matches. The traditional resolution rule [1] for first-order logic is shown in Figure 4.

Figure 4: Resolution proof rule, where each \(a_{i}\in\{a_{1},\dots,a_{n}\}\), \(d_{i}\in\{d_{1},\dots,d_{n}\}\) refers to a grounded logical literal, and \(b\) and \(c\) are positive literals. The condition unify(b, c) holds if the predicates for \(b\) and \(c\) match, and all constants in all terms also match after grounding variables.

A unify function is shown in Algorithm 1. For simplicity, the algorithm shown assumes that the variable substitution map is provided as input to the unify function, although typically the substitution map is calculated along with the unification. In addition, the algorithm shown does not include function symbols, as those are not used in this paper. The unify function in Algorithm 1 uses a similarity function, simFunc, which returns a value between 0 and 1, and a threshold \(\tau\), where the unification succeeds if the minimum similarity of all similarity checks in the unification is above \(\tau\). When simFunc is a string comparison which returns 1 if the strings are identical and 0 if not, and \(\tau\) is 0.5, then this reduces to traditional binary unification. For example, using the example of father(Homer, Bart) and father(Homer, Y) from above, unification succeeds with pred1, terms1 = father, [Homer, Bart], pred2, terms2 = father, [Homer, Y] and substitutions = {Y / Bart}.

```
Input: pred1, terms1
Input: pred2, terms2
Input: substitutions
Input: simFunc, τ

sim ← simFunc(pred1, pred2)
terms1 ← applySubs(terms1, substitutions)
terms2 ← applySubs(terms2, substitutions)
for all term1, term2 ∈ zip(terms1, terms2) do
    if type(term1) ≠ type(term2) then return False
    if isConst(term1) then
        sim ← min(sim, simFunc(term1, term2))
return sim > τ

procedure applySubs(terms, subs)
    newTerms ← []
    for all term ∈ terms do
        newTerm ← term
        if term ∈ subs then newTerm ← subs[term]
        newTerms ← newTerms ∪ newTerm
    return newTerms
```
**Algorithm 1** Non-binary unify

We implemented a non-binary unification theorem prover using input resolution called "Tensor Theorem Prover"4, which we use in this paper. A proof in Tensor Theorem Prover consists of a chain of resolution steps, each with a corresponding similarity score and substitutions map. The similarity score for the proof as a whole is defined as the minimum similarity of all steps in the proof, as is the case in Neural Theorem Provers Rocktaschel and Riedel (2017). Tensor Theorem Prover seeks to find the proof with the highest similarity score for any query, if one exists. We use the unification algorithm presented in the seminal paper on resolution Robinson (1965) to calculate unification in Tensor Theorem Prover, but replace equality checks between predicates and constants with a non-binary similarity check as shown in Algorithm 1.
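In Python, Algorithm 1 can be transcribed directly as follows (a minimal sketch under the paper's conventions; the Const/Var wrappers and the exact-match default simFunc are our illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Const:
    name: str

@dataclass(frozen=True)
class Var:
    name: str

def exact_match(a, b):
    """Binary similarity: 1.0 on identical names, else 0.0."""
    return 1.0 if a == b else 0.0

def apply_subs(terms, subs):
    """Replace any term that has a grounding in the substitution map."""
    return [subs.get(t, t) for t in terms]

def unify(pred1, terms1, pred2, terms2, substitutions, sim_func=exact_match, tau=0.5):
    """Non-binary unification: succeeds when the minimum similarity over the
    predicate pair and all constant pairs exceeds the threshold tau."""
    sim = sim_func(pred1, pred2)
    terms1, terms2 = apply_subs(terms1, substitutions), apply_subs(terms2, substitutions)
    if len(terms1) != len(terms2):
        return False
    for t1, t2 in zip(terms1, terms2):
        if type(t1) is not type(t2):  # a constant can only match a constant
            return False
        if isinstance(t1, Const):
            sim = min(sim, sim_func(t1.name, t2.name))
    return sim > tau

# father(Homer, Bart) unifies with father(Homer, Y) under {Y -> Bart}:
Y = Var("Y")
print(unify("father", [Const("Homer"), Const("Bart")],
            "father", [Const("Homer"), Y], substitutions={Y: Const("Bart")}))  # True
```

With exact_match and tau = 0.5 this reduces to traditional binary unification; swapping in an embedding-based sim_func yields the non-binary behavior used by Tensor Theorem Prover.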
Figure 2: AMR represented as text, a tree, and a graph, from the Python Penman library.

Figure 3: AMR and corresponding logical formulae representation of the AMR for "Mr Krupp dries himself"

## 4 Social reasoning system

Our social reasoning system takes a natural language SST and ROT as input. First, we parse the SST and ROT text into token-aligned AMR matched with contextual word embeddings. Next, we generate alternate AMR trees by merging leaf nodes together. From here, we convert the AMR trees into first-order logic. Finally, we use a neuro-symbolic theorem prover to query the logical formulae and determine if the ROT is applicable to the situation. Each of these steps is explained in more detail below. The core insight which allows this to work is that a logical match between a ROT and a situation can be framed as finding a subtree matching the body of the ROT within the AMR for the situation. This is demonstrated in Figure 5, where the ROT body AMR subtree is found exactly in the SST AMR. However, in practice the AMR for the ROT and the SST rarely align so perfectly. To help address this, we introduce contextual embeddings aligned to the nodes in the AMR tree to allow differently worded but semantically similar nodes to still match, using a vector similarity score rather than simply performing an exact string match on the AMR instance and role names. This is shown in Figure 6, where the nodes for "kid" and "child" are allowed to match because their embedding vectors are highly similar.

### Parsing text into AMR and embeddings

The first step when processing ROT and SST text is to generate an AMR tree and contextual embeddings for each sentence. We use a pretrained ensemble AMR 3.0 model for the IBM Transition AMR parser Lee et al. (2021) for parsing the ROT and SST into AMR. We use a pretrained RoBERTa Liu et al. (2019) base model from Hugging Face Wolf et al. (2020) and average the last 4 layers of the model to generate contextual embeddings Devlin et al. (2018). An AMR tree with embeddings \(\mathcal{T}=\langle\mathcal{N},\mathcal{E},meta\rangle\) is a set of nodes \(\mathcal{N}\), edges \(\mathcal{E}\), and a metadata function \(meta\) which provides metadata related to each node and edge. The set \(\mathcal{N}=\mathcal{N}_{I}\cup\mathcal{N}_{C}\cup\mathcal{N}_{R}\), where \(\mathcal{N}_{I}\), \(\mathcal{N}_{C}\), and \(\mathcal{N}_{R}\) are disjoint; \(\mathcal{N}_{I}\) consists of AMR instance nodes (of the form a / alpha in Figure 2), \(\mathcal{N}_{C}\) consists of AMR constants (raw strings like "Mr Krupp" in Figure 3, numbers, or symbols like "-" in :polarity -), and \(\mathcal{N}_{R}\) consists of coreference nodes which refer to an instance node (the b coreference in Figure 2). Each edge \(\langle n,n^{\prime}\rangle\in\mathcal{E}\) is such that \(n\in\mathcal{N}_{I}\) and \(n^{\prime}\in\mathcal{N}\). The \(meta\) function assigns a tuple of metadata to each node and edge as follows, where \(l\) is a text label, \(p\) is a text predicate, and \(v\) is a vector embedding from RoBERTa.
\(v\) can be null (represented as \(\emptyset\)) for cases where an embedding does not exist. \[\begin{array}{ll}\text{If }e\in\mathcal{E},&meta(e)=\langle l\rangle\\ \text{If }n\in\mathcal{N}_{I},&meta(n)=\langle l,p,v\rangle\\ \text{If }n\in\mathcal{N}_{C},&meta(n)=\langle l,v\rangle\\ \text{If }n\in\mathcal{N}_{R},&meta(n)=\langle l\rangle\end{array}\] For example, the AMR in Figure 3 has 2 instance nodes e / dry with \(\langle l=\texttt{e},p=\texttt{dry},v=\text{embedding[dry]}\rangle\) and x / person with \(\langle l=\texttt{x},p=\text{person},v=\text{embedding[``Mr Krupp"]}\rangle\), 1 constant node "Mr Krupp" with \(\langle l=\text{``Mr Krupp"},v=\text{embedding[``Mr Krupp"]}\rangle\), and 1 coreference node x with \(\langle l=\texttt{x}\rangle\). It also contains 3 edges with labels :ARG0, :named, and :ARG1. ### Generating merged AMR trees While AMR can smooth over some differences between sentences that have similar semantic meanings, it is still normal for sentences with similar meaning to have slight differences in the structure of the generated AMR tree. When these AMR trees are turned into logical formulae, any differences in the structure of the generated AMR will cause the formula to no longer match even if the sentences are describing semantically similar concepts. Furthermore, the AMR parsing may introduce errors, and we want to be as robust as possible against minor AMR parsing errors. A common failure mode occurs when AMR has a different structure between the ROT and the SST despite the ROT and SST having similar semantic meaning. This manifests itself in extra nodes or missing nodes which break the alignment between ROT and SST. Figure 8 illustrates this situation, where the text "white dog" causes an extra node in the AMR tree and breaks alignment with the ROT, which only refers to "dogs". Of course, a formula that applies to a "dog" should also apply to a "white dog". We address these alignment errors by generating alternative AMR trees by collapsing AMR leaf nodes into a single node, averaging together the embedding vectors of the collapsed nodes. The hope is that the merged embedding vector of the collapsed nodes in the ROT may match the embedding of the corresponding node in the situation via vector similarity, or vice-versa, where without collapsing nodes the AMR graph might have a shape which makes a match impossible. This is illustrated in Figure 7. We introduce a new node type \(\mathcal{N}_{M}\) to represent a merged node, so that \(\mathcal{N}=\mathcal{N}_{I}\cup\mathcal{N}_{C}\cup\mathcal{N}_{R}\cup \mathcal{N}_{M}\). This node contains a label and the average (using the \(avg\) function) of embeddings from the collapsed nodes produced during a merge: \[\text{If }n\in\mathcal{N}_{M},meta(n)=\langle l,avg(v_{1},\dots v_{k})\rangle,k>0\] where \(k\), the number of embeddings being merged together, is referred to as the merge width. A node \(n^{\prime}\) is a _child_ of a node \(n\) if there is an edge from the instance node to the target node. Let \(descendent_{\mathcal{T}}(n,n^{\prime})\) hold if \(n\) is a child of \(n^{\prime}\) or if there is an \(n^{\prime\prime}\) s.t. \(n\) is a child of \(n^{\prime\prime}\) and \(descendent_{\mathcal{T}}(n^{\prime\prime},n^{\prime})\) holds. A node \(n\) is called the root of tree \(\mathcal{T}\), \(n=root_{\mathcal{T}}\), if there is no edge with \(n\) as its target. 
A node \(n\) has depth in \(\mathcal{T}\) defined as follows: \[depth_{\mathcal{T}}(n)=\left\{\begin{array}{ll}0&\text{if }n=root_{\mathcal{T}}\\ 1+depth_{\mathcal{T}}(n^{\prime})&\text{if }\langle n^{\prime},n\rangle\in\mathcal{E}\end{array}\right.\] Only instance nodes \(\mathcal{N}_{I}\) have children, and thus only these nodes may be collapsed in a merge. To perform a collapse, an instance node \(n_{I}\in\mathcal{N}_{I}\) is replaced with a merge node \(n_{M}\in\mathcal{N}_{M}\), such that the label for \(n_{M}\) is the word "MERGE". The embeddings list for the merge node \(n_{M}\) is a list of all embeddings of \(n_{I}\) and all descendents. For an AMR tree \(\mathcal{T}=\langle\mathcal{N},\mathcal{E},meta\rangle\), the AMR tree \(\mathcal{T}^{\prime}=\langle\mathcal{N}^{\prime},\mathcal{E}^{\prime},meta^{\prime}\rangle\) obtained by merging instance node \(n_{I}\) into new merge node \(n_{M}\) is defined as follows, where \(J=\{n^{\prime}|descendent_{\mathcal{T}}(n^{\prime},n_{I})\}\cup\{n_{I}\}\), \(E_{m}=\{\langle n,n_{M}\rangle|\langle n,n_{I}\rangle\in\mathcal{E}\}\), \(l^{\prime}=\) "MERGE", and \(v^{\prime}=avg(v\in meta(n),v\neq\emptyset|n\in J)\): \[\begin{array}{l}\mathcal{N}^{\prime}=(\mathcal{N}\setminus J)\cup\{n_{M}\}\\ \mathcal{E}^{\prime}=E_{m}\cup(\mathcal{E}\cap(\mathcal{N}^{\prime}\times\mathcal{N}^{\prime}))\\ meta^{\prime}(n)=\left\{\begin{array}{ll}meta(n)&\text{if }n\in\mathcal{N}\\ \langle l^{\prime},v^{\prime}\rangle&\text{if }n\text{ is }n_{M}\end{array}\right.\end{array}\] Not all possible merges are considered valid, however. We require that the number of negations (an edge labeled :polarity pointing to a constant node with label -) must be the same in \(\mathcal{T}^{\prime}\) and \(\mathcal{T}\). This is demonstrated in Figure 11. Furthermore, if instance node \(n_{I}\in\mathcal{T}\) has a corresponding set of \(k\) coreference nodes \(\{n_{1}\ldots n_{k}\}\subseteq\mathcal{N}_{R},k>0\), then \(\mathcal{T}^{\prime}\) must either contain both \(n_{I}\) and all of the corresponding \(\{n_{1}\ldots n_{k}\}\) coreferences, or \(n_{I}\) and \(\{n_{1}\ldots n_{k}\}\) must all be removed. This corresponds to not allowing coreferences to be broken by a merge, as this could break the semantic meaning of the AMR, as illustrated in Figure 10. We introduce additional parameters for controlling allowed merges further. \(\tau_{M}\) is a threshold which controls the maximum merge width of the merged embeddings in a single merge node: \(meta(n_{M})=\langle l,avg(v_{1},\dots v_{k})\rangle,k<\tau_{M}\). \(\tau_{D}\) is a parameter which sets a minimum depth at which merges are allowed in a tree \(\mathcal{T}\), so \(depth_{\mathcal{T}}(n_{M})>\tau_{D}\). We recursively generate all possible merged AMR trees such that the above conditions are satisfied, including the original tree where no merges have been performed. While this may raise computational complexity concerns if the trees are large, in practice this was never a problem, as generating merge trees for a single sentence takes a negligible amount of time relative to theorem proving and running the AMR parser. Table 2 shows statistics on the number of merge trees available per sentence in the Social Chemistry dataset.
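The collapse operation defined above can be sketched in Python (the dict-based tree encoding and helper names are our illustrative assumptions; the validity checks on negations and coreferences described above are omitted for brevity):

```python
import numpy as np

def descendants(tree, node_id):
    """All nodes strictly below node_id (children, grandchildren, ...)."""
    out = []
    for child in tree[node_id].get("children", []):
        out.append(child)
        out.extend(descendants(tree, child))
    return out

def collapse(tree, node_id):
    """Collapse instance node node_id and its descendants into one MERGE node
    whose embedding is the average of the group's non-null embeddings."""
    group = {node_id} | set(descendants(tree, node_id))
    embeds = [tree[n]["embedding"] for n in group if tree[n].get("embedding") is not None]
    new_tree = {n: dict(node) for n, node in tree.items() if n not in group}
    # Reuse node_id so edges into the collapsed region now point at the merge node.
    new_tree[node_id] = {"label": "MERGE", "embedding": np.mean(embeds, axis=0), "children": []}
    for node in new_tree.values():
        node["children"] = [c for c in node.get("children", []) if c not in group or c == node_id]
    return new_tree
```

Enumerating all valid merged trees then amounts to recursively applying collapse to each eligible instance node, subject to the negation, coreference, merge-width (\(\tau_{M}\)), and depth (\(\tau_{D}\)) constraints.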
Figure 5: Full flow going from text to AMR to logical formulae to proof for a simple case where the SST AMR and ROT AMR align perfectly with identical predicates and structure. The matching nodes are highlighted in green in the text, AMR, and logical formulae. In the formulae, \(n\), \(p\), \(o\), \(n2\), and \(g\) refer to grounded constants and \(N\), \(P\), and \(O\) refer to universally quantified variables. To address situations where the alignment is not perfect, we introduce the use of contextual embeddings and AMR node merges, described in Section 4.2.

Figure 6: Often nodes in AMR are semantically similar, but not identical. We use contextual embeddings with cosine similarity to allow these nodes to still match. The AMR for the ROT is shown on the left, and the AMR for the SST is shown on the right.

Figure 7: Merging leaf nodes and averaging the merged embeddings allows the ROT (left) and the SST (right) to match, as the resulting tree no longer has a spurious "w / white-03" node.

### Converting AMR into logical formulae

To determine if a Social Chemistry ROT matches a SST, we need to turn both the ROT and SST AMR into first-order logical formulae. Until now, we have treated both the ROT and the SST identically, but when translating the ROT and SST into logical formulae we need to treat them differently. A Social Chemistry ROT is a general logical rule which can apply to any number of situations. A ROT typically has the form of an action and a verdict on that action. For instance, the ROT "It's rude to hang up on someone." would have the verdict "It's rude" and the action "to hang up on someone". We use this knowledge of how ROTs are structured to modify the AMR-to-formulae conversion so the result is in the form of an implication. We start with the AMR to logical formulae algorithm described by [1], but we make modifications to the quantification and the structure of the formulae. The [1] algorithm wraps all instances with existential quantifiers, which assumes that each sentence is a statement which can be true or false. However, natural language is able to express much more than this, and we use our knowledge of what is being expressed in the ROT and the SST to justify changing the formulae. For a ROT, the formulae correspond to an implication as mentioned above. For a SST, the formulae describe grounded knowledge which we can query to see if the ROT matches. For instance, we can begin with the ROT AMR tree below for "It's rude to hang up on someone.": (r / rude :ARG1 (h / hang-up :ARG2 (s / someone))) From here, the algorithm from [1] would result in the following formulae: \[\exists R.(\text{rude}(R)\land\exists H.(:\text{ARG1}(R,H)\land\text{hang-up}(H)\land\exists S.(:\text{ARG2}(H,S)\land\text{someone}(S))))\] We know that the ROT is a logical implication, so we remove the existential quantification. Then, we replace the conjunction between the verdict \(\text{rude}(R)\) and the rest of the formula with an implication, and remove the linking \(:\text{ARG1}(R,H)\), since this linking is handled by the implication itself. In addition, we swap the \(R\) variable in \(\text{rude}(R)\) with \(H\), as this is the target of \(:\text{ARG1}(R,H)\). This results in the following implication: \[\text{hang-up}(H)\land:\text{ARG2}(H,S)\land\text{someone}(S)\to\text{rude}(H)\] As a final step, \(\text{rude}(H)\) is simplified further to just \(\text{BAD}(H)\), resulting in the following final logical form: \[\text{hang-up}(H)\land:\text{ARG2}(H,S)\land\text{someone}(S)\to\text{BAD}(H)\] Converting SST AMR to logical formulae is much simpler, as there is no need to build an implication. Instead, the only changes necessary from the base algorithm of [1] are to remove existential quantifiers and to replace variables with constants.
Constants are used to indicate that the formulae represent facts about items in the domain of discourse, rather than general theorems. For instance, we can begin with the SST AMR tree below for "Hanging up on my cousin": (h / hanging :ARG2 (p / person :ARG0-of (h2 / have-rel-role :ARG1 (i / i) :ARG2 (c / cousin)))) Following [10], this becomes the following logic: \[\exists H.(\text{hanging}(H)\wedge\exists P.(:\text{ARG2}(H,P)\wedge\text{person}(P)\wedge\exists H2.(:\text{ARG0}(H2,P)\wedge\text{have-rel-role}(H2)\wedge\exists I.(:\text{ARG1}(H2,I)\wedge\text{i}(I))\wedge\exists C.(:\text{ARG2}(H2,C)\wedge\text{cousin}(C)))))\] From here, we ground out the existential quantifiers and replace variables with new constants, resulting in the following final logical form. Note that in these examples, lowercase letters indicate a constant, and uppercase letters indicate a variable. \[\text{hanging}(h)\wedge:\text{ARG2}(h,p)\wedge\text{person}(p)\wedge:\text{ARG0}(h2,p)\wedge\text{have-rel-role}(h2)\wedge:\text{ARG1}(h2,i)\wedge\text{i}(i)\wedge:\text{ARG2}(h2,c)\wedge\text{cousin}(c)\] While the above process is shown only for the original AMR tree, this process is also repeated for all merged AMR tree versions of the ROT and SST.

Figure 8: Case where the ROT (left) and SST (right) are logically matching, but the AMR does not align due to an extra node in the SST which is not present in the ROT.

Figure 9: All possible collapsed AMR trees with a max of 3 merged alignments per node for the AMR for "It's good to keep things clean"

Figure 10: Invalid merge, since \(b\) is coreferenced outside of the collapsed region. The AMR is generated from the text "The boy wants the girl to believe him".

Figure 11: Invalid merge, since it removes a negation node (:polarity -). The AMR is generated from the text "He sees a man who is not small"

### Reasoning

Once logical formulae are generated for a ROT and a SST, we use the Tensor Theorem Prover library discussed in Section 3 to check if the SST satisfies the antecedent of the ROT. Since each ROT is transformed into an implication with a consequent of either GOOD\((X)\), \(\neg\text{GOOD}(X)\), BAD\((X)\), or \(\neg\text{BAD}(X)\), we can verify if the ROT matches by querying these 4 verdicts one by one in the theorem prover. If a proof can be found for any of these queries, then the ROT matches the SST. For unification, we use a hybrid similarity function which can perform an exact string match or cosine similarity between embeddings, depending on the predicates or constants being compared. This is shown in Algorithm 2, and is used as the simFunc parameter in unification in Algorithm 1. Here, symbol1 and symbol2 are either constants or predicates, and embedding1 and embedding2 are their RoBERTa embeddings, which may be null.

```
Input: symbol1, embedding1
Input: symbol2, embedding2
Output: similarity

bothHaveEmbeds ← embedding1 ∧ embedding2
eitherIsMerge ← "MERGE" ∈ [symbol1, symbol2]
if eitherIsMerge then
    if bothHaveEmbeds then
        return (cos(embedding1, embedding2) + 1) / 2
    else
        return 0.0
if symbol1 = symbol2 then return 1.0
if bothHaveEmbeds then
    return (cos(embedding1, embedding2) + 1) / 2
return 0.0
```
**Algorithm 2** Hybrid Similarity Function
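In Python, Algorithm 2 amounts to the following (a minimal numpy sketch; the names mirror the pseudocode):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_similarity(symbol1, embedding1, symbol2, embedding2):
    """Exact string match where possible, rescaled cosine similarity of the
    RoBERTa embeddings otherwise; MERGE nodes can only match via embeddings."""
    both_have_embeds = embedding1 is not None and embedding2 is not None
    either_is_merge = "MERGE" in (symbol1, symbol2)
    if either_is_merge:
        return 0.5 * (cosine(embedding1, embedding2) + 1) if both_have_embeds else 0.0
    if symbol1 == symbol2:
        return 1.0
    if both_have_embeds:
        return 0.5 * (cosine(embedding1, embedding2) + 1)
    return 0.0
```

Note that the cosine similarity is rescaled from \([-1, 1]\) to \([0, 1]\) so that it is comparable with the binary exact-match scores.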
## 5 Experimental evaluation

To evaluate the performance of this approach, we use the SSTs and ROTs from Social Chemistry 101 as a test bed. For each sample, we turn the ROT and SST into logical formulae as described in Section 4, and check if the theorem prover is able to prove the verdict of the ROT from the SST. To create negative examples for evaluation, we randomly select a different ROT from the dataset and check that the theorem prover is NOT able to prove the conclusion of that ROT when applied to the original SST. Based on these, we calculate precision, recall, and F1 score. For example, given the SST "being friends with my ex's sister" and the corresponding ROT "You shouldn't be friends with your ex's family members", the ROT formula has the following form: \[(\text{you}(Y)\wedge:\text{ARG1}(H,Y)\wedge\text{have-rel-role}(H)\ldots)\rightarrow\neg\text{GOOD}(Y)\] Since the consequent of this ROT is \(\neg\text{GOOD}(Y)\), if we can prove \(\neg\text{GOOD}(Y)\) using Tensor Theorem Prover after inputting the logical formulae for the SST and the ROT, then this is considered a true positive. Likewise, if we cannot find a proof of \(\neg\text{GOOD}(Y)\), then this is a false negative. We also pick an unrelated ROT, and correspondingly see if we can prove the consequent of that ROT given the SST. For instance, if we randomly pick the unrelated ROT "You should always give gifts to people for their birthday", and we are able to find a proof for the consequent of this ROT, GOOD\((X)\), when applied to the unrelated SST "being friends with my ex's sister", then this is considered a false positive. Likewise, if we are unable to prove the consequent of the unrelated ROT, then this is considered a true negative. We randomly selected 10,000 samples consisting of a ROT and corresponding SST from the Social Chemistry 101 dataset as a test set. Statistics related to the AMR trees for the SSTs and ROTs in this dataset are shown in Table 2. Our method includes a hyperparameter for the vector similarity threshold at which a unification is considered a success in the theorem prover. Raising this threshold will increase precision but at the cost of recall. The precision, recall, and F1 score as a function of the similarity threshold are shown in Figure 12. We set a minimum merge depth of 1, and a maximum leaf merge width of 6 nodes. We also investigated the effect of the number of merged nodes on the performance of this approach. For this experiment, we set the similarity threshold to 0.925, and kept the remaining parameters identical to the similarity threshold experiment. The results are shown in Figure 13. Increasing the allowed number of nodes to be merged does not seem to have a large effect on precision, but it does have a large impact on recall and F1 score up to a merge size of 6 nodes. Increasing the max node merge size allows the prover to rely more on the embeddings to unify large chunks of the graph rather than relying on the AMR structure, so it is encouraging to see these merges don't seem to have a negative effect on precision up to 6 nodes. The merge algorithm we propose has a number of restrictions on when merges are possible. The most common causes of merges not being allowed come from crossing negation bounds and merging across coreferences. However, these restrictions also limit the performance of our method overall. We introduce a metric called collapsability, which measures what portion of an AMR tree can be collapsed following our merge algorithm.
For a set of AMR trees \(\{\mathcal{T}_{1}\dots\mathcal{T}_{n}\}\), \(\text{minNodes}(\{\mathcal{T}_{1}\dots\mathcal{T}_{n}\})\) and \(\text{maxNodes}(\{\mathcal{T}_{1}\dots\mathcal{T}_{n}\})\) refer to the number of nodes in the smallest and largest trees in the set, respectively. Collapsability is thus defined below for an AMR tree \(\mathcal{T}\): \[\text{collapsability}(\mathcal{T})=1-\frac{\text{minNodes}(\text{merges}(\mathcal{T}))-1}{\text{maxNodes}(\text{merges}(\mathcal{T}))-1}\] Collapsability is 0 if no merges are possible, and 1 if the entire AMR tree can be collapsed into a single merged node. Collapsability is undefined if the original AMR tree is just 1 node, as that would cause \(\text{maxNodes}\) to be 1, but this never occurs in our dataset. Figure 14 shows the results when considering only subsets of the dataset bucketed by the collapsability percent of the ROT and the SST. The minimum collapsability between these is used as the collapsability of the sample. In these results, we see the importance of allowing merges for the performance of our method. When few merges are possible, the recall is very low, since the majority of the AMR for the ROT and SST do not have identical structure. As the collapsability of samples increases, the recall increases dramatically as well, even surpassing the precision at around 80%+ collapsability. Interestingly, at very high collapsability the precision declines slightly, likely because when so many nodes are merged together it is easier to get false positives due to excessive averaging of embeddings. Regardless, the performance increase from higher collapsability should motivate future work to improve the merge algorithm to allow merging across coreference and negation.

Table 2: Statistics for the ROT AMR and SST AMR generated from the Social Chemistry 101 dataset. Instance nodes refer to AMR instances (things like b / boy in AMR), and logic terms is the number of literals in our generated logical formulae before merging. Merge trees is the number of alternate merge trees that can be generated from a SST or ROT AMR tree.

| ROT AMR | Mean | Median | Stdev |
|---|---|---|---|
| Instance nodes | 7.1 | 7 | 2.5 |
| AMR depth | 5.2 | 5 | 1.3 |
| Logic terms | 13.6 | 13 | 5.9 |
| Merge Trees | 1.6 | 1 | 2.2 |

| SST AMR | Mean | Median | Stdev |
|---|---|---|---|
| Instance nodes | 8.6 | 8 | 3.1 |
| AMR depth | 5.1 | 5 | 1.4 |
| Logic terms | 18.4 | 17 | 9.1 |
| Merge Trees | 2.4 | 1 | 14.1 |

Figure 12: Precision, Recall, and F1 score, varying the similarity threshold

Figure 13: Precision, Recall, and F1 score, varying the maximum merge size

## 6 Related work

The idea of using a semantic parser to turn natural language into a logical format for reasoning has been used in [23, 14] for solving Winograd schema challenges (WSCs) [10]. Both of these papers use a semantic parser called K-Parser [23] to parse WSC sentences and turn them into a logical form that can be reasoned with. Sharma et al. (2015) then tries to find a theorem which will complete the WSC and uses Answer set programming (ASP) (Gelfand and Lifschitz) with the semantic graph output from K-Parser to complete the WSC task. K-Parser is conceptually similar to AMR, but is missing key features of AMR like negation and standardized roles, and does not appear to be actively maintained.
This approach does not use vector embeddings or other neuro-symbolic approaches to improve the robustness of matches, and thus cannot handle formulae which do not match the structured output of K-Parser exactly. There has been work done on relaxing the unification condition of theorem provers to allow for differentiable unification, from which our Tensor Theorem Prover library takes inspiration. Neural Theorem Provers (NTPs) Rocktaschel and Riedel (2017) and Braid Kalyanpur, Breloff, and Ferrucci (2022) allow unification to return a score between 0 and 1. This is typically a similarity score between vector embeddings corresponding to each predicate. However, both of these libraries are back-chaining provers, and can only work with Horn clauses, which is insufficient to represent the full range of formulae present in Social Chemistry ROTs. Also conceptually similar is Hunter (2022), which allows using embedding vector similarity to swap logical predicates using a SAT solver. The Social Chemistry 101 dataset includes a number of extra dimensions to the data relating to ethics, psychology, and morality around samples in the dataset. Trained models on this dataset tend to focus on generating judgements conditioned on these attributes Forbes et al. (2020), on using the data as a component in other more upstream datasets such as defeasible NLI Rudinger et al. (2020), or on the intentions of actors in social situations Emelin et al. (2020). In all these cases, the dataset is used for different purposes than matching ROTs with SSTs as is done in this paper. Perhaps most similar to our paper is Kapanipathi et al. (2020). Here, a natural-language query is turned into AMR, and the AMR is turned into logical formulae. However, these formulae are then used to query a pre-existing knowledge-base using SPARQL rather than deriving the knowledge-base as well from a natural language sentence. Furthermore, the focus is on entity recognition and data retrieval rather than social commonsense reasoning, and it does not use contextual word embedding similarity for neuro-symbolic reasoning.

## 7 Discussion

In this paper, we present a novel system for taking ROTs in natural language and reasoning with them using a neuro-symbolic theorem prover. Our contributions in this paper include: (1) an approach for merging and collapsing AMR nodes for increased flexibility and robustness during reasoning; (2) a hybrid similarity metric which mixes string matching and embedding cosine similarity; (3) a modified version of AMR to logic conversion for working with ROT implications; and (4) our evaluation method using Social Chemistry 101. Furthermore, this paper introduces Tensor Theorem Prover, our implementation of a resolution-based neuro-symbolic theorem prover, and AMR Logic Converter, our library for converting AMR to first-order logic. While this paper deals only with a single SST and ROT at a time, this can be extended in the future to allow multi-hop reasoning by combining multiple ROTs and additional background knowledge during the reasoning process. The theorem proving process in this paper is differentiable, so in future work the embeddings used for semantic similarity can be further trained via backpropagation. A particular weakness of the current approach is that it does not have a good way to deal with antonyms during the solving process. For example, if a ROT references "gifts that aren't too big", it would be reasonable for a SST describing a "small gift" to match, since "big" and "small" are opposites.
A formula like \(\text{big}(X)\rightarrow\neg\text{small}(X)\) could be injected dynamically to address this. Another area in which this approach could be improved is dealing with multiple sentences. This would require working out coreferences between entities in the various sentences, but doing so should allow this approach to work with paragraphs of text instead of single sentences. A final area of future work is to develop a version of the merge algorithm which can allow merging nodes across coreferences and/or negation. Our evaluation shows better results when more merging is possible, so a clear direction for improvement is to develop a version of the merge algorithm which has fewer restrictions on allowed merges. The method described in this paper is a stepping-stone on the path to building systems capable of social reasoning using an interpretable, neuro-symbolic approach. Systems like this will naturally be useful anywhere commonsense social understanding is necessary, from interacting with users directly, to summarizing human narrative, to giving recommendations on social situations. In all these cases, and especially when dealing with sensitive social topics, it is important for future AI systems to be interpretable and transparent, and for their reasoning to be debuggable and editable.

Figure 14: Precision, Recall, and F1 score when bucketing samples based on what percentage of the AMR is allowed to be collapsed (merged)
2302.10702
CoPracTter: Toward Integrating Personalized Practice Scenarios, Timely Feedback and Social Support into An Online Support Tool for Coping with Stuttering in China
Stuttering is a speech disorder influencing over 70 million people worldwide, including 13 million in China. It causes low self-esteem among other detrimental effects on people who stutter (PwS). Although prior work has explored approaches to assist PwS, they primarily focused on western contexts. In our formative study, we found unique practices and challenges among Chinese PwS. We then iteratively designed an online tool, CoPracTter, to support Chinese PwS practicing speaking fluency with 1) targeted stress-inducing practice scenarios, 2) real-time speech indicators, and 3) personalized timely feedback from the community. We further conducted a seven-day deployment study (N=11) to understand how participants utilized these key features. To our knowledge, it is the first time such a prototype was designed and tested for a long time with multiple PwS participants online simultaneously. Results indicate that personalized practice with targeted scenarios and timely feedback from a supportive community assisted PwS in speaking fluently, staying positive, and facing similar real-life circumstances.
Feng Li, Zeyu Xiong, Xinyi Li, Mingming Fan
2023-02-21T14:41:50Z
http://arxiv.org/abs/2302.10702v1
# CoPracTter: Toward Integrating Personalized Practice Scenarios, Timely Feedback and Social Support into An Online Support Tool for Coping with Stuttering in China

###### Abstract.

Stuttering is a speech disorder influencing over 70 million people worldwide, including 13 million in China. It causes low self-esteem among other detrimental effects on people who stutter (PwS). Although prior work has explored approaches to assist PwS, they primarily focused on western contexts. In our formative study, we found unique practices and challenges among Chinese PwS. We then iteratively designed an online tool, CoPracTter, to support Chinese PwS practicing speaking fluency with 1) targeted stress-inducing practice scenarios, 2) real-time speech indicators, and 3) personalized timely feedback from the community. We further conducted a seven-day deployment study (N=11) to understand how participants utilized these key features. To our knowledge, it is the first time such a prototype was designed and tested for a long time with multiple PwS participants online simultaneously. Results indicate that personalized practice with targeted scenarios and timely feedback from a supportive community assisted PwS in speaking fluently, staying positive, and facing similar real-life circumstances.

Key words and phrases: People who stutter, Accessibility, Field Study, Assistive Technology

Footnote †: Corresponding Author
## 1 Introduction

Stuttering, also known as stammering, is a speech disfluency that impacts around 1% of the global population (Stuttering, 2013). The cause of the disorder varies across people, but it is identified by recurrent interruptions of the natural flow of speech or part-word repeats [24]. Although many people do not perceive stuttering to be a disability, stuttering could have a variety of negative repercussions on the everyday lives of people who stutter (PwS). For example, stuttering may induce anxiety or shame in both children [35] and adults [6], which may be linked to the fact that PwS generally have lower levels of self-perceived communication competence than those who do not stutter [38]. In addition, the general public may have less trust in PwS and perceive them as being less competent, mature, or knowledgeable, which could lead to discrimination and social devaluation in the workplace [1; 31]. Some PwS who have encountered stigma may feel they will continue to face stigma in the future, which might have a detrimental effect on their mental health and self-esteem [3]. As a result, stuttering might create long-term difficulties for PwS, such as avoiding potentially embarrassing circumstances (e.g., public speaking), forming lasting relationships, and finding a job. Within the HCI and accessibility communities, researchers have investigated different ways to assist PwS, including exploring design considerations for inclusive speech interfaces for PwS, helping PwS write scripts that they could speak more fluently, diagnosing the degree of stuttering, recording stuttering situations for speech-language pathologists (SLPs) to diagnose, and uncovering the types of support that PwS would want to have. While informative, these approaches were limited in two ways. First, they were mostly designed and evaluated with PwS living in western cultures. Culture could influence stuttering behaviors [20]. For instance, Black stutterers are more likely than white stutterers to have secondary characteristics (e.g. speech modifiers) that could conceal prolongations and repetitions. Moreover, people's attitudes toward stuttering tend to vary across cultural contexts. Prior work reported significant differences in attitudes toward stuttering among British, Arab, and Chinese people, such as perceived reasons for stuttering, and assistance and empathy for PwS [36]. And people's attitudes toward stuttering could affect the experiences, personal identity, and mental health of PwS [2; 7; 25].
However, little is known about the experiences of PwS in China, which is home to 13 million PwS. The second limitation lies in the type of support: prior work mostly focused on offering individual speaking-practice support with minimal peer support [25]. For example, StammerApp merely provided links to websites [25], which required PwS to read through texts that offered no feedback on their own practice sessions. Informed by these two limitations, we sought to take a step further by first answering a research question (RQ1): **What are the stuttering situations that PwS in China encounter? What are their current workarounds and challenges?**

To answer RQ1, we conducted a formative study in which we first analyzed the content posted in four representative and reliable online communities frequently visited by PwS in China (Zhihu, Weibo, Baidu Tieba and Douban). Then we conducted semi-structured online interviews with 12 PwS from different regions of China. From the formative study, we identified the need for personalized speech practice and timely feedback on practice from peers, owing to the shortage of SLPs and of assistive mobile apps for PwS to practice speaking. Informed by the stuttering situations and the need for personalized timely feedback from peers, we derived five design considerations (DCs).

Footnote 1: Zhihu: [https://www.zhihu.com/](https://www.zhihu.com/), has 30 million daily active users

Based on the DCs, we further designed and implemented a mobile assistive app, _CoPracTter_. _CoPracTter_ offers stutter-triggering scenarios for PwS to practice with, along with personalized timely feedback. We then used it to investigate the effectiveness and user experience of these features by answering the second research question (RQ2): **How are personalized practice scenarios, timely targeted feedback, and social support used by PwS in their daily speaking practice?**

To answer RQ2, we conducted a 7-day deployment study where 11 PwS participated online simultaneously (N=11). To our knowledge, this is the first deployment study involving multiple PwS participants simultaneously. Participants were required to use the app for speaking practice and to give others timely feedback every day. At the end of the study, they were interviewed about their experiences of using the prototype. The results indicated that personalized stutter-triggering video simulation tasks with timely feedback could aid PwS in practicing speech fluency under appropriate pressure. All PwS participants found subjective comments from other PwS to be the most beneficial, while objective real-time speech-related feedback (e.g., speech rate and facial expression) was also helpful. Participants had different preferences for the feedback types and suggested potential ways to improve the design of some features (e.g., transcription and speech rate).

In summary, we made the following contributions:

* We uncovered common stuttering scenarios, workarounds and challenges of PwS in China through a formative study including online PwS community analysis and interviews.
* Informed by the formative study and through an iterative design process with PwS, we designed and implemented an interactive prototype, CoPracTter, that allows PwS to practice speaking in practical scenarios that induce varying stress levels, review AI-extracted speech features while practicing, share practice recordings with the community, and receive timely and personalized feedback from the community.
* We conducted a 7-day deployment study with a community of PwS simultaneously online to understand how they used and perceived the effectiveness of these key features, and presented design implications. To our knowledge, this is the first time such a prototype has been designed and evaluated in a long-term, multi-user simultaneous online deployment study.

## 2. Related Work

### PwS Suffer from Anxiety and Discrimination

Stuttering can have many negative impacts on PwS. Prior work revealed that stuttering can raise the anxiety level of individuals (Rajaj et al., 2017). PwS were found to have higher anxiety levels than people who do not stutter (PwNS) at all ages: children (Shen et al., 2017), adolescents (Shen et al., 2017) and adults (Boyle et al., 2017). A community cohort study with 843 eleven-year-old children found that the group with persistent stuttering had considerably higher anxiety than the group with recovered stuttering and non-stuttering controls (Shen et al., 2017). A comparative study reported that adolescents who stutter have significantly higher anxiety than non-stuttering controls (Shen et al., 2017). For adults who stutter (AwS), stuttering can also negatively affect their social interaction abilities (Boyle et al., 2017), which may be linked to the fact that PwS generally have lower levels of self-perceived communication competence than those who do not stutter (Shen et al., 2017). One potential influencing factor for their anxiety and social interaction ability may be the public's attitude. An online survey about internalized feelings and discrimination experienced by PwS revealed that most PwS have been treated unfairly by the public, which made them feel annoyed and embarrassed (Boyle et al., 2017). Therefore, PwS, especially young adolescents, may try to hide their stuttering (Boyle et al., 2017). An interview study revealed that the quality of life of PwS was significantly reduced by stuttering in four main aspects: vitality, social function, emotional function, and mental health (Boyle et al., 2017). More detrimentally, many prior works found that the impact of stuttering on PwS often persists over the long term. Boyle et al. conducted an online survey to investigate the enacted stigma, felt stigma, and global mental health of PwS (Boyle et al., 2017). They found that many PwS who have experienced stigma anticipate that they will continue to suffer stigma long into the future. In addition to social life, stuttering also greatly impacts PwS in the workplace. Plexico et al. conducted an online study with both PwS and people who do not stutter (Plexico et al., 2017). They found that participants who stutter differ from people who do not stutter in terms of job satisfaction, discrimination, and vigilance.

### Experience of PwS in China

Several studies have reported that, compared to western societies, PwS in China are more likely to have negative experiences and attitudes toward their stuttering. A social study revealed that Chinese people tend to have more negative ideas about stuttering than people in western nations, and Chinese PwS lacked the typical optimism about pursuing any career they desired (Han et al., 2018). An online survey study revealed significant variations between British, Arab, and Chinese attitudes toward stuttering, including their attributions of stuttering etiology, their role in assisting PwS, and their empathy for PwS (Zhang et al., 2019).
In particular, Chinese people have more negative attitudes toward these issues, which may lead Chinese PwS to hide their stutter and not seek help from others. In addition, listeners' behavior when listening to PwS speaking English also differs across cultures, which may result in different workarounds for PwS in different cultures. Zhang et al. provided empirical evidence that Chinese listeners spent more time on the speaker's background and less on the eyes and nose than Americans (Zhang et al., 2019). Jin et al. pointed out that little is known about speech-language pathology in China, and that scientific research on the behavior of stutterers based on this science is still in its infancy (Zhang et al., 2019). This aligns with the findings of a survey study that people in Rio de Janeiro and Belgium treated stuttering more seriously and devoted more resources to speech-language therapy (Zhang et al., 2019). Although prior works have revealed many differences in stuttering between western societies and China, as well as the lack of stuttering-related research in China, none of them investigated Chinese PwS's user experiences and needs in detail. Therefore, we conducted a formative study to investigate potential differences in stuttering situations and challenges for PwS in China.

### Supporting Tools for PwS

Within the HCI and accessibility community, researchers have been devoted to investigating ways to assist PwS. Vigot et al. investigated the effectiveness of different types of auditory feedback in reducing stuttering (Vigot et al., 2018). Specifically, they deployed delayed auditory feedback on smartphones and found its effectiveness limited due to the native latency of the smartphones. A speech script writing assistance tool was also developed, which could intelligently replace stutter-triggering words with easy-to-pronounce ones with similar meanings (Han et al., 2018). Much work in the community focused on helping SLPs identify their patients' stuttering context remotely and in time, as well as aiding PwS in recognizing their own progress. Chandra et al. investigated the usage of social robots in the stuttering clinic (Chandra et al., 2018). They presented eight scenarios that can be adapted for stuttering intervention with social robots, such as cooperation between social robots and a single user, music modeling and cooperative games. The iAmS (Zhang et al., 2019) is a prototype allowing PwS to register their stutter-related situations in a regulated workflow. The system notifies the SLP associated with a specific user about the details of their stuttering situation in time so that the SLP can better identify the context. Later, the prototype was iterated with personalization features and renamed BroiStu (2018), which was evaluated in a user study where both SLPs and PwS rated each feature. The design proved to be helpful for both parties. The implementation of the personalization aspect was presented and specifically evaluated by Madeira et al. in their later work (Madeira et al., 2019). The effects of these self-reflective applications are highly dependent on SLPs. However, SLP therapy is expensive (Madeira et al., 2019), and in China there is a dearth of SLPs with expertise in stuttering, which results in insufficient SLP therapy and tension between PwS and SLPs (Han et al., 2018). Therefore, it is crucial to develop a strategy that assists PwS in improving their speech fluency more independently, without relying on SLPs.
When designing tools that support PwS in improving their speech fluency more independently, it is important to involve PwS in the design process to better cater to their needs (Chandra et al., 2018). McNaney et al. conducted a survey study with PwS to understand the barriers PwS face in their daily lives and their needs for digital support (McNaney et al., 2018). They further developed a PwS-supporting mobile application, StammerApp, where PwS can find SLPs, practice speaking, keep journals and connect with other PwS. However, their design mainly focused on supporting self-training using six scenarios, and for connecting with other PwS it only provided links to generic support groups and forums; no personalized real-time feedback or advice from other PwS was offered, which constrains the training effect. Our prototype, _CoPracTter_, integrates different types of personalized feedback with various stutter-triggering situations for PwS to practice within an online community.

## 3. Formative Study

To answer RQ1, we conducted a formative study including two parts: (1) an analysis of posts in popular Chinese PwS online forums to obtain a preliminary understanding of the stuttering experiences and user needs of PwS and (2) a semi-structured interview (N=12) to investigate their needs in detail. The findings from the formative study highlight the user needs that guide the design of a prototype which can help PwS practice speech fluency in personalized scenarios and obtain timely feedback within a supportive community. Based on these findings, we then performed an iterative design process with 12 PwS to better address their needs.

### Analysis of Content Posted in Chinese PwS Online Communities

We first searched for online communities that are frequently visited by Chinese PwS, who share their experiences and seek help from others, and identified the following ones: Weibo (246 million daily active users (DAU)), Zhihu (30 million DAU), Baidu Tieba (10 million DAU), and Douban (3 million DAU). We screened posts by the keywords "Stutter Correction", "Stutter Treatment", and "Stutter Exercise" from these forums. All acquired data was posted within the 10 years prior to the collection date (June 10th, 2022). After data cleaning, 807 representative posts were eventually obtained. The data was then thematically analyzed using an open coding approach (Han et al., 2018). Two co-authors read through the collected posts first to get familiar with the data. Then they coded the data independently by grouping similar posts and extracting high-level topic keywords. At the weekly project meetings, all co-authors discussed the coding results together and updated the code book.

Table 1. Meta information and self-descriptions of the 12 formative study participants

| **Id** | **Gender** | **Age** | **Location** | **Self-description of Stuttering Situations** |
| --- | --- | --- | --- | --- |
| 1 | male | 22 | Nanchang | I sometimes repeat the first word and could not move on. |
| 2 | male | 17 | Chengdu | Sometimes I want to speak but I am afraid to, especially when making friends. |
| 3 | male | 23 | Guangzhou | I have difficulty pronouncing some syllables. I will stop in the middle of a sentence. |
| 4 | male | 23 | Nanchang | I have difficulty pronouncing some syllables, especially the first syllables in a sentence. |
| 5 | male | 23 | Hangzhou | I repeat some words when I say something important. |
| 6 | female | 19 | Yichun | I have difficulty pronouncing some syllables. |
| 7 | male | 35 | Fangchenggang | I could not open my mouth when I stutter. |
| 8 | male | 23 | Zhoukou | I get nervous when I stutter and I will speak very slowly. |
| 9 | male | 24 | Fuzhou | I stutter more on the important occasions. |
| 10 | male | 23 | Huhehaote | Stuttering has negative effects on my career. |
| 11 | female | 22 | Shenzhen | I stutter more when making phone calls or talking to strangers. |
| 12 | female | 22 | Yinganmeng | I repeat, stretch and could not pronounce some words. |
### Semi-structured Interview

To validate the findings from the online post analysis and to identify other potential user needs, we conducted online semi-structured interviews with 12 PwS (9 male, 3 female, aged 17-35) with a variety of self-described stuttering situations (see Table 1). For participant screening, we referred to the criteria used by Sicotte et al. (2019): participants who were willing to participate should have demonstrated difficulty with at least 5% of syllables spoken and considered themselves as having stuttering problems. To ensure the diversity of the participants, we selected participants from different locations in China. Each interview session took about one hour, and all interviewees were compensated in accordance with local standards. The interviews focused on their stuttering experiences, workarounds, experiences of practicing speaking with others, and expectations regarding assistive technologies. The development of interview questions on stuttering experiences was guided by Yaruss's evaluation of stuttering therapy (Yaruss, 2019). The interview portion of each session lasted approximately 40 minutes. To analyze the interview data, we followed the Grounded Theory method (Yaruss, 2019). Three co-authors read through the interview transcripts first to familiarize themselves with the data and then did the open-coding process independently. All co-authors then discussed and updated the code book at the weekly project meetings over two weeks.

### Findings

In this section, we report our findings from the formative study and highlight unique aspects of the Chinese context.

#### 3.3.1. Needs for speech practice apps for PwS

We found that PwS in China suffer from a shortage of SLPs and of efficient practice strategies. Therefore, they all desire assistive mobile applications that could help them improve speaking fluency. However, there are few studies or products designed for them. All these findings emphasize the necessity of designing such practice-assistance apps for PwS.

_Shortage of SLPs in China._ We were surprised to find that the attitudes of PwS towards SLPs in China and in western countries are very different. As aforementioned, there are many works helping SLPs monitor the progress of their stuttering patients in the western context (Sicotte et al., 2019; Sicotte et al., 2019), and they reported that PwS prefer feedback from SLPs over other people (Sicotte et al., 2019). Although qualified SLPs could assist and benefit PwS with their speech, SLP services are not always available in China due to their high cost and scarcity. From the online posts, we found that it is difficult for PwS to find reliable SLPs due to the insufficiency of qualified SLPs in China (_"Can anyone recommend reliable SLPs for me? It is hard to distinguish by myself"_).
Meanwhile, many PwS accuse SLPs and stuttering-treatment organizations of deceiving and overcharging them (_"The characteristics of their therapy: type concept, long duration and high costs"_).

_Practicing speaking alone as the main strategy._ Considering this situation, we asked our interviewees about their current workarounds. We found that their primary method for improving speech fluency is to practice reading and speaking every day. Speaking practice was perceived to be more efficient than reading (_P4: "I used to spend 30 minutes per day reading books aloud. Later I found that practicing speaking is more efficient than reading"_). In addition, speaking practice can be done independently by talking to oneself, or with others' responses. However, practicing alone is not as efficient as practicing with others, as most PwS found speaking to themselves totally different from talking to other people (_P1: "I do not stutter at all when I am talking to myself"_). Unfortunately, it is hard to find appropriate practice partners who are patient and can provide constructive advice.

_Lack of assistive mobile applications designed for PwS._ Seeing the limitations of SLP services and the difficulty of practicing with others, we asked our interviewees about their experience of using assistive technologies such as mobile applications for speech fluency practice. All interviewees reported that they had never used mobile apps specifically designed for PwS. P2 used an app only for assessing general speech fluency and Mandarin pronunciation, with no stuttering-specific functions. P4 expressed his concern about the credibility of mobile applications: _"I want to learn more about speech training apps, but I am also worried about being deceived"_. When asked about applications specifically designed for PwS to practice speaking fluently in simulated scenarios, all interviewees indicated that they were eager to try such applications but could not find any in the app stores or on other social media platforms in China. Similar findings were reported by McNaney et al. in their survey study [25]: most respondents had not used any mobile apps designed for PwS. This appears to be a general problem for PwS worldwide.

#### 3.3.2. Stuttering situations

_Common and specific scenarios in China._ We identified 8 different scenarios in two main categories: official and daily life situations. Some of them overlap with prior studies, but some are unique to the Chinese context. Our interviewees indicated that they stutter when delivering a speech [14] (P5, 9, 11), attending interviews (P1), presenting in a meeting (P4, 5, 7, 10), having workplace conversations (P4, 5, 7, 9, 10), answering phone calls [25] (P1-12), shopping (P2, 4), ordering food (P4) and making friends (P11). Meanwhile, there are also some scenarios specific to Chinese PwS, for instance, reporting identity information during community-level PCR tests (P2) and reporting pickup codes for deliveries (P4). These scenarios usually make people nervous, so they stutter more. This leads to the first design consideration (**DC1**): **The practice should focus on stutter-triggering scenarios, which are personalized across individuals and cultural contexts.**

_Personalized nervous scenarios._ Another interesting and unique finding is that some people get nervous when talking to their acquaintances (P4, 10, 12), while others stutter more when talking to strangers.
Though there are individual differences, it is common for people to stutter more when real persons are around them than when alone, regardless of whether those persons are talking to them. In addition, when comparing phone calls, video calls, and in-person interactions, the majority of our interviewees (N=11) considered in-person conversations to be the most difficult, except that P6 was more afraid of phone calls since she has difficulty pronouncing the initial syllable of a dialogue. The caller not being able to see her face aggravated the problem, as she would have liked to signal that she was attempting to speak. Based on these findings, we concluded **DC2: The practice scenarios should be carefully designed to trigger pressure and nervousness.**

_Hard-to-pronounce syllables._ Many PwS have difficulty pronouncing some syllables, such as 'da' and 'li'. This is different from English-speaking PwS, who stutter on words like 'want' and 'sam' [14]. They are usually blocked on those syllables when they are nervous. In addition, P1 and P10 indicated that they have some persistently hard-to-pronounce syllables, which block them whether they are nervous or not. Therefore, they think practicing different sentences containing their specific hard-to-pronounce syllables, even without a scenario, would also help. This led to **DC3: The practice scenarios should require users to say stutter-triggering syllables, either implicitly or explicitly.**

#### 3.3.3. Personalized timely feedback and social support

_Personalized peer feedback and advice._ As aforementioned, PwS in China rely more on certain types of peer support than on SLP support. Although there are many online forums and communities, it is still difficult for PwS to get support and the answers they need in time. In every popular online forum, we observed a lot of posts where PwS describe their situations and seek peer suggestions or recommendations of potential methods to improve their situation (_"I have been pronouncing like this for decades and I really want to change. Does anyone have potential methods?"_). However, the response rate is low. The reason may be that many users just enter the forum to search for the information they need and do not look at other posts. Many PwS have tried a lot of methods learned online, but they found it hard to persist for a long time due to the lack of personalized advice and a clear vision of their progress. Advice in the online forums is generic, with no personalized feedback based on their own performance, so they do not know their exact speaking problems or the most suitable and effective methods (_"I tried many methods such as metronome and deliberate breaks, but I quit after a short time because I did not see any progress"_). When asked about the potential effect of personalized feedback from other PwS, all interviewees agreed that it would benefit their speech fluency improvement and give them mental support and motivation (_P9: "With a supportive group, I will be more motivated to practice because I know that they are practicing with me and will give me feedback"_). Many interviewees also emphasized the importance of finding supportive peers (_P4: "I would be more confident if I receive positive feedback from other PwS" and P5: "I want more personalized authentic feedback from others, not just common compliments"_).
Therefore, instead of providing connections with SLPs, feedback and support from other PwS should be valued more in our design: **DC4: The platform should build a supportive community where PwS can give and receive personalized peer feedback in time.**

_Timely objective feedback._ When asked about the detailed types of feedback they want, many interviewees also mentioned the lack of quantitative and objective indicators (_P1: "When I practice for a speech, I do not have any objective feedback. I can just receive some simple suggestions from my friends" and P3: "I assessed my behaviors based on my own subjective feelings, without any feedback from other aspects"_). When asked about the detailed types of feedback, many PwS (N=7) mentioned speech rate, tone and transcript, which aligns with findings from prior work on providing instant feedback for the analysis and training of general speech fluency (Han et al., 2017; Chen et al., 2018; Wang et al., 2019). Although these features were proven to be beneficial for general speech training, their effectiveness in assisting PwS to practice speaking fluency has not been investigated. This is also evidenced in the online posts, many of which indicated that PwS should try to speak slowly to avoid stuttering. In addition, many PwS consider facial expressions and body movements to be important factors to pay attention to when speaking, because their facial expressions and body movements are somewhat affected when stuttering and they want to avoid that (_P7: "The most recent practice experience of mine is to have video conferences with my friends, where I could clearly see my facial expressions"_). This is also reflected in the forum posts (_"Paying attention to your facial expression is important! Some PwS have strange facial expressions even when they are not stuttering"_). Therefore, a real-time self-image including facial expression and body movement should be considered an important type of feedback for PwS: **DC5: The platform should provide timely objective feedback for PwS to reflect on and adjust accordingly (e.g. speech rate and volume).**

## 4. Prototype Design

With the guidance of the five DCs, we designed several features and integrated them into a low-fidelity prototype (Zhu et al., 2019) to elicit more honest feedback on the effectiveness of these key features. By conducting an iterative design workshop with the 12 participants from the formative study, we summarized the key findings for potential improvements to the prototype and then polished the prototype accordingly to the final version.

### Low-fidelity Prototype

We created the low-fidelity prototype (see Fig 1) using _Figma_.

Footnote 6: Figma: [https://www.figma.com/](https://www.figma.com/)

Figure 1. Low-Fidelity Prototype UI Created with Figma (A. Task description page with a 5-second countdown before starting practice. B. Practice page with real-time objective feedback including facial expression, volume and speech rate. C. Community page where users give and receive feedback within the community. D. The review page where commenters can view the task background, listen to the audio, and comment on the strengths and weaknesses. E. Review page where the practicer can view others' comments)

In the following paragraphs, we present the key features of the prototype from two perspectives: the practicer's and the commenter's.
#### 4.1.1. Practicer's perspective (Fig 1 A, B, E)

When starting a task, a task description page with a paragraph of text and a five-second countdown will be displayed before the conversation task officially starts (Fig 1 A). The countdown is designed to give users a moderate sense of tension **(DC2)**. After the countdown, the practice page will be loaded, where a video will be auto-played (Fig 1 B). The content of the video is a person talking to the user, prompting questions and waiting for the user to answer. We recorded the video of a real person talking instead of using animations to make it more stressful, as participants reported they become nervous when there are real persons around **(DC2)**. The person in the video will nod and smile to make the user experience more realistic and stressful **(DC2)**. For conversation topics, we used the ones mentioned by our PwS interviewees, which could induce stutter-triggering words **(DC3)** and make them nervous, including job interviews, phone calls, workplace meetings, clothing purchases, food orders, and campaigns **(DC1)**. Some of these were also mentioned in the online survey conducted by McNaney et al. but not implemented in their prototype (StammerApp) (McNaney et al., 2017). In addition, some specific pronunciation tasks were designed where users need to read the displayed text aloud **(DC3)**. The text was designed to contain several stutter-triggering syllables. For this kind of task, the video did not include any real person and only displayed plain text.

While the user is speaking, some real-time objective feedback **(DC5)** will be displayed below the video: a dynamic volume graph and a speech rate number (Fig 1 B). A real-time self-image captured from the front camera will be displayed at the bottom-right corner of the video area. From the self-image, PwS can observe their facial expressions and body gestures when speaking. We expect users to reflect on these objective feedback indicators and adjust their speech accordingly in real time. The users' voices will be recorded and uploaded to the server for the next phase. In addition, on all the main pages, a paragraph of encouraging words (see the text in Fig 1) will be displayed at the bottom of the page **(DC4)**.

Besides real-time self-reflection and adjustment based on the objective feedback, users can also reflect on their behaviors after practice by opening the tasks they completed to view others' feedback on their own audios (Fig 1 E). The result page displays the recorded audio, average speech rate, audio transcript and comments from other PwS users (commenters). All comments will be evaluated by researchers to verify that none are toxic **(DC4)**. Although prior work also allows users to review and reflect, users can only listen to their audios, without any indicators or feedback (Krishnan et al., 2017).

#### 4.1.2. Commenter's perspective (Fig 1 C & D)

After finishing the task, the user can check audios uploaded by other PwS users in the community (Fig 1 C). By opening one audio tab, a page with detailed information will be loaded (Fig 1 D) where users can see the task description, listen to others' audio and give them comments and grades as feedback on the practice's strengths and weaknesses **(DC4)**. Every day, all users will be reminded by WeChat private messages to practice, give feedback to other peers and check the feedback they received in time **(DC4)**.

Footnote 7: WeChat: a Chinese instant messaging, social media, and mobile payment app developed by Tencent. [https://www.wechat.com/](https://www.wechat.com/)
### Iterative Design Workshop with PwS

To collect preliminary user feedback and perceptions of the prototype, we recorded a video demonstrating the workflow and interaction of this low-fidelity prototype. We then invited the 12 interviewees back to experience the low-fidelity prototype and collected their feedback through interviews. Each session took about 30 minutes. In the following paragraphs, we present the findings from the iterative design workshop on user experience and conclude the improvements to be made for each feature.

#### 4.2.1. Insufficient stress of the tasks

All participants found the personalized simulated video conversation tasks beneficial, as they could practice and rehearse some stutter-triggering scenarios. However, many participants (N=8) mentioned that the pressure in the tasks was insufficient to train them to handle similar circumstances under pressure in the real world. In terms of potential solutions, P4 suggested that the character in the task video should make direct eye contact with users (_P4: "People in the video should have direct eye contact with users to make it more stressful"_). P5 proposed hiding the task description after a few seconds to make the simulation more realistic and demanding. Many participants (N=9) also felt that situations requiring prompt responses are stressful. In addition, all task themes should be relevant to daily life; otherwise, PwS may find it difficult to complete the tasks and become frustrated (P6). Although most participants asked for a higher stress level in the tasks, P6 stated that different stress levels should be provided for each task, so that everyone could choose levels suitable for their practice. This aligns with our finding in the formative study that the task preferences of PwS are highly personalized. Based on the participants' feedback, we iterated on the prototype by **(1) asking the persons recording the video to make direct eye contact with potential users, (2) hiding the task description after a few seconds, (3) recording task videos that require prompt responses and (4) providing multiple stress levels for each task by adding disruptions.**

#### 4.2.2. Number of conversation rounds in a task

Most participants (N=10) appreciated the conversation content in the video, as it is relevant to daily life. However, many participants (N=7) found the tasks insufficiently engaging and stressful because the persons in the videos only spoke once and then waited for the user to speak. When the user has finished speaking, the person in the video should speak up again to make the dialogue sound more natural. Therefore, **we increased the number of conversation rounds in a task in the iterated prototype.**

#### 4.2.3. More personalized real-time feedback and subjective feedback

All participants found the visualization of speech-related data helpful, including speech rate (P1, 2) and facial expression (P2, 7). In addition, they desired to see more types of speech indicators in real time. Meanwhile, they had different preferences for the types of real-time feedback, such as volume (P6), pitch, tone, rhythm (P1) and transcript (P3). To fulfill these personalized needs for real-time feedback, **all forms of real-time feedback indicators mentioned by the participants were presented by default in the iterated prototype,
and switches were provided for users to turn off any indicator they did not want to see in real time.**

All participants believed the subjective feedback would be beneficial, and they were willing to carefully listen to and comment on the audios of others. Meanwhile, P2 and P8 expressed their concern about situations where they have no concrete suggestions to offer. To address this problem, we decided to **add a rating function as one type of compulsory feedback and make comments optional,** so that users can provide only ratings if they have no comments.

#### 4.2.4. Insufficient reflection assistive functions

Most of the participants (N=9) agreed that the current design of the reflection page was beneficial. In addition to the reminders and encouraging words, P5 and P7 suggested that a daily report demonstrating their progress for the day could help them better reflect on their daily behaviors and progress, which could be encouraging and motivating for future practice. Therefore, we decided to **add a daily report function.**

### Final Prototype

We modified the prototype based on the findings from the iterative design workshop. In this section, we present the final version of our prototype, covering the practicer's perspective and the commenter's perspective.

Figure 2. The workflow of the practicer's perspective: task description page (A), practice (B), review (C) and daily report (D)

#### 4.3.1. Practicer's perspective

The practicer will first see the task description page (Fig 2 A) with a text description and a five-second countdown, the same as in the low-fidelity prototype. After the countdown, the practice page (Fig 2 B) will be loaded, where the conversation task video will be auto-played and the user needs to talk to the person in the video. The person in the video will nod, smile, make direct eye contact with the user, and interrupt them during the conversation by saying words like "excuse me" to make the user experience more realistic and stressful. While the user is speaking, a set of real-time feedback indicators will be displayed below the video: audio transcription, speech rate, pitch, volume and their trends (Fig 2 B). A real-time self-image captured from the front camera will be displayed at the top-right corner of the video area, from which PwS can observe their facial expressions when speaking. All the real-time feedback can be individually switched off by the corresponding buttons to their right (a minimal sketch of this toggling mechanism is given at the end of this subsection). There are several sections in each video. In each section, the person in the video says several sentences to set up and push forward the scenario, asks a question and then waits for the user to respond. This increases the stress level by increasing the number of conversation rounds. Upon finishing each section, users need to tap the "Finish" button to indicate that they have finished speaking.

All users will be reminded by WeChat private messages to practice every day. By the end of each day, users are reminded to check the feedback they get from other users (**DC4**) and the daily report. By opening the review page of a task, users can re-listen to their audio and see other users' comments (Fig 2 C). The daily report summarizes the tasks the user finished that day (Fig 2 D); each task can be opened to see others' feedback, and the report also shows the number of comments the user gave to others, the number of thumbs-up received, and the daily ranking.
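To make the indicator-toggling design concrete, below is a minimal React sketch (React is the front-end framework of our implementation; see Section 4.3.3). It is an illustration rather than the actual prototype code: the component name `FeedbackPanel`, the prop `values`, and the indicator keys are all assumed for the example.

```tsx
import { useState } from "react";

// Illustrative indicator keys; the prototype additionally shows trend
// graphs for speech rate, pitch and volume (Fig 2 B).
const INDICATORS = ["transcript", "speechRate", "pitch", "volume"] as const;
type Indicator = (typeof INDICATORS)[number];

// `values` holds the latest reading of each indicator, updated elsewhere
// (e.g., by a capture loop like the one sketched in Section 4.3.3).
export function FeedbackPanel({ values }: { values: Record<Indicator, string> }) {
  // Every indicator is shown by default and can be switched off individually.
  const [enabled, setEnabled] = useState<Record<Indicator, boolean>>({
    transcript: true,
    speechRate: true,
    pitch: true,
    volume: true,
  });

  const toggle = (key: Indicator) =>
    setEnabled((prev) => ({ ...prev, [key]: !prev[key] }));

  return (
    <div className="feedback-panel">
      {INDICATORS.map((key) => (
        <div key={key} className="indicator-row">
          {enabled[key] && <span>{key}: {values[key]}</span>}
          <button onClick={() => toggle(key)}>
            {enabled[key] ? "hide" : "show"}
          </button>
        </div>
      ))}
    </div>
  );
}
```

Keeping the per-indicator on/off state in a single record also makes it straightforward to log the status of each switch, which we later collect as part of the activity logs (see Data Analysis in Section 5).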
#### 4.3.2. Commenter's perspective

After finishing all compulsory tasks, users are required to open the community page (Fig 3, left) to check others' audios. By selecting an audio block, a page with detailed information will be loaded (Fig 3, right) where users can see and listen to others' audio and give them feedback. For each section of the audio, users need to rate the audio (from 0 to 5 stars) and may give optional comments. Users will be reminded to point out issues in a polite manner and provide practical advice with encouragement. Users can also thumb up and reply to others' comments (see the red labels in Fig 3, right). All comments are evaluated by researchers to verify that none are toxic.

Figure 3. The workflow of the commenter's perspective: group page (left) and comment page (right)

#### 4.3.3. Prototype Implementation

We utilized the React (Rendle, 2017) framework for the front end and Node.js for the back end, with MySQL (accessed via Structured Query Language, SQL) as the database. For real-time features, we utilize the real-time speech-to-text conversion service from Tencent Cloud, a well-known back-end service provider. The service packs the blob data of each audio frame, sends it to the server and returns text feedback in real time. By using the _getUserMedia_ method in a web application, we gain control of the device's front camera and recording system and can continuously access the underlying video and audio streams. Furthermore, we use Fourier analysis and signal processing to extract volume and pitch information, and we display the real-time image from the front camera (a browser-side sketch of this capture-and-analysis loop follows below).

Footnote 8: Tencent Cloud: [https://www.tencentcloud.com/](https://www.tencentcloud.com/)
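To illustrate the capture-and-analysis pipeline described above, the sketch below uses only standard browser APIs (`getUserMedia` and the Web Audio `AnalyserNode`). It is a minimal sketch under stated assumptions, not our exact production code: the RMS-based volume estimate and the naive autocorrelation pitch estimate are one common way to implement such indicators, and the callback name `onReading` is illustrative.

```ts
// Minimal browser-side sketch of real-time volume and pitch extraction.
// A UI component (e.g., the feedback panel sketched earlier) could
// subscribe to `onReading` to refresh the displayed indicators.
async function startFeedbackLoop(
  onReading: (volumeDb: number, pitchHz: number | null) => void
) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const frame = new Float32Array(analyser.fftSize);

  const tick = () => {
    analyser.getFloatTimeDomainData(frame);

    // Volume: root mean square of the frame, expressed in dBFS
    // (relative to full scale; not a calibrated sound pressure level).
    let sumSq = 0;
    for (const x of frame) sumSq += x * x;
    const rms = Math.sqrt(sumSq / frame.length);
    const volumeDb = 20 * Math.log10(rms + 1e-12);

    // Pitch: naive time-domain autocorrelation over lags corresponding
    // to roughly 60-400 Hz, a typical fundamental-frequency range for speech.
    const minLag = Math.floor(ctx.sampleRate / 400);
    const maxLag = Math.floor(ctx.sampleRate / 60);
    let bestLag = 0;
    let bestCorr = 0;
    for (let lag = minLag; lag <= maxLag; lag++) {
      let corr = 0;
      for (let i = 0; i + lag < frame.length; i++) corr += frame[i] * frame[i + lag];
      if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
    }
    // Report no pitch for near-silent frames.
    const pitchHz = rms > 0.01 && bestLag > 0 ? ctx.sampleRate / bestLag : null;

    onReading(volumeDb, pitchHz);
    requestAnimationFrame(tick); // refresh once per display frame
  };
  tick();
}
```

In the deployed prototype, the same audio stream is additionally packed frame by frame and sent to the Tencent Cloud service for real-time transcription, and the front-camera video stream is rendered directly as the self-image.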
## 5. Deployment Study

We conducted a seven-day deployment study with multiple users online simultaneously to understand the effectiveness of the key features we designed and to explore potential design implications for future work. To our knowledge, it is the first time such a prototype has been designed and evaluated in a long-term, multi-user simultaneous online deployment study.

### Participants and Apparatus

We recruited 11 participants (8 males, 3 females, aged 19-35) from the online community by posting recruitment messages. Considering that some PwS do not want to emphasize their stuttering and tend to live with it life-long without any training, we only recruited those who were willing to improve their speech fluency using our prototype. During registration, we asked the participants to rate the severity of their stuttering on a scale from 1 to 10; the results ranged from 4 to 10. We deployed the prototype to a remote server so that participants could use it to practice from anywhere and on any device convenient for them, as long as they had internet access to open the website. Participants were compensated in accordance with local standards.

### Task Design

Every day, each participant was assigned three tasks: one compulsory conversation task, one compulsory pronunciation task, and one optional conversation task. Each conversation task took about 10 minutes and each pronunciation task took approximately one minute. Only after finishing all required tasks could the user view and comment on the audios of others. Each participant was asked to comment on three assigned participants each day; the assignment was fixed throughout the experiment. This ensured that each participant received feedback from others and that commenters could observe potential improvements in the participants they were evaluating.

### Procedure

To begin, we held 11 individual onboarding sessions one day before the official commencement of the study, where participants were briefed about the study and signed consent forms. Then participants watched a pre-recorded video demonstrating the prototype and explored the prototype freely with their assigned accounts in order to familiarize themselves with it. Participants were able to ask any questions about the study and the prototype. After that, participants were asked to finish a series of tasks that walked them through the entire daily practice procedure: talk to the video, listen to and comment on others' uploaded audios, and review the comments on their own audios. For the following 7 days, participants were required to complete the assigned tasks and comment on others' audios in a timely and appropriate manner every day. At the end of each day, they were asked to fill in a daily questionnaire to reflect on their experience for the day, where they reflected on their training and rated the usefulness of each type of feedback from 1 to 7. On the day following the seven-day study, all participants attended a semi-structured interview, where they were asked about their subjective reflections on the prototype and expectations for future systems. As aforementioned in Section 4, all users were reminded by WeChat private messages to practice and give comments.

### Data Analysis

Three of our co-authors conducted an open coding process to analyze the qualitative data, including user comments, responses to the daily questionnaires and interview transcripts. In addition, we collected activity logs of users' behavior in each task, such as the audio length, the status of each indicator switch, and comment-related statistics. After the data cleaning process, we ran statistical tests on all the quantitative data collected.
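As an illustration of the quantitative data just described, the following is a hedged sketch of the shape of one per-task activity log entry; the type and field names are assumptions made for the example, not our actual database schema.

```ts
// Illustrative shape of one per-task activity log entry.
interface TaskActivityLog {
  userId: string;
  taskId: string;
  day: number;                                 // study day, 1-7
  audioLengthSec: number;                      // length of the uploaded recording
  indicatorSwitches: Record<string, boolean>;  // on/off status of each real-time indicator
  commentsGiven: number;                       // comment-related statistics
  commentsReceived: number;
  thumbsUpReceived: number;
  ratingsReceived: number[];                   // 0-5 stars per audio section
}
```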
## 6. Results

In the following paragraphs, we discuss the findings on user perception and usage of the personalized practice scenarios, timely targeted feedback and social support for PwS (RQ2).

### Practice Tasks in Different Scenarios with Pressure

Many participants indicated that the tasks were useful and engaging for practicing speech fluency. Meanwhile, there is still room for improvement in the tasks' design to make them more personalized and realistic.

#### 6.1.1. Overall Usage of the Tasks

Overall, all participants indicated that the tasks helped them practice speech fluency in an effective and engaging way. In addition, all participants stuttered several times during the practice procedure, and they agreed that the tasks were stressful and embedded stutter-triggering words, which satisfied their practice requirements. Specifically, the tasks trained them to organize the speech content quickly before speaking (_P11: "I find myself organizing the speech content better and faster. I also have more courage to speak in real life"_) and to speak for a long time (_P5: "In real life, I don't have the opportunity to speak for a long time"_). P3 expressed his excitement about finding a potential reason for his stuttering: _"Through the tasks, I found that my stuttering may not be a physical defect, but caused by my inability to organize the speech content quickly. I'm more motivated to practice"_.

In addition, many participants (N=5) indicated that many tasks were very engaging and made them feel nervous to some extent, which is helpful because it simulated situations in which they need to talk under pressure in real life (_P2: "Many tasks made me nervous and under pressure, which I believe could help me prepare before facing similar situations in real life"_ and _P6: "I found it very helpful because I can't find people to practice speaking with in a simulated situation. It helped me to train speaking for daily life and workplace scenarios, which is just what I need"_).

#### 6.1.2. Personalized Tasks (DC1) with Stutter-triggering Syllables (DC3)

_Preference for task sections._ Overall, all participants agreed that the tasks were designed properly and simulated real-life conversation scenarios well. Our results show that the number of sections in the task video and the task topic influenced the user experience. As aforementioned in Section 4, we designed the tasks to contain varying numbers of sections, from 1 to 5. Many participants (N=6) indicated that they prefer tasks with at least five sections over those with only one or two sections that require them to deliver a lengthy speech (_P1: "Tasks with 5 sections resembled conversations in daily lives, which are the situations that I need to practice the most"_). Other participants preferred tasks with few sections (_P10: "I prefer the lengthy speech tasks because I tend to stutter more when speaking for a long time without interruption"_).

_Preference for task topics._ We also asked each participant to identify their favorite tasks. We found that their preferences varied a lot and that they wished to receive more personalized tasks in the future. As mentioned in Section 4, our simulated conversation tasks included two types: daily life scenarios and important occasions such as in the workplace. Seven participants preferred the important occasions (_Po: "I like the meeting in the workplace, because it is important for me to practice"_) and the other four preferred daily life scenarios (_P5: "I like the movie sharing task because it's interesting so I can say more"_). From the daily questionnaires and interviews, we found that the preference for tasks was highly related to participants' personalized needs in their daily lives. For instance, one task required participants to deliver a campaign speech at a university. Only P5 and P10 liked this task, as they are undergraduate students and it can help them prepare for campaigns in real university life (_P10: "This task is a situation I normally worry about and stutter extremely severely."_). However, other participants disliked this task the most because it is too far from their daily lives (_P5: "I know little about this topic, so I don't know what to say"_).

#### 6.1.3. Stress and Realism (DC2)

As mentioned in Section 4, we designed and recorded task videos with real persons who performed verbal interruptions, direct eye contact and nodding to make the tasks more realistic and stressful. Many participants (N=7) indicated that the videos were well designed and made them nervous to some extent (_P7: "At first I believed that I was talking to a real person in real time because the person in the video keeps nodding and smiling"_).
We also observed different preferences toward stress levels among participants: most preferred high stress levels, while some preferred low stress levels (_Po: "I suddenly did not know what to say when she interrupted me. It feels awful and I would rather not practice that kind of scenario recently"_). However, some participants also stated that there is still a gap between practicing with the prototype and the real world. P11 said that the conversation design is not realistic enough: _"The questions the waiter asked in the ordering task are not similar to what I heard in real life, which made it less realistic"_. P9 mentioned that the actors in the task videos could not bring enough pressure: _"At first, I'm very nervous. But as the study went on, I got familiar with it. And the actors are not like my boss, who gave me a lot of pressure"_. From the uploaded audios, we discovered that some participants did not treat the tasks as seriously as real-life scenarios. For some tasks they kept silent for a long time before speaking, which is not likely to happen in real life (_P11: "I really don't know what to speak for some unfamiliar topics, so I need to think for a while before speaking. If this happens in real life, I may just tell the listener that I have no idea"_). Some quit halfway through a task when they thought their performance was too bad and re-opened it to start again, although in real life they would not have the chance to do that (_P11: "I just do not want others to hear me speaking so badly"_).

### Personalized Timely Objective and Subjective Feedback

In this section, we present the participants' perceptions of comment-based subjective feedback and real-time objective feedback.

#### 6.2.1. Timely Subjective Peer Feedback and Social Support (DC4)

_Overall usefulness of the subjective peer feedback._ As aforementioned in Section 4, we designed four features for peer feedback: text commenting, rating, thumb-up and reply. In the seven-day deployment study, we collected 1676 valid text comments in total, 47 of which received replies and 45 of which received thumbs-up. For the comment-based features (comments (M=4.48, SD=2.73), thumb-up (M=2.24, SD=1.89) and rating (M=2.03, SD=2.38)), we conducted a one-way repeated measures ANOVA to determine the differences between the factors. The results showed that the differences in participants' opinions between comments and thumb-up (p=0.004) and between comments and rating (p=0.014) were significant, indicating that comments were the most important of the comment-based features. After open coding, all 1798 comments were grouped into three main categories according to their content: problems and advice (1059), compliments and encouragement (550) and replies to the content (80). Some comments fall into more than one category. Overall, participants found most of others' comments beneficial for diagnosing their problems, providing advice, encouraging them to keep practicing, and communicating with others. In the following paragraphs, we discuss user perceptions of each type of comment in detail.

_Problems and advice aided PwS in diagnosing their problems._ The majority of the comments pointed out specific issues and gave personalized advice about speech rate (361), cohesion (283), content (169), emotion and confidence (153), clarity (68), volume (29) and accent (6). All participants found others' advice generally helpful, although there were also some disagreements with some comments.
For speech rate, we found that the two speech-rate groups often commented on each other, suggesting changes to each other's speech rate (_P1: "try to raise your speech rate steadily" and P2: "You speak too fast. Try to slow down and take a break at the appropriate position, which will make you sound more like a normal person"_). Although four participants found speech-rate-related comments helpful (_P9: "People advised me to talk more slowly. I believe it will help"_), P4 expressed his disagreement with them: _"Comments told me to speak slowly but I don't think it is a problem"_. Another issue frequently mentioned in the comments is speech cohesion, including the overuse of filler words (_P5: "You used too many words like 'umm', 'well' and 'you know'"_) and unnecessary blocks within one sentence, which may be caused by improper breathing patterns and problems with content organization (_P1: "You can speed up your speech and reduce the number of pauses within and between sentences" and P9: "Think about it before saying it; stop a little bit in the middle of each sentence to give yourself time to think"_). For speech content, some comments suggested that the participant speak more (_P1: "The overall quality is great but the content is short. I hope you can say a little more"_) while others suggested that the participant speak less (_P9: "I think you can say less"_). Regarding mindsets and emotion, many comments suggested that the receiver relax and be more confident. Some participants also pointed out that some users should make their speech more emotional (_P6: "We should speak with emotion in life, like joy and sadness."_). When asked how they gave comments to others, most participants indicated that they tried to point out problems gently and recommend useful methods they had tried, which is consistent with our hypothesis (_P1: "My expectation is to point out the problem to others, but it may not be accepted by everyone"_, _P2: "My comments were all about speech therapy because I have learned and tried many speech therapy methods"_). P11 expressed his concern that giving comments may disturb others: _"I was afraid that my suggestions would disturb the original methods they are using"_.

_Compliments and encouragement helped PwS stay positive._ All participants were pleased with and appreciative of others' compliments and encouragement. Many comments (486 out of 1798) were compliments about several aspects, such as speech rate and tone (_P2: "The speech rate is perfect" and P3: "I rated it 5 out of 5 because there were proper pauses and the speech rate was proper"_). There were also encouragements and reassuring phrases (N=64): _P9: "I believe you will become more fluent. Just devote more time to practice" and P10: "You are only 20 years old. You have a lot of potential and chances"_. All participants agreed that they were encouraged and motivated to keep practicing (_P7: "The encouragement made me more motivated to practice"_). However, many participants (N=6) also stated that they disliked comments that were too vague or seemed perfunctory (_P6: "I did not say it well but the comment said it is good"_).

_Replies to the spoken content facilitated communication._ In addition to comments on speaking fluency, there were also 80 comments replying to the speech content of the uploaded audios: _P6: "Though I do not like group gatherings, I will still go to it since it is essential in the working place" and P8: "I have not heard of this song. I will search for it later"_.
Many participants found this interesting and felt a sense of community (_P4: "It was very interesting. I felt like talking to a friend"_).

_Thumb-up, rating and replies to comments enriched the comments and communication._ Most participants found thumb-up and ratings to be much less effective than textual comments. In general, these two components were underutilized, and participants considered them less clear than the user comments (_P1: "Thumb-up is not so intuitive as direct comments" and P3: "When rating others, I don't want to give negative impact, so it may not reflect the true situation"_). In the meantime, we discovered that the content being thumbed up mainly fell into four types: longer-than-average text; encouragement; personalized advice; and interesting content and conversations. This indicates that these types of comments are more beneficial for PwS than others. Five types of replies to comments were observed from the coding results: (1) Expressions of thanks: _P10: "Thanks for your comments. Let's keep practicing together" and P7: "Thanks for your encouragement and advice"_. (2) Disagreement, especially on speech rate: _P1: "I think my problem is not caused by the speech rate as I stutter even when I speak slowly. It is true that I am more likely to stutter when speaking fast, but I think the biggest effect of speaking fast is that it may cause my speaking the wrong word instead of stuttering"_. (3) Confusion: _P7: "Sorry, I do not understand your comment"_. (4) Replies to the speech content that are not relevant to the stuttering situations: _P9: "I am in a small city, which is far from downtown" and P10: "There is always chicken in the countryside"_. (5) Agreement with and additions to other participants' comments: _P6: "You are right. I have a lot of syllables hard to pronounce" and P10: "Yes, it is hard for me to pronounce words starting with 'b'"_. This indicates that these types of comments are more likely to trigger communication among PwS in the community.

_Encouraging words on every page._ As aforementioned in Section 4, on each main page of _CoPracTter_, reassuring phrases were presented to reassure users and encourage them to speak with confidence. Most participants (8 out of 11) considered them encouraging, whereas the remaining participants did not pay attention to them (_P8: "I didn't see it because I didn't need it"_). P2 even provided us with a collection of books and encouraging phrases. Besides encouraging words, participants indicated they wanted more interactive, intimate and practical encouragement. P1 wanted more interesting functions like sending roses to others, and P4 recommended adding an additional screening condition (_"Encouraging words can be displayed only when the average score in this task is lower than 3"_). P10 wanted brief, encouraging videos similar to _The King's Speech_. P8 suggested that a human-recorded voice would be more encouraging than plain text (_"I want encouragement with a real human voice, which could cause proximity and authenticity"_).

_The daily report helped users reflect on their behaviors._ Although we expected the daily report to help participants see their progress clearly and reflect on their behaviors every day, its effect was limited. Most participants (N=6) did not open the report every day because they thought the tasks were the most helpful, while the report was merely a summary of information they already knew (_P7: "I seldom open the daily report"_).
Several participants (N=4) said the report was clear and facilitated their review of daily behaviors (_P11: "It is convenient for me to review quantitative feedback. With the report, I do not have to check every comment one by one"_). P1 stated that seeing the ranking in the daily report motivated him to practice more and comment more in the first few days of the study: _"When I saw the ranking in the daily report, I was eager to get the first position in the ranking within the first four days. But later I did not know what to comment, as these problems were discussed in the preceding days"_.

#### 6.2.2. Real-time Objective Feedback (DC5)

In this section, we present the findings on participants' perceptions of real-time feedback. Overall, all participants found that the speech-related indicators displayed in real time, such as speech rate and stutter frequency, helped them gain a better understanding of their speaking. Different participants had their own preferences for each of the real-time feedback metrics. They generally agreed more on the effectiveness of speech rate, audio transcription, and facial expression, while most of them did not consider pitch, volume, and their trends to be effective. In the following paragraphs, we report the usage and user perception of each type of objective feedback.

_The speech rate value was more straightforward than the rate trend and required less cognitive load._ Speech rate was the second most useful feature (M=5.00, SD=1.71) according to the summary questionnaire and exit interview. Participants could directly tell whether they were speaking too fast or too slow from the speech rate value (_P1: "Speech Rate is very intuitive"_). Some participants suggested that they would prefer a straightforward qualitative result of the comparison between their own speech rate and the optimal rate for Chinese speeches (150-180 syllables per minute [21]), rather than exact values or the whole trend (_P3: "I could not look at so many indicators" and P9: "I just want to know whether I'm talking fast or slow"_). In addition, P1 mentioned that he wanted direct reminders when he spoke too fast or too slow, rather than only a display of the speech rate value. Only one participant thought the speech rate did not help (_P2: "I can control my speech rate. I don't need to see the number"_); he wanted a real-time auto-grading function based on speech rate and fluency, just like some karaoke mobile apps.

_Audio transcription displayed the stuttered words._ Audio transcription was the third most useful feature (M=4.91, SD=1.68). This feature was perceived very differently among participants. Some participants found it helpful in displaying the repetitions and words not pronounced clearly (_P9: "I looked at the transcript each time I spoke to see if there were any repeats or words not clearly pronounced"_). However, the effectiveness of this feature is limited by the transcription accuracy (_P11: "Because I speak in dialect, the transcript may not be very accurate"_). In addition, some participants found it meaningless (_P2: "Because I know exactly what I'm talking about, I don't know why it transcribed my voice into the text"_).

_Facial expression caused pressure and attracted attention._ The facial expression captured from the front camera was the most useful feature (M=5.27, SD=1.81).
Most participants found it useful to monitor their facial expressions and movements through the front camera view, so that they would feel more stressed or could adjust their facial expressions accordingly (_P5: "The front camera helped me see my facial expression and mouth shape changes, and it caused more pressure" and P6: "It helped me a lot! I can see my facial expression all the time"_). However, one participant found it ineffective, as he did not care about his facial expression and only cared about his speech content (_P2: "I think it is optional"_).

_Pitch and volume with trends needed to be designed more carefully._ The effectiveness of pitch (M=4.00, SD=1.41), pitch trend (M=3.91, SD=1.44), volume (M=3.73, SD=1.54) and volume trend (M=3.91, SD=1.50) was not significant. Some participants found the pitch and volume values not representative or generalizable (_P2: "I think pitch and volume depend on the situation you are speaking in"_). In the meantime, some participants preferred to see a qualitative indication resulting from the comparison between their data and the optimal values (volume: between 50 and 65 decibels [33]; pitch: 100-120 Hz for adult males and higher for adult females [27]), similar to what they had suggested for speech rate (_P3: "Only show fast/slow/high/low instead of the number" and P4: "Pitch and volume are of little use, I need a reference value to know whether I am speaking at a high or low volume and pitch"_).

## 7. Discussion

We summarize our key contributions and then discuss our design considerations and future work in light of prior work.

### Key Contributions

We uncovered the stuttering scenarios, workarounds, and challenges of PwS in China by first analyzing four popular online forums where Chinese PwS actively participate and then conducting semi-structured interviews. Specifically, we identified the need for assistive mobile applications, general and specific Chinese stuttering situations, and the need for personalized timely feedback on their practice sessions and social support. From there we derived five design considerations: (1) The practice should focus on their stutter-triggering scenarios. (2) The practice scenarios should be carefully designed to trigger pressure and nervousness. (3) The practice scenarios should trigger users to practice stutter-triggering syllables, either implicitly or explicitly. (4) The platform should build a supportive community where PwS could give and receive timely personalized peer feedback on their practice sessions. (5) The platform should provide timely objective feedback on PwS's practice sessions for them to reflect on and adjust accordingly (e.g., speech rate and volume). We implemented these design considerations in a prototype named _CoPracTter_ and evaluated it in a deployment study where 11 PwS participants used it online simultaneously for seven days. We derived key findings and insights to inform future design: (1) Stressful stutter-triggering practice scenarios close to daily lives could help PwS practice speaking fluently in real life. (2) Timely subjective personalized peer feedback on their practice sessions could help PwS reflect on their speech by pointing out problems and providing advice, encouragement, and communication opportunities. (3) Real-time objective feedback could help PwS adjust their speech rate and facial expression. Next, we discuss each of our key findings in light of prior work and the design implications.
### Practice Scenarios

Based on our design considerations, we summarized key findings about practice scenarios in three aspects: task topic, stutter-triggering syllables, and stress.

#### Task topic - (DC1)

All participants agreed that practicing personalized tasks with stutter-triggering syllables in different stressful scenarios with our prototype could effectively help them improve their speech fluency and deal with similar situations in their daily lives. Although prior work uncovered potential task topics for PwS, only the usage of a subset of six topics was investigated (Kumar et al., 2018). Our work included all these six topics and added more Chinese-specific scenarios, such as reporting identity information when doing community-level PCR tests and reporting verification codes for receiving deliveries. Moreover, we also observed personalized preferences toward different practice task topics and the number of conversation rounds in them. Thus, it is important to allow users to tailor their practice tasks based on preferences. Moreover, cultural context should also be considered, as some scenarios (e.g., social-event-related scenarios and living habits in an area) might only work in specific cultural contexts.

#### Stutter-triggering syllables - (DC2)

Our formative study findings show that many Chinese PwS have persistent hard-to-pronounce and stutter-triggering syllables, many of which (e.g., 'da' and 'li') were not reported in prior work (Shi et al., 2018; Kumar et al., 2018). This might be caused by language differences and pronunciation habits. By embedding those stutter-triggering words in the conversation design of _CoPracTter_, we found that participants stuttered on those words and were generally satisfied with the design. Furthermore, our formative study found that merely embedding stutter-triggering words in the conversation design was not enough, because some participants thought practicing speaking a single sentence with their stutter-triggering words would be more efficient practice. Therefore, we designed some tasks without video which only asked the user to read the sentence on the screen aloud. However, our evaluation results indicated that none of the participants stuttered on these reading tasks. This does not mean that speaking a single sentence with stutter-triggering words is not beneficial for practicing speech; it only suggests that reading sentences on screen is not an effective way to train PwS to speak their stutter-triggering words. Future work should investigate better ways of designing stutter-triggering sentence practices.

#### Stress and realism - (DC3)

Prior work found that PwS would like to practice speaking under pressure (Krishnan et al., 2017). However, it remained unknown how much stress PwS would prefer to have. Another novel design of _CoPracTter_, as aforementioned in Sec 4.3.1, was using the tasks to induce different stress levels by having the person in the video make direct eye contact with the user or interrupt them while they were speaking, etc. According to our findings, different individuals had various preferences for this design. Some participants preferred tasks with high stress levels while some preferred low stress levels. Interestingly, some participants even found that the highest stress level we offered was still not stressful enough. Furthermore, participants felt that the realism of the current task was limited by its format, a pre-recorded 2D video, which could not perfectly simulate stressful daily conversations.
To make the practice experience more realistic, future research should explore using other techniques, such as VR and AR, to provide a more immersive practice environment. In addition, face-swap techniques might be leveraged to make the practice condition more realistic or stressful by showing the face of someone whom they are afraid of speaking to in the practice task videos.

### Subjective Personalized Peer Feedback - (DC4)

All PwS participants agreed that personalized timely subjective feedback on their practice sessions was beneficial for their training. Participants appreciated their peers' comments and advice on their problems, kind compliments, and encouragement (**DC4**). Although prior work also involved monitoring and feedback mechanisms, most of it focused on registering stuttering situations in real life for SLPs to better diagnose (Beng et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017) or on practicing independently without others' feedback (Krishnan et al., 2017). None of them investigated the effectiveness of peer feedback on practice sessions, which is important to Chinese PwS due to the shortage of SLPs in China (Krishnan et al., 2017). Prior work revealed that Chinese people have more negative attitudes towards their role in assisting PwS and empathy (Krishnan et al., 2017), which may result in Chinese PwS feeling lonely and eager to build relationships with other PwS. This aligns with our findings on the lack of personalized peer support and advice in online communities. Therefore, we required participants to comment in a friendly manner while gently pointing out issues. All participants felt the comments helped them, and several participants reported receiving psychological encouragement. Although encouraging comments and compliments are heart-warming, some participants found some comments perfunctory, and they would like to get practical advice rather than perfunctory compliments. An ideal comment should not only avoid reiterating, for the sake of encouragement, a problem that has already been stated many times, but also highlight important issues. Future work should analyze the characteristics of the comments in the PwS community and design learning-based AI models to moderate the comments in online communities and forums. The comments could be classified and labeled to make it more efficient for users to retrieve the information they want.

### Real-time Objective Feedback - (DC5)

We also confirmed that many objective feedback features are effective (**DC5**), such as facial expression, speech rate, and transcription. Although prior work indicated that speech-related indicators were beneficial for general speech training, how they would be utilized and perceived by PwS was not investigated (Kumar et al., 2017). We summarized the participants' usage patterns for each feature. Although participants found many types of objective feedback beneficial, many found it challenging to view multiple indicators simultaneously while practicing. Instead, while practicing they preferred to see qualitative labels (e.g., fast or slow) of their speech rather than the exact values. Thus, future work should investigate the most efficient presentation of various forms of objective information to assist real-time practice. For reflection after practice, on the other hand, many participants preferred detailed objective information that could help them review their performance.
These low rating scores of the current feature visualizations suggested that viewing trends of multiple features in parallel was overwhelming. Future work should investigate better ways to visualize such information to avoid information overload. In addition, there are many other features that users expect, such as a summarization of common characteristics in users' stutter situations, straightforward indicators of where users were blocked, and a breathing indicator, as some PwS tend to stutter after they speak for a long time and forget to inhale.

## 8. Conclusion and Future Work

In this work, we investigated the effectiveness of targeted practice scenarios and personalized timely feedback in assisting PwS in coping with stuttering and improving speech fluency. Informed by the literature and a formative study investigating user experiences and needs of PwS in China, we developed _CoPracTter_, an online support tool for PwS to practice speech fluency and receive timely feedback, through an iterative design process involving twelve PwS. A seven-day deployment study with eleven PwS simultaneously online in China revealed that these key features could assist PwS in practicing to improve speech fluency, maintaining a positive mindset, and facing similar situations in real life. In addition, we provided design implications for future work in assisting PwS to enhance their speech fluency. To our knowledge, our work is the first to design and evaluate a mobile app that integrates a rich set of personalized practice scenarios, real-time speech indicators, and timely peer feedback from real persons through a long-term multi-user simultaneous online study. Although the seven-day deployment study allowed us to derive insights into the effectiveness and user perception of the key features, the study time is still relatively short to capture the whole picture of users' usage patterns. Future work should conduct an even longer deployment study with a revised mobile app that integrates the design implications from this work.

## Acknowledgments

We thank our reviewers for their constructive feedback and PwS participants for their participation.
2303.00638
MEGA-DAgger: Imitation Learning with Multiple Imperfect Experts
Imitation learning has been widely applied to various autonomous systems thanks to recent developments in interactive algorithms that address covariate shift and compounding errors induced by traditional approaches like behavior cloning. However, existing interactive imitation learning methods assume access to one perfect expert. Whereas in reality, it is more likely to have multiple imperfect experts instead. In this paper, we propose MEGA-DAgger, a new DAgger variant that is suitable for interactive learning with multiple imperfect experts. First, unsafe demonstrations are filtered while aggregating the training data, so the imperfect demonstrations have little influence when training the novice policy. Next, experts are evaluated and compared on scenario-specific metrics to resolve the conflicting labels among experts. Through experiments in autonomous racing scenarios, we demonstrate that the policy learned using MEGA-DAgger can outperform both experts and policies learned using state-of-the-art interactive imitation learning algorithms such as Human-Gated DAgger. The supplementary video can be found at \url{https://youtu.be/wPCht31MHrw}.
Xiatao Sun, Shuo Yang, Mingyan Zhou, Kunpeng Liu, Rahul Mangharam
2023-03-01T16:40:54Z
http://arxiv.org/abs/2303.00638v3
# MEGA-DAgger: Imitation Learning with Multiple Imperfect Experts

###### Abstract

Imitation learning has been widely applied to various autonomous systems thanks to recent developments in interactive algorithms that address covariate shift and compounding errors induced by traditional approaches like behavior cloning. However, existing interactive imitation learning methods assume access to one perfect expert. Whereas in reality, it is more likely to have multiple imperfect experts instead. In this paper, we propose MEGA-DAgger, a new DAgger variant that is suitable for interactive learning with multiple imperfect experts. First, unsafe demonstrations are filtered while aggregating the training data, so the imperfect demonstrations have little influence when training the novice policy. Next, experts are evaluated and compared on scenario-specific metrics to resolve the conflicting labels among experts. Through experiments in autonomous racing scenarios, we demonstrate that the policy learned using MEGA-DAgger can outperform both experts and policies learned using the state-of-the-art interactive imitation learning algorithm. The supplementary video can be found at [https://youtu.be/pYQiPShk6dU](https://youtu.be/pYQiPShk6dU).

## I Introduction

Learning-based control methods have shown successful applications in complex robotic systems [1]. Of wide applicability is imitation learning [2, 3, 4, 5], which only requires expert demonstrations that are easy to collect at scale. Among various imitation learning techniques, interactive methods such as DAgger [6] and Human-Gated DAgger (HG-DAgger) [7] are increasingly popular as they can address the covariate shift and compounding error induced by naive behavior cloning [8]. Interactive imitation learning [6, 9, 10, 11, 7, 12] essentially involves intermittent expert feedback during novice policy training. For example, DAgger trains the novice using a mixture of labels from the expert policy and the novice policy. In order to learn a more effective policy from a human expert, HG-DAgger extends DAgger by introducing a human-gated function to decide when the expert should take over. Robot-gated methods, such as SafeDAgger [9], allow the robot to actively query the human expert and request interventions only when necessary. Nevertheless, to the best of our knowledge, all interactive methods have two key assumptions: 1. expert demonstrations are perfect; and 2. all demonstrations are from a single expert. The first assumption can barely hold in reality, since human experts usually make mistakes. For example, over 323 reportable human-driven vehicle crashes occurred each day in 2021 in the state of Pennsylvania [13]. The same source further reports that driver error accounts for \(85\%\)-\(90\%\) of all traffic crashes, which implies that human experts are often unreliable in terms of driving. The second assumption is also invalid when there are multiple experts trying to teach a novice. For example, different drivers may have different driving policies. Some might have a performant but aggressive style, whereas others may choose to be safe but conservative. If one learns a policy from different experts simultaneously, demonstration labels provided by different experts may conflict. Thus, scenarios where multiple experts exist call for new techniques in imitation learning. As illustrated in Figure 1, the two demonstrations from two experts are not perfect (left figure), and we hope to learn good behavior (right figure) from both of them and avoid undesired behavior.
In this work, we address the problem: _How can we interactively learn from multiple imperfect experts?_ Towards this end, we propose Multiple-Expert-Gated DAgger (MEGA-DAgger), a DAgger variant that is designed for learning from multiple imperfect experts. Specifically, we consider MEGA-DAgger for end-to-end autonomous racing, in which both safety and progress are crucial. We propose a control barrier function-based _safety scorer_ and filter unsafe expert demonstrations while aggregating new data. To resolve conflicts among the actions of multiple experts, we evaluate each expert based on safety and progress scores and choose the best one. We show through experiments that our proposed solution outperforms existing DAgger variants and learns a better-than-experts policy. Note that, even though our framework has only been examined in the racing case, it can be easily applied to general autonomous systems with modified case-specific metrics.

Fig. 1: Illustration of learning from multiple imperfect experts. For example, two rollouts (yellow and blue trajectories in the left figure) are from _two different experts_, respectively. Each of them has undesired behavior (red box), and ideally we can learn complementary good behavior from both of them (green trajectory in the right figure).

### _Related Work_

**Imitation learning for autonomous driving**: Previous works have successfully applied imitation learning to self-driving cars for safe driving, see, e.g., [14, 15, 4, 16]. Few studies adopt imitation learning policies for autonomous racing scenarios, which need to maintain performance in addition to safety. [17] provides a benchmark for autonomous racing using imitation learning. However, these works all assume a single expert and cannot handle the multiple-experts case. In addition, they also assume that expert demonstrations are perfect, which is relaxed in this work as well.

**Imitation learning with imperfect experts**: Learning from imperfect demonstrations has also been studied in the past few years, see, e.g., [18, 19, 20, 21]. These works focus on learning from imperfect demonstrations from a single expert using inverse reinforcement learning. Instead, we consider interactive imitation learning with multiple imperfect experts for safety-critical autonomous systems in this paper.

**Imitation learning with multiple experts**: When there are multiple experts, it is natural to raise the question: which expert should we trust? This problem has been studied in the classification setting with the assumption that the labeled dataset is provided beforehand, see, e.g., [22, 23]. However, in our work, the training process is interactive with unlabeled experts for complex autonomous systems, which makes existing literature inapplicable.

The contributions of this paper are summarized below: 1. We propose Multiple-Expert-GAted DAgger (MEGA-DAgger), a DAgger variant used for interactive imitation learning from multiple experts; 2. We develop a data filter that can strategically truncate undesired demonstrations, which significantly improves safety in autonomous racing scenarios compared with existing DAgger variants; 3. We provide metrics to evaluate each expert, and we empirically demonstrate that MEGA-DAgger can learn policies that outperform both the experts and policies learned using HG-DAgger, the state-of-the-art interactive imitation learning algorithm.
## II Background

### _DAgger and HG-DAgger_

_DAgger_ is an interactive imitation learning algorithm that aggregates new data by running the expert policy and novice policy simultaneously [6]. Specifically, at each iteration \(i\), new training data \(\mathcal{D}_{i}\) is generated by: \[\pi_{i}(x_{t})=\phi_{i}\pi_{exp}(x_{t})+(1-\phi_{i})\pi_{N_{i-1}}(x_{t}), \tag{1}\] where \(x_{t}\) is the state at time step \(t\), \(\phi_{i}\in[0,1]\) is a probability, \(\pi_{exp}\) is the expert policy, and \(\pi_{N_{i-1}}\) is the trained novice policy at iteration \(i-1\). Then, one can aggregate the dataset by \(\mathcal{D}\leftarrow\mathcal{D}\bigcup\mathcal{D}_{i}\), and a new policy \(\pi_{N_{i}}\) is trained on \(\mathcal{D}\). By allowing the novice to affect the sampling state distribution, DAgger can mitigate the covariate mismatch and compounding error caused by behavior cloning [8].

_Human-Gated DAgger (HG-DAgger)_ is a DAgger variant proposed as a more suitable interactive algorithm for the task of learning from a human expert [7]. It mainly differs by proposing the new rollout generation method: \[\pi_{i}(x_{t})=g(x_{t})\pi_{exp}(x_{t})+(1-g(x_{t}))\pi_{N_{i-1}}(x_{t}), \tag{2}\] where \(g(x_{t})\) is a gating function with value 0 if state \(x_{t}\) is safe and value 1 if state \(x_{t}\) is unsafe. HG-DAgger assumes the expert is optimal and has privileged information regarding the safety of the current state. Thus, the expert takes control only if unsafe behavior rolled out by the novice policy is observed. Compared with DAgger, HG-DAgger can achieve better sample efficiency and improved training stability.

Unlike DAgger and HG-DAgger, MEGA-DAgger is presented for the multiple non-optimal experts case. As shown in Figure 2, the overall loop is similar to HG-DAgger with three added blocks: first, in each iteration, one expert should be chosen to be the dominant expert; then, a data filter removes unsafe demonstrations, as experts are non-optimal and their behavior can also be unsafe; in addition, one needs to resolve the conflicts from multiple experts.

Fig. 2: Control loop for MEGA-DAgger. For each iteration, one expert should be chosen to be the dominant expert. The Data Filter is used to remove unsafe demonstrations and the Conflict Resolution is used to eliminate action conflicts among experts.

### _Autonomous Racing_

Autonomous racing is a competitive scenario where vehicles are driving at high speed and trying to win the race [24]. It has become increasingly popular in the past few years, since it provides a suitable scenario for the evaluation of both safety (no crash) and progress (high speed). Race cars of different scales are widely used for research and competition, such as the Indy Autonomous Challenge [25], Roborace [26], Formula Student Driverless [27], F1TENTH [28], and so on. Learning-based control methods have attracted growing attention in autonomous racing recently, see, e.g., [29, 30]. The imitation learning framework has also been successfully applied [17], since only human demonstrations are required in imitation learning and they are easy to collect. In [17], only one expert is considered and it is assumed to be perfect. However, different racers often have different and imperfect competition styles. This motivates us to learn from multiple imperfect experts and finally obtain a better-than-experts policy.

## III Methodology

Algorithm 1 provides an overview of the MEGA-DAgger algorithm.
The algorithm starts with an empty global dataset \(D\) and a randomly initialized novice policy \(\pi_{N_{0}}\) from the class of all possible policies \(\Pi\). Similar to HG-DAgger, MEGA-DAgger incorporates expert demonstrations \(D_{j}\) from each rollout \(j\) into \(D\) incrementally at each iteration, with \(j=\{1,...,M\}\), where \(M\) is the number of experts. During a rollout, the novice performs inference and controls the ego vehicle until the expert notices that the novice enters an undesired region. The expert then intervenes and takes control under this circumstance, and provides an action label \(a\) for the current observation \(o\). After guiding the ego vehicle back to the desired region, the control of the vehicle is handed over to the novice again. The pairs of observation and action for demonstrations are only recorded and collected when the expert intervenes and takes control.

```
 1: procedure MEGA-DAgger(pi_exp^{1:M})
 2:     Initialize D <- {}
 3:     Initialize pi_{N_0} to any policy in Pi
 4:     for iteration i = 1:K do
 5:         for rollout j = 1:M with expert pi_exp^j do
 6:             for timestep t in T of rollout j do
 7:                 if pi_exp^j takes control then
 8:                     o <- rollout_{i,j}^t
 9:                     a <- pi_exp^j(o)
10:                     D_j <- o, a
11:                     D_j, sigma_t <- Data Filter(D_j)
12:                 end if
13:             end for
14:             D_j <- Conflict Resolution(D_j, D, sigma_t)
15:             D <- D ∪ D_j
16:         end for
17:         Train pi_{N_i} on D
18:     end for
19: end procedure
```
**Algorithm 1** MEGA-DAgger

MEGA-DAgger considers learning from multiple imperfect experts \(\pi_{exp}^{1:M}\), which is different from other DAgger variants that assume only having access to a single optimal or near-optimal expert [6, 7]. In each iteration \(i\), we let each expert \(\pi_{exp}^{j}\) take turns to observe and to intervene if necessary in rollout\({}_{i,j}\).

**Challenges:** The scenario of multiple imperfect experts brings two major challenges. First, the expert demonstrations may not be safe. In HG-DAgger [7], safety is ensured by interventions from the optimal expert. Since such an optimal expert is missing in our context, unsafe demonstrations can be incorporated into the training dataset, which is detrimental to the novice policy and can potentially cause collisions when performing inference during autonomous racing. Moreover, as shown in Figure 3, different experts may provide drastically different labels for similar observations from adjacent states, which can consequently interfere with interpolations using the learned novice policy during inference.

**Solution for undesired demonstrations:** To mitigate the challenge of unsafe data, we design a data filter based on a _Control Barrier Function (CBF)_ [31]. The data filter takes \(D_{j}\) as input. It first checks the current LiDAR observation and gets the current ego position \((x_{t}^{e},y_{t}^{e})\), then outputs the CBF value \(h(x_{t}^{e},y_{t}^{e})\), which is defined by \[h(x_{t}^{e},y_{t}^{e})=(x_{t}^{e}-x_{t}^{p})^{2}+(y_{t}^{e}-y_{t}^{p})^{2}-\alpha^{2}, \tag{3}\] where \((x_{t}^{p},y_{t}^{p})\) is the current obstacle position (such as the opponent agent or the nearest boundary point) and \(\alpha\) is the corresponding minimal safe distance. Leveraging the result of the discrete-time CBF condition [32], we define the safety score by: \[\sigma_{t}=h(x_{t+1}^{e},y_{t+1}^{e})-(1-\gamma)h(x_{t}^{e},y_{t}^{e}),\quad 0<\gamma\leq 1. \tag{4}\] Note that a higher \(\sigma_{t}\) value indicates higher safety robustness1. Therefore, the current rollout will be immediately terminated by the data filter once the safety score \(\sigma_{t}\) becomes negative. Since the ego vehicle has likely already deviated from the desired overtaking behavior several steps before it enters the unsafe region, the \(\beta\) preceding steps are also truncated once the vehicle enters the unsafe region, in order to remove as many undesired demonstrations as possible.

Footnote 1: Constructing a valid CBF is sometimes expensive. In this work, we do not require a valid CBF, as the CBF condition here is only used to provide a heuristic safety score.
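To make the filtering step concrete, the following is a minimal Python sketch of the data filter defined by Eqs. (3) and (4). It is an illustration under our reading of the method: the data layout, function names, and the \(\gamma\) value are assumptions, not the authors' released implementation.

```python
import numpy as np

def cbf_value(ego_xy, obstacle_xy, alpha=0.42):
    # Eq. (3): squared distance to the nearest obstacle minus the
    # squared minimal safe distance alpha.
    dx = ego_xy[0] - obstacle_xy[0]
    dy = ego_xy[1] - obstacle_xy[1]
    return dx ** 2 + dy ** 2 - alpha ** 2

def safety_score(h_next, h_curr, gamma=0.5):
    # Eq. (4): discrete-time CBF condition; a negative value signals
    # that safety robustness is being lost.
    return h_next - (1.0 - gamma) * h_curr

def data_filter(demos, beta=70, gamma=0.5, alpha=0.42):
    """Truncate a rollout at the first unsafe step, also dropping the
    beta preceding (observation, action) pairs.

    demos: list of dicts with keys 'obs', 'action', 'ego_xy', 'obstacle_xy'.
    Returns the retained demonstrations and their safety scores.
    """
    h = [cbf_value(d["ego_xy"], d["obstacle_xy"], alpha) for d in demos]
    sigma = [safety_score(h[t + 1], h[t], gamma) for t in range(len(h) - 1)]
    for t, s in enumerate(sigma):
        if s < 0:  # unsafe: terminate here and erase the beta steps before it
            keep = max(t - beta, 0)
            return demos[:keep], sigma[:keep]
    return demos, sigma
```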
**Solution for conflicted demonstrations:** To resolve the conflicting labels from different experts, a function for conflict resolution is executed after each rollout \(j\) before the incorporation of \(D_{j}\) into \(D\). The conflict resolution takes \(D_{j}\), \(D\), and \(\sigma_{t}\) as inputs.

Fig. 3: Illustration of conflicted labels from different experts. Blue dots represent hit points of LiDAR scans. Red and green arrows represent unit vectors of steering angles from labels. Pink rectangles represent ego and opponent vehicles. Grey lines represent the boundaries of the race tracks.

We use _cosine similarity_ to identify and select similar observations due to its wide application in similarity detection for various sensors that are commonly seen in robotics, including LiDAR scans [33, 34] and RGBD cameras [35, 36]. To efficiently leverage parallel processing, the cosine similarities \(\Theta\) between all observations \(O_{j}\) in \(D_{j}\) and all observations \(O\) in \(D\) are calculated by the dot product of \(O\) and \(O_{j}\) divided by the element-wise multiplication of the Euclidean norms of \(O\) and \(O_{j}\): \[\Theta=\frac{O\cdot O_{j}}{\|O\|\odot\|O_{j}\|}.\] The indices of elements in \(\Theta\) that are higher than a similarity threshold \(\epsilon\) are selected to retrieve similar demonstrations from \(D\) and \(D_{j}\) for calculating the evaluation score \(\omega_{t}\). In our autonomous racing context, \(\omega_{t}\) for every similar demonstration can be calculated as the sum of the normalized safety score (safety indication) and the normalized speed of the ego vehicle (progress indication)2: \[\omega_{t}=\frac{\|\sigma_{t}\|-\min_{t}\|\sigma_{t}\|}{\max_{t}\|\sigma_{t}\|-\min_{t}\|\sigma_{t}\|}+\frac{\|v_{t}\|-\min_{t}\|v_{t}\|}{\max_{t}\|v_{t}\|-\min_{t}\|v_{t}\|}.\] Within a group of similar demonstrations, the action label of the demonstration with the highest \(\omega_{t}\) is then used to replace the action labels of all other similar demonstrations. After conflict resolution, \(D_{j}\) is merged with \(D\). Finally, a policy \(\pi_{N_{i+1}}\) is trained on \(D\) at the end of iteration \(i\).

Footnote 2: One can also choose more complex score calculation methods, such as assigning adaptive weights to the safety score and the progress score depending on preference.
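A minimal sketch of this conflict resolution step follows, under the simplifying assumption that conflicts are resolved pairwise between the new rollout and the global dataset (the description above groups all mutually similar demonstrations); the array names and the score helper are ours, not the authors' code.

```python
import numpy as np

def evaluation_score(sigma, v):
    # omega_t: normalized safety score plus normalized ego speed.
    def norm(x):
        x = np.abs(x)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    return norm(sigma) + norm(v)

def conflict_resolution(O_new, A_new, w_new, O, A, w, eps=0.99):
    """O_new/A_new/w_new: observations, action labels, and evaluation
    scores from the current rollout; O/A/w: the global dataset."""
    # Batched cosine similarities between all pairs of observations.
    theta = (O @ O_new.T) / (
        np.linalg.norm(O, axis=1, keepdims=True)
        * np.linalg.norm(O_new, axis=1, keepdims=True).T
        + 1e-8
    )
    # For every sufficiently similar pair, keep the better-scored label.
    for i, j in zip(*np.where(theta > eps)):
        if w[i] >= w_new[j]:
            A_new[j] = A[i]
        else:
            A[i] = A_new[j]
    return A_new, A
```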
## IV Experiments

In this section, we provide experimental evaluations and demonstrate that our proposed MEGA-DAgger enjoys significantly improved safety and performance compared with both the experts and HG-DAgger. All code for reproduction is available at [https://github.com/M4D-SC1ENTIST/MEGA-DAgger](https://github.com/M4D-SC1ENTIST/MEGA-DAgger).

### _Experimental Setup_

To better understand the effect of each component of MEGA-DAgger on learning with multiple imperfect experts, we apply our method to learn overtaking behavior in a two-vehicle competitive autonomous racing scenario. We use the 2D racing simulator f1tenth_gym3 [28] for our experiments. Each vehicle in the simulator takes steering angle and speed as inputs and is equipped with a 2D planar LiDAR that outputs an array of laser scans with a length of 1080. Besides the LiDAR scan, the pose of each vehicle is also accessible at each step in the simulator.

Footnote 3: [https://github.com/f1tenth/f1tenth_gym](https://github.com/f1tenth/f1tenth_gym)

For comparison, we choose HG-DAgger as our baseline, since it is a state-of-the-art interactive imitation learning algorithm and MEGA-DAgger is proposed as a step towards learning from imperfect experts based on it. During training and evaluation, each rollout is terminated either when the ego vehicle successfully overtakes the opponent, or when the ego vehicle collides with the opponent or other obstacles. The environment is reset after the termination of a rollout. The _percentage of overtakes_ and _percentage of collisions_ are chosen as our main criteria for evaluation throughout our experiments. During the evaluation, each learned policy is tested for 100 rollouts. The numbers of rollouts that overtake or collide are recorded, respectively, and divided by the total number of rollouts to calculate the percentages.

We use the winning lane-switcher strategy from the F1TENTH ICRA 2022 Race [37] both as the opponent and as the foundation for the expert planner. The lane switcher partitions the race track into two lanes. It keeps tracking one lane with Pure Pursuit until it encounters the opponent. Under this circumstance, it switches to the other unoccupied lane and tracks that lane using Pure Pursuit. Although the lane switcher is directly used as the opponent, for the expert planner the lane switcher outputs reversed steering angles to generate undesired behaviors with a probability \(P(U)\), where \(U\) denotes the event of undesired behavior. In this way, we are able to generate multiple different imperfect experts by setting different random seeds for \(P(U)\). The parameters in our experiments are listed in Table I.

### _Data Filter_

To fairly evaluate the effect of the data filter, we first disable the conflict resolution function and only use HG-DAgger with our proposed data filter to learn from one expert with various \(P(U)\) ranging from 0.1 to 1.0. In each trial, the novice policy learns from an expert with a fixed \(P(U)\) for 1000 rollouts. For comparison, we also train novice policies in the same way with only HG-DAgger. To keep the amount of demonstrations the same for a fair comparison, after each rollout we randomly truncate the demonstrations collected with plain HG-DAgger to the same amount kept by HG-DAgger with the proposed data filter. The truncation ratios for different \(P(U)\) are presented in Table II. As shown in Figure 4, using HG-DAgger with the proposed data filter shows significant improvement over using HG-DAgger with randomly truncated demonstrations. However, the performance of the novice policies learned with both methods gradually diminishes as \(P(U)\) increases.
TABLE I: Values for hyperparameters. A two-layer MLP (multi-layer perceptron) with 256 hidden units is used as the novice policy.

| Hyperparameter | Value |
| --- | --- |
| Neural network structure | \(2\times 256\) |
| Input dimension | 108 |
| Evaluation rollouts number | 100 |
| Minimal safe distance \(\alpha\) | 0.42 |
| Truncated steps \(\beta\) | 70 |
| Cosine similarity threshold \(\epsilon\) | 0.99 |

Note that since \(P(U)\) only concerns reversing the steering angle and does not guarantee collisions, the expert planner may still successfully overtake the opponent from the other side with the reversed steering angle and give good demonstrations. Therefore, even if \(P(U)\) is equal to 1.0, with the data filter getting rid of bad demonstrations strategically, the learned novice policy can still reach an overtake percentage of around \(30\%\), which is also reflected by the 18% of demonstrations remaining after going through the data filter (as shown in Table II).

### _Conflict Resolution_

To evaluate the effect of the proposed conflict resolution method, we generate 5 different experts based on the modified lane switcher with a fixed \(P(U)\) value of 0.5. For comparison, we train novice policies using MEGA-DAgger, HG-DAgger with the data filter, and HG-DAgger only, on two different maps. When using HG-DAgger with and without the data filter, the novice only learns from one expert. Each method is used for training a randomly initialized novice policy over 1000 rollouts in total, with the network being saved and evaluated every 100 training rollouts. Five trials are performed to calculate the 95% confidence interval.
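For reference, the imperfect experts used in this evaluation can be emulated with a thin wrapper around the base planner; the sketch below assumes a hypothetical `plan(obs) -> (steer, speed)` interface and is not the released experiment code.

```python
import numpy as np

class ImperfectExpert:
    """Lane switcher that reverses its steering command with
    probability p_u, injecting undesired behavior (event U)."""

    def __init__(self, planner, p_u=0.5, seed=0):
        self.planner = planner
        self.p_u = p_u
        self.rng = np.random.default_rng(seed)  # the seed differentiates experts

    def act(self, obs):
        steer, speed = self.planner.plan(obs)
        if self.rng.random() < self.p_u:
            steer = -steer  # undesired behavior: reversed steering angle
        return steer, speed

# Five distinct imperfect experts with fixed P(U) = 0.5:
# experts = [ImperfectExpert(lane_switcher, p_u=0.5, seed=s) for s in range(5)]
```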
**Better than HG-DAgger**: Figure 5 shows the maps and the experimental results of the learned policies. MEGA-DAgger has about 45% average improvement on both overtaking and collision avoidance compared with vanilla HG-DAgger, and about 15% average improvement compared with HG-DAgger with the data filter. When only using HG-DAgger, the novice can barely learn from demonstrations from multiple imperfect experts. Although, compared with vanilla HG-DAgger, the data filter and conflict resolution functions demonstrate noticeable improvements on overtaking and collision avoidance, both metrics gradually decrease after reaching a peak at around 300 training rollouts. This indicates that MEGA-DAgger is able to reduce the amount of unsafe and conflicting demonstrations rather than completely eliminate them. As more demonstrations are incorporated into the global dataset, the effect of MEGA-DAgger slowly decays.

**Better than experts**: We empirically attribute the improved performance of MEGA-DAgger over HG-DAgger with the data filter to learning from complementary good demonstrations from different experts. By visualizing the collision points of a learned policy using MEGA-DAgger and of all experts over 200 evaluation rollouts (as shown in Figure 6), we can see that each expert frequently collides in different regions of the map, and the learned policy can learn complementary good behavior from them and collide less.

TABLE II: Ratios of removed demonstrations with different \(P(U)\), where \(r_{\beta}\) denotes the ratio of removed demonstrations among all collected demonstrations.

| \(P(U)\) | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(r_{\beta}\) | 0.22 | 0.25 | 0.28 | 0.32 | 0.34 | 0.38 | 0.49 | 0.57 | 0.69 | 0.82 |

Fig. 4: The effect of the data filter on the overtake rate (above) and collision rate (below), respectively. The results with different undesired behavior probabilities \(P(U)\) are presented.

Fig. 5: Comparison of MEGA-DAgger, HG-DAgger with data filter, and HG-DAgger on two different maps. The left and right columns show the results on Map 1 and Map 2, respectively. Policy networks are saved every 100 training rollouts for evaluation. Each plot is an average of 5 experiments, and the shaded region represents 95% confidence intervals.

Table III shows that MEGA-DAgger is better than all experts, with 13.6% and 13.2% average improvements on collision avoidance and overtaking, respectively. Also, we find that the trained policy from MEGA-DAgger is more stable (smaller standard deviations) than the experts. Since MEGA-DAgger is able to resolve conflicts by picking the best action under similar observations, as illustrated in Figure 3, it can effectively leverage the complementary nature of the collision points, resolve conflicts, and learn a better-than-experts policy as a result.

## V Conclusion and Discussion

While interactive imitation learning methods, such as DAgger and its variants, have been successfully applied to many autonomous systems, they all assume access to one optimal expert. However, it is more likely to only have access to multiple non-optimal experts. In this paper, we study how to make effective use of such experts through interactive imitation learning. Specifically, MEGA-DAgger, a new DAgger variant, is proposed to filter unsafe demonstrations and resolve conflicts among experts. Through thorough experiments on end-to-end autonomous racing, we demonstrate that MEGA-DAgger has improved safety and performance relative to HG-DAgger. We also show that MEGA-DAgger can learn a better-than-experts policy. It is worth noting that we use both the progress score and the safety score to heuristically evaluate demonstrations, but they are not used as training feedback. This is different from the _reward function_ in the reinforcement learning context, which is used to guide the training process and usually needs to be carefully designed. One interesting direction for future work is to automatically learn confidence scores to evaluate and compare actions from experts. Also, we plan to conduct experiments on real-world autonomous vehicles and try to reduce the sim-to-real gap.
We train 10 preliminary policies using HG-DAgger with the data filter for different \(\beta\) values ranging from 0 to 100 steps and find that setting \(\beta\) to 70 works best for our scenario (best in terms of both overtake rate and collision rate), as shown in Figure 7.

## Acknowledgment

The authors thank Derek Zhou and Zhijun Zhuang for their diligent help with the experiments, and Luigi Berducci, Nandan Tumu, and Hongrui Zheng for useful discussions.
2305.15805
Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers
Autoregressive Transformers adopted in Large Language Models (LLMs) are hard to scale to long sequences. Despite several works trying to reduce their computational cost, most LLMs still adopt attention layers between all pairs of tokens in the sequence, thus incurring a quadratic cost. In this study, we present a novel approach that dynamically prunes contextual information while preserving the model's expressiveness, resulting in reduced memory and computational requirements during inference. Our method employs a learnable mechanism that determines which uninformative tokens can be dropped from the context at any point across the generation process. By doing so, our approach not only addresses performance concerns but also enhances interpretability, providing valuable insight into the model's decision-making process. Our technique can be applied to existing pre-trained models through a straightforward fine-tuning process, and the pruning strength can be specified by a sparsity parameter. Notably, our empirical findings demonstrate that we can effectively prune up to 80\% of the context without significant performance degradation on downstream tasks, offering a valuable tool for mitigating inference costs. Our reference implementation achieves up to $2\times$ increase in inference throughput and even greater memory savings.
Sotiris Anagnostidis, Dario Pavllo, Luca Biggio, Lorenzo Noci, Aurelien Lucchi, Thomas Hofmann
2023-05-25T07:39:41Z
http://arxiv.org/abs/2305.15805v3
# Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers

###### Abstract

Autoregressive Transformers adopted in Large Language Models (LLMs) are hard to scale to long sequences. Despite several works trying to reduce their computational cost, most LLMs still adopt attention layers between all pairs of tokens in the sequence, thus incurring a quadratic cost. In this study, we present a novel approach that dynamically prunes contextual information while preserving the model's expressiveness, resulting in reduced memory and computational requirements during inference. Our method employs a learnable mechanism that determines which uninformative tokens can be dropped from the context at any point across the generation process. By doing so, our approach not only addresses performance concerns but also enhances interpretability, providing valuable insight into the model's decision-making process. Our technique can be applied to existing pre-trained models through a straightforward fine-tuning process, and the pruning strength can be specified by a sparsity parameter. Notably, our empirical findings demonstrate that we can effectively prune up to 80% of the context without significant performance degradation on downstream tasks, offering a valuable tool for mitigating inference costs. Our reference implementation achieves up to \(2\times\) increase in inference throughput and even greater memory savings.

+ Footnote †: Correspondence [email protected].

## 1 Introduction

The introduction of Transformers (Vaswani et al., 2017) in Large Language Models (LLMs) has profoundly influenced the landscape of Natural Language Processing (NLP), due to their appealing scaling properties (Kaplan et al., 2020) and their ability to train efficiently on modern hardware architectures designed for extensive parallel computing. As LLMs grow larger and more complex, the challenges associated with training and deploying them become more prominent. Especially challenging is the quest for processing increasingly longer sequences, as pure self-attention layers scale quadratically in sequence length during training and inference. To address this limitation, several efforts focus on efficient implementations of the attention mechanism on dedicated hardware (Dao et al., 2022; Touvron et al., 2023), or on algorithmic procedures to directly tackle the quadratic complexity. The latter direction has led to numerous variants sacrificing the generality of the standard attention mechanism in favor of more efficient alternatives (Tay et al., 2020, Kitaev et al., 2020, Choromanski et al., 2020, Katharopoulos et al., 2020, Zaheer et al., 2020, Shi et al., 2021, Lin et al., 2022, Zhu and Soricut, 2021, Dai et al., 2020), some of which are illustrated in Fig. 1. Specifically, a large number of these methods focus either on sparsifying the attention weights, reducing the size of the available context to each token, or compressing the number of tokens to reduce the size of the attention matrix. These methods, however, are inherently static, in the sense that each token is either forced to attend to a fixed pre-specified context window, or the input context is compressed to a fixed dimensionality, regardless of the information content of the input sequence.
Furthermore, a performance gap still exists with respect to pure self-attention in many applications, thus implying the existence of a non-trivial trade-off between the span of the attention context and the model's capabilities (Dao et al., 2022, Sun et al., 2021, Beltagy et al., 2020). To address these challenges and enhance inference efficiency while staying faithful to pure self-attention, we pose the following question: _Can we dynamically prune past content based on the available context, while preserving as much as possible the expressivity of the model?_ In response to this question, we introduce a novel method for context pruning in Transformer-based decoder architectures. Our approach adds a minimal amount of additional training parameters that enable individual tokens to dynamically remove portions of the input sequence in a layer-wise fashion. Once part of the context is removed, it is disregarded for the remaining part of the autoregressive generation process, leading to reduced memory usage and computational requirements during inference. To this end, we also design a dynamic data structure that implements efficient insertion/removal of tokens from the context while supporting batched inference. In contrast to traditional methods relying on local or sparse attention, which may not capture the nuances and dynamic nature of the data over long contexts, ours leverages contextual cues to dynamically determine the relevance of the available information through a learned mechanism. This is achieved by making use of a sparse sigmoid function (Peters et al., 2019, Martins et al., 2020). As demonstrated by our experimental evaluations, this allows us to extract and utilize essential details in a more adaptive and accurate manner. The degree of pruning can be effectively controlled through a hyperparameter that accounts for the sparsity level. Our technique serves as a modular building block for existing pre-trained models and can be easily integrated through a minimal fine-tuning stage. For our study, we focus on GPT-2 models (Radford et al., 2019) as they are publicly available and widely benchmarked, but due to the uniformity of modern architectures, our approach can be straightforwardly extended to any autoregressive Transformer. Moreover, since our method is based on context pruning, it can be seamlessly combined with other approaches aimed at improving inference efficiency, such as quantization, weight pruning, approximate attention, or other hardware optimizations. We find that up to \(80\%\) of the context can be successfully pruned, with minimal deterioration in terms of perplexity and zero-shot performance, while requiring significantly fewer resources during inference. We showcase how these improvements can lead to measurable practical gains by providing an efficient implementation that reduces memory usage for caching during token generation. More specifically, for larger context sizes we get up to \(50\%\) wall-time latency reduction for each generation step, while still decoding with up to \(2\times\) larger batch sizes, thus leading to significant performance benefits.

Figure 1: Visualization of the causal attention weights associated with standard, local, sparse causal attention, and our approach. Adaptively sparse attention (rightmost) prunes weights dynamically for each token, and it does not impose any restricting inductive biases on the final attention structure.
These findings highlight the potential of context pruning as a powerful technique to enhance the efficiency and interpretability of Transformers in NLP.

## 2 Related Work

Despite exhibiting human-level performance on a number of challenging tasks, LLMs are resource-intensive and inefficient. While the human brain consumes roughly the amount of energy equivalent to a dim light bulb, top-performing GPT models require multiple GPUs with \(\sim\)80GB of memory each for inference (Strubell et al., 2019; Frantar and Alistarh, 2023). Several research efforts have been focusing on improving their efficiency and memory requirements from several different angles.

Weight Pruning and Quantization. Modern LLMs have high memory and compute requirements for both training and testing. To address this limitation, a number of research efforts (Kwon et al., 2022; Frantar et al., 2023; Frantar and Alistarh, 2023) have resorted to the established practice of weight pruning (Hassibi et al., 1993) to efficiently compress the original model to a more manageable size. Remarkably, a large percentage of the original weights can be safely removed, resulting in only marginal perplexity growth (Bahl et al., 1983). An alternative approach to reducing memory and compute is quantization (Dettmers et al., 2022; Yao et al., 2022; Xiao et al., 2022; Frantar et al., 2022), which reduces the precision of the model's numerical representation. Quantization schemes (Dettmers et al., 2022) enable 8-bit matrix multiplication for both feed-forward and attention projection layers, resulting in significantly improved memory allocation without incurring any performance degradation.

Efficient Transformers and context pruning. One primary constraint of Transformer-based models is their quadratic complexity with respect to the length of the input sequence. Extensive research explores alternatives that exhibit sub-quadratic scaling, resulting in three main strategies (Lin et al., 2022). The first replaces the attention mechanism with an alternative operation that features more favorable scaling with the input sequence length (Peng et al., 2021; Katharopoulos et al., 2020; Choromanski et al., 2020; Schlag et al., 2021). While several recent methods in this category show promise, none have emerged as a definitive winner, and most state-of-the-art language models still rely on the standard attention mechanism (Touvron et al., 2023; Chowdhery et al., 2022). The second approach proposes to compress the length of the input context, controlling the complexity of the attention operation but unavoidably sacrificing potentially relevant information from the original input (Lee et al., 2019; Wang et al., 2020; Jaegle et al., 2021). The third approach involves pruning the attention matrix, preventing each token from attending to every other token within the context (Zaheer et al., 2020; Martins et al., 2020; Lee et al., 2023). This line of research is motivated by the theoretical finding that sparse Transformers retain the expressivity of their dense counterparts (Yun et al., 2020). Many methods in this category employ specially designed attention masks that aim to zero out as many entries as possible, often based on principles of locality, randomness, or a combination of both. The main drawback of these methods is their mostly static nature, meaning that every token is compelled to attend to a fixed context window and disregard the rest of the context, regardless of its specific role within the input sequence.
Our approach falls within this last category and enables dynamic sparsification of the attention matrix for decoder models, without resorting to any potentially restricting inductive biases about its structure.

Implementation Speed-up. Recently, hardware-optimized implementations (Dao et al., 2022; Touvron et al., 2023) have been proposed with the aim of optimizing computational resources during the training phase of Transformers (Hoffmann et al., 2022). On the other hand, as recent breakthroughs have led to widespread adoption of these models (Ouyang et al., 2022; OpenAI, 2023; Kopf et al., 2023), performance during inference becomes more relevant by the day. In decoder-based autoregressive Transformers, the backbone architecture of most current state-of-the-art LLMs, inference involves evaluating and generating tokens one by one, using cached previous activations to avoid redundant computations. In contrast to training, inference is memory-bound (Shazeer, 2019; Ivanov et al., 2021; Pope et al., 2022). Compute is under-utilized, especially when deploying larger models, as the time required to transfer model parameters and activations to hardware memory far exceeds the actual computation time. This is further exacerbated by the recent trends of ever-increasing model sizes and longer context windows. As a result, batch decoding, a promising direction for more efficient utilization of hardware resources, is impeded.

## 3 Methodology

Background. We operate on sequences of text tokens \(\mathbf{T}\in\{0,1,\ldots,n_{\text{vocab}}\}^{n}\), where \(n\) is the length of the sequence and \(n_{\text{vocab}}\) is the vocabulary size. Tokens are embedded into \(\mathbf{X}^{0}\in\mathbb{R}^{n\times d}\) using an embedding layer, where \(d\) is the embedding dimension of the model. When necessary, we use the superscript \(\ell\in\{1,2,\ldots,L\}\) to denote the representations and weights at different layers. One layer of the Transformer-decoder architecture (Vaswani et al., 2017) is defined as \[\mathbf{X}=\text{MHA}(\text{LayerNorm}(\mathbf{X}^{\ell-1}))+\mathbf{X}^{\ell-1}, \tag{1}\] \[\mathbf{X}^{\ell}=\text{FF}(\text{LayerNorm}(\mathbf{X}))+\mathbf{X}, \tag{2}\] where MHA stands for multi-head self-attention, defined as \[\text{MHA}(\mathbf{X})=\text{Concatenate}(\text{head}_{1}(\mathbf{X}),\text{head}_{2}(\mathbf{X}),\ldots,\text{head}_{h}(\mathbf{X}))\mathbf{W}_{O},\quad\text{where} \tag{3}\] \[\text{head}_{i}(\mathbf{X})=\text{SA}\left(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i}\right). \tag{4}\] Here \(\mathbf{Q}_{i}=\mathbf{X}\mathbf{W}_{Q_{i}}\), \(\mathbf{K}_{i}=\mathbf{X}\mathbf{W}_{K_{i}}\), and \(\mathbf{V}_{i}=\mathbf{X}\mathbf{W}_{V_{i}}\) are the queries, keys and values, and SA denotes single-head self-attention. The weight matrices \(\mathbf{W}_{Q_{i}},\mathbf{W}_{K_{i}},\mathbf{W}_{V_{i}}\in\mathbb{R}^{d\times p}\) linearly project the input embedding into the head dimension \(p\). Finally, \(\mathbf{W}_{O}\in\mathbb{R}^{d\times d}\) is the output projection. The feed-forward part of the Transformer is defined as \[\text{FF}(\mathbf{X})=\sigma_{\text{FF}}(\mathbf{X}\mathbf{W}_{F_{1}})\mathbf{W}_{F_{2}}, \tag{5}\] where \(\sigma_{\text{FF}}\) is a nonlinearity, and \(\mathbf{W}_{F_{1}},\mathbf{W}_{F_{2}}\) are linear layers with typical dimensions \(\mathbf{W}_{F_{1}}\in\mathbb{R}^{d\times 4\cdot d}\) and \(\mathbf{W}_{F_{2}}\in\mathbb{R}^{4\cdot d\times d}\). A final projection layer \(\mathbf{W}_{\text{logits}}\in\mathbb{R}^{d\times n_{\text{vocab}}}\) is used to project back to the vocabulary space and predict the next token from the representations \(\mathbf{X}^{L}\).
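For concreteness, a single Pre-LN decoder layer assembling Eqs. (1)-(5) can be sketched in PyTorch as follows; this is a generic illustration (dropout, biases, and the choice of GELU for \(\sigma_{\text{FF}}\) are our assumptions), not the authors' implementation.

```python
import torch
import torch.nn as nn

class PreLNDecoderLayer(nn.Module):
    """Minimal Pre-LN Transformer decoder layer, per Eqs. (1)-(5)."""
    def __init__(self, d, h):
        super().__init__()
        assert d % h == 0
        self.h, self.p = h, d // h               # heads and head dimension
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.qkv = nn.Linear(d, 3 * d, bias=False)  # W_Q, W_K, W_V stacked
        self.out = nn.Linear(d, d, bias=False)      # W_O
        self.ff = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):                        # x: (batch, n, d)
        b, n, d = x.shape
        q, k, v = self.qkv(self.ln1(x)).chunk(3, dim=-1)
        split = lambda t: t.view(b, n, self.h, self.p).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        att = (q @ k.transpose(-2, -1)) / self.p ** 0.5
        causal = torch.triu(torch.ones(n, n, dtype=torch.bool, device=x.device), 1)
        att = att.masked_fill(causal, float("-inf")).softmax(dim=-1)
        y = (att @ v).transpose(1, 2).reshape(b, n, d)
        x = x + self.out(y)                      # Eq. (1)
        return x + self.ff(self.ln2(x))          # Eq. (2)
```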
A final projection layer \(\mathbf{W}_{\text{logits}}\in\mathbb{R}^{d\times n_{\text{vocab}}}\) is used to project back to the vocabulary space and predict the next token from the representations \(\mathbf{X}^{L}\). We are focusing on Pre-LN (Xiong et al., 2020) decoder-only architectures, meaning that attention is causally masked, i.e. every input token \(i\) attends to the first \(i\) tokens in the input sequence. Conceptually, our method acts by predicting these attention masks using a learned mechanism in a layer-wise manner, with the introduction of additional constraints to make sure causality is preserved (i.e. if a token is dropped, it will remain dropped in the future). During inference, however, our method can efficiently be implemented by erasing tokens from the key-value cache commonly adopted in autoregressive attention models.

**Background: key-value cache.** In autoregressive Transformers, inference can be optimized by reusing pre-computed activations (keys and values) to accelerate the sequential generation of tokens (Ott et al., 2019; Vaswani et al., 2018; Wolf et al., 2020), bringing the computational cost of generating a single token down from \(\mathcal{O}(n^{2})\) to \(\mathcal{O}(n)\) (where \(n\) is the sentence length). Most existing sparse attention techniques ignore the specifics of this process and focus on sparsifying each attention operation separately. As non-attended tokens can still be attended to by subsequent tokens, memory benefits are limited. By contrast, our approach is compatible with this setting, allowing us to design an efficient batched data structure where dropped tokens are effectively removed from the computation.

### Adaptively Sparse Attention

We allow the network to selectively drop parts of the context that are no longer required. An illustration of our proposed method can be seen in Fig. 2.

Figure 2: We illustrate the state of the memory buffer at the start of each iteration for our proposed approach. Dropped tokens are irrelevant for any subsequent generation step and their cached activations are erased. Since self-attention is a set operation, the buffer (keys/values) of the dropped tokens can be reused by subsequent tokens, ensuring that the data structure is as packed as possible.

At each layer, we introduce the parameters \(\mathbf{W}_{Q_{\text{int}}}^{\ell},\mathbf{W}_{K_{\text{int}}}^{\ell}\in\mathbb{R}^{d\times r}\) for dimension \(r\in\mathbb{N}\), which compute the interaction queries and keys \(\mathbf{Q}_{\text{int}}^{\ell},\mathbf{K}_{\text{int}}^{\ell}\in\mathbb{R}^{n\times r}\), as \(\mathbf{Q}_{\text{int}}^{\ell}=\mathbf{X}^{\ell}\mathbf{W}_{Q_{\text{int}}}^{\ell}\) and \(\mathbf{K}_{\text{int}}^{\ell}=\mathbf{X}^{\ell}\mathbf{W}_{K_{\text{int}}}^{\ell}\). We then calculate the _interaction_ of token \(k\) with token \(j\) at layer \(\ell\) as:

\[\mathbf{I}_{k,j}^{\ell}=\begin{cases}\prod_{n=j+1}^{k}\overline{\mathbf{I}}_{n,j}^{\ell},\ \text{where}\ \overline{\mathbf{I}}_{n,j}^{\ell}=\sigma\left(\frac{(\mathbf{Q}_{\text{int}}^{\ell})_{n}^{\top}(\mathbf{K}_{\text{int}}^{\ell})_{j}}{\sqrt{r}}+\beta^{\ell}\right),&\text{if }j<k,\\ 1,&\text{if }j=k,\\ 0,&\text{if }j>k,\end{cases} \tag{6}\]

where \(\sigma(\cdot)\) denotes the sparse sigmoid function introduced in Section 3.2 and \(\beta^{\ell}\in\mathbb{R}\) is a scalar parameter per layer that controls the initial sparsity, as seen in Fig. 3 (right). Indices in \(\mathbf{Q}_{\text{int}}^{\ell},\mathbf{K}_{\text{int}}^{\ell}\in\mathbb{R}^{n\times r}\) refer to the rows of the matrices.
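The following NumPy sketch spells out Eq. (6) for a single layer; it is our own illustration, with a plain logistic standing in for the sparse sigmoid \(\sigma(\cdot)\) of Section 3.2. The cumulative product along \(n=j+1,\ldots,k\) is what makes dropping irreversible.

```
import numpy as np

def interaction_matrix(X, W_Qint, W_Kint, beta):
    # Eq. (6): I[k, j] = prod_{n=j+1..k} sigma(Q_int[n] . K_int[j] / sqrt(r) + beta).
    n, _ = X.shape
    Q_int, K_int = X @ W_Qint, X @ W_Kint               # interaction queries/keys, (n, r)
    r = Q_int.shape[-1]
    sigma = lambda z: 1.0 / (1.0 + np.exp(-z))          # stand-in for the sparse sigmoid
    I_bar = sigma(Q_int @ K_int.T / np.sqrt(r) + beta)  # pairwise drop scores
    I = np.zeros((n, n))
    for j in range(n):
        I[j, j] = 1.0                                   # a token never drops itself
        # Once a factor reaches 0, token j stays dropped for all later positions k.
        I[j + 1:, j] = np.cumprod(I_bar[j + 1:, j])
    return I                                            # entries with j > k remain 0
```

\(\log(\mathbf{I}^{\ell})\) is then added to the attention logits, with \(\log 0=-\infty\) recovering a hard mask, as formalized next.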
We can then modify the self-attention

\[\text{SA}(\mathbf{Q}_{i}^{\ell},\mathbf{K}_{i}^{\ell},\mathbf{V}_{i}^{\ell})=\text{softmax}\left(\frac{\mathbf{Q}_{i}^{\ell}(\mathbf{K}_{i}^{\ell})^{\top}}{\sqrt{p}}+\log(\mathbf{I}^{\ell})\right)\mathbf{V}_{i}^{\ell}. \tag{7}\]

For \(j>k\) we set \(\mathbf{I}_{k,j}^{\ell}=0\), which masks entries in the self-attention and corresponds to the regular causal masking. We also impose that a token cannot drop itself, thus \(\mathbf{I}_{k,k}^{\ell}=1\). We want to preserve information regarding the current token, as its predictions are particularly important in determining the next token for the regular language modeling task that we are considering. Small values of \(\overline{\mathbf{I}}_{n,j}^{\ell}\) impose partial masking of the corresponding token in the attention, and complete masking occurs when \(\overline{\mathbf{I}}_{n,j}^{\ell}=0\). The cumulative product over tokens \(j+1\to k\) in Eq. (6) imposes that dropping a token (when \(\sigma\left(.\right)\to 0\)) has an irreversible effect, as it will remain dropped for all subsequent tokens, and hence for the remainder of the generation process. The complexity of the pruning logic is \(\mathcal{O}(n\cdot d\cdot r+n^{2}\cdot r)\), which is lower than that of the self-attention operation for \(r<d\). Our mechanism allows layers to act independently, meaning that different sparsity patterns are encountered across layers. We also experimented with tying the model's dropping decisions across depth by imposing that a token dropped at a given layer cannot be attended to in subsequent layers. However, we observed worse results and hence did not pursue this further. This is perhaps expected, given numerous results and interpretability studies regarding sparsity patterns of attention heads at different layers (Ramsauer et al., 2020; Hao et al., 2021).

### Sparse Sigmoid

In Eq. (6), we use \(\sigma(\cdot)\) as a sigmoid-like function to let the network decide when and what to drop. We favour binary decisions, leading to interaction values of either \(0\) or \(1\). Inspired by the \(\alpha\)-entmax function introduced in Peters et al. (2019); Martins et al. (2020), we define the \(\alpha\)-sigmoid (based on the entropies proposed by Tsallis (1988)) as:

\[\sigma(x)=\alpha\text{-sigmoid}(x)=\text{argmax}_{p\in[0,1]}\left(p\cdot x+H_{\alpha}(p)\right), \tag{8}\]

where

\[H_{\alpha}(p)=\begin{cases}\frac{1}{\alpha(\alpha-1)}\left(p-p^{\alpha}+(1-p)-(1-p)^{\alpha}\right),&\text{if }\alpha\neq 1,\\ -p\log p-(1-p)\log(1-p),&\text{if }\alpha=1.\end{cases} \tag{9}\]

By varying \(\alpha\) during training, we can control the sparsity in the network, i.e. regulate the softness of the pruning mechanism. In practice, we start from small values of \(\alpha=1\) and increase \(\alpha\) according to a cosine scheduler, as shown in Fig. 3.

Figure 3: (Left) We use a cosine scheduler to set the values of \(\alpha\) during training. (Middle) For values of \(\alpha>1\), mappings of the \(\alpha\)-sigmoid saturate at \(\pm 1/(\alpha-1)\). During inference, we replace the \(\alpha\)-sigmoid with a step function, which corresponds to the case \(\alpha\to\infty\). (Right) Distribution of \(\mathbf{I}_{k,j}^{\ell}\) for different values of \(\beta^{\ell}\) with respect to the distance between the tokens \(k-j\). For this depiction, we assume random normally distributed vectors as inputs and randomly initialized weights \(\mathbf{W}_{Q_{\text{int}}}^{\ell},\mathbf{W}_{K_{\text{int}}}^{\ell}\), according to ‘He’ initialization (He et al., 2015).
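As a sanity check on Eqs. (8)-(9), the sketch below evaluates the \(\alpha\)-sigmoid numerically. The bisection on the (monotone) stationarity condition is our own way of solving the one-dimensional problem, not the paper's implementation; \(\alpha=1\) reduces to the logistic sigmoid, \(\alpha=2\) has the closed form \(\text{clip}((x+1)/2,0,1)\) (saturating at inputs \(\pm 1/(\alpha-1)=\pm 1\)), and \(\alpha\to\infty\) gives the step function used at inference.

```
import numpy as np

def alpha_sigmoid(x, alpha, iters=60):
    # argmax_{p in [0,1]} p*x + H_alpha(p); stationarity reads
    # x = (p^(alpha-1) - (1-p)^(alpha-1)) / (alpha - 1), solved by bisection.
    x = np.asarray(x, dtype=float)
    if alpha == 1.0:
        return 1.0 / (1.0 + np.exp(-x))        # ordinary logistic sigmoid
    if np.isinf(alpha):
        return (x > 0).astype(float)           # step function (inference)
    g = lambda p: (p ** (alpha - 1) - (1 - p) ** (alpha - 1)) / (alpha - 1)
    lo, hi = np.zeros_like(x), np.ones_like(x)
    for _ in range(iters):
        mid = (lo + hi) / 2
        below = g(mid) < x                     # objective still increasing -> move right
        lo = np.where(below, mid, lo)
        hi = np.where(below, hi, mid)
    return (lo + hi) / 2

x = np.linspace(-3, 3, 13)
assert np.allclose(alpha_sigmoid(x, 2.0), np.clip((x + 1) / 2, 0, 1), atol=1e-6)
```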
Small values of \(\alpha\) allow meaningful gradient signals to pass through the dropping mechanism, which is crucial at the beginning of training. On the other hand, larger values of \(\alpha\) lead to the sparse results desired during inference. We thus increase \(\alpha\) to values leading to very sparse solutions, as illustrated in Fig. 3. In practice, during inference, we replace \(\sigma(\cdot)\) with the step function, which corresponds to \(\alpha\rightarrow\infty\). We also initialize the bias parameters \(\beta^{\ell}\) in (6) to a positive value, ensuring that tokens at the beginning of training have a prior towards not being dropped. This strategy also facilitates fine-tuning existing pretrained models, as our module will initially default close to the identity function. The \(\alpha\)-sigmoid along with the training schedule on \(\alpha\) allows for good signal propagation properties for the gradients (Noci et al., 2022). We also explored using a regular sigmoid with a varying temperature (Kim et al., 2022), which led to suboptimal non-binary predictions and instabilities during training. Training with our sparse sigmoid also directly eliminates the need for any auxiliary network (Lee et al., 2023).

### Regularized Objective

We augment the regular language modeling objective with a regularization that incentivizes the network \(f\) to drop parts of the sequence. We fine-tune pretrained models, with parameters \(\theta\), using the objective:

\[L(\theta,\mathbf{T})=L_{lm}(\theta,\mathbf{T})+L_{sparsity}(\theta,\mathbf{T}), \tag{10}\]

where

\[L_{lm}(\theta,\mathbf{T})=\text{CE}(f_{\theta}(\mathbf{T}),\text{shift}(\mathbf{T})) \tag{11}\]

is the regular cross-entropy loss for the language modeling task based on the original and shifted input tokens \(\mathbf{T}\), and

\[L_{sparsity}(\theta,\mathbf{T})=\gamma\,\frac{2}{L\,n(n-1)}\sum_{\ell}\sum_{i>j}\mathbf{I}_{i,j}^{\ell} \tag{12}\]

is the sparsity loss, encouraging the model to prune the context. In total, \((L\,n(n-1))/2\) entries of \(\mathbf{I}_{i,j}^{\ell}\) are learned, as indicated in Eq. (6). We choose \(\gamma>0\) to enforce different levels of sparsity. In general, for a current position \(i\) in the context, we define sparsity as the percentage of the previous tokens dropped, i.e. \((\#\text{tokens}\leq i\text{ dropped})/i\).

## 4 Experiments

We fine-tune pretrained GPT-2 models, which support a context size of up to 1024 tokens, on a subset of the English Wikipedia _20220301.en_ and English _bookcorpus_ datasets. We keep a separate test set on which we report perplexity after training. All models shown were, for a fair comparison, fine-tuned using the same lightweight training setup as described in Appendix A. When using our adaptive sparse attention, we use a cosine scheduler for the \(\alpha\) parameter as displayed in Fig. 3 and set \(r=64\) for the dimensions of \(\mathbf{W}_{Q_{\text{int}}}^{\ell},\mathbf{W}_{K_{\text{int}}}^{\ell}\). More ablations regarding optimization and variations of our dropping mechanism are provided in Appendix B. Unless otherwise stated, results refer to GPT-2-_small_ models. We use the term _dense_ for the regular GPT-2 models, fine-tuned without any additional \(\mathbf{W}_{Q_{\text{int}}},\mathbf{W}_{K_{\text{int}}}\) parameters.

**Baselines.** We compare against the baselines presented in Fig. 1. _Local Attention_ refers to a causal attention mask where each token attends to the previous \(k\) tokens in the sequence, including itself. This can also be interpreted as restricting the receptive field of the model. _Sparse Attention_ refers to the baselines from Child et al. (2019); Lin et al. (2022), where each token \(i\) attends to (1) the tokens \(j\) satisfying \(\left\lfloor i/k\right\rfloor=\left\lfloor j/k\right\rfloor\) and (2) the tokens \(k-1,2\cdot k-1,\ldots,\left\lfloor i/k\right\rfloor\cdot k-1\) (numbering starts from zero).
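For concreteness, a small sketch of the two baseline masking rules just described (our own rendering; `k` is the window/block parameter and `n` the sequence length):

```
import numpy as np

def local_attention_mask(n, k):
    # Each token i attends to the previous k tokens, including itself.
    M = np.zeros((n, n), dtype=bool)
    for i in range(n):
        M[i, max(0, i - k + 1): i + 1] = True
    return M

def strided_sparse_mask(n, k):
    # Child et al. (2019): i attends to j <= i if (1) they share a block,
    # floor(i/k) == floor(j/k), or (2) j is a summary position k-1, 2k-1, ...
    M = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1):
            if i // k == j // k or j % k == k - 1:
                M[i, j] = True
    return M
```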
We fine-tune these baselines using the same aforementioned fine-tuning procedure, for different choices of \(k\), leading to different levels of sparsity depending on the current context size.

**Data structure.** Real-world deployment of our approach exhibits numerous challenges due to the nature of batched generation. In particular, we highlight differences in prompt length (initial prefix), different final lengths (termination criteria), and uneven dropping of tokens across different sentences. Maximum performance is achieved when the key-value cache is represented as a contiguous block of memory, and any masking resulting from padding or removed tokens ("holes") will result in a decrease in efficiency. To this end, we devise an efficient batched data structure that allows for efficient insertion and deletion of tokens (leveraging the set nature of the self-attention operation), while _(i)_ allowing the underlying storage to be processed as a contiguous block of memory and _(ii)_ ensuring that the load factor of the data structure is high enough to guarantee a performance speed-up. More details are provided in Appendix A.
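A minimal sketch of the packed buffer idea described above (our own illustration, not the actual implementation): because self-attention is a set operation, the slot freed by a dropped token can immediately be reused by a newly generated token, keeping the cache contiguous.

```
import numpy as np

class PackedKVCache:
    # Fixed-capacity key/value storage; dropped tokens free their slots for reuse.
    def __init__(self, capacity, d):
        self.K = np.zeros((capacity, d))
        self.V = np.zeros((capacity, d))
        self.free = list(range(capacity))    # slots available for writing
        self.live = []                       # slots of tokens still in the context

    def insert(self, k, v):
        slot = self.free.pop()
        self.K[slot], self.V[slot] = k, v
        self.live.append(slot)
        return slot

    def drop(self, slot):
        # Called when the pruning mechanism zeroes out a token's column of I for good.
        self.live.remove(slot)
        self.free.append(slot)

    def gather(self):
        # Keys/values of live tokens only; attention is computed over these rows.
        idx = np.array(self.live, dtype=int)
        return self.K[idx], self.V[idx]
```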
### Results

**Perplexity vs sparsity.** We first study how context pruning changes for different levels of sparsity in Fig. 4. Depending on the current context size, our method allows for up to 80% of the context to be successfully pruned, i.e. removed, with no performance loss in terms of perplexity (-0.085 average gain in perplexity when the context size is 1000 tokens, for 80.35% sparsity, compared to the dense counterpart). Our method also adapts to the current context size: a network trained with a specific sparsity regularization exhibits different levels of sparsity depending on the current context size. Compared to the baselines, our method exhibits consistently lower perplexity for the same level of sparsity.

Figure 4: Perplexity (lower is better) for different levels of sparsity. (Left) Overall perplexity averaged across tokens with context size varying from \(1\) to \(1024\). The three plots on the right show perplexity for different context sizes.

Figure 5: Mean zero-shot accuracy (higher is better) for the _WinoGrande_, _HellaSwag_, _PIQA_, and _LAMBADA_ datasets. As the sparsity of all methods depends on the context size, we average the expected sparsity based on the lengths of the prefixes in these datasets. (Left) GPT-2-_small_ models and (right) all GPT-2 models.

**Zero-Shot Performance.** To test general model capabilities and complement the perplexity evaluations, we provide results on several zero-shot tasks (Dettmers et al., 2022) in Fig. 5. Similar trends hold overall; our approach retains or even outperforms the performance of the dense baseline, even in cases with high sparsity. These tasks involve scenarios where the model is required to perform without any specific training or prior exposure to the target domain. The results validate that the models' general capabilities are retained, even under high levels of sparsity.

**Computational Analysis.** We analyze the gains in terms of FLOPs and the memory required for caching when generating new sequences in Fig. 6. Our dropping mechanism introduces additional computational costs for the calculation of \(\mathbf{Q}_{\text{int}}^{\ell},\mathbf{K}_{\text{int}}^{\ell}\) and the logic behind dropping via Eq. (6). Due to the relatively small chosen parameter \(r\), i.e. the output dimension of the interaction weights \(\mathbf{W}_{Q_{\text{int}}}^{\ell},\mathbf{W}_{K_{\text{int}}}^{\ell}\), these costs are nevertheless minimal.

Figure 6: (Left) Distribution of FLOPs for models with different levels of sparsity. Here, _embedding-layer_ refers to the embedding of the input sequence into the representation \(\mathbf{X}^{0}\), _logits-layer_ to the projection of the final representation \(\mathbf{X}^{L}\) according to the vocabulary size, _feed-forward_ to the feed-forward components, summed across the different layers, _qkvo-calculation_ to the projection of the current representation to queries, keys, values and the final output projection, _attention_ to the actual softmax operation, and _drop-tokens_ to the additional compute required for calculating \(\mathbf{Q}_{\text{int}}^{\ell},\mathbf{K}_{\text{int}}^{\ell}\) and performing dropping via Eq. (6). (Right) Memory requirements when caching previous activations (keys and values). When implementing dropping, the interaction keys \(\mathbf{K}_{\text{int}}^{\ell}\) have to be additionally cached.

Figure 7: We measure throughput using the optimal batch size on an NVIDIA RTX A5000 GPU. (Left) Throughput in terms of tokens per second for different models and different levels of sparsity, (top) averaged across tokens for context sizes from 1 to 1024 and (bottom) when the context size is 1000 tokens. (Right) Average (top) throughput for varying context size for the GPT-2-_medium_ model and average (bottom) time per generation step for varying context size. As our models require significantly less memory, a larger batch size can be accommodated, to which large portions of the throughput gains can be attributed.

Although the raw FLOPs benefit of using sparse models does not seem very significant, inference is, as aforementioned, predominantly memory-bound. The attention thus takes up a significant proportion of real-time inference (Dao et al., 2022). On the contrary, the dense matrix multiplications used for all linear projections are very efficient. Memory benefits, on the other hand, are substantial, as the memory required for caching is a linear function of sparsity with a negative slope. Sparser solutions thus additionally allow us to generate more sequences in a batched fashion. This is particularly relevant for bigger models and longer sequences, where batch decoding is a major challenge (Shazeer, 2019).

**Throughput.** We demonstrate how reduced context and reduced memory requirements lead to significant real-world throughput gains in Fig. 7. Initially, our pruned networks are slower in terms of latency for small context lengths, because of the additional cost associated with the pruning logic. Nevertheless, they quickly surpass the dense baseline, which struggles as the context size increases. This verifies that, although the raw FLOPs benefits look unsubstantial, they lead to significant gains due to the specific memory profile of Transformer inference. Crucially, our pruned networks can support a much bigger batch size, leading to significant throughput gains. More specifically, for long context sizes, our GPT-2-_small_ model offers an additional \(98\%\) margin in throughput for a loss in perplexity of only \(0.316\) with respect to the dense counterpart. Similarly, our GPT-2-_medium_ model can yield \(189\%\) additional throughput for only \(0.084\) loss in perplexity at a context size of 1000 tokens. In particular, the same model (for \(\gamma=1.0\)) provides a higher throughput than a GPT-2-_small_ model while achieving \(3.769\) lower perplexity. As context windows become larger by the day in state-of-the-art models, we expect these gains to become even more relevant.
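To make the memory argument concrete, here is a back-of-the-envelope sketch of cache size as a function of sparsity. The linear relation and the extra interaction keys \(\mathbf{K}^{\ell}_{\text{int}}\) follow the discussion above; fp16 storage and GPT-2-_small_ dimensions (12 layers, \(d=768\), \(r=64\)) are our assumptions for the example.

```
def kv_cache_bytes(n_ctx, n_layers=12, d=768, r=64, sparsity=0.0, bytes_per_el=2):
    # Keys and values: two (kept_tokens, d) tensors per layer; pruning removes a
    # `sparsity` fraction of cached tokens. Interaction keys add (kept_tokens, r).
    kept = n_ctx * (1.0 - sparsity)
    kv = 2 * n_layers * kept * d * bytes_per_el
    k_int = n_layers * kept * r * bytes_per_el
    return kv + k_int

dense, pruned = kv_cache_bytes(1024), kv_cache_bytes(1024, sparsity=0.8)
print(f"dense: {dense / 2**20:.1f} MiB, 80% pruned: {pruned / 2**20:.1f} MiB")
```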
**Interpretability.** Fig. 8 provides insights into the interpretability aspect of the model's decision-making process. It is observed that token removal predominantly occurs when encountering stop words (punctuation), which aligns with the intuition that local information within a sentence becomes less relevant after its completion. Furthermore, it is worth noting that layers at varying depths exhibit distinct behaviors, reinforcing our rationale for dissecting token removal decisions across depth.

Figure 8: (Top) Example of pruned tokens at layer 5 of the GPT-2-_small_ model fine-tuned with \(\gamma=0.3\) during generation. Most pruning is triggered by punctuation. (Bottom-left) We calculate the probability of tokens being kept in the context based on the part of speech (POS) of the words they correspond to. (Bottom-middle) Most dropping is caused by tokens corresponding to punctuation, but distinct layers behave differently. (Bottom-right) Example of the number of tokens pruned by the tokens’ position id, for 2 layers of GPT-2-_small_.

The variance in sparsity distribution across different depths indicates the need for additional interpretability research to obtain valuable insights into the interactions of the tokens within the model. We provide more insights in this direction in Appendix C.

## 5 Discussion

We proposed Adaptively Sparse Attention, a novel approach to dynamically prune the context in decoder-only Transformer architectures. Our results indicate that our technique performs favourably compared to competitive baselines in terms of the trade-off between perplexity and sparsity of the attention weights. Remarkably, our approach also significantly reduces the computational and memory requirements without affecting final performance; we showcase these benefits in practice, in some cases achieving more than double the throughput. Adaptively sparse attention comes with two additional practical advantages: first, it can be seamlessly integrated into existing pre-trained models via a cheap fine-tuning step; second, it represents an orthogonal contribution to the burgeoning line of research aimed at increasing the efficiency of modern LLMs. As such, we envision its combination with existing techniques like weight pruning and quantization to be a promising avenue for future research.
2302.00511
Iterative Deepening Hyperband
Hyperparameter optimization (HPO) is concerned with the automated search for the most appropriate hyperparameter configuration (HPC) of a parameterized machine learning algorithm. A state-of-the-art HPO method is Hyperband, which, however, has its own parameters that influence its performance. One of these parameters, the maximal budget, is especially problematic: If chosen too small, the budget needs to be increased in hindsight and, as Hyperband is not incremental by design, the entire algorithm must be re-run. This is not only costly but also comes with a loss of valuable knowledge already accumulated. In this paper, we propose incremental variants of Hyperband that eliminate these drawbacks, and show that these variants satisfy theoretical guarantees qualitatively similar to those for the original Hyperband with the "right" budget. Moreover, we demonstrate their practical utility in experiments with benchmark data sets.
Jasmin Brandt, Marcel Wever, Dimitrios Iliadis, Viktor Bengs, Eyke Hüllermeier
2023-02-01T15:33:51Z
http://arxiv.org/abs/2302.00511v2
# Iterative Deepening Hyperband

###### Abstract

Hyperparameter optimization (HPO) is concerned with the automated search for the most appropriate hyperparameter configuration (HPC) of a parameterized machine learning algorithm. A state-of-the-art HPO method is Hyperband, which, however, has its own parameters that influence its performance. One of these parameters, the maximal budget, is especially problematic: If chosen too small, the budget needs to be increased in hindsight and, as Hyperband is not incremental by design, the entire algorithm must be re-run. This is not only costly but also comes with a loss of valuable knowledge already accumulated. In this paper, we propose incremental variants of Hyperband that eliminate these drawbacks, and show that these variants satisfy theoretical guarantees qualitatively similar to those for the original Hyperband with the "right" budget. Moreover, we demonstrate their practical utility in experiments with benchmark data sets.

## 1 Introduction

If the maximal budget of Hyperband turns out to have been chosen too small, the entire algorithm must be re-run with an increased budget, which is costly and discards the valuable knowledge already accumulated by the Hyperband run for the smaller maximum budget. In the spirit of heuristic graph search algorithms such as iterative deepening A* or iterative deepening search (Korf, 1985), we dub our extension Iterative Deepening Hyperband (ID-HB), which comes in three variants. Each of these variants considers a different possibility to reuse the information gathered by the previous Hyperband run, and they essentially differ in their degree of conservatism. We provide theoretical and empirical results showing that the performance deterioration is negligible compared to the improvements in terms of efficiency. To this end, we provide theoretical guarantees as well as empirical evaluations for various tasks on benchmark datasets.

## 2 Hyperparameter Optimization

In a typical supervised ML setting, the learner is provided with a (training) data set \(\mathcal{D}=\{(\mathbf{x}^{(n)},y^{(n)})\}_{n=1}^{N}\subset\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) is some feature space and \(\mathcal{Y}\) is a label space. The pairs in \(\mathcal{D}\) are assumed to be i.i.d.
samples of some unknown data-generating distribution \(P^{*}\), i.e., each \(z^{(n)}=(\mathbf{x}^{(n)},y^{(n)})\) is an independent realization of \(Z=(X,Y)\sim P^{*}\). In addition, the learner is provided with a (suitable) hypothesis space \(\mathcal{H}\), which is a subset of all mappings \(\mathcal{Y}^{\mathcal{X}}=\{h:\mathcal{X}\to\mathcal{Y}\}\) from the feature space \(\mathcal{X}\) to the label space \(\mathcal{Y}\). Thus, each hypothesis \(h\in\mathcal{H}\) assigns a prediction \(h(\mathbf{x})\in\mathcal{Y}\) to a given instance \(\mathbf{x}\in\mathcal{X}\). The prediction quality of a hypothesis for a given instance-label pair \((\mathbf{x},y)\) is assessed by means of \(L(h(\mathbf{x}),y)\), where \(L:\mathcal{Y}\times\mathcal{Y}\to\mathbb{R}\) is a loss function that incentivizes correct predictions. The goal in supervised ML is to induce a hypothesis that has the lowest expected loss (generalization error) for instance-label pairs \((\mathbf{x},y)\) sampled according to \(P^{*}\). Formally, the learner seeks to find

\[h^{*}\in\arg\min_{h\in\mathcal{H}}\,\mathrm{GE}(h)\,,\]

where \(\mathrm{GE}(h)\) denotes the generalization error of \(h\):

\[\mathrm{GE}(h)=\mathbb{E}_{(\mathbf{x},y)\sim P^{*}}[L(h(\mathbf{x}),y)]\,.\]

To this end, a learner (or inducer) \(\mathcal{A}\) is designed that, for a given (training) data set, returns a hypothesis deemed suitable (having low generalization error) for the learning task at hand. Typically, a learner is parameterized by a parameter space \(\Lambda\), whose elements are called hyperparameters; these may be high-dimensional tuples with components from different domains (continuous, discrete, or categorical). Thus, a learner is formally a mapping

\[\mathcal{A}:\ \mathbb{D}\times\Lambda\to\mathcal{H},\,(\mathcal{D},\lambda)\mapsto\hat{h}\,,\]

where \(\mathbb{D}=\bigcup_{N\in\mathbb{N}}(\mathcal{X}\times\mathcal{Y})^{N}\) is the set of all possible training data sets. It is worth noting that the learner can be defined in a similar way if the hypothesis space \(\mathcal{H}\) is a subset of all mappings from the feature space \(\mathcal{X}\) to \(\mathbb{P}(\mathcal{Y})\), i.e., the set of probability distributions over the label space \(\mathcal{Y}\). The only difference is that the loss function has a different signature in this case, namely it is a mapping \(L:\mathbb{P}(\mathcal{Y})\times\mathcal{Y}\to\mathbb{R}\). The goal of hyperparameter optimization (HPO) is then to find an optimal hyperparameter for a given learner \(\mathcal{A}\) and data set \(\mathcal{D}\), i.e.,

\[\lambda^{*}\in\arg\min_{\lambda\in\Lambda}\,\ell(\lambda)=\arg\min_{\lambda\in\Lambda}\,\mathrm{GE}\big(\mathcal{A}(\mathcal{D},\lambda)\big).\]

However, as \(P^{*}\) is unknown, and so is the generalization error \(\ell\), one estimates the latter for a fixed hyperparameter by means of a function \(\widehat{\ell}:\Lambda\to\mathbb{R}\), which is typically referred to as the validation error. As the actual computation of the validation error for a specific hyperparameter might be costly in terms of the available resources (e.g., wall-clock time, number of used data points, etc.), the validation error is usually determined only for a certain resource allocation, and thus its actual value depends on the resources used. In light of this, we denote by \(\widehat{\ell}_{r}(\lambda)\) the validation error of \(\mathcal{A}\) used with the hyperparameter \(\lambda\) and \(r\) resource units.
Obviously, the choice of \(r\) involves a trade-off: The more resource units are used, the more accurate the estimate, but the more costly its calculation, and vice versa. Roughly speaking, an HPO method seeks to find an appropriate hyperparameter of \(\mathcal{A}\), while preferably using as few resources as possible, and/or staying within a maximum assignable budget \(R\) for the resource consumption for evaluating a hyperparameter during the search. For the sake of convenience, we assume that \(R\) is an element of \(\mathbb{N}\cup\{\infty\}\), where \(R=\infty\) means that there are no restrictions on the resource usage. We define \(\ell_{*}(\lambda):=\lim_{r\to R}\ell_{r}(\lambda)\) for any \(\lambda\in\Lambda\) and \(\nu_{*}:=\inf_{\lambda\in\Lambda}\ell_{*}(\lambda)\). The formal goal of an HPO method is then to identify a hyperparameter \(\lambda\) which belongs to \(\arg\min_{\lambda\in\Lambda}\ell_{*}(\lambda)-\nu_{*}\).

## 3 Hyperband

In this section, we briefly explain the functionality of the Hyperband algorithm by Li et al. (2018) and its subroutine Successive Halving (Karnin et al., 2013).

### Successive Halving

The Successive Halving (SH) algorithm solves the nonstochastic best arm identification problem within a fixed budget and has already been applied successfully to HPO. It iteratively allocates the available budget to a set of hyperparameter configurations, evaluates their performance, and throws away the worst half. This process is repeated until only one hyperparameter configuration remains, which is then returned by the algorithm as the proposed winner. This way, exponentially more budget is allocated to more promising hyperparameter configurations.

### Hyperband

The Hyperband algorithm by Li et al. (2018) solves the HPO problem by considering it as a pure-exploration adaptive resource allocation problem. It iterates over different sizes \(n\) of the set of hyperparameter configurations while keeping the budget \(B\) fixed, and calls the SH algorithm as a subroutine on the \(n\) configurations with budget \(B\). This way, different allocation strategies are considered for the trade-off between (i) considering many configurations \(n\) and (ii) giving the configurations longer training time \(B/n\). Each call of SH is called a _bracket_.
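For reference, a compact Python sketch of SH and the Hyperband loop as just described (a simplified rendering of Li et al. (2018): `sample_config` and `eval_loss` are placeholders for sampling an HPC and computing the validation error \(\widehat{\ell}_{r}\), and SH here runs until a single survivor remains).

```
import math

def successive_halving(configs, r_min, eta, eval_loss):
    # Evaluate all remaining configurations on the current budget, keep the
    # best 1/eta fraction, and multiply the budget by eta.
    r = r_min
    while len(configs) > 1:
        losses = [eval_loss(c, r) for c in configs]
        order = sorted(range(len(configs)), key=lambda i: losses[i])
        configs = [configs[i] for i in order[:max(1, len(configs) // eta)]]
        r *= eta
    return configs[0]

def hyperband(R, eta, sample_config, eval_loss):
    # One SH bracket per trade-off between the number of configurations n
    # and their start budget R * eta^(-s).
    s_max = int(math.log(R, eta))
    B = (s_max + 1) * R
    best, best_loss = None, float("inf")
    for s in range(s_max, -1, -1):
        n = math.ceil((B / R) * eta ** s / (s + 1))
        configs = [sample_config() for _ in range(n)]
        winner = successive_halving(configs, R * eta ** (-s), eta, eval_loss)
        loss = eval_loss(winner, R)
        if loss < best_loss:
            best, best_loss = winner, loss
    return best
```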
## 4 Related Work

To achieve state-of-the-art performance, hyperparameter optimization (HPO) is an inevitable step in the machine learning process, dealing with finding the most suitable hyperparameter configuration of a machine learning algorithm for a given dataset and performance measure. Considering HPO as a black-box optimization problem, various methods can be used to tackle it. However, one particular challenge in HPO is that evaluating a hyperparameter configuration is expensive, rendering naive approaches such as grid search and random search, although widely applied, impractical. The HPO literature can be separated into two branches: model-free and model-based methods. While the latter leverage a surrogate model of the optimization surface to sample more promising candidates (Hutter et al., 2011), methods such as evolutionary algorithms fall into the former category of model-free methods. Notably, grid search and random search belong to the model-free category, too. While standard random search is considered too inefficient in the HPO setting, Hyperband (Li et al., 2018) combines random search with a multi-fidelity candidate evaluation routine, i.e., successive halving (Jamieson and Talwalkar, 2016), yielding a powerful HPO method. Moreover, the model-free approach Hyperband can be combined with other model-free methods such as evolutionary algorithms (Awad et al., 2021) or hybridized with model-based approaches (Falkner et al., 2018) to improve its efficacy and efficiency. In (Mendes et al., 2021), the authors propose a meta-learning approach to focus the evaluation budget even more on promising candidates instead of wasting it on hyperparameter configurations performing worse than the incumbent. While these approaches aim at increasing Hyperband's efficacy and efficiency for a single run of Hyperband, in this paper, we are interested in increasing Hyperband's efficiency after increasing the maximum budget that can be assigned to a single hyperparameter configuration. This general setting was already considered and analyzed in (Li et al., 2018) as the infinite horizon setting, where Hyperband is run repeatedly for increasing maximum budgets \(R\). We instead continue a previously conducted Hyperband run rather than starting from scratch every time the maximum budget is increased. For a more detailed introduction to HPO and a more thorough overview of corresponding methods, we refer the reader to (Feurer and Hutter, 2019; Bischl et al., 2021).

## 5 Iterative Deepening Hyperband

In the infinite horizon setting of Hyperband, once a run of Hyperband terminates, a new run is started from scratch with an increased maximum assignable budget \(R\). However, the previously evaluated hyperparameter configurations are discarded completely for the Hyperband run with the increased \(R\), which means that the previously used budget is essentially wasted. Discarding this information is irrational for various reasons:

* We ignore knowledge about promising hyperparameter configurations that we have already acquired.
* Instead of using the information we have already collected, we evaluate new candidates, allowing for more exploration, which is actually desired but also requires more budget.
* From an ecological perspective, the resources spent in the previous Hyperband runs are not well invested, since the information, if used at all, has solely served to decide to re-run Hyperband for a larger budget.

To leverage the already collected information, we propose a stateful extension of Hyperband that can thus be continued for a larger maximum budget \(R\), which we dub Iterative Deepening Hyperband (ID-HB). The name of our method is inspired by iterative deepening search (Korf, 1985), e.g., iterative deepening depth-first search, where the depth-first search is invoked repeatedly with increasing maximum depth. The pseudocode for this extension is given in Algorithm 1, and Figure 1 illustrates the core idea. From Algorithm 1, we can see that the old max size \(R_{t-1}\) is increased by a factor \(\eta\) to obtain \(R_{t}\). For the sake of convenience w.r.t. the theoretical analysis, we limit ourselves to increases of the maximum budget by a factor of \(\eta\). According to the new max size \(R_{t}\), the variables \(s_{\text{max}}\) and \(B\) are computed as before by Li et al. (2018), which results in an additional bracket for the new max size \(R_{t}\) as well as increased pool sizes for the already existing brackets.
Therefore, we fill up the brackets with additional hyperparameter configurations until the correct start pool size with respect to \(R_{t}\) is reached. However, since typically more newly sampled hyperparameter configurations are added than can be propagated to the next iteration of a bracket, different strategies for how to proceed with the state of the previous Hyperband run and the newly sampled hyperparameter configurations are conceivable. In the subsequent Sections 5.1 to 5.3, we elaborate on three possible strategies for how a previous Hyperband run for a lower maximum budget \(R_{t-1}\) can be continued for a larger maximum budget \(R_{t}\). We order the strategies according to their truthfulness with respect to the decisions taken in a Hyperband run that would have been run from scratch. This order also goes hand in hand with improved efficiency, i.e., less budget is spent on evaluating hyperparameter configurations.

### Discarding Iterative Deepening (dID-HB)

Arguably, the most truthful but potentially inefficient way to update the brackets with the newly received hyperparameter configurations is to allow for revising previous decisions regarding propagation to subsequent iterations. More specifically, when the start pool of hyperparameter configurations is extended by the new hyperparameter configurations, we allow discarding hyperparameter configurations that were promoted in the previous run and have already been evaluated on a larger budget. In the first iteration, we consider all available candidates, i.e., the newly sampled, the previously discarded, and the previously promoted hyperparameter configurations. The top-\(k\) is computed as before, and the selected hyperparameter configurations are promoted to the next iteration. Hence, discarding iterative deepening Hyperband is able to revise its previous decisions regarding promotions and discard hyperparameter configurations that were promoted in the previous run. While in each iteration only the newly sampled hyperparameter configurations need to be evaluated (plus, eventually, the last iteration for the new max size \(R_{t}\)), it may happen that only new configurations are promoted to the subsequent iteration. In this case, this variant of the stateful extension only saves the resources already used for evaluating the old configurations on the minimum budget of a bracket. In practice, however, we will see in Section 7.2 that old candidates are often kept in subsequent iterations. In sum, we obtain a variant of Iterative Deepening Hyperband that has the potential to improve substantially in terms of efficiency while maintaining the same outcome that the original Hyperband would have produced given the same set of initial hyperparameter configurations. In the worst case, however, this variant is almost as expensive as re-running the original version, i.e., only the first-iteration evaluations of the old candidates are saved.

### Preserving Iterative Deepening (pID-HB)

Taking a step towards more efficiency and reuse of previous evaluations of hyperparameter configurations, in preserving iterative deepening Hyperband (pID-HB), we promote the top-\(k\) hyperparameter configurations of a pool of candidates comprising the promoted hyperparameter configurations and all hyperparameter configurations that had already been promoted to this budget level in the previous Hyperband run but have not been promoted in the continued run.
In this way, we conserve the information about hyperparameter configurations that have already been evaluated for this budget but were discarded in a previous iteration. Still considering such hyperparameter configurations allows already discarded hyperparameter configurations to return to the pool of promising candidates. On the one hand, we can thereby potentially increase efficiency, since after a hyperparameter configuration returns to the set of promoted candidates, we do not need to spend additional budget on evaluating a new candidate, except for the last iteration with the budget of the new max size. On the other hand, we can revise "wrong" decisions of the current run if a previously discarded old hyperparameter configuration performs better on a larger budget than the configuration that superseded it. In the worst case, pID-HB exhibits the same computational complexity as dID-HB, only saving the computational resources of the old hyperparameter configurations being evaluated on the start budgets of the respective brackets. However, due to the ability to reconsider already discarded candidates that were evaluated in a previous Hyperband run, the chances of reusing already evaluated candidates increase. Although pID-HB does not necessarily return the same result as re-running Hyperband from scratch, intuitively, we expect it to yield similar performance. Moreover, although pID-HB has the same worst-case runtime complexity as dID-HB, pID-HB at least takes all available information into account, i.e., evaluation data of previously evaluated but discarded hyperparameter configurations is not ignored but still considered.

### Efficient Iterative Deepening (eID-HB)

The third and last strategy is to continue the previous Hyperband run in the most efficient and thus resource-saving way. Accordingly, we dub this variant "efficient iterative deepening Hyperband" (eID-HB). However, the maximum efficiency comes at the cost of potential performance deteriorations, as it does not revoke any previously made decisions. In other words, it only fills up the candidate pools, promoting hyperparameter configurations until the new pool sizes are reached. If a hyperparameter configuration was promoted to a subsequent iteration of successive halving, it remains promoted, even if there are newly sampled hyperparameter configurations that would actually replace it. Technically, when determining the top-\(k\) for the next iteration of successive halving, we subtract the number of hyperparameter configurations that were already promoted in the previous run. According to the difference, new promotions are selected among previously discarded candidates and newly sampled hyperparameter configurations that were promoted to the current iteration. In principle, it may happen that some hyperparameter configurations are wrongly promoted compared to starting the Hyperband run from scratch. Since already promoted hyperparameter configurations remain promoted and decisions cannot be revised, the overall performance may deteriorate. However, if the deterioration is negligible, eID-HB allows us to obtain the result of Hyperband for a larger max size \(R_{t}\) at essentially only the cost of the additional evaluations required for the larger max size \(R_{t}\). In the subsequent sections, we provide theoretical guarantees and also demonstrate empirically that the proposed extensions improve efficiency significantly while maintaining similar performance.
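Schematically, the three variants differ only in which pool enters the top-\(k\) selection of an iteration and in how \(k\) is computed; the Python condensation below (our own, mirroring the inner loop of Algorithm 1, not the authors' code) makes this explicit. `C_new`, `D_prev`, and `P_prev` are sets of the newly arrived, previously discarded, and previously promoted configurations at the given bracket and iteration.

```
def select_promotions(mode, C_new, D_prev, P_prev, losses, n_t, n_prev, eta, i):
    # losses: dict mapping configuration -> validation loss at the current budget.
    def top_k(pool, k):
        return set(sorted(pool, key=losses.get)[:max(k, 0)])

    if mode == "d":   # discarding: previous promotions may be revoked (at i = 0)
        pool = (D_prev | P_prev | C_new) if i == 0 else C_new
        return top_k(pool, n_t // eta ** (i + 1))
    if mode == "p":   # preserving: discarded configurations may re-enter the race
        return top_k(D_prev | P_prev | C_new, n_t // eta ** (i + 1))
    if mode == "e":   # efficient: previous promotions stand; only fill the gap
        k = n_t // eta ** (i + 1) - n_prev // eta ** (i + 1)
        return P_prev | top_k(D_prev | C_new, k)
```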
## 6 Theoretical Results

We split the theoretical results into two parts. First, we give some theoretical guarantees for our extensions of SuccessiveHalving, and second, we extend them to the IterativeDeepening-Hyperband algorithm.

### IterativeDeepening-SuccessiveHalving

Since the Successive Halving algorithm (Jamieson and Talwalkar, 2016) solves a bandit problem, in the following analysis of our iterative deepening variants of Successive Halving we stick to the notation of multi-armed bandits. Our algorithms can then easily be applied to hyperparameter optimization by regarding a hyperparameter configuration as an arm. If we pull an arm \(i\) for the \(k\)-th time, we observe the loss \(\ell_{i,k}\). Similar to Li et al. (2018), we need the following assumption for our theoretical analysis of the proposed IterativeDeepening-SuccessiveHalving algorithms.

**Assumption 6.1**.: For each arm \(i\in\mathbb{N}\) the limit \(\nu_{i}:=\lim_{t\to\infty}\ell_{i,t}\) exists.

Moreover, we denote the convergence speed by \(\gamma(j)\geq\sup_{i}|\ell_{i,j}-\nu_{i}|\ \forall j\in\mathbb{N}\) and provide the following result, the proof of which is given in Appendix B.1.

**Theorem 6.2** (Necessary Budget for IterativeDeepening-SuccessiveHalving).: _Fix \(n\) arms from which \(\tilde{n}\) arms were already promoted. Let \(\nu_{i}=\lim_{t\to\infty}\ell_{i,t}\) and assume \(\nu_{1}\leq\dots\leq\nu_{n}\). For any \(\epsilon>0\) let_

\[z_{\text{ID-SH}}=\eta\lceil\log_{\eta}(n)\rceil\times\max_{i=2,\dots,n}i\Big(1+\min\Big\{R,\gamma^{-1}\Big(\max\Big\{\tfrac{\epsilon}{4},\tfrac{\nu_{i}-\nu_{1}}{2}\Big\}\Big)\Big\}\Big).\]

_If the efficient, discarding, or preserving IterativeDeepening-SuccessiveHalving algorithm given in Algorithm 3, Algorithm 4, resp. Algorithm 5 is run with any budget \(B\geq z_{\text{ID-SH}}\), then an arm \(i\) is returned that satisfies \(\nu_{i}-\nu_{1}\leq\epsilon/2\)._

Further, we can specify the improvement of the incremental variants over the costly re-run of SuccessiveHalving (SH) as in the following theorem (the proof is deferred to Appendix B.2).

**Theorem 6.3** (Improvement in the number of pulls of xID-SH in comparison to SH).: _Fix \(n\) arms, a budget of \(B\), a maximal size of \(R\), and \(r\) and \(\eta\). Assume that we have already run SuccessiveHalving on \(\tilde{n}\) arms and the same values for \(B\), \(r\), and \(\eta\). Let \(\eta_{-}=\eta-1\) and \(s^{+}=s+1\). If we run SuccessiveHalving (SH), efficient IterativeDeepening-SuccessiveHalving (eID-SH), and preserving resp. discarding IterativeDeepening-SuccessiveHalving (p/dID-SH) over \(s\) rounds with the above variables, we have_

_a)_

\[\#\{\text{total pulls of eID-SH}\}\leq\left(1-\frac{(s^{+})(\tilde{n}R+\eta^{s})(\eta_{-})-(\eta^{s^{+}}-1)(2R+n)}{(s^{+})(nR+\eta^{s})(\eta_{-})-(\eta^{s^{+}}-1)(R+n)}\right)\times\#\{\text{total pulls of SH}\}\]

_and b)_

\[\#\{\text{total pulls of p/dID-SH}\}\leq\left(1-\frac{(\eta_{-})((s^{+})\eta^{s}+R\tilde{n})-(\eta^{s^{+}}-1)(R+n)}{(\eta_{-})(s^{+})(nR+\eta^{s})-(\eta^{s^{+}}-1)(R+n)}\right)\times\#\{\text{total pulls of SH}\}.\]

The fraction of improvement in the total number of pulls for eID-SH in comparison to SH is shown in Figure 2, while for the other variants we provide similar plots in Appendix B.2.

### IterativeDeepening-Hyperband

An optimal hyperparameter configuration \(\lambda^{*}\) as defined above may not always exist. Even if it exists, it could be infeasible to search for, as our hyperparameter configuration space is usually very large or even infinite.
Therefore, we relax our goal and seek to find a configuration that is at least "nearly optimal". Similar to the HPO literature, we define the notion of such a near-optimal configuration as follows: For \(\epsilon>0\), we call \(\hat{\lambda}\) an _\(\epsilon\)-optimal configuration_ iff \(\nu_{\hat{\lambda}}-\nu_{\lambda^{*}}\leq\epsilon\). To ensure that the search for such a configuration is not like searching for the needle in the haystack, we need an assumption which guarantees that the probability of the existence of an \(\epsilon\)-optimal configuration in our sample set is high enough.

**Assumption 6.4**.: The proportion of \(\epsilon\)-optimal configurations in \(\Lambda\) is \(\alpha\in(0,1)\).

Note that we now have at least one \(\epsilon\)-optimal configuration in a sample set with probability at least \(1-\delta\) if the size of the sample set is \(\lceil\log_{1-\alpha}(\delta)\rceil\) for a fixed failure probability \(\delta\in(0,1)\).

```
Inputs: old max size R_{t-1}, eta >= 2, old losses L_{(t-1,.,.)}, discarded configurations D_{(t-1,.,.)},
        promoted configurations P_{(t-1,.,.)}, mode flag rho in {p, d, e}
Initialize: R_t <- eta * R_{t-1}, s_max <- floor(log_eta(R_t)), B <- (s_max + 1) * R_t
for s in {s_max, s_max - 1, ..., 0} do
  n_t <- ceil((B / R_t) * eta^s / (s + 1))
  r <- R_t * eta^(-s)
  if s > 0 then n_{t-1} <- ceil(eta^(s-1) * s_max / s) else n_{t-1} <- 0 end if
  delta <- n_t - n_{t-1}
  C <- get_hyperparameter_configuration(delta)
  for i in {0, ..., s} do
    r_i <- r * eta^i
    L_{(t,s,i)} <- L_{(t-1,s-1,i)} ∪ {run_then_return_val_loss(c, r_i) : c in C \ P_{(t-1,s-1,i-1)}}  ▷ Evaluate configurations
    if rho = d then                                      ▷ Discarding ID-HB
      if i = 0 then T <- D_{(t-1,s-1,i)} ∪ P_{(t-1,s-1,i)} ∪ C else T <- C end if
      k <- floor(n_t / eta^(i+1))
    else if rho = p then                                 ▷ Preserving ID-HB
      T <- D_{(t-1,s-1,i)} ∪ P_{(t-1,s-1,i)} ∪ C
      k <- floor(n_t / eta^(i+1))
    else if rho = e then                                 ▷ Efficient ID-HB
      T <- D_{(t-1,s-1,i)} ∪ C
      k <- floor(n_t / eta^(i+1)) - floor(n_{t-1} / eta^(i+1))
    end if
    C <- top_k(T, L_{(t,s,i)}, k)
    D_{(t,s,i)} <- T \ (C ∪ P_{(t-1,s-1,i)})             ▷ Update discarded configurations
    P_{(t,s,i)} <- P_{(t-1,s-1,i)} ∪ C                   ▷ Update promoted configurations
  end for
end for
```

**Algorithm 1** IterativeDeepening-Hyperband (ID-HB)

Figure 1: Illustration of taking over previously sampled configurations.

With this, we can state the following theorem, the proof of which is given in Appendix B.3.
**Theorem 6.5**.: _Let \(\eta,R,\alpha\) and \(\delta\) be fixed such that_

\[R\geq\max\Big\{\lceil\log_{1-\alpha}(\delta)\rceil(\eta-1)+1,\ \eta\Big(\log_{\eta}(\log_{\eta}(R))+4+\frac{\lfloor\log_{\eta}(R)\rfloor}{2}-\log_{\eta}\big((\lfloor\log_{\eta}(R)\rfloor+1)!\big)/(\lfloor\log_{\eta}(R)\rfloor+1)\Big)\bar{\gamma}^{-1}\Big\}\]

_for_

\[\bar{\gamma}^{-1}:=\max_{s=0,\ldots,\lfloor\log_{\eta}(R)\rfloor}\max_{i=2,\ldots,n_{s}}i\Big(1+\min\Big\{R,\gamma^{-1}\Big(\max\Big\{\frac{\epsilon}{4},\frac{\nu_{i}-\nu_{1}}{2}\Big\}\Big)\Big\}\Big).\]

_Then ID-HB finds an \(\epsilon\)-optimal configuration with probability at least \(1-\delta\)._

## 7 Empirical Evaluation

In this section, we evaluate the proposed extension of Hyperband empirically and compare the three strategies devised in Section 5 to the original way of applying Hyperband when increasing the max size \(R\), as done in the infinite horizon setting. More specifically, we are interested in the following two research questions:

* **RQ1**: Is ID-HB able to retain the quality of the returned hyperparameter configurations with its variants dID-HB, pID-HB, and eID-HB, respectively?
* **RQ2**: To what extent can dID-HB and pID-HB reduce the computational effort?

To answer **RQ1** and **RQ2**, we conduct an extensive set of experiments, the setup of which is outlined in Section 7.1. The results of these experiments are subsequently presented and discussed in Section 7.2.

### Experiment Setup

In our experimental evaluation, we compare the proposed ID-HB approach in all its three flavors to the original Hyperband as a baseline to answer research questions **RQ1** and **RQ2**. To this end, we conduct an extensive set of experiments tackling various HPO tasks, including HPO for neural networks, SVMs, XGBoost, random forests, and neural architecture search. As a benchmark library, we use YAHPO Gym (Pfisterer et al., 2022), which provides fast-to-evaluate surrogate benchmarks for hyperparameter optimization with particular support for multi-fidelity optimization, making it a perfect fit for our study. From YAHPO Gym, we select the benchmarks listed in Table 1. Except for nb301, which only comprises CIFAR10 as a dataset, all other benchmarks include several datasets, allowing for a broad comparison of ID-HB to the original Hyperband, subsequently denoted as IH-HB. We evaluate all benchmarks for \(\eta=2\) and \(\eta=3\), but due to space limitations only present results for \(\eta=2\) here. Results for \(\eta=3\) as well as detailed results for single datasets can be found in Appendix C. Furthermore, we set the initial max size \(R_{t-1}=16\) and increase it after the first run by a factor of \(\eta\) to \(R_{t}=R_{t-1}\eta\). For benchmarks considering a fraction of the training dataset as the fidelity parameter, we translate a budget \(r\) by \(\nicefrac{{r}}{{R_{t}}}\) into a fraction between 0 and 1. Furthermore, we repeat each combination of algorithm, parameterization, and benchmark instance for 30 seeds, resulting in a total of \(30\times 4\times 2\times 379=90,960\) hyperparameter optimization runs. We computed all experiments on a single workstation equipped with 2x Intel Xeon Gold 5122 and 256GB RAM. The code and data are publicly available via GitHub.
| Benchmark | Model | # Inst. | Objective | Fidelity |
| --- | --- | --- | --- | --- |
| lcbench | neural network | 34 | val_accuracy | epochs |
| nb301 | neural network | 1 | val_accuracy | epochs |
| rbv2_svm | SVM | 106 | acc | fraction |
| rbv2_ranger | random forest | 119 | acc | fraction |
| rbv2_xgboost | XGBoost | 119 | acc | fraction |

Table 1: List of considered benchmarks from YAHPO Gym, the type of learner, the number of considered datasets, the objective function, and the type of budget that can be used as a fidelity parameter.

Figure 2: Fraction of the number of pulls of eID-SH and SH for different values of the number of SH rounds \(s\) and maximal budget per round \(R\).

### Empirical Results

In Figure 3, we present the experimental results for the benchmarks with more than one dataset: lcbench, rbv2_svm, rbv2_ranger, and rbv2_xgboost. In the top row, we present scatter plots comparing the final incumbent's performance obtained by IH-HB on the x-axis against the performance obtained by an ID-HB strategy on the y-axis. The diagonal line represents the situation that both performances are on a par. From these plots, it is quite obvious that every ID-HB strategy performs similarly well as the original Hyperband variant. Similar observations can be made for the nb301 benchmark, where all approaches achieve an accuracy of \(0.9061\) (see Appendix C). Concerning **RQ1**, for the benchmarks considered here, we can thus confirm empirically that the proposed ID-HB extension is indeed able to retain the quality of the returned hyperparameter configurations. Especially notable is the performance of eID-HB: as it does not revise any decisions, one would intuitively expect it to show deterioration in performance; yet, in the hyperparameter optimization tasks considered here, this is never the case. Without any exception, all Hyperband variants perform equally well with respect to the quality of the returned solutions. Furthermore, the results are also representative for \(\eta=3\). Considering the average budget consumed for a single run, however, significant improvements can be achieved with ID-HB. As can be seen from the bottom row in Figure 3, which displays the distribution of the total budget consumed during a single run, eID-HB represents the most efficient strategy. Confirming intuitive expectations, pID-HB is more efficient than dID-HB, although the differences are often rather marginal. Perhaps more surprisingly, both pID-HB and dID-HB are significantly more efficient than IH-HB in all benchmarks. Even the worst runs still yield a 20% reduction of the total budget consumed, answering **RQ2**. Although the theoretical worst-case analysis for pID-HB and dID-HB gives only slight, almost negligible improvements, in practice these strategies seem to be quite efficient and revise only few evaluations.

## 8 Conclusion and Future Work

In this paper, we have proposed an extension of the well-known HPO method Hyperband, called Iterative Deepening Hyperband (ID-HB), aiming to improve its efficiency when the max size hyperparameter of Hyperband needs to be increased post-hoc. We derived three strategies with varying truthfulness with respect to running Hyperband from scratch on the same sample of hyperparameter configurations. For all three strategies, we gave theoretical guarantees on the quality of the final choice as well as on the saved budget when a previous Hyperband run is continued.
In an empirical study, we also find all three strategies to yield results similar to the much more expensive baseline variant of Hyperband. In fact, with the most efficient strategy, our approach only requires the budget of the one run with the increased max size.

Figure 3: Comparison of ID-HB to the original version of Hyperband. Top: Scatter plots of the final incumbents' accuracy obtained by an ID-HB strategy versus IH-HB. Bottom: Violin plots showing the average total budget consumed for a single run.

In future work, we plan to combine our more efficient Hyperband extensions with more sophisticated sampling of hyperparameter configurations, as done for example in (Awad et al., 2021) or (Falkner et al., 2018), and with HyperJump, to improve ID-HB's efficacy and efficiency even more.

## Software and Data

The software and experimental data are publicly available via GitHub: [https://github.com/mwever/iterative-deepening-hyperband](https://github.com/mwever/iterative-deepening-hyperband).

## Acknowledgements

This research was supported by the research training group Dataninja (Trustworthy AI for Seamless Problem Solving: Next Generation Intelligence Joins Robust Data Analysis) funded by the German federal state of North Rhine-Westphalia.
2303.17344
Topological Hochschild homology, truncated Brown-Peterson spectra, and a topological Sen operator
In this article, we study the topological Hochschild homology of $\mathbf{E}_3$-forms of truncated Brown-Peterson spectra, taken relative to certain Thom spectra $X(p^n)$ (introduced by Ravenel and used by Devinatz-Hopkins-Smith in the proof of the nilpotence theorem). We prove analogues of B\"okstedt's calculations $\mathrm{THH}(\mathbf{F}_p) \simeq \mathbf{F}_p[\Omega S^3]$ and $\mathrm{THH}(\mathbf{Z}_p) \simeq \mathbf{Z}_p[\Omega S^3\langle{3}\rangle]$. We also construct a topological analogue of the Sen operator of Bhatt-Lurie-Drinfeld, and study a higher chromatic extension. The behavior of these "topological Sen operators" is dictated by differentials in the Serre spectral sequence for Cohen-Moore-Neisendorfer fibrations.
Sanath K Devalapurkar
2023-03-30T12:53:58Z
http://arxiv.org/abs/2303.17344v1
# Topological Hochschild homology, truncated Brown-Peterson spectra, and a topological Sen operator

###### Abstract.

In this article, we study the topological Hochschild homology of \(\mathbf{E}_{3}\)-forms of truncated Brown-Peterson spectra, taken relative to certain Thom spectra \(X(p^{n})\) (introduced by Ravenel and used by Devinatz-Hopkins-Smith in the proof of the nilpotence theorem). We prove analogues of Bokstedt's calculations \(\mathrm{THH}(\mathbf{F}_{p})\simeq\mathbf{F}_{p}[\Omega S^{3}]\) and \(\mathrm{THH}(\mathbf{Z}_{p})\simeq\mathbf{Z}_{p}[\Omega S^{3}\langle 3\rangle]\). We also construct a topological analogue of the Sen operator of Bhatt-Lurie-Drinfeld, and study a higher chromatic extension. The behavior of these "topological Sen operators" is dictated by differentials in the Serre spectral sequence for Cohen-Moore-Neisendorfer fibrations.

Part of this work was done when the author was supported by the PD Soros Fellowship and NSF DGE-2140743. The present article is far from being a final version, so any comments and suggestions for improvement are greatly appreciated. I'll post major updates to the arXiv, but I'll upload minor edits to my website; so please see there for the most up-to-date version.

## 1. Introduction

### Summary

Fix a prime \(p\). A fundamental calculation of Bokstedt's [1] says that \(\pi_{*}\mathrm{THH}(\mathbf{F}_{p})\) is isomorphic to a polynomial ring \(\mathbf{F}_{p}[\sigma]\) with \(|\sigma|=2\). Recent work of Hahn-Wilson shows that this polynomiality phenomenon persists at higher heights, provided one works relative to MU instead of the sphere. Namely, [1, Theorem E] states that if \(\mathrm{BP}\langle n\rangle\) is an \(\mathbf{E}_{3}\)-form of the truncated Brown-Peterson spectrum, then \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/\mathrm{MU})\) is a polynomial algebra over \(\pi_{*}\mathrm{BP}\langle n\rangle\) on generators in even degree. Moreover, the first such generator is the double suspension \(\sigma^{2}(v_{n+1})\).

In this article, we will show that the "polynomial THH" phenomenon persists if one instead considers THH relative to the Ravenel spectra \(X(p^{n})\), introduced in [10] and used by [11] in the proof of the nilpotence theorem. Motivated by [12], the thesis of this article is that many statements involving the study of \(\mathbf{F}_{p}\)- or \(\mathbf{Z}_{p}\)-algebras relative to the sphere spectrum admit natural generalizations when studying \(\mathrm{BP}\langle n-1\rangle\)- or \(\mathrm{BP}\langle n\rangle\)-algebras relative to \(X(p^{n})\).
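As a quick plausibility check on the shape of such statements: by the Bott-Samelson theorem, \(\mathrm{H}_{*}(\Omega S^{2k+1};\mathbf{F}_{p})\) is a polynomial algebra on one class in degree \(2k\), so Bokstedt's equivalence \(\mathrm{THH}(\mathbf{F}_{p})\simeq\mathbf{F}_{p}[\Omega S^{3}]\) indeed recovers
\[\pi_{*}\mathrm{THH}(\mathbf{F}_{p})\cong\mathrm{H}_{*}(\Omega S^{3};\mathbf{F}_{p})\cong\mathbf{F}_{p}[\sigma],\qquad|\sigma|=2.\]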
Many of the results presented here were motivated by the perspective that there should be a chromatic analogue of integral \(p\)-adic Hodge theory (where \(p\) is replaced by the chromatic element \(v_{n}\); see Figure 1)1.

Footnote 1: I’d also like to direct the reader to [https://www.royalacademy.org.uk/art-artists/work-of-art/prismatic-colour-wheel](https://www.royalacademy.org.uk/art-artists/work-of-art/prismatic-colour-wheel); but I hope our Figure 1 is more mathematically informative!

The \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring \(X(p^{n})\) is the Thom spectrum of the \(\mathbf{E}_{2}^{\mathrm{fr}}\)-map \(\Omega\mathrm{SU}(p^{n})\to\Omega\mathrm{SU}\simeq\mathrm{BU}\), so that \(X(1)=S^{0}\) and \(X(\infty)=\mathrm{MU}\). Just as \(\mathrm{MU}_{(p)}\) splits as a direct sum of shifts of \(\mathrm{BP}\), the spectrum \(X(p^{n})_{(p)}\) splits into a direct sum of shifts of an \(\mathbf{E}_{1}\)-ring denoted2 \(T(n)\). If \(\mathscr{C}\) is a left \(X(p^{n})\)-linear \(\infty\)-category, then [1, Corollary 2.9 and Corollary 3.7] ensures that it makes sense to define the relative topological Hochschild homology \(\mathrm{THH}(\mathscr{C}/X(p^{n}))\), and furthermore that \(\mathrm{THH}(\mathscr{C}/X(p^{n}))\) admits an \(S^{1}\)-action.3

Footnote 2: This is _not_ the telescope of a \(v_{n}\)-self map! See Warning 2.1.6.

Footnote 3: We warn the reader that even if \(\mathscr{C}\) admits the structure of a monoidal \(\infty\)-category, \(\mathrm{THH}(\mathscr{C}/X(p^{n}))\) rarely inherits any multiplicative structure from \(\mathscr{C}\), since \(X(p^{n})\) does not admit the structure of an \(\mathbf{E}_{3}\)-ring (see Remark 2.1.3).

Our main result is an analogue of Bokstedt's calculation. If \(R\) is a ring spectrum, let \(R[B\Delta_{n}]\) denote the free \(R\)-module whose homotopy groups are isomorphic to a divided power algebra \(\pi_{*}(R)\langle y_{i}|1\leq i\leq p^{n}-1,i\neq p^{k}\rangle\) where \(|y_{j}|=2j\).4 Morally, \(R[B\Delta_{n}]\) is the \(R\)-chains on the "classifying space of \(\prod_{i=1}^{n}\mathrm{SU}(p^{i}-1)/\mathrm{SU}(p^{i-1})\)"; so, if \(X\) is another space, we will write \(R[B\Delta_{n}\times X]\) to denote \(R[B\Delta_{n}]\otimes_{R}R[X]\). Fix an \(\mathbf{E}_{3}\)-form of the truncated Brown-Peterson spectrum \(\mathrm{BP}\langle n-1\rangle\) (which exists by [1, Theorem A]). Motivated by the results of [12], and using the calculations of [1, 1], we show:

Footnote 4: The contribution \(B\Delta_{n}\) plays essentially no practical/meaningful role in this article. Its appearance in the equivalences below can be removed if \(T(n)\subseteq X(p^{n})_{(p)}\) admits the structure of an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-algebra. We strongly believe this to be possible (enough to state it as Conjecture 2.1.9!), so we suggest the reader ignore \(B\Delta_{n}\) — and simultaneously replace \(X(p^{n})\) by \(T(n)\) — on a first pass.
**Theorem** (Theorem 2.2.4(a)).: _There is a \(p\)-complete equivalence_
\[\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\simeq\mathrm{BP}\langle n-1\rangle[B\Delta_{n}\times\Omega S^{2p^{n}+1}]\]
_of \(\mathrm{BP}\langle n-1\rangle\)-modules; in particular, there is a \(p\)-complete isomorphism_
\[\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\simeq\pi_{*}\mathrm{BP}\langle n-1\rangle[B\Delta_{n}][\theta_{n}],\]
_where \(\theta_{n}\in\pi_{2p^{n}}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) is \(\sigma^{2}(v_{n})\)._

_Moreover, there are \(p\)-complete isomorphisms_
\[\pi_{*}\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\cong\pi_{*}(\mathrm{BP}\langle n\rangle[B\Delta_{n}])[\![\hbar]\!][\theta_{n}]/(\theta_{n}\hbar-v_{n}),\]
\[\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\cong\pi_{*}(\mathrm{BP}\langle n\rangle^{tS^{1}}[B\Delta_{n}]),\]
_where \(\hbar\in\pi_{-2}\mathrm{BP}\langle n\rangle^{hS^{1}}\). Under the map \(\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\to\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})\), the image of \(v_{n}\in\pi_{2p^{n}-2}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) can be identified with the image of \(v_{n}\in\pi_{2p^{n}-2}\mathrm{MU}^{tS^{1}}\) under the map \(\mathrm{MU}^{tS^{1}}\to\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})\)._

**Remark 1.1.1**.: If \(T(n)\subseteq X(p^{n})_{(p)}\) admits the structure of an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-algebra (Conjecture 2.1.9), then Theorem 2.2.4 would give the cleaner statements that \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T(n))\simeq\mathrm{BP}\langle n-1\rangle[\Omega S^{2p^{n}+1}]\), and that \(\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/T(n))\cong\pi_{*}\mathrm{BP}\langle n\rangle^{tS^{1}}\). The map \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T(n))\to\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})\) is injective, and exhibits the source as the submodule \(\pi_{*}\mathrm{BP}\langle n-1\rangle[\sigma^{2}(v_{n})]\) of \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})\).

Theorem 2.2.4 implies the following result, which, for \(n=0\), is a very special case of the main result of [10]:

**Corollary** (Proposition 3.3.8).: _Let \(R=\mathrm{BP}\langle n\rangle[\mathbf{Z}_{\geq 0}^{j}]\) be a flat polynomial ring over \(\mathrm{BP}\langle n\rangle\), viewed as a \(\mathbf{Z}_{\geq 0}^{j}\)-graded \(\mathbf{E}_{2}^{\mathrm{fr}}\)-\(\mathrm{BP}\langle n\rangle\)-algebra. Then there is a \(p\)-complete isomorphism of \(\mathbf{Z}_{\geq 0}^{j}\)-graded modules equipped with a map from \(\pi_{*}\mathrm{BP}\langle n\rangle^{tS^{1}}[B\Delta_{n}]\cong\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\):_
\[\pi_{*}\mathrm{TP}^{\mathrm{gr}}((R/v_{n})/X(p^{n}))\cong\pi_{*}\mathrm{HP}^{\mathrm{gr}}(R/\mathrm{BP}\langle n\rangle)[B\Delta_{n}].\]
_Here, the superscript \(\mathrm{gr}\) denotes the Tate construction taken in \(\mathbf{Z}_{\geq 0}^{j}\)-graded spectra._

**Remark 1.1.2**.: Theorem 2.2.4 quickly implies redshift for \(K(\mathrm{BP}\langle n-1\rangle)\) (see Corollary 2.2.9). When \(n=0\), the first part of Theorem 2.2.4(a) recovers Bokstedt's calculation of \(\mathrm{THH}(\mathbf{F}_{p})\), since \(\mathrm{BP}\langle-1\rangle=\mathbf{F}_{p}\) and \(X(p^{0})=X(1)=S^{0}\).
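For orientation, let us unwind the case \(n=0\) of these isomorphisms (this unwinding is ours and is not needed later): since \(\mathrm{BP}\langle-1\rangle=\mathbf{F}_{p}\), \(\mathrm{BP}\langle 0\rangle=\mathbf{Z}_{p}\), \(v_{0}=p\), and \(B\Delta_{0}\) is a point, the first isomorphism specializes to
\[\pi_{*}\mathrm{TC}^{-}(\mathbf{F}_{p})\cong\mathbf{Z}_{p}[\![\hbar]\!][\theta_{0}]/(\theta_{0}\hbar-p),\]
which is (\(p\)-completely, with \(u=\theta_{0}\) and \(v=\hbar\) up to units) the familiar presentation \(\pi_{*}\mathrm{TC}^{-}(\mathbf{F}_{p})\cong\mathbf{Z}_{p}[u,v]/(uv-p)\) with \(|u|=2\) and \(|v|=-2\).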
When \(p=2\), the statement of Theorem 2.2.4 can be simplified using [11, Remark 3.1.9]; for instance, we obtain the following _additive_ equivalences and isomorphisms: for \(n=1\), we have
\[\mathrm{THH}(\mathbf{Z}_{2}/T(1))\simeq\mathbf{Z}_{2}[\sigma^{2}(v_{1})],\quad\pi_{*}\mathrm{TP}(\mathbf{Z}_{2}/T(1))_{2}^{\wedge}\simeq\pi_{*}(\mathrm{ku}^{tS^{1}})_{2}^{\wedge}.\]
Since \(\mathrm{tmf}_{1}(3)\) is a form of \(\mathrm{BP}\langle 2\rangle\) by [10], for \(n=2\), we have
\[\mathrm{THH}(\mathrm{ku}_{2}^{\wedge}/T(2))\simeq\mathrm{ku}_{2}^{\wedge}[\sigma^{2}(v_{2})],\quad\pi_{*}\mathrm{TP}(\mathrm{ku}_{2}^{\wedge}/T(2))_{2}^{\wedge}\simeq\pi_{*}(\mathrm{tmf}_{1}(3)^{tS^{1}})_{2}^{\wedge}.\]
We also prove an analogue of Bokstedt's calculation [1] of \(\mathrm{THH}(\mathbf{Z}_{p})\):

**Theorem** (Theorem 2.2.4(b)).: _There is an equivalence of \(\mathrm{BP}\langle n\rangle\)-modules_
\[\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))_{p}^{\wedge}\cong\mathrm{BP}\langle n\rangle[B\Delta_{n}]_{p}^{\wedge}\oplus\left(\bigoplus_{j\geq 1}\Sigma^{2jp^{n+1}-1}\mathrm{BP}\langle n\rangle[B\Delta_{n}]/pj\right)_{p}^{\wedge}.\]
_Moreover, \(\pi_{2p^{n+1}-3}\mathrm{TC}^{-}(\mathrm{BP}\langle n\rangle/X(p^{n}))_{p}^{\wedge}\) detects the class \(\sigma_{n}\in\pi_{2p^{n+1}-3}X(p^{n})\) from [11, Lemma 3.1.12]._

Figure 1. Heuristic picture suggested by this article, where we have assumed for simplicity that \(T(m)\) admits the structure of a framed \(\mathbf{E}_{2}\)-ring:

* The spectra sandwiched between diagonal lines of slope \(1\) (partitioned by a red line) display similar structural behaviour. Here, \(A\) and \(B\) are studied in [1] (where \(A\) is denoted \(X_{5}\)), [3, Construction 3.1], and [1].
* The horizontal double arrows indicate the topological Sen operators of Theorem 3.1.4, i.e., the descent spectral sequence for the map \(\operatorname{THH}(-/T(n-1))\to\operatorname{THH}(-/T(n))\). This is closely related to the Cohen-Moore-Neisendorfer map \(\Omega^{2}S^{2p^{n}+1}\to S^{2p^{n}-1}\).
* The (slightly offset) vertical dashed lines going from \((n,n-1)\) to \((n,n)\) indicate the \(p\)-completed isomorphism \(\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/T(n))\cong\pi_{*}\mathrm{BP}\langle n\rangle^{tS^{1}}\) of Theorem 2.2.4. The other vertical arrow from \((0,0)\) to \((0,1)\) is the identification of \(\operatorname{THH}(\mathbf{Z}_{p})\) with \(\tau_{\geq 0}(j^{t\mathbf{Z}/p})\), which will appear in future work with Arpon Raksit. (Here, \(j\) is the connective complex image-of-J spectrum.) This equivalence is already predicted by the pioneering work of Bokstedt-Madsen in [1].
* The downwards-sloping blue arrows indicate that \(\operatorname{THH}(\mathrm{BP}\langle n\rangle/T(n+1))/v_{n}\) is a submodule of \(\operatorname{THH}(\mathrm{BP}\langle n-1\rangle/T(n))\) generated by \(\theta_{n}^{pk}\) for \(k\geq 0\). See Example 4.2.2, Remark 4.2.4, and Example 4.2.6 for an explanation of this phenomenon using the EHP sequence.
* The columns continue infinitely far out (i.e., \(\operatorname{THH}(\mathrm{BP}\langle n-1\rangle/T(m))\) for \(m>n\)). However, the drawing is truncated because these terms do not detect any more information than \(\operatorname{THH}(\mathrm{BP}\langle n-1\rangle/T(n))\) itself. The "exception" is the final column, where the descent from \(\operatorname{THH}(\mathrm{BP}\langle n-1\rangle/\mathrm{BP})\) to \(\operatorname{THH}(\mathrm{BP}\langle n-1\rangle)\) can be described algebro-geometrically via the \(p\)-typical Witt ring scheme.
**Remark 1.1.3**.: If one replaces \(X(p^{n})\) in the left-hand side of Theorem 2.2.4(b) with \(X(p^{n+1}-1)\), the only change to the right-hand side is that \(B\Delta_{n}\) is replaced by \(B\Delta_{n+1}\). Let us mention the following mild variant of Theorem 2.2.4 (see (4)): the \(\mathbf{F}_{p}[v_{n-j},\cdots,v_{n-1}]\)-module \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{j}))/(p,\cdots,v_{n-1 -j})\) is isomorphic to the tensor product of \(\mathrm{BP}\langle n-1\rangle[\Omega S^{2p^{n}+1}\times B\Delta_{j}]/(p,\cdots,v_{n-1-j})_{*}\) with an exterior algebra on classes \(\lambda_{j+1},\cdots,\lambda_{n}\), where \(|\lambda_{m}|=2p^{m}-1\). We also prove an analogue of Theorem 2.2.4 for \(\mathrm{ko}\) and \(\mathrm{tmf}\) in Appendix A. For example, if the spectra \(A\) and \(B\)[10, Section 3] lift to \(\mathbf{E}_{2}^{\mathrm{fr}}\)-rings, there are \(2\)-complete equivalences \[\mathrm{THH}(\mathrm{ko}/A) \simeq\mathrm{ko}\oplus\left(\bigoplus_{j\geq 1}\Sigma^{8j-1} \mathrm{ko}/2j\right),\] \[\mathrm{THH}(\mathrm{tmf}/B) \simeq\mathrm{tmf}\oplus\left(\bigoplus_{j\geq 1}\Sigma^{16j-1} \mathrm{tmf}/2j\right).\] **Remark 1.1.4**.: If Conjecture 2.1.9 (or rather, a weaker version which only asks that \(T(n)\) admit the structure of an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring) were true, then the contribution of \(B\Delta_{n}\) could be eliminated from Theorem 2.2.4(b): namely, there would be a \(p\)-complete equivalence \[\mathrm{THH}(\mathrm{BP}\langle n\rangle/T(n))\simeq\mathrm{BP}\langle n \rangle\oplus\bigoplus_{j\geq 1}\Sigma^{2jp^{n+1}-1}\mathrm{BP}\langle n \rangle/pj.\] We warn the reader that all the equivalences proved above are only additive, so one cannot directly use them to study the stacks associated to \(\mathrm{THH}\) (defined via the even filtration of [11]). As a perhaps more digestible example of this phenomenon (see Remark 2.3.16), note that since \(\mathbf{F}_{2}\) is the Thom spectrum of an \(\mathbf{E}_{1}\)-map \(\mathrm{U}(2)\to\mathrm{BGL}_{1}(\mathrm{ku})\), there is an equivalence \(\mathrm{HH}(\mathbf{F}_{2}/\mathrm{ku})\simeq\mathbf{F}_{2}[\mathrm{BU}(2)]\); however, this cannot be upgraded to an equivalence of \(\mathbf{F}_{2}\)-algebras, since the right-hand side is not even obviously a ring! In Example 4.2.2, Remark 4.2.4, and Example 4.2.6, we use the EHP sequence to explain the similarity in the calculation of \(\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n+1}))\) and \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) given by Theorem 2.2.4. This discussion in fact yields the following more general structural uniformity in the truncated Brown-Peterson spectra (see Figure 1 for a visual illustration): **Slogan 1.1.5** (Remark 4.2.3 and Remark 4.2.5 for precise statements).: If \(n\geq j-1\), the structure of \(\mathrm{BP}\langle n\rangle\) as an \(\mathbf{E}_{1}\)-\(X(p^{j})\)-algebra (i.e., \(\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{j}))\)) mirrors the structure of \(\mathrm{BP}\langle n-1\rangle\) as an \(\mathbf{E}_{1}\)-\(X(p^{j-1})\)-algebra (i.e., \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{j-1}))\)), which in turn mirrors the structure of \(\mathrm{BP}\langle n-j\rangle\) as an \(\mathbf{E}_{1}\)-algebra over the sphere (i.e., \(\mathrm{THH}(\mathrm{BP}\langle n-j\rangle)\)). Let \(\mathscr{C}\) be a left \(X(p^{n})\)-linear \(\infty\)-category. 
Then the descent from \(\mathrm{THH}\) relative to \(X(p^{n})\) to \(\mathrm{THH}\) relative to \(X(p^{n}-1)\) is controlled by the following result:

**Theorem** (Theorem 3.1.4).: _There is a map \(\Theta_{\mathscr{C}}:\Sigma^{-2p^{n}}\mathrm{THH}(\mathscr{C}/X(p^{n}))\to\mathrm{THH}(\mathscr{C}/X(p^{n}))\) such that there is a cofiber sequence_
\[\mathrm{THH}(\mathscr{C}/X(p^{n}-1))\xrightarrow{\iota}\mathrm{THH}(\mathscr{C}/X(p^{n}))\xrightarrow{\Theta_{\mathscr{C}}}\Sigma^{2p^{n}}\mathrm{THH}(\mathscr{C}/X(p^{n})), \tag{1}\]
_where the map \(\iota\) is \(S^{1}\)-equivariant, and the cofiber of \(\iota\) is (at least nonequivariantly) identified with \(\Sigma^{2p^{n}}\mathrm{THH}(\mathscr{C}/X(p^{n}))\)._

**Remark 1.1.6**.: Motivated by [1, 2], we dub the map \(\Theta_{\mathscr{C}}\) the _topological Sen operator_; its construction is motivated by the work of [10] relating \(\mathrm{BP}\langle n\rangle\) to Cohen-Moore-Neisendorfer type fiber sequences (11). When \(\mathscr{C}=\mathrm{LMod}_{\mathrm{BP}\langle n-1\rangle}\), Theorem 2.2.4 implies that the map \(\Theta\) sends
\[\Theta:\theta_{n}^{j}\mapsto jp\theta_{n}^{j-1}.\]
When \(n=1\), it therefore behaves like the Sen operator on the diffracted Hodge complex of \(\mathbf{Z}_{p}\) which computes \(\overline{\mathbb{A}}_{\mathbf{Z}_{p}}\{*\}\).

**Remark 1.1.7**.: In Appendix A (see Remark A.24), we describe a quaternionic analogue of (1), obtained by replacing \(X(n)\) by the Thom spectrum \(X_{\mathbf{H}}(n)\) of the tautological symplectic bundle over \(\Omega(\mathrm{SU}(2n)/\mathrm{Sp}(n))\) obtained via the map \(\Omega(\mathrm{SU}(2n)/\mathrm{Sp}(n))\to\Omega(\mathrm{SU}/\mathrm{Sp})\simeq\mathrm{BSp}\) given by Bott periodicity.

In Construction 2.3.1, we define an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring \(J(p)\) which admits an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-map \(J(p)\to X(p)\) such that \(\mathrm{THH}(T(1)/J(p))\simeq T(1)[J_{p-1}(S^{2})]\). The underlying \(\mathbf{E}_{1}\)-ring of \(J(p)\) is \(S[\mathbf{Z}]=S[t^{\pm 1}]\) with \(|t|=0\), but they differ as \(\mathbf{E}_{2}^{\mathrm{fr}}\)-rings. The _raison d'etre_ for \(J(p)\) is that \(\mathrm{THH}(\mathbf{Z}_{p}/J(p))\) is polynomial on a class \(x\) in degree \(2\) which is a \(p\)th root of \(\theta\in\pi_{2p}\,\mathrm{THH}(\mathbf{Z}_{p}/X(p))\). More precisely, there is an equivalence \(\mathrm{THH}(\mathbf{Z}_{p}/J(p))\simeq\mathbf{Z}_{p}[\Omega S^{3}]\) such that the map \(\mathrm{THH}(\mathbf{Z}_{p}/J(p))\to\mathrm{THH}(\mathbf{Z}_{p}/X(p))\) is induced by \(\mathbf{Z}_{p}\)-chains of the Hopf map \(\Omega S^{3}\to\Omega S^{2p+1}\). In Construction 2.3.9, we also construct two \(\mathbf{E}_{2}^{\mathrm{fr}}\)-rings (as Thom spectra over \(\Omega\mathrm{U}(2)\) and \(\Omega\mathrm{Spin}(4)\)) which play the role of \(J(p)\) for ku when \(p=2\).
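Returning to Theorem 3.1.4, here is a sanity check (this unwinding is ours, and we suppress the \(B\Delta_{n}\)-factor throughout). Take \(\mathscr{C}=\mathrm{LMod}_{\mathrm{BP}\langle n-1\rangle}\). By Theorem 2.2.4, \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) is then a free \(\pi_{*}\mathrm{BP}\langle n-1\rangle\)-module on the classes \(\theta_{n}^{j}\), and \(\Theta\) acts by \(\theta_{n}^{j}\mapsto jp\theta_{n}^{j-1}\). Since \(\pi_{*}\mathrm{BP}\langle n-1\rangle\) is torsion-free, the kernel of \(\Theta\) on homotopy is spanned by \(\theta_{n}^{0}=1\), and the long exact sequence of (1) yields
\[\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}-1))\cong\pi_{*}\mathrm{BP}\langle n-1\rangle\oplus\bigoplus_{j\geq 1}\Sigma^{2jp^{n}-1}\pi_{*}\mathrm{BP}\langle n-1\rangle/jp\]
(with \(\mathrm{BP}\langle n-1\rangle/jp\simeq\mathrm{BP}\langle n-1\rangle/p^{v_{p}(j)+1}\) after \(p\)-localization), in agreement with the description recorded in Remark 2.2.12 below.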
We construct the following cofiber sequence analogous to (1) for any \(J(p)\)-linear \(\infty\)-category \(\mathscr{C}\):
\[\mathrm{THH}(\mathscr{C})\xrightarrow{\iota}\mathrm{THH}(\mathscr{C}/J(p))\xrightarrow{\Theta^{\prime}_{\mathscr{C}}}\Sigma^{2}\mathrm{THH}(\mathscr{C}/J(p)).\]
It turns out that upon reducing the above cofiber sequence mod \(p\), one obtains the following important example:

**Example 1.1.8**.: If \(\mathscr{C}\) is a \(\mathbf{Z}_{p}\)-linear \(\infty\)-category, there is a cofiber sequence (see Variant 3.1.10)
\[\mathrm{THH}(\mathscr{C})\otimes_{\mathbf{Z}_{p}}\mathbf{F}_{p}\xrightarrow{\iota}\mathrm{THH}(\mathscr{C}\otimes_{\mathbf{Z}_{p}}\mathbf{F}_{p})\xrightarrow{\Theta^{\prime}}\Sigma^{2}\mathrm{THH}(\mathscr{C}\otimes_{\mathbf{Z}_{p}}\mathbf{F}_{p}). \tag{2}\]
When \(\mathscr{C}=\mathrm{Mod}_{\mathbf{Z}_{p}}\), the effect of the map \(\Theta^{\prime}\) on homotopy is given by the map \(\mathbf{F}_{p}[\sigma]\to\Sigma^{2}\mathbf{F}_{p}[\sigma]\) which sends \(\sigma^{j}\mapsto j\sigma^{j-1}\). There is also a cofiber sequence
\[\mathrm{THH}(\mathscr{C})^{t\mathbf{Z}/p}\otimes_{\mathbf{Z}_{p}}\mathbf{F}_{p}\xrightarrow{\iota}\mathrm{HP}(\mathscr{C}\otimes_{\mathbf{Z}_{p}}\mathbf{F}_{p}/\mathbf{F}_{p})\xrightarrow{\Theta^{\prime}}\mathrm{HP}(\mathscr{C}\otimes_{\mathbf{Z}_{p}}\mathbf{F}_{p}/\mathbf{F}_{p}). \tag{3}\]
If \(\mathscr{C}=\mathrm{Mod}_{R}\) for an animated \(\mathbf{Z}_{p}\)-algebra \(R\), we expect the maps in (2) and (3) to respect the motivic filtrations. Taking \(\mathrm{gr}_{\mathrm{mot}}^{i}[-2i]\) would then produce the following cofiber sequences involving the associated graded pieces of the Nygaard filtration on the prismatic cohomologies of \(R\) and \(R/p\):
\[(\mathscr{N}^{i}\hat{\mathbb{A}}_{R})/p\to\mathrm{F}_{i}^{\mathrm{conj}}\mathrm{dR}_{(R/p)/\mathbf{F}_{p}}\to\mathrm{F}_{i-1}^{\mathrm{conj}}\mathrm{dR}_{(R/p)/\mathbf{F}_{p}},\]
\[\overline{\mathbb{A}}_{R}/p\to\mathrm{dR}_{(R/p)/\mathbf{F}_{p}}\to\mathrm{dR}_{(R/p)/\mathbf{F}_{p}}.\]
Such cofiber sequences on Hodge-Tate cohomology do indeed exist, and can be constructed purely algebraically using the methods of [1] and [1, Proposition 6.4.8]; see (19) and (21).

We also show by explicit calculation:

**Proposition** (Example 3.3.3 and Proposition 3.3.11 for precise statements).: _There is an isomorphism \(\pi_{*}\mathrm{TP}(\mathbf{Z}_{p}[t]/X(p))\cong\pi_{*}\mathrm{HP}(\mathrm{BP}\langle 1\rangle[t]/\mathrm{BP}\langle 1\rangle)\)._

_Furthermore, the map \(\mathrm{TP}^{\mathrm{gr}}(\mathrm{BP}\langle n-1\rangle[t]/X(p^{n}))\to\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) is an equivalence after \(K(n)\)-localization, and Conjecture 2.2.18 implies that (up to a Nygaard-type completion) \(L_{K(n)}\mathrm{TP}(-/X(p^{n}))\) is \(\mathbf{A}^{1}\)-invariant._

We also have:

**Conjecture** (Conjecture 3.1.14 and Conjecture 3.3.5).: _Let \(R\) be an animated \(\mathbf{Z}_{p}\)-algebra, and let \(\mathrm{F}_{\star}^{\mathrm{conj}}\widehat{\Omega}_{R}^{\not{D}}\) denote the conjugate-filtered (\(p\)-completed) diffracted Hodge complex of [1].
Then \(\mathrm{THH}(R/J(p))\) admits a motivic filtration such that \(\mathrm{gr}_{\mathrm{mot}}^{i}\mathrm{THH}(R/J(p))\simeq(\mathrm{F}_{i}^{\mathrm{conj}}\widehat{\Omega}_{R}^{\not{D}})[2i]\), and such that the map \(\Theta_{R}^{\prime}:\mathrm{THH}(R/J(p))\to\Sigma^{2}\mathrm{THH}(R/J(p))\) respects the motivic filtration and induces the map \(\Theta+i:\mathrm{F}_{i}^{\mathrm{conj}}\widehat{\Omega}_{R}^{\not{D}}\to\mathrm{F}_{i-1}^{\mathrm{conj}}\widehat{\Omega}_{R}^{\not{D}}\) on \(\mathrm{gr}_{\mathrm{mot}}^{i}\)._

_Similarly, \(\mathrm{THH}(R/X(p))\) admits a motivic filtration such that \(\mathrm{gr}_{\mathrm{mot}}^{i}\mathrm{THH}(R/X(p))\simeq(\mathrm{F}_{pi}^{\mathrm{conj}}\widehat{\Omega}_{R}^{\not{D}})[2pi]\otimes_{R}R[\mathrm{BSU}(p-1)]\). Moreover, \(\mathrm{TP}(R/X(p))\) admits a motivic filtration \(\mathrm{F}_{\mathrm{mot}}^{*}\mathrm{TP}(R/X(p))\) such that \(\mathrm{gr}_{\mathrm{mot}}^{i}\mathrm{TP}(R/X(p))\simeq\hat{\mathbb{A}}_{R/\mathbf{Z}_{p}[\bar{p}]}[2i]\otimes_{R}\epsilon^{R}\), where \(\hat{\mathbb{A}}_{R/\mathbf{Z}_{p}[\bar{p}]}\) is the Nygaard completion of \(\vec{p}\Omega_{R}\)._

In Section 3, we supplement Conjecture 3.1.14 with some examples (such as \(R\) being a \(p\)-complete perfectoid ring, \(R=\mathbf{Z}/p^{n}\) for odd \(p\), \(R\) being a complete DVR of mixed characteristic \((0,p)\), and \(R=\mathbf{Z}_{p}[t]\)).

**Remark 1.1.9**.: For the case \(R=\mathbf{Z}/p^{n}\), we give "two" calculations of the diffracted Hodge complex \(\widehat{\Omega}_{\mathbf{Z}/p^{n}}^{\not{D}}\); one uses abstract properties of the diffracted Hodge complex (and was explained to us by Bhatt), and the other (provided in Appendix B) is via concrete calculations in the ring \(W(\mathbf{Z}_{p})\). In particular, in Corollary 3.2.15, we refine the calculation of [1, Example 5.15] to show that there is an equivalence \(\mathrm{WCart}_{\mathbf{Z}/p^{n}}^{\mathrm{HT}}\cong\mathbf{G}_{a}^{\sharp}/\mathbf{G}_{m}^{\sharp}\) of stacks over \(\mathbf{Z}/p^{n}\).

In Section 3.4, we also study an analogue of the Segal conjecture for THH relative to \(J(p)\) and \(T(n)\). One interesting consequence (Proposition 3.4.7) is that if \(R\) is a \(p\)-torsionfree discrete commutative ring such that \(R/p\) is regular Noetherian and \(L\Omega_{R}^{n}=0\) for \(n\gg 0\), then [1, Remark 4.7.4] and Conjecture 3.1.14 imply that \(R\) satisfies a version of the Segal conjecture for THH relative to \(J(p)\).

In Proposition 3.5.3, we prove an analogue of the Cartier isomorphism in Hochschild homology for a flat polynomial algebra over any \(\mathbf{E}_{2}\)-ring, and show that it specializes to homotopical analogues of several known examples of the Cartier isomorphism. (This is quite likely well-known to some experts, but we could not find a source.)

**Proposition** (Proposition 3.5.3).: _Let \(R\) be an \(\mathbf{E}_{2}\)-ring. Then there is an \(S^{1}\)-equivariant map \(\mathfrak{C}:\mathrm{HH}(R^{t\mathbf{Z}/p}[t]/R^{t\mathbf{Z}/p})\to\mathrm{HH}(R[t]/R)^{t\mathbf{Z}/p}\) sending \(t\mapsto t^{p}\), where \(S^{1}\) acts on \(\mathrm{HH}(R[t]/R)^{t\mathbf{Z}/p}\) via the residual \(S^{1}/\mu_{p}\)-action, and on \(\mathrm{HH}(R^{t\mathbf{Z}/p}[t]/R^{t\mathbf{Z}/p})\) via the diagonal action on Hochschild homology and residual \(S^{1}/\mu_{p}\)-action on \(R^{t\mathbf{Z}/p}\).
If \(t\) is given weight \(1\), then \(\mathfrak{C}\) induces an \(S^{1}\)-equivariant equivalence \(\mathrm{HH}(R^{t\mathbf{Z}/p}[t]/R^{t\mathbf{Z}/p})_{\mathrm{wt}\leq m}\to(\mathrm{HH}(R[t]/R)_{\mathrm{wt}\leq mp})^{t\mathbf{Z}/p}\) of graded \(R^{t\mathbf{Z}/p}\)-modules._

In Section 4, we describe the topological Sen operator from the perspective of the moduli stack \(\mathcal{M}_{\mathrm{FG}}\) of formal groups. We begin by describing an algebraic analogue of THH. This is given by an Adams-Novikov analogue of the Bokstedt spectral sequence: if \(R\) is a \(p\)-local homotopy commutative ring such that \(\operatorname{MU}_{*}(R)\) is concentrated in even degrees, one can define a stack \(\mathcal{M}_{R}\) whose coherent cohomology is the \(E_{2}\)-page of the Adams-Novikov spectral sequence for \(R\) (see [10, Chapter 9]).

**Proposition** (Remark 4.1.5).: _If \(\operatorname{gr}_{\operatorname{ev}}^{\bullet}\operatorname{THH}(R)\) denotes the associated graded of the even filtration of [11] on \(\operatorname{THH}(R)\) (which recovers the motivic filtration on \(\operatorname{THH}(R)\) of [12] when \(R\) is quasisyntomic), then there is a spectral sequence:_
\[\pi_{*}\mathrm{HH}(\mathcal{M}_{R}/\mathcal{M}_{\mathrm{FG}})\Rightarrow\pi_{*}\mathrm{gr}_{\operatorname{ev}}^{\bullet}\mathrm{THH}(R).\]
_There is also an analogue for relative \(\operatorname{THH}\)._

This spectral sequence behaves essentially like the Bokstedt spectral sequence in most examples. In particular, if \(R\to R^{\prime}\) is a map of \(p\)-local homotopy commutative rings whose \(\operatorname{MU}\)-homologies are concentrated in even degrees, then \(\operatorname{HH}(\mathcal{M}_{R}/\mathcal{M}_{R^{\prime}})\) can be viewed as the "Adams-Novikov-Bokstedt associated graded" of \(\operatorname{gr}_{\operatorname{ev}}^{\bullet}\operatorname{THH}(R/R^{\prime})\). Motivated by this perspective, we describe an analogue of the topological Sen operator of Theorem 3.1.4 as a Gauss-Manin connection on stacks related to \(\mathcal{M}_{\mathrm{FG}}\) (see Example 4.1.11):

**Theorem** (Example 4.1.11 and Variant 4.1.13).: _The stack \(\mathcal{M}_{T(n)}\) is isomorphic to the moduli stack of graded \(p\)-typical formal groups equipped with a \(p\)-typical coordinate of order \(\leq p^{n}\). Moreover, the Adams-Novikov analogue of Theorem 3.1.4 is a fiber sequence_
\[\operatorname{HH}(X/\mathcal{M}_{T(n-1)})\to\operatorname{HH}(X/\mathcal{M}_{T(n)})\xrightarrow{\Theta_{\mathrm{mot}}}\Sigma^{2p^{n},p^{n}}\operatorname{HH}(X/\mathcal{M}_{T(n)})\]
_associated to any stack \(X\to\mathcal{M}_{T(n)}\), where \(\Sigma^{n,w}\) denotes a shift by homological degree \(n\) and weight \(w\)._

_Similarly, there is a fiber sequence_
\[\operatorname{HH}(X/\mathcal{M}_{\mathrm{FG}})\to\operatorname{HH}(X/\mathcal{M}_{J(p)})\xrightarrow{\Theta_{\mathrm{mot}}}\Sigma^{2,1}\operatorname{HH}(X/\mathcal{M}_{J(p)})\]
_associated to any stack \(X\to\mathcal{M}_{J(p)}\)._

**Remark 1.1.10**.: In Appendix A (Proposition A.23 and Remark A.24), we also study a quaternionic analogue of the above fiber sequence. This description crucially relies on the twistor fibration \(\mathbf{C}P^{2n-1}\to\mathbf{H}P^{n-1}\), which is given in coordinates by the map \([z_{1}:\dots:z_{2n}]\mapsto[z_{1}+z_{2}\mathbf{j}:\dots:z_{2n-1}+z_{2n}\mathbf{j}]\).
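For example, when \(n=2\), this is the classical Penrose twistor fibration \(\mathbf{C}P^{3}\to\mathbf{H}P^{1}\cong S^{4}\), whose fiber is the \(\mathbf{C}P^{1}\) of complex lines contained in a fixed quaternionic line.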
### Some complements In Conjecture 2.2.18, we suggest that the identification of \(\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) can be extended to an equivalence \(\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\simeq\mathrm{BP}\langle n \rangle^{tS^{1}}[B\Delta_{n}]\) of spectra: **Conjecture** (Conjecture 2.2.18).: _The spectrum \(\operatorname{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) admits the structure of an \(S^{1}\)-equivariant \(\mathrm{BP}\langle n\rangle\)-module, and the isomorphism \(\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/T(n))\cong\pi_{*}\mathrm{BP} \langle n\rangle^{tS^{1}}\) lifts to an equivalence of spectra \(\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/T(n))\simeq\mathrm{BP}\langle n \rangle^{tS^{1}}\)._ This discussion suggests viewing the pair \((\pi_{0}\mathrm{BP}\langle n\rangle^{tS^{1}},(\frac{[p](\hbar)}{\hbar}))\) as a higher chromatic analogue of the crystalline prism \((\mathbf{Z}_{p},(p))\), where \(\hbar\) is the complex orientation of \(\mathrm{BP}\langle n\rangle\) and \([p](\hbar)\) is its \(p\)-series. Note that the pair \((\pi_{0}\mathrm{BP}\langle n\rangle^{tS^{1}},(\frac{[p](\hbar)}{\hbar}))\) has no reason to naturally admit the structure of a prism. Finally, it would be interesting to know whether Slogan 1.1.5 can be used to prove [10, Conjecture 6.1]. A first step in this direction would be to show that the topological Sen operators on \(\operatorname{THH}(\operatorname{BP}\langle n\rangle/X(p^{j}))\), \(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{j-1}))\),..., and \(\operatorname{THH}(\operatorname{BP}\langle n-j\rangle)\) can also be matched up under the structural uniformity of Slogan 1.1.5. (Also see Remark 2.2.5.) This article suggests several directions in which the work presented here can be extended; we have recorded these as Conjecture 2.1.9, Conjecture 2.2.18, Conjecture 2.3.22, Conjecture 3.1.14, the closely related Conjecture 3.3.5, Conjecture 3.3.16, and Conjecture A.2. We wish to emphasize that, unlike [4, Theorem A and Corollary B], the main results of this article are entirely unconditional, and can be viewed as (in our opinion, substantial) evidence for the conjectures presented here and in [4]. ### Acknowledgements I'm grateful to Ben Antieau, Elden Elmanto, Jeremy Hahn, Ishan Levy, Sasha Petrov, Arpon Raksit, and Andy Senger for conversations on these and related topics; to Bhargav Bhatt for explaining Lemma 3.2.11 to me; to Akhil Mathew for telling me about the cofiber sequence (21); and to Andy Baker for a discussion about the spectra \(X_{\mathbf{H}}(n)\) from Definition A.18. Some of the ideas in this article started during a visit to Northwestern in March 2022, and I'm especially grateful to Ben Antieau for the opportunity to visit; I would have never been able to understand [1] -- more generally, this subject area -- were it not for him. I would also like to thank my advisors Dennis Gaitsgory and Mike Hopkins for their advice, support, and influence on me. ## 2. Calculation of THH ### Review of \(X(p^{n})\) **Definition 2.1.1** (Ravenel, [11, Section 3]).: Let \(X(n)\) denote the Thom spectrum of the \(\mathbf{E}_{2}\)-map \(\Omega\mathrm{SU}(n)\subseteq\mathrm{BU}\xrightarrow{J}B\mathrm{GL}_{1}(S)\), where the first map arises from Bott periodicity. **Example 2.1.2**.: The \(\mathbf{E}_{2}\)-ring \(X(1)\) is the sphere spectrum, while \(X(\infty)\) is MU. 
Since the map \(\Omega\mathrm{SU}(n)\to\mathrm{BU}\) is an equivalence in dimensions \(\leq 2n-2\), the same is true for the map \(X(n)\to\mathrm{MU}\); the first dimension in which \(X(n)\) has an element in its homotopy which is not detected by MU is \(2n-1\). In other words, writing \(\pi_{*}\mathrm{MU}=\mathbf{Z}[b_{1},b_{2},\cdots]\) with \(|b_{i}|=2i\), the classes \(b_{1},\cdots,b_{n-1}\) lift to \(X(n)\); there is an inclusion \(\mathbf{Z}[b_{1},\cdots,b_{n-1}]\subseteq\pi_{*}X(n)\). **Remark 2.1.3**.: The \(\mathbf{E}_{2}\)-structure on \(X(n)\) does _not_ extend to an \(\mathbf{E}_{3}\)-structure (see [11, Example 1.5.31]). After localizing at a prime \(p\), the spectrum MU splits as a wedge of suspensions of \(\mathrm{BP}\); this splitting comes from the Quillen idempotent on \(\mathrm{MU}\). The same is true of the \(X(n)\) spectra, as explained in [11, Section 6.5]: a multiplicative map \(X(n)_{(p)}\to X(n)_{(p)}\) is determined by a polynomial \(f(x)=\sum_{0\leq i\leq n-1}a_{i}x^{i+1}\), with \(a_{0}=1\) and \(a_{i}\in\pi_{2i}(X(n)_{(p)})\). One can use this to define a truncated form of the Quillen idempotent \(\epsilon_{n}\) on \(X(n)_{(p)}\) (see [12, Proposition 1.3.7]), and thereby obtain a summand of \(X(n)_{(p)}\). We summarize the necessary results in the following theorem. **Theorem 2.1.4**.: _Let \(n\) be such that \(p^{n}\leq k\leq p^{n+1}-1\). Then \(X(k)_{(p)}\) splits as a wedge of suspensions of the spectrum \(T(n)=\epsilon_{p^{n}}\cdot X(p^{n})_{(p)}\)._ * \(T(n)\) _admits the structure of an_ \(\mathbf{E}_{1}\)_-ring such that the map_ \(T(n)\to X(p^{n})\) _is a map of_ \(\mathbf{E}_{1}\)_-rings (see_ _[_1_, Section 7.5]__)._ * _The map_ \(T(n)\to\mathrm{BP}\) _is an equivalence in dimensions_ \(\leq|v_{n+1}|-2\)_, so there is an indecomposable element_ \(v_{i}\in\pi_{*}T(n)\) _which maps to an indecomposable element in_ \(\pi_{*}\mathrm{BP}\) _for_ \(0\leq i\leq n\)_. In particular (by (a)), there is an inclusion_ \(\mathbf{Z}_{(p)}[v_{1},\cdots,v_{n}]\subseteq\pi_{*}T(n)\)_._ * _The map_ \(T(n)\to\mathrm{BP}\) _induces the inclusion_ \(\mathrm{BP}_{*}T(n)=\mathrm{BP}_{*}[t_{1},\cdots,t_{n}]\subseteq\mathrm{BP}_{ *}(\mathrm{BP})\) _on_ \(\mathrm{BP}\)_-homology, and the inclusions_ \(\mathbf{F}_{2}[\zeta_{1}^{2},\cdots,\zeta_{n}^{2}]\subseteq\mathbf{F}_{2}[ \zeta_{1}^{2},\zeta_{2}^{2},\cdots]\) _and_ \(\mathbf{F}_{p}[\zeta_{1},\cdots,\zeta_{n}]\subseteq\mathbf{F}_{p}[\zeta_{1}, \zeta_{2},\cdots]\) _on_ \(\mathrm{mod}\ 2\) _and_ \(\mathrm{mod}\ p\) _homology, respectively._ **Example 2.1.5**.: The \(\mathbf{E}_{1}\)-ring \(T(1)\) is the Thom spectrum of the \(\mathbf{E}_{1}\)-map \(\Omega S^{2p-1}\to\mathrm{BGL}_{1}(S)\) which detects \(\alpha_{1}\in\pi_{2p-2}\mathrm{BGL}_{1}(S)\cong\pi_{2p-3}S\) on the bottom cell of \(\Omega S^{2p-1}\). Since \(p\alpha_{1}=0\), a nullhomotopy of \(p\alpha_{1}\) defines a class \(v_{1}\in\pi_{2p-2}T(1)\). Under the unit map \(T(1)\to\mathrm{BP}\), this class is sent to the eponymous class \(v_{1}\in\pi_{2p-2}\mathrm{BP}\). **Warning 2.1.6**.: Unfortunately, Theorem 2.1.4 leads to an egregious clash of notation, since \(T(n)\) is also often used to denote the telescope of a \(v_{n}\)-self map of a finite type \(n\) spectrum. In this article, we will _only_ use \(T(n)\) to mean the \(\mathbf{E}_{1}\)-ring from Theorem 2.1.4. We propose using the notation \(\mathrm{Tel}(n)\) to denote the telescope of a \(v_{n}\)-self map. 
**Notation 2.1.7**.: If \(R\) is a commutative ring, we write \(\Lambda_{R}(x)\) to denote an exterior \(R\)-algebra on a class \(x\), and \(R\langle x\rangle\) to denote a divided power \(R\)-algebra on a class \(x\). The notation \(\gamma_{j}(x)\) denotes the \(j\)th divided power of \(x\), so that \(j!\gamma_{j}(x)=x^{j}\). We will also often abusively write \(R\langle x\rangle\) to denote the underlying \(R\)-module of the divided power algebra \(R\langle x\rangle\).

**Construction 2.1.8**.: Define a space \(\Delta_{n}\) by
\[\Delta_{n}=\prod_{i=1}^{n}\mathrm{SU}(p^{i}-1)/\mathrm{SU}(p^{i-1}),\]
and let \(\overline{\Delta}_{i}\) denote the \(i\)th term in this product. If \(R\) is a ring spectrum, write \(R[\Omega\Delta_{n}]\) to denote the \(\mathbf{E}_{2}\)-polynomial \(R\)-algebra \(R[x_{i}|1\leq i\leq p^{n}-1,i\neq p^{k}-1]\), where \(|x_{i}|=2i\). Let \(R[B\Delta_{n}]\) denote the \(2\)-fold bar construction of the augmentation \(R[\Omega\Delta_{n}]\to R\), so that it is an \(\mathbf{E}_{2}\)-\(R\)-coalgebra whose homotopy groups are isomorphic to \(\pi_{*}(R)\langle y_{i}|1\leq i\leq p^{n}-1,i\neq p^{k}\rangle\) where \(|y_{j}|=2j\). As mentioned in the introduction, \(R[B\Delta_{n}]\) morally should be viewed as the \(R\)-chains on the "classifying space of \(\prod_{i=1}^{n}\mathrm{SU}(p^{i}-1)/\mathrm{SU}(p^{i-1})\)"; to this end, if \(X\) is another space, we will write \(R[B\Delta_{n}\times X]\) to denote \(R[B\Delta_{n}]\otimes_{R}R[X]\); and if \(R\) is a discrete ring, we will often write \(\mathrm{H}_{*}(B\Delta_{n};R)\) to denote \(\pi_{*}R[B\Delta_{n}]\). The factor \(R[B\Delta_{n}]\) will primarily be an unfortunate annoyance in this article.

Note that \(\Delta_{1}=\mathrm{SU}(p-1)\). Then, we have \(X(p^{n})=T(n)[\Omega\Delta_{n}]\) and \(X(p^{n}-1)=T(n-1)[\Omega\Delta_{n}]\), so that
\[\mathrm{H}_{*}(X(2^{n});\mathbf{F}_{2})\cong\mathbf{F}_{2}[\zeta_{1}^{2},\cdots,\zeta_{n}^{2}]\otimes_{\mathbf{F}_{2}}\mathrm{H}_{*}(\Omega\Delta_{n};\mathbf{F}_{2}),\]
\[\mathrm{H}_{*}(X(p^{n});\mathbf{F}_{p})\cong\mathbf{F}_{p}[\zeta_{1},\cdots,\zeta_{n}]\otimes_{\mathbf{F}_{p}}\mathrm{H}_{*}(\Omega\Delta_{n};\mathbf{F}_{p}),\]
and similarly for \(X(p^{n}-1)\). It is believed that \(T(n)\) admits more structure (see also [1] for some discussion):

**Conjecture 2.1.9**.: _The \(\mathbf{E}_{1}\)-ring structure on \(T(n)\) extends to a framed \(\mathbf{E}_{2}\)-ring structure._

**Remark 2.1.10**.: When \(p=2\), both \(X(2)=T(1)\) and \(T(2)\) admit the structure of \(\mathbf{E}_{2}^{\mathrm{fr}}\)-algebras by [1, Remark 3.8]: they are Thom spectra of U-bundles over \(\Omega\mathrm{Sp}(1)\simeq\Omega S^{3}\) and \(\Omega\mathrm{Sp}(2)\), respectively. These U-bundles are defined via double loops of the composite
\[\mathrm{BSp}(n)\to\mathrm{BSU}(2n)\to\mathrm{BSU}\simeq B^{3}\mathrm{U}.\]

**Proposition 2.1.11** ([1, Corollary 2.9 and Corollary 3.7]).: _The \(\mathbf{E}_{2}\)-structure on \(X(n)\) refines to an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-structure._

**Corollary 2.1.12**.: _Let \(\mathscr{C}\) be an \(X(n)\)-linear \(\infty\)-category. Then \(\mathrm{THH}(\mathscr{C}/X(n))\) acquires the structure of an \(S^{1}\)-equivariant spectrum with an \(S^{1}\)-equivariant unit map \(X(n)\to\mathrm{THH}(\mathscr{C}/X(n))\)._

### Computation of \(\mathrm{THH}\) relative to \(X(p^{n})\)

Unless explicitly stated otherwise, all fiber sequences in this section (as well as the following sections) will be localized at \(p\).
**Recollection 2.2.1**.: There are isomorphisms
\[\mathrm{H}_{*}(\mathrm{BP}\langle n-1\rangle;\mathbf{F}_{2})\cong\mathbf{F}_{2}[\zeta_{1}^{2},\cdots,\zeta_{n}^{2},\zeta_{n+1},\cdots]\cong\mathrm{H}_{*}(T(n);\mathbf{F}_{2})\otimes_{\mathbf{F}_{2}}\mathbf{F}_{2}[\zeta_{j}|j\geq n+1],\]
\[\mathrm{H}_{*}(\mathrm{BP}\langle n-1\rangle;\mathbf{F}_{p})\cong\Lambda_{\mathbf{F}_{p}}[\tau_{j}|j\geq n]\otimes_{\mathbf{F}_{p}}\mathbf{F}_{p}[\zeta_{1},\zeta_{2},\cdots]\cong\mathrm{H}_{*}(T(n);\mathbf{F}_{p})\otimes_{\mathbf{F}_{p}}\mathbf{F}_{p}[\zeta_{j}|j\geq n+1]\otimes_{\mathbf{F}_{p}}\Lambda_{\mathbf{F}_{p}}[\tau_{j}|j\geq n],\quad p>2.\]
We note that the "\(Q_{0}\)-Margolis homology" of \(\mathrm{H}_{*}(\mathrm{BP}\langle n-1\rangle;\mathbf{F}_{2})\) (i.e., the homology of \(\mathrm{Sq}^{1}\) viewed as a differential acting on \(\mathrm{H}_{*}(\mathrm{BP}\langle n-1\rangle;\mathbf{F}_{2})\)) is precisely \(\mathrm{H}_{*}(T(n);\mathbf{F}_{2})\), because \(\mathrm{Sq}^{1}\) is a derivation and \(\mathrm{Sq}^{1}(\zeta_{j})=\zeta_{j-1}^{2}\).

**Recollection 2.2.2**.: We need to recall some results from [10]. First, [10, Theorem A] tells us that there exists an \(\mathbf{E}_{3}\)-form of \(\mathrm{BP}\langle n\rangle\). Next, [10, Theorem 2.5.4] states that \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})\) is isomorphic to a polynomial algebra over \(\pi_{*}\mathrm{BP}\langle n-1\rangle\) on infinitely many generators, the first of which is denoted \(\sigma^{2}(v_{n})\). The class \(\sigma^{2}(v_{n})\) lives in degree \(2p^{n}\). Finally, [10, Theorem 5.0.1] states that there is an isomorphism \(\pi_{*}\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})\simeq(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU}))[\![\hbar]\!]\) of \(\mathbf{Z}_{p}[v_{1},\cdots,v_{n-1}]\)-algebras. Moreover, under the map \(\mathrm{MU}^{hS^{1}}\to\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})\), the class \(v_{n}\in\pi_{*}\mathrm{MU}^{hS^{1}}\cong(\pi_{*}\mathrm{MU})[\![\hbar]\!]\) is sent to \(\sigma^{2}(v_{n})\hbar\). In particular, \(\pi_{*}\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})\) detects the classes \(p,v_{1},\cdots,v_{n-1},v_{n}:=\sigma^{2}(v_{n})\hbar\). Similarly, \(\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})\) detects the classes \(p,\cdots,v_{n}\) under the map \(\mathrm{MU}^{tS^{1}}\to\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})\), and \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})^{t\mathbf{Z}/p}\) detects the classes \(p,\cdots,v_{n-1}\) under the map \(\mathrm{MU}^{t\mathbf{Z}/p}\to\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/\mathrm{MU})^{t\mathbf{Z}/p}\).

**Notation 2.2.3**.: If \(R\) is a complex-oriented ring spectrum, we will write \(\hbar\) to denote the complex orientation of \(R\), viewed as a class in \(\pi_{-2}R^{hS^{1}}\). The motivation for this notation comes from geometric representation theory (in the case where \(R\) is a \(\mathbf{Z}_{p}\)-algebra), where the complex orientation \(\hbar\in\mathrm{H}^{2}(\mathbf{C}P^{\infty};R)\) plays the role of a quantization parameter.

The main result of this section is the following analogue of Bokstedt's theorem on \(\mathrm{THH}(\mathbf{F}_{p})\) and \(\mathrm{THH}(\mathbf{Z}_{p})\).

**Theorem 2.2.4**.: _Fix \(\mathbf{E}_{3}\)-forms of the truncated Brown-Peterson spectra \(\mathrm{BP}\langle n-1\rangle\) and \(\mathrm{BP}\langle n\rangle\). We have:_ 1.
_There is a_ \(p\)_-complete equivalence of_ \(\mathrm{BP}\langle n-1\rangle\)_-modules:_ \[\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\simeq\mathrm{BP} \langle n-1\rangle[B\Delta_{n}\times\Omega S^{2p^{n}+1}].\] _Write_ \(\theta_{n}\in\pi_{2p^{n}}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) _to denote the class corresponding to the map_ \(E:S^{2p^{n}}\to\Omega S^{2p^{n}+1}\)_. Under the_ \(S^{1}\)_-equivariant map_ \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\to\mathrm{THH}(\mathrm{ BP}\langle n-1\rangle/\mathrm{MU})\)_, the class_ \(\theta_{n}\) _is sent to the class_ \(\sigma^{2}(v_{n})\) _from Recollection 2.2.2. There are also_ \(p\)_-complete isomorphisms_ \[\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))^{t \mathbf{Z}/m} \cong\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/m}[B\Delta_{n}]_{*},\] \[\pi_{*}\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle/X(p^{n})) \cong\mathrm{BP}\langle n\rangle[B\Delta_{n}]_{*}[\![\hbar]\!][ \frac{v_{n}}{\hbar}]\] \[\cong\mathrm{BP}\langle n\rangle[B\Delta_{n}]_{*}[\![\hbar]\!][ \theta_{n}]/(\theta_{n}\hbar-v_{n}),\] \[\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n})) \cong\mathrm{BP}\langle n\rangle^{tS^{1}}[B\Delta_{n}]_{*}\] \[\cong\mathrm{BP}\langle n\rangle[B\Delta_{n}]_{*}(\!(\hbar)\!).\] _Here, the equation \(\theta_{n}\hbar=v_{n}\) is to be understood modulo decomposables. These isomorphisms satisfy the following property: under the maps_ \[\operatorname{TC}^{-}(\operatorname{BP}\langle n-1\rangle/X(p^{n})) \to\operatorname{TC}^{-}(\operatorname{BP}\langle n-1\rangle/ \mathrm{MU}),\] \[\operatorname{TP}(\operatorname{BP}\langle n-1\rangle/X(p^{n})) \to\operatorname{TP}(\operatorname{BP}\langle n-1\rangle/ \mathrm{MU}),\] _the classes_ \(\{v_{i}\}_{0\leq i\leq n}\) _on the left-hand side are sent to the eponymous classes in the right-hand side (via Recollection 2.2.2)._ 2. _There is an equivalence of_ \(\operatorname{BP}\langle n\rangle\)_-modules:_ \[\operatorname{THH}(\operatorname{BP}\langle n\rangle/X(p^{n}))_{p}^{\wedge} \cong\operatorname{BP}\langle n\rangle[B\Delta_{n}]_{p}^{\wedge}\oplus\left( \bigoplus_{j\geq 1}\Sigma^{2jp^{n+1}-1}\operatorname{BP}\langle n\rangle[B\Delta_{n }]/p^{v_{p}(j)+1}\right)_{p}^{\wedge}.\] _In particular, there is an additive equivalence_ \[\operatorname{THH}(\operatorname{BP}\langle n\rangle/X(p^{n}))/p\cong \operatorname{BP}\langle n\rangle[S^{2p^{n+1}-1}\times\Omega S^{2p^{n+1}+1} \times B\Delta_{n}]/p.\] _Moreover,_ \(\pi_{2p^{n+1}-3}\mathrm{TC}^{-}(\operatorname{BP}\langle n\rangle/X(p^{n}))_{p} ^{\wedge}\) _detects the class_ \(\sigma_{n}\in\pi_{2p^{n+1}-3}X(p^{n})\) _from_ _[_10_, Lemma 3.1.12]__._ **Remark 2.2.5**.: Let \(v_{[j,m)}\) denote the regular sequence \(v_{j},\cdots,v_{m-1}\) in \(\pi_{*}\mathrm{BP}\). Then the argument used to prove Theorem 2.2.4 in fact shows the following (somewhat more general) result: for \(j\leq n\), there is an isomorphism of \(\operatorname{BP}\langle n-1\rangle_{*}\)-modules \[\pi_{*}\mathrm{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{j}))/v_{[0,n-j)} \cong\operatorname{BP}\langle n-1\rangle[B\Delta_{j}]_{*}[\theta_{n}]/v_{[0,n- j)}\otimes_{\mathbf{F}_{p}}\Lambda_{\mathbf{F}_{p}}(\lambda_{j+1},\cdots, \lambda_{n}), \tag{4}\] where \(|\lambda_{i}|=2p^{i}-1\). When \(j=0\), (4) recovers [1, Proposition 2.9]. For brevity, the discussion below only includes the cases \(j=n\) and \(j=n-1\). Similarly, using that \(T(1)\) (resp. \(T(2)_{(2)}\)) is a Thom spectrum over \(\Omega S^{2p+1}\) (resp. 
\(\Omega\mathrm{Sp}(2)\)), there are equivalences
\[\operatorname{THH}(\mathbf{Z}_{p})/p\simeq\mathbf{F}_{p}[S^{2p-1}\times\Omega S^{2p+1}],\]
\[\operatorname{THH}(\operatorname{ku})/(2,\beta)\simeq\mathbf{F}_{2}[\operatorname{Sp}(2)\times\Omega S^{9}].\]

**Remark 2.2.6**.: If we write \(\pi_{*}\mathrm{MU}=\mathbf{Z}[x_{1},x_{2},\cdots]\) where \(|x_{i}|=2i\), and define \(\mathrm{MU}\langle n-1\rangle=\mathrm{MU}/(x_{n},x_{n+1},\cdots)\), then one can similarly prove an analogue of Theorem 2.2.4 with \(\operatorname{BP}\langle n-1\rangle\) replaced by \(\mathrm{MU}\langle n-1\rangle\). Namely, if \(n\) is a power of \(p\), there is an equivalence
\[\operatorname{THH}(\mathrm{MU}\langle n-1\rangle/X(n))_{p}^{\wedge}\simeq\mathrm{MU}\langle n-1\rangle[\Omega S^{2n+1}]_{p}^{\wedge}\]
of \(\mathrm{MU}\langle n-1\rangle\)-modules. There is also a \(p\)-complete isomorphism
\[\pi_{*}\mathrm{TP}(\mathrm{MU}\langle n-1\rangle/X(n))_{p}^{\wedge}\cong\pi_{*}(\mathrm{MU}\langle n\rangle^{tS^{1}})_{p}^{\wedge}.\]
We expect (see Conjecture 2.2.18 below) that this refines to a \(p\)-complete equivalence \(\mathrm{TP}(\mathrm{MU}\langle n-1\rangle/X(n))_{p}^{\wedge}\simeq(\mathrm{MU}\langle n\rangle^{tS^{1}})_{p}^{\wedge}\).

**Example 2.2.7**.: One can make Theorem 2.2.4(a) very explicit for \(\mathbf{Z}_{p}\) (note that Theorem 2.2.4(b) for \(\mathbf{Z}_{p}\) is Bokstedt's result). For instance,
\[\pi_{*}\mathrm{TC}^{-}(\mathbf{Z}_{p}/T(1))\cong\mathbf{Z}_{p}[v_{1}][\![\hbar]\!][\theta]/(\hbar\theta=v_{1}).\]
Let us view \(\operatorname{BP}\langle 1\rangle\) as \((\operatorname{ku}_{p}^{\wedge})^{h\mathbf{F}_{p}^{\times}}\), and let \(\beta\in\pi_{2}\mathrm{ku}\) be the Bott class. Then, \(\pi_{*}\mathrm{ku}^{tS^{1}}\cong\mathbf{Z}[\beta](\!(\hbar)\!)\) is isomorphic to \(\mathbf{Z}[\![q-1]\!](\!(\hbar)\!)\), where \(q=1+\beta\hbar\) lives in degree \(0\). If \(\mathbf{Z}_{p}[\![\overline{p}]\!]\) is as in [1, Corollary 3.8.8], then \(\pi_{*}\mathrm{BP}\langle 1\rangle^{tS^{1}}\cong\mathbf{Z}_{p}[\![\overline{p}]\!](\!(\hbar)\!)\). If we assume (for simplicity) that \(T(1)\) is an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-algebra, then replacing \(X(p)\) by \(T(1)\), we obtain:
\[\pi_{*}\mathrm{TP}(\mathbf{Z}_{p}/T(1))\cong\mathbf{Z}_{p}[\![\overline{p}]\!](\!(\hbar)\!).\]
Here, \(\mathbf{F}_{p}^{\times}\) acts on \(\mathbf{Z}_{p}[\![q-1]\!]\) as specified before [1, Proposition 3.8.6]; indeed, the \(\mathbf{Z}_{p}^{\times}\)-action on \(\mathbf{Z}_{p}[\![q-1]\!]=\pi_{0}(\mathrm{ku}_{p}^{\wedge})^{tS^{1}}\) agrees with the action of the Adams operations on \(\pi_{*}(\mathrm{ku}_{p}^{\wedge})^{tS^{1}}\), as one can check by calculating the Adams operations on the \(p\)-completed complex K-theory of \(\mathbf{C}P^{\infty}\). Indeed, if \(g\in\mathbf{Z}_{p}^{\times}\), then
\[\psi^{g}(\hbar)=\frac{1}{g}\sum_{j\geq 1}\binom{g}{j}\beta^{j-1}\hbar^{j}=\frac{1}{g}\frac{(1+\beta\hbar)^{g}-1}{\beta},\]
so that
\[\psi^{g}(q)=\psi^{g}(1+\beta\hbar)=1+g\beta\psi^{g}(\hbar)=(1+\beta\hbar)^{g}=q^{g}.\]

**Remark 2.2.8**.: Recall from [1, Section 3.4] that there is an \(\mathbf{E}_{2}\)-monoidal functor \(\mathrm{sh}:\mathrm{Sp}^{\mathrm{gr}}\to\mathrm{Sp}^{\mathrm{gr}}\) given by shearing: this functor sends \(M_{\bullet}\mapsto M_{\bullet}[2\bullet]\). Assume for simplicity that \(T(n)\) admits the structure of an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-algebra.
From this perspective, part of Theorem 2.2.4(a) simply states that there is an equivalence of ungraded \(\mathrm{BP}\langle n-1\rangle\)-modules
\[\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T(n))\simeq\mathrm{sh}(\mathrm{gr}_{v_{n}}\mathrm{BP}\langle n\rangle),\]
where \(\mathrm{sh}(\mathrm{gr}_{v_{n}}\mathrm{BP}\langle n\rangle)\) denotes the shearing of the associated graded of the \(v_{n}\)-adic filtration \(\mathrm{F}_{v_{n}}^{*}\mathrm{BP}\langle n\rangle\) on \(\mathrm{BP}\langle n\rangle\).

An immediate implication of Theorem 2.2.4 is the following.

**Corollary 2.2.9** ([1, Corollary 5.0.2]).: _Fix an \(\mathbf{E}_{3}\)-form of the truncated Brown-Peterson spectrum \(\mathrm{BP}\langle n-1\rangle\). We have \(L_{K(n)}K(\mathrm{BP}\langle n-1\rangle)\neq 0\)._

Proof.: There is a trace map \(K(\mathrm{BP}\langle n-1\rangle)\to\mathrm{TP}(\mathrm{BP}\langle n-1\rangle)\), which is a map of \(\mathbf{E}_{2}\)-rings. It therefore suffices to exhibit a nonzero module over \(L_{K(n)}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle)\) -- but we may take the module \(L_{K(n)}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\), which is nonzero by Theorem 2.2.4(a). (In fact, Theorem 2.2.4(a) implies \(\pi_{*}L_{K(n)}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) is isomorphic to \(\mathbf{Z}_{p}[v_{1},\cdots,v_{n-1},v_{n}^{\pm 1}]^{\wedge}_{(p,\cdots,v_{n-1})}(\!(\hbar)\!)\) tensored with the \(\mathbf{Z}_{p}\)-homology of \(B\Delta_{n}\).)

**Remark 2.2.10**.: It is easy to see that \(T(n)\to\mathrm{BP}\langle n\rangle\) is a nilpotent extension. This implies in particular that the following square is Cartesian by the Dundas-Goodwillie-McCarthy theorem [1, Theorem 7.2.2.1]:
\[\begin{CD}K(T(n))@>>>K(\mathrm{BP}\langle n\rangle)\\ @VVV@VVV\\ \mathrm{TC}(T(n))@>>>\mathrm{TC}(\mathrm{BP}\langle n\rangle)\end{CD}\]
Note that there is also a commutative square
\[\begin{CD}\mathrm{TC}(T(n))@>>>\mathrm{TC}(\mathrm{BP}\langle n\rangle)\\ @VVV@VVV\\ \mathrm{TC}^{-}(T(n))@>>>\mathrm{TC}^{-}(\mathrm{BP}\langle n\rangle)\end{CD}\]
and Theorem 2.2.4 and Theorem 3.1.4 give an inductive approach to calculating the bottom row. One might therefore view the results of this article as a first step to fully computing \(K(\mathrm{BP}\langle n\rangle)\). It would be very interesting to describe \(\mathrm{TC}(T(n))\). For example, we expect that for a general odd prime, the spectrum \(\mathrm{TP}(T(1))\) is closely related to the \(\mathbf{E}_{1}\)-quotient \(S/\!\!/\alpha_{p/p}\). (Here, \(\alpha_{p/p}\in\pi_{2p(p-1)-1}(S)\) is an element in the \(\alpha\)-family.)

However, more is true about the map \(T(n)\to\mathrm{BP}\langle n\rangle\): in fact, every element in \(\ker(\pi_{*}T(n)\to\pi_{*}\mathrm{BP}\langle n\rangle)\) is nilpotent. To see this, first observe that this map is a rational equivalence (indeed, it is an equivalence on \(Q_{0}\)-Margolis homology), so \(\mathrm{fib}(T(n)\to\mathrm{BP}\langle n\rangle)\) is torsion. Moreover, the map \(T(n)\to\mathrm{BP}\langle n\rangle\) is surjective on homotopy (since it is a ring map, and the generators \(p,v_{1},\cdots,v_{n}\in\pi_{*}\mathrm{BP}\langle n\rangle\) lift to \(T(n)\)), so that the map \(\mathrm{fib}(T(n)\to\mathrm{BP}\langle n\rangle)\to T(n)\) induces an injection on homotopy. If \(x\in\pi_{*}T(n)\) is in the image of the map \(\mathrm{fib}(T(n)\to\mathrm{BP}\langle n\rangle)\to T(n)\), then the image of \(x\) under the Hurewicz map \(\pi_{*}T(n)\to\mathrm{MU}_{*}T(n)\) is also torsion; but \(\mathrm{MU}_{*}T(n)\cong\mathrm{MU}_{*}[t_{1},\cdots,t_{n}]\) is torsion-free, so this image vanishes, and \(x\) must be nilpotent by the main theorem of [1]. This is the desired claim.
More generally, recall [1, Table 1], reproduced here as Table 1 (for the definitions of these spectra, see [10] for \(A\), where it is denoted \(X_{5}\); [1, Construction 3.1] and [1] for \(B\); [11] for \(y(n)\); and [1] for \(y_{\mathbf{Z}}(n)\)). In a manner similar to above, if \(R\) is an \(\mathbf{E}_{1}\)-ring as in the second line of Table 1, and \(\Theta(R)\) is the associated designer spectrum, one can show that every element in \(\ker(\pi_{*}R\to\pi_{*}\Theta(R))\) is nilpotent. It follows, for example, that there is a Cartesian square
\[\begin{CD}K(R)@>>>K(\Theta(R))\\ @VVV@VVV\\ \mathrm{TC}(R)@>>>\mathrm{TC}(\Theta(R))\end{CD}\]
Moreover, the proof of Theorem 2.2.4 shows that were \(R\) to admit the structure of an \(\mathbf{E}_{2}\)-ring (which is generally _not true_6), \(\mathrm{THH}(\Theta(R)/R)\) would be \(p\)-completely equivalent to \(R\oplus\bigoplus_{j\geq 1}\Sigma^{2jp^{n+1}-1}R/pj\) (where \(n\) is the "height" of \(R\)). If \(R=y(n)\) or \(y_{\mathbf{Z}}(n)\), this result is literally true by Theorem 2.2.4, as long as one assumes Conjecture 2.1.9 and interprets \(\mathrm{THH}(\Theta(R)/R)\) to mean \(\mathrm{THH}(\mathrm{BP}\langle n\rangle/T(n))\otimes_{T(n)}R\). This does not cover the cases \(R=A,B\), though; see Appendix A for further discussion of these cases.

Footnote 6: For instance, \(y(n)\) cannot admit the structure of an \(\mathbf{E}_{2}\)-ring, thanks to the Steinberger identity on the action of the Dyer-Lashof operation \(Q_{1}\) on the dual Steenrod algebra (see [1, Theorems III.2.2 and III.2.3]).

\begin{table}
\begin{tabular}{c|c c c c c c} Height & 0 & 1 & 2 & \(n\) & \(n\) & \(n\) \\ \hline Base \(\mathbf{E}_{1}\)-ring \(R\) & \((S^{0})_{p}^{\wedge}\) & \(A\) & \(B\) & \(T(n)\) & \(y(n)\) & \(y_{\mathbf{Z}}(n)\) \\ \hline Designer chromatic spectrum \(\Theta(R)\) & \(\mathbf{Z}_{p}\) & bo & \(\mathrm{tmf}\) & \(\mathrm{BP}\langle n\rangle\) & \(k(n)\) & \(k_{\mathbf{Z}}(n)\) \\ \end{tabular}
\end{table} Table 1. The relation between \(R\) and \(\Theta(R)\) is analogous to the relationship between \(T(n)\) and \(\mathrm{BP}\langle n\rangle\).

**Remark 2.2.11**.: It is natural to ask whether Theorem 2.2.4 can be generalized to describe \(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{m}))\) if \(m\neq n\). For \(m<n\), we do not know a full description (after killing \(p,\cdots,v_{n-m-1}\), see Remark 2.2.5); but the techniques of Theorem 3.1.4 below provide a conceptual approach to addressing this question. For \(m>n\), the proof of Theorem 2.2.4 easily implies that there is an additive isomorphism
\[\pi_{*}\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{m}))\cong\pi_{*}\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))\otimes_{\operatorname{BP}\langle n-1\rangle_{*}}\operatorname{BP}\langle n-1\rangle_{*}\langle y_{i}|p^{n}<i\leq p^{m}\rangle\cong\operatorname{BP}\langle n-1\rangle[\Omega S^{2p^{n}+1}]_{*}\langle y_{i}|1\leq i\leq p^{m}\text{ such that }i\neq p^{k}\text{ for }0\leq k\leq n\rangle.\]
Here, \(y_{i}\) lives in degree \(2i\). For example, if \(n=0\), the divided power factor is just \(\operatorname{BP}\langle n-1\rangle_{*}[\operatorname{BSU}(p^{m})]\). For instance, in the limit as \(m\to\infty\), we recover the statement that \(\pi_{*}\operatorname{THH}(\mathbf{F}_{p}/\mathrm{MU})\simeq\mathbf{F}_{p}[\operatorname{BSU}\times\Omega S^{3}]_{*}\).
**Remark 2.2.12**.: Theorem 2.2.4(b) implies that \[\pi_{*}\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}-1))\cong\operatorname{BP}\langle n-1\rangle[B\Delta_{n}]_{*}\oplus\bigoplus_{j\geq 1}\operatorname{BP}\langle n-1\rangle[B\Delta_{n}]_{*-2jp^{n}+1}/p^{v_{p}(j)+1}.\] This can be compared to Theorem 2.2.4(a) (we will study this in further detail in Section 3): the complexity of \(\pi_{*}\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}-1))\) compared to \(\pi_{*}\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))\) can be understood as arising via the descent spectral sequence for the map \(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}-1))\to\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))\). Note that \(X(p^{n})\otimes_{X(p^{n}-1)}X(p^{n})\simeq X(p^{n})[\Omega S^{2p^{n}-1}]\); using this, one can calculate using methods similar to the proof of Theorem 2.2.4 that the \(E_{2}\)-page of the descent spectral sequence is \[E_{2}^{*,*}\cong\pi_{*}\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))[\epsilon]/\epsilon^{2},\] where \(|\epsilon|=2p^{n}-1\). Calculating the differentials gives an "alternative" proof of Theorem 2.2.4(b) given Theorem 2.2.4(a); we will expand on this below in Remark 2.2.17. In fact, inductively studying \(\operatorname{THH}\) of \(\operatorname{BP}\langle n-1\rangle\) relative to \(X(p^{j})\) for \(j\leq n\) gives a conceptual explanation for the families of differentials visible in the calculations of \(\pi_{*}\operatorname{THH}(\operatorname{BP}\langle n-1\rangle)\) in [1, Section 8], [20], and [1]; see Theorem 3.1.4 and Example 4.1.11.

The proof of Theorem 2.2.4 will be broken into several components. Let us begin by illustrating Theorem 2.2.4(a) in the case \(n=0,1\).

Proof of Theorem 2.2.4(a) for \(n=0,1\).: We need to show that there are equivalences of spectra \(\operatorname{THH}(\mathbf{F}_{p})\simeq\mathbf{F}_{p}[\Omega S^{3}]\) and \(\operatorname{THH}(\mathbf{Z}_{p}/X(p))\simeq\mathbf{Z}_{p}[\operatorname{BSU}(p-1)\times\Omega S^{2p+1}]\). The first equivalence is classical (see [1]), so we argue the second equivalence. There is a \(p\)-local map \(f:\operatorname{SU}(p)\to\Omega S^{3}\langle 3\rangle\) of spaces given by the composite \[\operatorname{SU}(p)\to\operatorname{SU}(p)/\operatorname{SU}(p-1)\simeq S^{2p-1}\xrightarrow{\alpha_{1}}\Omega S^{3}\langle 3\rangle.\] In [1, Remark 4.1.4], we described a fiber sequence (which was also known to Toda in [12]) \[S^{2p-1}\xrightarrow{\alpha_{1}}\Omega S^{3}\langle 3\rangle\to\Omega S^{2p+1}. \tag{5}\] This induces a fiber sequence of \(\mathbf{E}_{1}\)-spaces \[\Omega\mathrm{SU}(p)\xrightarrow{f}\Omega^{2}S^{3}\langle 3\rangle\to\operatorname{SU}(p-1)\times\Omega^{2}S^{2p+1}.\] We now compute: \[\mathrm{THH}(\mathbf{Z}_{p}/X(p))\simeq\mathrm{THH}(\mathbf{Z}_{p})\otimes_{\mathrm{THH}(X(p))}X(p)\simeq\mathrm{THH}(\mathbf{Z}_{p})\otimes_{\mathbf{Z}_{p}\otimes\mathrm{THH}(X(p))}\mathbf{Z}_{p}.\] The map \(X(p)\to\mathbf{Z}_{p}\) is precisely the map induced by \(f:\Omega\mathrm{SU}(p)\to\Omega^{2}S^{3}\langle 3\rangle\), so the above tensor product is given by \(\mathbf{Z}_{p}[\Omega S^{2p+1}\times\mathrm{BSU}(p-1)]\), as desired.
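To record the case \(n=1\) just established in explicit degrees (our unwinding, not in the source; it uses only that \(\mathrm{H}_{*}(\Omega S^{2p+1};\mathbf{Z}_{p})\) is polynomial on a class in degree \(2p\) and that \(\mathrm{H}_{*}(\operatorname{BSU}(p-1);\mathbf{Z}_{p})\) is concentrated in even degrees): \[\pi_{*}\operatorname{THH}(\mathbf{Z}_{p}/X(p))\cong\mathbf{Z}_{p}[\operatorname{BSU}(p-1)]_{*}[\theta_{1}],\qquad|\theta_{1}|=2p.\] In particular, the polynomial generator \(\theta_{1}\) lives in degree \(2p=|v_{1}|+2\), consistent with the identification of \(\hbar\theta_{n}\) (in degree \(2p^{n}-2=|v_{n}|\)) with \(v_{n}\) modulo decomposables in Remark 2.2.16 below.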
**Remark 2.2.13**.: Recall that the calculation \(\mathrm{THH}(\mathbf{F}_{p})\simeq\mathbf{F}_{p}[\Omega S^{3}]\) follows from [1] and the Hopkins-Mahowald theorem that \(\mathbf{F}_{p}\) is the Thom spectrum of the \(\mathbf{E}_{2}\)-map \(\Omega^{2}S^{3}\to\mathrm{BGL}_{1}(S^{\wedge}_{p})\) which detects \(1-p\in\pi_{1}\mathrm{BGL}_{1}(S^{\wedge}_{p})\cong\mathbf{Z}^{\times}_{p}\) on the bottom cell of \(\Omega^{2}S^{3}\). In [1], we prove (unconditionally!) that \(\mathbf{Z}_{p}\) is the Thom spectrum of a map \(\mu:\Omega^{2}S^{2p+1}\to\mathrm{BGL}_{1}(T(1))\) which detects \(v_{1}\in\pi_{2p-1}\mathrm{BGL}_{1}(T(1))\cong\pi_{2p-2}T(1)\) on the bottom cell of \(\Omega^{2}S^{2p+1}\). (Unlike in the classical Hopkins-Mahowald theorem, the map \(\mu\) is not an \(\mathbf{E}_{2}\)-map.) This result implies that \(\mathbf{Z}_{p}\) is the Thom spectrum of a map \(\mathrm{SU}(p-1)\times\Omega^{2}S^{2p+1}\to\mathrm{BGL}_{1}(X(p))\), which can also be used to prove Theorem 2.2.4(a) for \(n=1\). We now turn to Theorem 2.2.4(a) in the general case; the strategy is to compute the homology of each of the spectra under consideration, and run the Adams spectral sequence. In the case of \(\mathrm{THH}^{t\mathbf{Z}/m}\), \(\mathrm{TC}^{-}\), and \(\mathrm{TP}\), we will need the "continuous homology" of [1, Equation 2.3]. **Proposition 2.2.14**.: 1. _There are isomorphisms_ \[\mathrm{H}_{*}(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F} _{p})\cong\begin{cases}\mathrm{H}_{*}(\mathrm{BP}\langle n-1\rangle[B\Delta_ {n}];\mathbf{F}_{2})[\sigma(\zeta_{n+1})]&p=2,\\ \mathrm{H}_{*}(\mathrm{BP}\langle n-1\rangle[B\Delta_{n}];\mathbf{F}_{p})[ \sigma(\tau_{n})]&p>2.\end{cases}\] 2. _There are isomorphisms_ \[\mathrm{H}_{*}(\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}));\mathbf{F} _{p})\cong\begin{cases}\mathrm{H}_{*}(\mathrm{BP}\langle n\rangle[B\Delta_{n} ];\mathbf{F}_{2})[\sigma(\zeta_{n+2})]\otimes_{\mathbf{F}_{2}}\Lambda_{ \mathbf{F}_{2}}(\sigma(\zeta_{n+1}^{2}))&p=2,\\ \mathrm{H}_{*}(\mathrm{BP}\langle n\rangle[B\Delta_{n}];\mathbf{F}_{p})[ \sigma(\tau_{n+1})]\otimes_{\mathbf{F}_{p}}\Lambda_{\mathbf{F}_{p}}(\sigma( \zeta_{n+1}))&p>2.\end{cases}\] _Moreover, there is a Bockstein_ \(\beta:\sigma(\zeta_{n+2})\mapsto\sigma(\zeta_{n+1}^{2})\) _for_ \(p=2\)_, and a Bockstein_ \(\beta:\sigma(\tau_{n+1})\mapsto\sigma(\zeta_{n+1})\) _for_ \(p>2\)_._ Proof.: We begin by proving (a). We will use the Bokstedt spectral sequence, which runs \[E_{*,*}^{2}=\mathrm{HH}_{*}(\mathrm{H}_{*}(\mathrm{BP}\langle n-1\rangle; \mathbf{F}_{p})/\mathrm{H}_{*}(X(p^{n});\mathbf{F}_{p}))\Rightarrow\mathrm{H}_ {*}(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p}).\] Since \(\mathrm{H}_{*}(X(p^{n});\mathbf{F}_{p})\cong\mathrm{H}_{*}(T(n);\mathbf{F}_{p} )\otimes_{\mathbf{F}_{p}}\mathrm{H}_{*}(\Omega\Delta_{n};\mathbf{F}_{p})\) and the action of \(\mathrm{H}_{*}(X(p^{n});\mathbf{F}_{p})\) on \(\mathrm{H}_{*}(\mathrm{BP}\langle n-1\rangle;\mathbf{F}_{p})\) factors through the map \(\mathrm{H}_{*}(X(p^{n});\mathbf{F}_{p})\to\mathrm{H}_{*}(T(n);\mathbf{F}_{p})\) induced by the map crushing \(\Omega\Delta_{n}\) to a point, we will ignore the contribution from \(\Delta_{n}\) in this discussion. The final contribution from these terms will only be \(\mathrm{H}_{*}(B\Delta_{n};\mathbf{F}_{p})\). 
(The following may therefore be interpreted as a computation of \(\mathrm{H}_{*}(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T(n));\mathbf{F}_{p})\); however, since Conjecture 2.1.9 is not known to be true, the spectrum \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T(n))\) cannot yet be defined.) We will continue to write \(E_{*,*}^{2}\) to denote the Hochschild homology groups of \(\mathrm{H}_{*}(\mathrm{BP}\langle n-1\rangle;\mathbf{F}_{p})\) over \(\mathrm{H}_{*}(T(n);\mathbf{F}_{p})\). Recall that if \(R\) is any discrete commutative ring, there are isomorphisms \(\pi_{*}\mathrm{HH}(R[x]/R)\simeq R[x]\otimes\Lambda_{R}(\sigma x)\) and \(\pi_{*}\mathrm{HH}(\Lambda_{R}(x))\simeq\Lambda_{R}(x)\otimes R\langle\sigma x\rangle\). It therefore follows from Recollection 2.2.1 that we have \[E_{*,*}^{2}=\begin{cases}\operatorname{H}_{*}(\operatorname{BP}\langle n-1\rangle; \mathbf{F}_{2})\otimes_{\mathbf{F}_{2}}\Lambda_{\mathbf{F}_{2}}(\sigma\zeta_{j} |j\geq n+1)&p=2,\\ \operatorname{H}_{*}(\operatorname{BP}\langle n-1\rangle;\mathbf{F}_{p}) \otimes_{\mathbf{F}_{p}}\Lambda_{\mathbf{F}_{p}}(\sigma\zeta_{j}|j\geq n+1) \otimes_{\mathbf{F}_{p}}\mathbf{F}_{p}\langle\sigma\tau_{j}|j\geq n\rangle&p>2.\end{cases}\] The map \(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle)\to\operatorname{THH}( \operatorname{BP}\langle n-1\rangle/X(p^{n}))\) induces a map from the Bokstedt spectral sequence computing \(\operatorname{H}_{*}(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle))\) to our spectral sequence. The differentials in the Bokstedt spectral sequence computing \(\operatorname{H}_{*}(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle))\) are calculated in [1, Proposition 5.6], where it is shown that for \(p\) odd, \(j\geq p\), and \(m\geq n\), there are differentials \[d^{p-1}(\gamma_{j}(\sigma\tau_{m}))=\sigma(\zeta_{m+1})\gamma_{j-p}(\sigma\tau _{m}). \tag{6}\] The argument of [1, Proposition 5.7] implies that \[E_{*,*}^{\infty}=\begin{cases}\operatorname{H}_{*}(\operatorname{BP}\langle n -1\rangle;\mathbf{F}_{2})\otimes_{\mathbf{F}_{2}}\Lambda_{\mathbf{F}_{2}}( \sigma\zeta_{j}|j\geq n+1)&p=2,\\ \operatorname{H}_{*}(\operatorname{BP}\langle n-1\rangle;\mathbf{F}_{p}) \otimes_{\mathbf{F}_{p}}\mathbf{F}_{p}[\sigma\tau_{j}|j\geq n]/(\sigma\tau_{j })^{p}&p>2.\end{cases}\] The extensions on the \(E^{\infty}\)-page of the Bokstedt spectral sequence computing \(\operatorname{H}_{*}(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle))\) are determined by [1, Theorem 5.12]: there, it is shown that for \(j\geq n+1\), we have \((\sigma\zeta_{j})^{2}=\sigma\zeta_{j+1}\) when \(p=2\), and \((\sigma\tau_{j})^{p}=\sigma\tau_{j+1}\). These imply extensions on the \(E^{\infty}\)-page of the Bokstedt spectral sequence for \(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))\), and the resulting answer is that of the proposition. We now turn to (b). 
The calculation is similar to (a), the only difference being that the \(E^{2}\)-page of the Bokstedt spectral sequence is now \[E_{*,*}^{2}=\begin{cases}\operatorname{H}_{*}(\operatorname{BP}\langle n\rangle;\mathbf{F}_{2})\otimes_{\mathbf{F}_{2}}\Lambda_{\mathbf{F}_{2}}(\sigma(\zeta_{n+1}^{2}),\sigma\zeta_{j}|j\geq n+2)&p=2,\\ \operatorname{H}_{*}(\operatorname{BP}\langle n\rangle;\mathbf{F}_{p})\otimes_{\mathbf{F}_{p}}\Lambda_{\mathbf{F}_{p}}(\sigma\zeta_{j}|j\geq n+1)\otimes_{\mathbf{F}_{p}}\mathbf{F}_{p}\langle\sigma\tau_{j}|j\geq n+1\rangle&p>2.\end{cases}\] Again, the differentials in the Bokstedt spectral sequence computing \(\operatorname{H}_{*}(\operatorname{THH}(\operatorname{BP}\langle n\rangle))\) give rise to differentials in the above Bokstedt spectral sequence, and we have \[E_{*,*}^{\infty}=\begin{cases}\operatorname{H}_{*}(\operatorname{BP}\langle n\rangle;\mathbf{F}_{2})\otimes_{\mathbf{F}_{2}}\Lambda_{\mathbf{F}_{2}}(\sigma(\zeta_{n+1}^{2}),\sigma\zeta_{j}|j\geq n+2)&p=2,\\ \operatorname{H}_{*}(\operatorname{BP}\langle n\rangle;\mathbf{F}_{p})\otimes_{\mathbf{F}_{p}}\mathbf{F}_{p}[\sigma\tau_{j}|j\geq n+1]/(\sigma\tau_{j})^{p}\otimes_{\mathbf{F}_{p}}\Lambda_{\mathbf{F}_{p}}(\sigma\zeta_{n+1})&p>2.\end{cases}\] Again, the extensions on the \(E^{\infty}\)-page of the Bokstedt spectral sequence computing \(\operatorname{H}_{*}(\operatorname{THH}(\operatorname{BP}\langle n\rangle))\) imply extensions on the above \(E^{\infty}\)-page, and the resulting answer is that of the proposition. The Bockstein follows from the fact that \(\beta(\tau_{i})=\zeta_{i}\) for \(p\) odd and \(\beta(\zeta_{i})=\zeta_{i-1}^{2}\) for \(p=2\).

**Proposition 2.2.15**.: _There are isomorphisms_ \[\operatorname{H}_{*}^{c}(\operatorname{TC}^{-}(\operatorname{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p})\cong\operatorname{H}_{*}(\operatorname{BP}\langle n\rangle[B\Delta_{n}];\mathbf{F}_{p})[\![\hbar]\!]\oplus\hbar\text{-torsion},\] \[\operatorname{H}_{*}^{c}(\operatorname{TP}(\operatorname{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p})\cong\operatorname{H}_{*}(\operatorname{BP}\langle n\rangle[B\Delta_{n}];\mathbf{F}_{p})(\!(\hbar)\!),\] \[\operatorname{H}_{*}^{c}(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/m};\mathbf{F}_{p})\cong\operatorname{H}_{*}(\operatorname{BP}\langle n\rangle^{t\mathbf{Z}/m}[B\Delta_{n}];\mathbf{F}_{p})(\!(\hbar)\!).\] _Here, \(|\hbar|=-2\), and the \(\hbar\)-torsion terms will be specified in the proof._

Proof.: As in Proposition 2.2.14, the contribution from \(\Delta_{n}\) is just the \(\mathbf{F}_{p}\)-homology of \(B\Delta_{n}\), and we will ignore this term in the calculations. Moreover, the calculation for \(\operatorname{H}_{*}^{c}(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/p^{k}};\mathbf{F}_{p})\) is similar to the calculation of \(\operatorname{H}_{*}^{c}(\operatorname{TC}^{-}(\operatorname{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p})\) (and \(\operatorname{H}_{*}^{c}(\operatorname{TP}(\operatorname{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p})\)), so we will only do the latter. (The only difference is that \(\mathbf{F}_{p}(\!(\hbar)\!)\) below is replaced by \(\mathbf{F}_{p}(\!(\hbar)\!)[\epsilon_{k}]/\epsilon_{k}^{2}\).)
The \(E^{2}\)-page of the homological homotopy fixed points spectral sequence computing \(\mathrm{H}_{*}^{c}(\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p})\) is given by \[E_{*,*}^{2}\cong\mathrm{H}_{*}(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p})\otimes_{\mathbf{F}_{p}}\mathbf{F}_{p}[\hbar]\cong\begin{cases}\mathbf{F}_{2}[\sigma(\zeta_{n+1}),\hbar,\zeta_{1}^{2},\cdots,\zeta_{n}^{2},\zeta_{j}|j\geq n+1]&p=2\\ \mathbf{F}_{p}[\sigma(\tau_{n}),\hbar,\zeta_{i}|i\geq 1]\otimes_{\mathbf{F}_{p}}\Lambda_{\mathbf{F}_{p}}[\tau_{j}|j\geq n]&p>2.\end{cases}\] There is a map to the above spectral sequence from the homological homotopy fixed points spectral sequence computing \(\mathrm{H}_{*}^{c}(\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle);\mathbf{F}_{p})\), and [1, Proposition 6.1] calculates that there are differentials \(d^{2}(x)=\hbar\sigma(x)\) for every \(x\in\mathrm{H}_{*}(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p})\). For \(j\geq n\) (take \(j\geq n+1\) when \(p=2\)), the following classes survive to the \(E^{3}\)-page: \[\zeta_{j+1}^{\prime}=\zeta_{j+1}+\zeta_{j}\sigma(\zeta_{j})=\zeta_{j+1}+\zeta_{j}\sigma(\zeta_{n+1})^{2^{j-n-1}},\ p=2\] \[\tau_{j+1}^{\prime}=\tau_{j+1}+\tau_{j}\sigma(\tau_{j})^{p-1},\ p>2.\] Moreover, (powers of) the classes \(\sigma(\zeta_{n+1})\) at \(p=2\) and \(\sigma(\tau_{n})\) at \(p>2\) are simple \(\hbar\)-torsion: for example, \(\hbar\sigma(\zeta_{n+1})^{2^{j-n-1}}\) is killed by a \(d^{2}\)-differential on \(\zeta_{j}\), and the case for a general power of \(\sigma(\zeta_{n+1})\) follows from taking a binary expansion of the exponent. This leaves \[E_{*,*}^{3}\cong\mathbf{F}_{2}[\hbar][\zeta_{1}^{2},\cdots,\zeta_{n}^{2},\zeta_{n+1}^{2},\zeta_{j+1}^{\prime}|j\geq n+1],\ p=2,\] \[E_{*,*}^{3}\cong\mathbf{F}_{p}[\hbar][\zeta_{i}|i\geq 1]\otimes_{\mathbf{F}_{p}}\Lambda_{\mathbf{F}_{p}}[\tau_{j+1}^{\prime}|j\geq n],\ p>2,\] and the image of \(\sigma\) in filtration zero (these classes being simple \(\hbar\)-torsion). We claim that the spectral sequence degenerates at the \(E^{3}\)-page, which then implies the desired result. (In the case of \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/p}\), for instance, the class \(\epsilon_{1}\hbar^{1-p^{n}}\) plays the role of \(\tau_{n}\) in \(\mathrm{H}_{*}^{c}(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/p};\mathbf{F}_{p})\) for \(p\) odd.) As with the proof of Proposition 2.2.14, this follows from [1, Proposition 6.1]: were there any differentials in the homological homotopy fixed points spectral sequence for \(\mathrm{H}_{*}(\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p})\), there would also exist corresponding differentials in the homological homotopy fixed points spectral sequence for \(\mathrm{H}_{*}(\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle);\mathbf{F}_{p})\). However, the statement of [1, Proposition 6.1] assumes that \(\mathrm{BP}\langle n-1\rangle\) admits the structure of an \(\mathbf{E}_{\infty}\)-algebra; this is not necessary, since their appeal to [1, Proposition 5.1] only uses the existence of the Dyer-Lashof operations \(Q_{0}\) and \(Q_{1}\) on \(\mathrm{H}_{*}(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle);\mathbf{F}_{p})\), which already exist in the homology of any \(\mathbf{E}_{2}\)-algebra.
It therefore suffices to know that \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle)\) admits the structure of an \(\mathbf{E}_{2}\)-algebra, which is a consequence of our assumption that \(\mathrm{BP}\langle n-1\rangle\) is an \(\mathbf{E}_{3}\)-form of the truncated Brown-Peterson spectrum.

Proof of Theorem 2.2.4(a).: We will ignore the contribution from \(B\Delta_{n}\) below: the contribution from this term is simply its homology. We will first calculate \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) via the Adams spectral sequence \[E_{2}^{*,*}=\mathrm{Ext}_{\mathscr{A}_{*}}^{*,*}(\mathbf{F}_{p},\mathrm{H}_{*}(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p}))\Rightarrow\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))_{p}^{\wedge}.\] Using Proposition 2.2.14(a), there is a change-of-rings isomorphism \[E_{2}^{*,*}\cong\mathrm{Ext}_{\mathscr{E}(n-1)_{*}}^{*,*}(\mathbf{F}_{p},\mathbf{F}_{p}[\sigma(\zeta_{n+1})])\cong\mathbf{F}_{p}[\sigma(\zeta_{n+1}),v_{j}|0\leq j\leq n-1]\] (here \(\sigma(\zeta_{n+1})\) denotes the polynomial generator of Proposition 2.2.14(a), i.e., \(\sigma(\tau_{n})\) when \(p\) is odd), where \(v_{j}\) lives in bidegree \((s,t-s)=(1,2p^{j}-2)\). The Adams spectral sequence is concentrated in even total degree (and therefore degenerates at the \(E_{2}\)-page). The class \(\sigma(\zeta_{n+1})\) in degree \(2p^{n}\) (which is \(|\zeta_{n+1}|+1\) for \(p=2\), and \(|\tau_{n}|+1\) for \(p\) odd) is denoted \(\theta_{n}\), so that the above calculation says that there is an isomorphism \[\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\simeq\mathrm{BP}\langle n-1\rangle[B\Delta_{n}]_{*}[\theta_{n}].\] Since \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\simeq\mathrm{THH}(\mathrm{BP}\langle n-1\rangle)\otimes_{\mathrm{THH}(X(p^{n}))}X(p^{n})\), we see that \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) admits the structure of a \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle)\)-module. There is an \(\mathbf{E}_{2}\)-map \(\mathrm{BP}\langle n-1\rangle\to\mathrm{THH}(\mathrm{BP}\langle n-1\rangle)\), so that \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) acquires the structure of a \(\mathrm{BP}\langle n-1\rangle\)-module by restriction of scalars. Therefore, each of the \(\mathrm{BP}\langle n-1\rangle_{*}\)-module generators of \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) lifts to a map of spectra from a shift of \(\mathrm{BP}\langle n-1\rangle\) to \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\). Moreover, the resulting map \(\mathrm{BP}\langle n-1\rangle[B\Delta_{n}\times\Omega S^{2p^{n}+1}]\to\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) induces an isomorphism on homotopy by construction, so we obtain the first part of Theorem 2.2.4(a).

The calculation for \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/m}\) is similar to the calculation of \(\pi_{*}\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) (and \(\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\)); moreover, it will be illustrative to calculate \(\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\), since the case of \(\pi_{*}\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) will just involve bookkeeping of the \(\hbar\)-torsion terms in Proposition 2.2.15. There is an Adams spectral sequence \[E_{2}^{*,*}=\mathrm{Ext}_{\mathscr{A}_{*}}^{*,*}(\mathbf{F}_{p},\mathrm{H}_{*}^{c}(\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p}))\Rightarrow\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))_{p}^{\wedge},\] which is in general only conditionally convergent, but is strongly convergent in this case.
(This is because \(\mathrm{H}_{*}(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}));\mathbf{F}_{p})\) is bounded-below and of finite type.) By Proposition 2.2.15, there is a change-of-rings isomorphism \[E_{2}^{*,*}\cong\mathrm{Ext}_{\mathscr{E}(n)_{*}}^{*,*}(\mathbf{F}_{p},\mathbf{F}_{p}(\!(\hbar)\!))\cong\mathbf{F}_{p}[v_{j}|0\leq j\leq n](\!(\hbar)\!),\] so that the Adams spectral sequence is concentrated in even total degree (and therefore degenerates at the \(E_{2}\)-page); this gives the desired calculation.

**Remark 2.2.16**.: The homotopy fixed points spectral sequence for \(\pi_{*}\mathrm{TC}^{-}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) has \(E_{2}\)-page given by \[E_{2}^{*,*}=\mathrm{BP}\langle n-1\rangle[B\Delta_{n}]_{*}[\theta_{n}][\hbar].\] By evenness, this spectral sequence degenerates at the \(E_{2}\)-page. The calculation of Theorem 2.2.4(a) tells us that the class \(\hbar\theta_{n}\) on the \(E_{\infty}\)-page represents the class \(v_{n}\in\pi_{*}\mathrm{BP}\langle n\rangle\) (modulo decomposables). Note that Theorem 2.2.4(a) says in particular that \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/p}\cong\pi_{*}\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p}[B\Delta_{n}]\). There is an isomorphism \(\pi_{*}\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p}\cong\pi_{*}\mathrm{BP}\langle n-1\rangle^{tS^{1}}\) (which was proved in [1, Proposition 2.3], and conjectured to lift to an equivalence of spectra in [1, Conjecture 1.2]), so that \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/p}\cong\mathrm{BP}\langle n-1\rangle[B\Delta_{n}]_{*}(\!(\hbar)\!)\). Note that unless \(n=0\), this is _not_ isomorphic to \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))[\theta_{n}^{-1}]\), since \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))[\theta_{n}^{-1}]\) is \(2p^{n}\)-periodic, while \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/p}\) is \(2\)-periodic.

Proof of Theorem 2.2.4(b).: We now calculate \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\), this time with the use of Bockstein spectral sequences. (Similar arguments can be found in [1].) Again, we will ignore the contribution from \(B\Delta_{n}\) below: the contribution from this term is simply its homology. For simplicity, let us write \[x=\begin{cases}\sigma(\zeta_{n+1}^{2})&p=2,\\ \sigma(\zeta_{n+1})&p>2\end{cases},\quad y=\begin{cases}\sigma(\zeta_{n+2})&p=2,\\ \sigma(\tau_{n+1})&p>2,\end{cases}\] so that \(|x|=2p^{n+1}-1\) and \(|y|=2p^{n+1}\). If \(M\) is a (left) \(\mathrm{BP}\langle n\rangle\)-module, let \(\operatorname{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});M)\) denote \(\operatorname{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\otimes_{\mathrm{BP}\langle n\rangle}M\), so that we may informally view \(\operatorname{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})\) as \(\operatorname{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))/(p,\cdots,v_{n})\). Using Proposition 2.2.14(b), one can show that \[\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})\cong\mathbf{F}_{p}[x,y]/x^{2};\] we will compute \(\operatorname{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathrm{BP}\langle n\rangle)\) using this calculation and \(n+1\) Bockstein spectral sequences. The \(v_{0}\)-Bockstein spectral sequence is given by \[E_{1}^{*,*}=\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})[v_{0}]\cong\mathbf{F}_{p}[v_{0},x,y]/x^{2}\Rightarrow\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{Z}_{p}). \tag{7}\]
It follows from the Bockstein calculation in Proposition 2.2.14(b) that there is a \(d_{1}\)-differential \[d_{1}(y)=v_{0}x, \tag{8}\] which implies \(d_{1}(v_{0}^{k}y)=v_{0}^{k+1}x\) for all \(k\geq 0\) (by \(\mathbf{F}_{p}[v_{0}]\)-linearity). However, (8) does not immediately imply differentials on powers of \(y\), since \(\operatorname{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\) does not admit the structure of a ring (so the spectral sequence is not multiplicative). This is easily resolved: there is a map to the above Bockstein spectral sequence from the Bockstein spectral sequence computing \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle;\mathbf{Z}_{p})\), whose \(E_{1}\)-page is \[{}^{\prime}E_{1}^{*,*}\cong\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle;\mathbf{F}_{p})[v_{0}].\] The calculation of \(\operatorname{H}_{*}(\operatorname{THH}(\mathrm{BP}\langle n\rangle);\mathbf{F}_{p})\) is described in [1, Theorem 5.12]; from this, one can compute \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle;\mathbf{F}_{p})\). Here, we will only need to observe that the classes \(x,y\in E_{1}^{*,*}\) lift along the map \({}^{\prime}E_{1}^{*,*}\to E_{1}^{*,*}\). We will continue to denote these lifts by \(x\) and \(y\); there is still a \(d_{1}\)-differential \(d_{1}(y)=v_{0}x\) in \({}^{\prime}E_{1}^{*,*}\). Since \(\mathrm{THH}(\mathrm{BP}\langle n\rangle;\mathbf{Z}_{p})\) admits the structure of an \(\mathbf{E}_{2}\)-ring, the above spectral sequence is multiplicative. Therefore, we may appeal to [1, Proposition 6.8], which gives higher differentials on powers of \(y\). In particular, we claim: \[d_{v_{p}(j)+1}(y^{j})=v_{0}^{v_{p}(j)+1}xy^{j-1}, \tag{9}\] up to a unit in \(\mathbf{F}_{p}^{\times}\). By taking base-\(p\) expansions, it suffices to prove this differential when \(j\) is a power of \(p\), say \(j=p^{k}\): then, (9) says that \(d_{k+1}(y^{p^{k}})=v_{0}^{k+1}xy^{p^{k}-1}\). Using [1, Proposition 6.8] for \(k>1\), we have \[d_{k+1}((y^{p^{k-1}})^{p})=v_{0}(y^{p^{k-1}})^{p-1}d_{k}(y^{p^{k-1}})=v_{0}y^{p^{k}-p^{k-1}}d_{k}(y^{p^{k-1}});\] this inductively implies (9) once we establish the case \(k=1\). For \(p=2\), [1, Proposition 6.8] says that \[d_{2}(y^{2})=v_{0}yd_{1}(y)+Q_{1}(d_{1}(y))=v_{0}^{2}xy+Q_{1}(v_{0}x).\] But \[Q_{1}(x)=Q_{1}(\sigma(\zeta_{n+1}^{2}))=\sigma(Q_{2}(\zeta_{n+1}^{2}))=\sigma(\zeta_{n+2}^{2}),\] which is zero. Therefore, we see that \(d_{2}(y^{2})=v_{0}^{2}xy\), as desired. For \(p>2\), [1, Proposition 6.8] says that \[d_{2}(y^{p})=v_{0}y^{p-1}d_{1}(y)+\sum_{1\leq j\leq r}j[d_{1}(y)y^{j-1},d_{1}(y)y^{p-j-1}],\] for some integer \(r\). The "correction" term is a \(v_{0}\)-multiple of a sum of terms of the form \([xy^{j-1},xy^{p-j-1}]\). Note that this class lives in \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle;\mathbf{F}_{p})\), but for the calculation of (7), we are only concerned with the image of this class in \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})\). We claim that the image of \([xy^{j-1},xy^{p-j-1}]\) in \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})\) vanishes, so the correction terms above vanish.
To prove this, observe that the Leibniz rule implies that, in \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle;\mathbf{F}_{p})\), we have \[\begin{split}[xy^{j-1},xy^{p-j-1}]&=x[y^{j-1},xy^{p-j-1}]+y^{j-1}[x,xy^{p-j-1}]\\ &=x^{2}[y^{j-1},y^{p-j-1}]+xy^{p-j-1}[y^{j-1},x]+y^{p-2}[x,x]+xy^{j-1}[x,y^{p-j-1}].\end{split}\] Here, all terms are written up to sign; this will not matter, since we will show that each of the terms in the sum above vanishes. The first term vanishes since \(x^{2}=0\), and the third term vanishes since \([x,x]=0\). For the second and fourth term, we will argue more generally that the image of \([x,y^{k}]\) in \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})\) vanishes for any \(k\geq 0\). The Leibniz rule implies that \([x,y^{k}]=ky^{k-1}[x,y]\), so it suffices to show that the image of \([x,y]\) in \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})\) vanishes. Since \([x,y]\) lives in degree \(|x|+|y|+1=(2p^{n+1}-1)+2p^{n+1}+1=4p^{n+1}\) and \(\pi_{4p^{n+1}}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})\cong\mathbf{F}_{p}\{y^{2}\}\), we must have \([x,y]\dot{=}y^{2}\) in \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})\) if \([x,y]\) is nonzero. To rule out \([x,y]\dot{=}y^{2}\), we observe that the \(\mathbf{E}_{2}\)-map \(\iota:\mathrm{THH}(\mathrm{BP}\langle n\rangle;\mathbf{F}_{p})\to\mathrm{THH}(\mathrm{BP}\langle n\rangle/\mathrm{MU};\mathbf{F}_{p})\) factors through \(\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})\). The classes \(x\) and \(y\) are in the image of the map \(\mathrm{THH}(\mathrm{BP}\langle n\rangle;\mathbf{F}_{p})\to\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})\), and \(x\) is killed by the map \(\iota\). Since \(\iota\) is an \(\mathbf{E}_{2}\)-map, we must have \(\iota([x,y])=[\iota(x),\iota(y)]=0\); however, \(\iota(y^{2})=\iota(y)^{2}\) is nonzero. Therefore, \([x,y]\) cannot be a unit multiple of \(y^{2}\); but since \(\pi_{4p^{n+1}}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{F}_{p})\) is a \(1\)-dimensional \(\mathbf{F}_{p}\)-vector space spanned by \(y^{2}\), we must have \([x,y]=0\).

The upshot of this discussion is that the \(E_{r}\)-page of (7) is given by \[E_{r}^{*,*}=\mathbf{F}_{p}[v_{0},y^{p^{r-1}}]\{1,x,xy,xy^{2},\cdots\}/(v_{0}^{i}xy^{p^{i-1}j-1},1\leq i\leq r-1,1\leq j\leq p-1).\] In particular, no power of \(y\) survives to the \(E_{\infty}\)-page, and since \(v_{0}\) represents \(p\), we can resolve the \(v_{0}\)-extensions to conclude that \[\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n});\mathbf{Z}_{p})\cong\mathbf{Z}_{p}\oplus\bigoplus_{j\geq 1}\mathbf{Z}_{p}/p^{v_{p}(j)+1}\{xy^{j-1}\}. \tag{10}\] Note that \(|xy^{j-1}|=2jp^{n+1}-1\). The higher Bockstein spectral sequences (for \(v_{1},\cdots,v_{n}\)) all collapse at the \(E_{1}\)-page for degree reasons, as we now explain. For the \(v_{m}\)-Bockstein spectral sequence with \(1\leq m\leq n\), one can argue by induction on \(m\) (the base case is the same argument as the inductive step). First, observe that \(v_{1},\cdots,v_{n}\) survive the Bockstein spectral sequence, since \(\mathrm{BP}\langle n\rangle\) splits off \(\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\). In particular, there cannot be any differential with target given by a product of monomials in the \(v_{i}\)s. By \(\mathbf{Z}_{p}[v_{1},\cdots,v_{m}]\)-linearity, any differential must therefore be of the form \[d_{r}(xy^{j-1})=v_{i_{1}}^{r_{1}}\cdots v_{i_{a}}^{r_{a}}v_{m}^{r}xy^{k-1}\] for some \(j,k\), exponents \(r_{1},\cdots,r_{a}\), and \(1\leq i_{1},\cdots,i_{a}<m\).
(More precisely, it will be a sum of monomials of the above form, but this point will not matter.) But \(d_{r}(xy^{j-1})\) has bidegree \((t-s,s)=(2jp^{n+1}-2,r)\), while \(v_{i_{1}}^{r_{1}}\cdots v_{i_{a}}^{r_{a}}v_{m}^{r}xy^{k-1}\) has bidegree \((t-s,s)=(2r_{1}(p^{i_{1}}-1)+\cdots+2r_{a}(p^{i_{a}}-1)+2r(p^{m}-1)+2kp^{n+1}-1,r)\). Such a differential is therefore not possible, since \(2jp^{n+1}-2\) is even, while \(2r_{1}(p^{i_{1}}-1)+\cdots+2r_{a}(p^{i_{a}}-1)+2r(p^{m}-1)+2kp^{n+1}-1\) is odd. The calculation of \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\) now follows from (10). Since \(\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\simeq\mathrm{THH}(\mathrm{BP}\langle n\rangle)\otimes_{\mathrm{THH}(X(p^{n}))}X(p^{n})\), we see that \(\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\) admits the structure of a \(\mathrm{THH}(\mathrm{BP}\langle n\rangle)\)-module. There is an \(\mathbf{E}_{2}\)-map \(\mathrm{BP}\langle n\rangle\to\mathrm{THH}(\mathrm{BP}\langle n\rangle)\), so that \(\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\) acquires the structure of a \(\mathrm{BP}\langle n\rangle\)-module by restriction of scalars. Therefore, each of the \(\mathrm{BP}\langle n\rangle_{*}\)-module generators of \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\) lifts to a map of spectra from a shift of \(\mathrm{BP}\langle n\rangle\) to \(\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\). Moreover, the resulting map \(\mathrm{BP}\langle n\rangle[B\Delta_{n}]\oplus\bigoplus_{j\geq 1}\Sigma^{2jp^{n+1}-1}\mathrm{BP}\langle n\rangle[B\Delta_{n}]/p^{v_{p}(j)+1}\to\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\) induces an isomorphism on homotopy by construction, so we obtain Theorem 2.2.4(b).

**Remark 2.2.17**.: When \(n=0\), one may view the Bockstein calculation of Theorem 2.2.4(b) as a translation of the Serre spectral sequence for the fibration (5). Assume that \(p>2\). Indeed, the Serre spectral sequence is given by \[E_{*,*}^{2}=\mathrm{H}_{*}(S^{2p-1};\mathbf{Z}_{p})\otimes\mathrm{H}_{*}(\Omega S^{2p+1};\mathbf{Z}_{p})\cong\mathbf{Z}_{p}[x,y]/x^{2}\Rightarrow\mathrm{H}_{*}(\Omega S^{3}\langle 3\rangle;\mathbf{Z}_{p}).\] There is a single family of differentials, determined multiplicatively from \[d^{2p}(y)=px;\] this implies that \(d^{2p}(y^{m})=mpy^{m-1}x\). The Serre spectral sequence collapses at the \(E^{2p+1}\)-page, and the resulting answer is precisely (10). In fact, if \(\phi_{n}:\Omega^{2}S^{2p^{n}+1}\to S^{2p^{n}-1}\) is a charming map in the sense of [10, Definition 4.1.1] (such as the Cohen-Moore-Neisendorfer map of [11, 12, 13]), the proof of Theorem 2.2.4(b) can be understood as a calculation of \(\pi_{*}\mathrm{BP}\langle n-1\rangle[B\operatorname{fib}(\phi_{n})]\) using the Serre spectral sequence for the Cohen-Moore-Neisendorfer type fibration \[S^{2p^{n}-1}\to B\operatorname{fib}(\phi_{n})\to\Omega S^{2p^{n}+1}. \tag{11}\] The Serre spectral sequence for (11) is exactly the same as that of (5): the \(E^{2}\)-page is given by \[E_{*,*}^{2}=\mathrm{H}_{*}(S^{2p^{n}-1};\mathbf{Z}_{p})\otimes\mathrm{H}_{*}(\Omega S^{2p^{n}+1};\mathbf{Z}_{p})\cong\mathbf{Z}_{p}[x,y]/x^{2}\Rightarrow\mathrm{H}_{*}(B\operatorname{fib}(\phi_{n});\mathbf{Z}_{p}).\] There is a single family of differentials, determined multiplicatively from \[d^{2p^{n}}(y)=px;\] this implies that \(d^{2p^{n}}(y^{m})=mpy^{m-1}x\), and the Serre spectral sequence collapses at the \(E^{2p^{n}+1}\)-page.
The upshot is that \[\mathrm{H}_{i}(B\operatorname{fib}(\phi_{n});\mathbf{Z}_{p})\cong\begin{cases}\mathbf{Z}_{p}&i=0,\\ \mathbf{Z}_{p}/pk&i=2kp^{n}-1\ (k\geq 1),\\ 0&\text{else}.\end{cases}\] In fact, Theorem 2.2.4(b) implies that there is an equivalence of \(\mathrm{BP}\langle n-1\rangle\)-modules \[\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n-1}))\simeq\mathrm{BP}\langle n-1\rangle[B\Delta_{n-1}\times B\operatorname{fib}(\phi_{n})].\]

The calculations of Theorem 2.2.4 can be predicted from the results of [10]. Let us suppose that \(p\) is odd for simplicity. Assuming [10, Conjectures D and E], [10, Corollary B] implies that there is a map \(\Omega^{2}S^{2p^{n}+1}\to\mathrm{BGL}_{1}(X(p^{n}))\) whose Thom spectrum is \(\mathrm{BP}\langle n-1\rangle[\Omega\Delta_{n}]\). This implies that there is an equivalence of spectra \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\simeq\mathrm{BP}\langle n-1\rangle[B\Delta_{n}\times\Omega S^{2p^{n}+1}]\); this is precisely the first part of Theorem 2.2.4(a). Moreover, [10, Theorem A] says (still assuming the aforementioned conjectures) that the Thom spectrum of the composite \(\operatorname{fib}(\phi_{n})\to\Omega^{2}S^{2p^{n}+1}\to\mathrm{BGL}_{1}(X(p^{n}))\) is \(\mathrm{BP}\langle n\rangle[\Omega\Delta_{n}]\). This can be shown to imply that \(\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\simeq\pi_{*}\mathrm{BP}\langle n\rangle^{tS^{1}}[B\Delta_{n}]\), which is indeed confirmed by Theorem 2.2.4(a). This result also implies that there is an equivalence of spectra \(\mathrm{THH}(\mathrm{BP}\langle n\rangle/X(p^{n}))\simeq\mathrm{BP}\langle n\rangle[B\Delta_{n}\times B\operatorname{fib}(\phi_{n+1})]\), which is indeed true by Theorem 2.2.4(b). We will state the results predicted by this discussion as a conjecture.

**Conjecture 2.2.18**.: _Fix an \(\mathbf{E}_{3}\)-form of the truncated Brown-Peterson spectrum \(\mathrm{BP}\langle n-1\rangle\). Then \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) admits the structure of an \(S^{1}\)-equivariant \(\mathrm{BP}\langle n\rangle\)-module (where \(S^{1}\) acts trivially on \(\mathrm{BP}\langle n\rangle\)), and the equivalences of Theorem 2.2.4(a) refine to \(p\)-complete equivalences of spectra_ \[\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/m}\simeq\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/m}[B\Delta_{n}],\] \[\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\simeq\mathrm{BP}\langle n\rangle^{tS^{1}}[B\Delta_{n}].\] _The first equivalence is \(S^{1}\)-equivariant for the residual \(S^{1}/\mu_{m}\)-action on \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/m}\) and \(\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/m}\)._

**Remark 2.2.19**.: The primary difficulty with proving Conjecture 2.2.18 is that it is not clear how to endow \(\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) or \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/m}\) with the structure of \(\mathrm{BP}\langle n\rangle\)-modules. Nevertheless, a small part of the final equivalence in Conjecture 2.2.18 can be proved unconditionally when \(n=1\). Namely, there is a map \(\mathrm{TP}(\mathbf{Z}_{p}/X(p))\to\bigoplus_{j>-(p-1)}\Sigma^{2j}\mathrm{BP}\langle 1\rangle\) which induces the inclusion of summands on mod \(p\) cohomology.
(This is the "easy" range, since the first predicted summand of \(\mathrm{TP}(\mathbf{Z}_{p}/X(p))\) which is not covered by this claim is \(\Sigma^{-2(p-1)}\mathrm{BP}\langle 1\rangle\); but \(\pi_{0}\) of this spectrum is exactly where the class \(v_{1}\) lives.) We computed the mod \(p\) homology of \(\mathrm{TP}(\mathbf{Z}_{p}/X(p))\) in Proposition 2.2.15. This implies that \(\mathrm{H}^{*,c}(\mathrm{TP}(\mathbf{Z}_{p}/X(p));\mathbf{F}_{p})\cong\mathrm{H}^{*}(\mathrm{BP}\langle 1\rangle;\mathbf{F}_{p})(\!(\hbar)\!)\otimes_{\mathbf{F}_{p}}\mathrm{H}^{*}(\mathrm{BSU}(p-1);\mathbf{F}_{p})\). There is an Adams spectral sequence \[\mathrm{Ext}^{s,t+2j}_{\mathscr{A}_{*}}(\mathscr{A}/\!\!/\mathscr{E}(1),\mathscr{A}/\!\!/\mathscr{E}(1))(\!(\hbar)\!)\otimes_{\mathbf{F}_{p}}\mathrm{H}^{*}(\mathrm{BSU}(p-1);\mathbf{F}_{p})\Rightarrow\pi_{0}\mathrm{Map}(\mathrm{TP}(\mathbf{Z}_{p}/X(p)),\Sigma^{2j}\mathrm{BP}\langle 1\rangle)_{p}^{\wedge}.\] We wish to show that for \(j>-(p-1)\), any class in bidegree \((s,t-s)=(0,2j)\) survives to the \(E_{\infty}\)-page. For this, it suffices to show that there can be no nonzero \(d_{r}\)-differential off this class for \(r\geq 2\). This differential would necessarily land in \((r,2j-1)\). By [1, Proposition 4.1], \(\mathrm{Ext}^{s,t}_{\mathscr{A}_{*}}(\mathscr{A}/\!\!/\mathscr{E}(1),\mathscr{A}/\!\!/\mathscr{E}(1))\) vanishes for \(s\geq 1\), \(t-s\) odd, and \(t-s\geq-2(p-1)\). In particular, we see that taking \((s,t-s)=(r,2j-1)\), we have \(2j-1\geq-2(p-1)\) precisely when \(j>-(p-1)\). Therefore, we get a map \(\mathrm{TP}(\mathbf{Z}_{p}/X(p))\to\Sigma^{2j}\mathrm{BP}\langle 1\rangle\) for every \(j>-(p-1)\), which gives the desired claim.

### Variant: \(\mathrm{THH}\) over a deeper base

In Theorem 2.2.4, we saw a "polynomial" generator in degree \(2p^{n}\), where \(n\) is the height. When \(n=0\), this reduces to the Bokstedt generator in degree \(2\); we will now discuss a variant of Theorem 2.2.4 when \(n=1\), where one obtains a generator in degree \(2\).

**Construction 2.3.1**.: Let \(\mathrm{U}(1)\to\mathrm{SU}(p)\) denote the inclusion given by the homomorphism \[\lambda\mapsto\mathrm{diag}(\lambda,\cdots,\lambda,\lambda^{1-p}).\] There is an induced map \(\mathrm{BU}(1)\to\mathrm{BSU}(p)\), which defines an \(\mathbf{E}_{2}\)-map \(\Omega\mathrm{U}(1)\simeq\mathbf{Z}\to\Omega\mathrm{SU}(p)\). Let \(J(p)\) denote the Thom spectrum of the composite \(\mathbf{E}_{2}\)-map \(\mu:\Omega\mathrm{U}(1)\to\Omega\mathrm{SU}(p)\to\Omega\mathrm{SU}\simeq\mathrm{BU}\). Then \(J(p)\) admits an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-structure by Proposition 2.1.11 such that there is an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-algebra map \(J(p)\to X(p)\). Note that the underlying \(\mathbf{E}_{1}\)-map of \(\mu\) is null, since \(B\mu:S^{1}\to\mathrm{B}^{2}\mathrm{U}\simeq\mathrm{SU}\) is a class in \(\pi_{1}(\mathrm{SU})=0\). Therefore, the underlying \(\mathbf{E}_{1}\)-ring of \(J(p)\) is \(S[\mathbf{Z}]=S[t^{\pm 1}]\). Moreover, the underlying \(\mathbf{E}_{1}\)-map of \(J(p)\to X(p)\to\mathbf{Z}_{p}\) is the map \(S[t^{\pm 1}]\to\mathbf{Z}_{p}\) sending \(t\mapsto 1\).

**Proposition 2.3.2**.: _There is an equivalence \(\operatorname{THH}(T(1)/J(p))\simeq T(1)[J_{p-1}(S^{2})]\).
Similarly, \(\operatorname{THH}(X(p)/J(p))\simeq X(p)[J_{p-1}(S^{2})\times\operatorname{SU}(p-1)]\)._

Proof.: Indeed, \(\operatorname{THH}(T(1)/J(p))\simeq\operatorname{THH}(T(1))\otimes_{\operatorname{THH}(J(p))}J(p)\) is equivalent to \(T(1)[S^{2p-1}]\otimes_{T(1)[S^{1}]}T(1)\); but there is a fiber sequence \[S^{1}\to S^{2p-1}\to S^{2p-1}/S^{1}=\mathbf{C}P^{p-1}\simeq J_{p-1}(S^{2}),\] from which the desired claim follows.

**Proposition 2.3.3**.: _The following statements are true:_

1. _There is an equivalence_ \(\operatorname{THH}(\mathbf{Z}_{p}/J(p))\simeq\mathbf{Z}_{p}[\Omega S^{3}]\)_. In particular,_ \(\pi_{*}\operatorname{THH}(\mathbf{Z}_{p}/J(p))\cong\mathbf{Z}_{p}[x]\) _with_ \(|x|=2\)_. On homotopy, the map_ \(\operatorname{THH}(\mathbf{Z}_{p}/J(p))\to\operatorname{THH}(\mathbf{Z}_{p}/X(p))\) _is given by_ \[x^{j}\mapsto\begin{cases}\theta^{j/p}&j\in p\mathbf{Z},\\ 0&\text{else}.\end{cases}\]
2. _The canonical map_ \(\operatorname{THH}(\mathbf{Z}_{p}/J(p))\to\operatorname{THH}(\mathbf{F}_{p}/J(p))\) _factors through the unit_ \(\operatorname{THH}(\mathbf{F}_{p})\to\operatorname{THH}(\mathbf{F}_{p}/J(p))\)_, and defines an equivalence_ \(\mathbf{F}_{p}\otimes_{\mathbf{Z}_{p}}\operatorname{THH}(\mathbf{Z}_{p}/J(p))\xrightarrow{\sim}\operatorname{THH}(\mathbf{F}_{p})\) _of_ \(\operatorname{THH}(\mathbf{Z}_{p})\)_-modules._

Proof.: For part (a), we begin by observing that there is an equivalence \[\operatorname{THH}(\mathbf{Z}_{p}/J(p))\simeq\operatorname{THH}(\mathbf{Z}_{p})\otimes_{\operatorname{THH}(J(p))}J(p)\simeq\mathbf{Z}_{p}[\Omega S^{3}\langle 3\rangle]\otimes_{\mathbf{Z}_{p}[\operatorname{U}(1)]}\mathbf{Z}_{p}.\] The map \(\mathbf{Z}_{p}\otimes_{J(p)}\operatorname{THH}(J(p))\to\operatorname{THH}(\mathbf{Z}_{p})\) factors through \(\mathbf{Z}_{p}\otimes_{X(p)}\operatorname{THH}(X(p))\to\operatorname{THH}(\mathbf{Z}_{p})\), and can be identified with \(\mathbf{Z}_{p}\)-chains of the composite \[\operatorname{U}(1)\to\operatorname{SU}(p)\to S^{2p-1}\xrightarrow{\alpha_{1}}\Omega S^{3}\langle 3\rangle.\] Note that the map \(\operatorname{U}(1)\to S^{2p-1}\) is the inclusion of the fiber of the map \(S^{2p-1}\to\mathbf{C}P^{p-1}\). This composite can be identified with the action of \(S^{1}\) on \(\Omega S^{3}\langle 3\rangle\). Since there is a fiber sequence \[S^{1}\to\Omega S^{3}\langle 3\rangle\to\Omega S^{3},\] we see that \(\operatorname{THH}(\mathbf{Z}_{p}/J(p))\simeq\mathbf{Z}_{p}[\Omega S^{3}]\). To identify the map \(\operatorname{THH}(\mathbf{Z}_{p}/J(p))\to\operatorname{THH}(\mathbf{Z}_{p}/X(p))\), observe that \(\mathbf{C}P^{p-1}\simeq J_{p-1}(S^{2})\) and that there is a square where each row and column is a fiber sequence. The effect of the map \(\operatorname{THH}(\mathbf{Z}_{p}/J(p))\to\operatorname{THH}(\mathbf{Z}_{p}/X(p))\) is dictated by the bottom-right vertical map, which is induced by the James-Hopf map \(H_{p}:\Omega S^{3}\to\Omega S^{2p+1}\). On \(\mathbf{Z}_{p}\)-homology, the effect of the James-Hopf map is as stated in Proposition 2.3.3(a).
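Before turning to part (b), here is a quick degree check on the formula in part (a) (our verification, not in the source): since \(|x|=2\) and \(|\theta|=2p\), \[|x^{j}|=2j=\tfrac{j}{p}\cdot 2p=|\theta^{j/p}|\qquad\text{whenever }p\mid j,\] so the assignment \(x^{j}\mapsto\theta^{j/p}\) is degree-preserving; the vanishing of \(x^{j}\) for \(p\nmid j\) is exactly the effect of the James-Hopf map on \(\mathbf{Z}_{p}\)-homology described at the end of the proof of part (a) above.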
For part (b), there is an equivalence \[\operatorname{THH}(\mathbf{F}_{p}/J(p))\simeq\operatorname{THH}(\mathbf{F}_{p })\otimes_{\operatorname{THH}(J(p))}J(p)\simeq\mathbf{F}_{p}[\Omega S^{3}] \otimes_{\mathbf{F}_{p}[\operatorname{U}(1)]}\mathbf{F}_{p}.\] However, the map \({\bf F}_{p}\otimes_{J(p)}{\rm THH}(J(p))\to{\rm THH}({\bf F}_{p})\) factors through \({\bf F}_{p}\otimes_{{\bf Z}_{p}}{\rm THH}({\bf Z}_{p})\to{\rm THH}({\bf F}_{p})\), and can be identified with \({\bf F}_{p}\)-chains of the composite of \({\rm U}(1)\to\Omega S^{3}\langle 3\rangle\) with the canonical map \(\Omega S^{3}\langle 3\rangle\to\Omega S^{3}\). This composite is null as an \({\bf E}_{1}\)-map (in fact, as an \({\bf E}_{2}\)-map), since there is a fiber sequence of \({\bf E}_{1}\)-spaces \[{\rm BU}(1)\simeq{\bf C}P^{\infty}\to S^{3}\langle 3\rangle\to S^{3}.\] Therefore, we see that \[{\rm THH}({\bf F}_{p}/J(p))\simeq{\bf F}_{p}[\Omega S^{3}]\otimes_{{\bf F}_{p }}({\bf F}_{p}\otimes_{{\bf F}_{p}[{\rm U}(1)]}{\bf F}_{p})\simeq{\bf F}_{p}[ \Omega S^{3}\times{\bf C}P^{\infty}].\] This implies that the map \({\rm THH}({\bf Z}_{p}/J(p))\to{\rm THH}({\bf F}_{p}/J(p))\) factors through \({\rm THH}({\bf F}_{p})\to{\rm THH}({\bf F}_{p}/J(p))\). In turn, we obtain a map \({\bf F}_{p}\otimes_{{\bf Z}_{p}}{\rm THH}({\bf Z}_{p}/J(p))\to{\rm THH}({\bf F} _{p})\) which sends the generators in \(\pi_{*}({\bf F}_{p}\otimes_{{\bf Z}_{p}}{\rm THH}({\bf Z}_{p}/J(p)))\cong{\bf F }_{p}[x]\) to the generators in \(\pi_{*}{\rm THH}({\bf F}_{p})\cong{\bf F}_{p}[\sigma]\). Therefore, the map \({\bf F}_{p}\otimes_{{\bf Z}_{p}}{\rm THH}({\bf Z}_{p}/J(p))\to{\rm THH}({\bf F} _{p})\) is an equivalence, as desired. **Remark 2.3.4**.: The map \(J(p)\to X(p)\) induces a map \(u:{\rm THH}({\bf Z}_{p}/J(p))\to{\rm THH}({\bf Z}_{p}/X(p))\). Under Theorem 2.2.4 and Proposition 2.3.3, the map \(u\) can be identified with the \({\bf Z}_{p}\)-chains of the composite \[\Omega S^{3}\to\Omega S^{2p+1}\to\Omega S^{2p+1}\times{\rm BSU}(p-1);\] here, the map \(\Omega S^{3}\to\Omega S^{2p+1}\) is the Hopf map. This claim follows from the proof of Proposition 2.3.3, Proposition 2.3.2, and the EHP fibration \[J_{p-1}(S^{2})\to\Omega S^{3}\to\Omega S^{2p+1}.\] In particular, the map \(u\) induces the map \({\bf Z}_{p}[x]\to{\bf Z}_{p}[\theta]\otimes_{{\bf Z}_{p}}{\bf Z}_{p}[{\rm BSU}( p-1)]\) which sends \(x^{m}\mapsto\theta^{m/p}\) if \(p\mid m\) and \(x^{m}\mapsto 0\) otherwise. Note that if \(T(1)\) were an \({\bf E}_{2}^{\rm fr}\)-algebra, the map \(u\) would factor through \({\rm THH}({\bf Z}_{p}/J(p))\to{\rm THH}({\bf Z}_{p}/T(1))\); and under the equivalences of Theorem 2.2.4 and Proposition 2.3.3, this would identify with the \({\bf Z}_{p}\)-chains of the Hopf map. **Remark 2.3.5**.: Proposition 2.3.3 demonstrates the dependence of \({\rm THH}(R^{\prime}/R)\) on the \({\bf E}_{1}\)-\(R\)-algebra structure on \(R^{\prime}\). Indeed, recall that the underlying \({\bf E}_{1}\)-map of the \({\bf E}_{2}\)-map \(J(p)\to X(p)\to{\bf Z}_{p}\) is the map \(S[t^{\pm 1}]\to{\bf Z}_{p}\) sending \(t\mapsto 1\). Proposition 2.3.3 states that \({\rm THH}({\bf Z}_{p}/J(p))\simeq{\bf Z}_{p}[\Omega S^{3}]\). However, suppose that \(S[t^{\pm 1}]=S[{\bf Z}]\) is equipped with its standard \({\bf E}_{2}\)-structure, and \({\bf Z}_{p}\) is viewed as an \({\bf E}_{1}\)-\(S[{\bf Z}]\)-algebra via the composite \(S[{\bf Z}]\to S\to{\bf Z}_{p}\). 
Then \({\rm THH}({\bf Z}_{p}/S[{\bf Z}])\simeq{\rm THH}({\bf Z}_{p})\otimes S[{\bf C}P ^{\infty}]\simeq{\bf Z}_{p}[\Omega S^{3}\langle 3\rangle\times{\bf C}P^{\infty}]\). Since \({\bf Z}_{p}[\Omega S^{3}\langle 3\rangle\times{\bf C}P^{\infty}]\not\simeq{\bf Z}_{p}[ \Omega S^{3}]\), we conclude that \({\rm THH}({\bf Z}_{p}/S[{\bf Z}])\not\simeq{\rm THH}({\bf Z}_{p}/J(p))\). **Corollary 2.3.6**.: _There is an isomorphism \(\pi_{*}{\rm TP}({\bf Z}_{p}/J(p))\simeq{\bf Z}_{p}[t^{\pm 1}]^{\wedge}_{(t-1)}( \!(\hbar)\!)\) with \(|\hbar|=-2\)._ **Corollary 2.3.7**.: _If \(\mathscr{C}\) is a \({\bf Z}_{p}\)-linear \(\infty\)-category, there is a (non-\(S^{1}\)-equivariant) equivalence \({\rm THH}(\mathscr{C}/J(p))\otimes_{{\bf Z}_{p}}{\bf F}_{p}\simeq{\rm THH}( \mathscr{C}\otimes_{{\bf Z}_{p}}{\bf F}_{p})\)._ Proof.: By Proposition 2.3.3(b), there is an equivalence \({\rm THH}({\bf Z}_{p}/J(p))\otimes_{{\bf Z}_{p}}{\bf F}_{p}\simeq{\rm THH}({\bf F }_{p})\) of \({\rm THH}({\bf Z}_{p})\)-modules. It follows that \[{\rm THH}(\mathscr{C}/J(p))\otimes_{{\bf Z}_{p}}{\bf F}_{p} \simeq{\rm THH}(\mathscr{C})\otimes_{{\rm THH}({\bf Z}_{p})}{\rm THH}({\bf Z }_{p}/J(p))\otimes_{{\bf Z}_{p}}{\bf F}_{p}\] \[\xrightarrow{\sim}{\rm THH}(\mathscr{C})\otimes_{{\rm THH}({\bf Z}_{p })}{\rm THH}({\bf F}_{p})\simeq{\rm THH}(\mathscr{C}\otimes_{{\bf Z}_{p}}{\bf F }_{p}),\] as desired. **Remark 2.3.8**.: Recall from [1, Theorem 3.5] that if \(S[z]=S[\mathbf{Z}_{\geq 0}]\) denotes the flat polynomial ring on a class in degree \(0\), then there is an isomorphism \(\pi_{*}\mathrm{THH}(\mathbf{Z}_{p}/S[z])\cong\mathbf{Z}_{p}[\sigma^{2}(z-p)]\), where the \(\mathbf{E}_{\infty}\)-map \(S[z]\to\mathbf{Z}_{p}\) sends \(z\mapsto p\). This implies that \(\pi_{*}\mathrm{TP}(\mathbf{Z}_{p}/S[z])\cong\mathbf{Z}_{p}[z]_{(z-p)}^{\wedge} (\!(\hbar)\!)\). Similarly, there is an isomorphism \(\pi_{*}\mathrm{TP}(\mathbf{Z}_{p}/S[\![\widetilde{p}]\!])\cong\mathbf{Z}_{p}[ \![\widetilde{p}]\!]_{(\widetilde{p}-p)}^{\wedge}(\!(\hbar)\!)\), where \(\widetilde{p}\mapsto p\) and \(S[\![\widetilde{p}]\!]=\left(S[q^{\pm 1}]_{(p,q-1)}^{\wedge}\right)^{h\mathbf{F}_{p}^{ \times}}\). In the same way, there is an isomorphism \(\pi_{*}\mathrm{THH}(\mathbf{Z}_{p}/S[t^{\pm 1}])\cong\mathbf{Z}_{p}[\sigma^{2}(t+p-1)]\), where the \(\mathbf{E}_{\infty}\)-map \(S[t^{\pm 1}]\to\mathbf{Z}_{p}\) sends \(t\mapsto 1-p\). This implies that \(\pi_{*}\mathrm{TP}(\mathbf{Z}_{p}/S[t^{\pm 1}])\cong\mathbf{Z}_{p}[t^{\pm 1}]_{(t+p-1)} ^{\wedge}(\!(\hbar)\!)\). In light of the obvious analogy to Proposition 2.3.3 and Corollary 2.3.6, it is natural to ask: what is the role of \(J(p)\)? To answer this, let us assume for simplicity that \(T(1)\) admits the structure of an \(\mathbf{E}_{2}\)-ring. The main utility of \(J(p)\) is that it admits, by construction, a direct comparison to \(T(1)\); one can view \(J(p)\) as containing roughly the same "height \(1\)" information as \(T(1)\). On the other hand, we do not know how to directly compare \(S[t^{\pm 1}]\) (with the standard \(\mathbf{E}_{2}\)-structure) to \(T(1)\). (Both admit \(\mathbf{E}_{1}\)-algebra maps to \(T(1)[t^{\pm 1}]\), but this is somewhat unsatisfactory.) One can therefore view Construction 2.3.1 as an explicit modification of the \(\mathbf{E}_{2}\)-structure on \(S[t^{\pm 1}]\) such that the resulting \(\mathbf{E}_{2}\)-algebra admits an interesting map to \(T(1)\). It is natural to ask if Proposition 2.3.3 admits a generalization to \(\mathrm{BP}\langle n-1\rangle\). 
At height \(1\) and \(p=2\), we can explicitly construct some \(\mathbf{E}_{2}^{\mathrm{fr}}\)-rings which give higher analogues of \(J(p)\), but a general construction at higher heights and other primes eludes us. **Construction 2.3.9**.: Recall from Remark 2.1.10 that there is an \(\mathbf{E}_{2}\)-map \(\Omega\mathrm{Sp}(2)\to\mathrm{BU}\) whose Thom spectrum is equivalent to \(T(2)\) at \(p=2\). Let \(T_{2}(2)\) denote the \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring defined as the Thom spectrum of the composite \(\mathbf{E}_{2}\)-map \[\Omega\mathrm{Spin}(4)\to\Omega\mathrm{Sp}(2)\to\mathrm{BU},\] where the first map is induced by the inclusion \(\mathrm{Spin}(4)\subseteq\mathrm{Spin}(5)\cong\mathrm{Sp}(2)\). Similarly, let \(T_{4}(2)\) denote the \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring defined as the Thom spectrum of the composite \(\mathbf{E}_{2}\)-map \[\Omega\mathrm{U}(2)\to\Omega\mathrm{Sp}(2)\to\mathrm{BU},\] where the first map is induced by the inclusion \(\mathrm{U}(2)\subseteq\mathrm{Sp}(2)\). Note that this inclusion factors as \(\mathrm{U}(2)\to\mathrm{Spin}(4)\to\mathrm{Sp}(2)\), so that there is a composite map of \(\mathbf{E}_{2}^{\mathrm{fr}}\)-rings \[T_{4}(2)\to T_{2}(2)\to T(2).\] **Remark 2.3.10**.: There is a fiber sequence \[\Omega S^{3}\to\Omega\mathrm{Spin}(4)\to\Omega S^{3},\] which implies that \(\mathrm{MU}_{*}(T_{2}(2))\simeq\mathrm{MU}_{*}[t_{1},x_{2}]\) where \(|x_{2}|=2\). Similarly, there is a fiber sequence \[\Omega S^{3}\to\Omega\mathrm{U}(2)\to\Omega S^{1}\simeq\mathbf{Z},\] which implies that \(\mathrm{MU}_{*}(T_{4}(2))\simeq\mathrm{MU}_{*}[t_{1},x_{0}^{\pm 1}]\) where \(|x_{0}|=0\). **Lemma 2.3.11**.: _There is a diffeomorphism \(\mathrm{Sp}(2)/\mathrm{Spin}(4)\cong S^{4}\), as well as a homotopy equivalence \(\mathrm{Sp}(2)/\mathrm{U}(2)\simeq J_{3}(S^{2})\)._ Proof.: The first diffeomorphism follows immediately from the isomorphism \(\operatorname{Sp}(2)\cong\operatorname{Spin}(5)\) and the resulting chain \[\operatorname{Sp}(2)/\operatorname{Spin}(4)\cong\operatorname{Spin}(5)/ \operatorname{Spin}(4)\cong\operatorname{SO}(5)/\operatorname{SO}(4)\cong S^{4}.\] To prove the second equivalence, the key input is [1, Proposition 4.3], which says that there is a fiber sequence \[V_{2}(\mathbf{R}^{5})\to J_{3}(S^{2})\to\mathbf{C}P^{\infty};\] in other words, there is an \(S^{1}\)-action on the Stiefel manifold \(V_{2}(\mathbf{R}^{5})\) such that \(V_{2}(\mathbf{R}^{5})/S^{1}\cong J_{3}(S^{2})\). Recall that \(V_{2}(\mathbf{R}^{5})\) is diffeomorphic to \(\operatorname{SO}(5)/\operatorname{SO}(3)\cong\operatorname{Spin}(5)/ \operatorname{SU}(2)\). It is not difficult to see that the claimed \(S^{1}\)-action on \(V_{2}(\mathbf{R}^{5})\) via the above fiber sequence is precisely the residual action of \(\operatorname{U}(2)/\operatorname{SU}(2)\cong S^{1}\) on \(\operatorname{Spin}(5)/\operatorname{SU}(2)\); in particular, we may identify \(J_{3}(S^{2})\simeq\operatorname{Spin}(5)/\operatorname{U}(2)\), as desired. **Remark 2.3.12**.: The quotient \(\operatorname{Sp}(2)/\operatorname{U}(2)\) is also known as the complex Lagrangian Grassmannian \(\operatorname{Gr}_{2}^{\operatorname{Lag}}(T^{*}\mathbf{C}^{2})\) of Lagrangian subspaces of \(T^{*}\mathbf{C}^{2}\). 
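As a quick plausibility check on Lemma 2.3.11 (our dimension count, not in the source; we use \(\dim\mathrm{Sp}(2)=10\), \(\dim\mathrm{Spin}(4)=6\), and \(\dim\mathrm{U}(2)=4\)): \[\dim\big(\mathrm{Sp}(2)/\mathrm{Spin}(4)\big)=10-6=4=\dim S^{4},\qquad\dim\big(\mathrm{Sp}(2)/\mathrm{U}(2)\big)=10-4=6,\] and \(J_{3}(S^{2})\) has cells in dimensions \(0,2,4,6\), so the top dimensions match. Of course this only checks dimensions: \(\mathbf{C}P^{3}\) has the same cell dimensions but is _not_ equivalent to \(\mathrm{Sp}(2)/\mathrm{U}(2)\) (see Warning 2.3.13 below).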
**Warning 2.3.13**.: One should not confuse \(\operatorname{Sp}(2)/\operatorname{U}(2)\) with the quotient \(\operatorname{Sp}(2)/(\operatorname{Sp}(1)\times\operatorname{U}(1))\): indeed, Lemma 2.3.11 says that the former is homotopy equivalent to \(J_{3}(S^{2})\), while the latter is diffeomorphic to \(S^{7}/\operatorname{U}(1)=\mathbf{C}P^{3}\). These spaces are not homotopy equivalent (although they do become equivalent after inverting 6). Lemma 2.3.11 has the following amusing (inconsequential?) consequence:

**Corollary 2.3.14**.: _Let \(Q\subseteq\mathbf{C}P^{4}\) be a complex quadric, and let \(\operatorname{Gr}_{2}^{+}(\mathbf{R}^{5})\) denote the Grassmannian of oriented \(2\)-planes in \(\mathbf{R}^{5}\). Then, there are diffeomorphisms \(Q\cong\operatorname{Gr}_{2}^{\operatorname{Lag}}(T^{*}\mathbf{C}^{2})\cong\operatorname{Gr}_{2}^{+}(\mathbf{R}^{5})\), and these are homotopy equivalent to \(J_{3}(S^{2})\)._

Proof.: Since \(\operatorname{Sp}(2)/\operatorname{U}(2)\cong\operatorname{SO}(5)/(\operatorname{SO}(3)\cdot\operatorname{SO}(2))\), we can identify \(\operatorname{Sp}(2)/\operatorname{U}(2)=\operatorname{Gr}_{2}^{\operatorname{Lag}}(T^{*}\mathbf{C}^{2})\) with \(\operatorname{Gr}_{2}^{+}(\mathbf{R}^{5})\). Therefore, Lemma 2.3.11 gives a homotopy equivalence \(\operatorname{Gr}_{2}^{+}(\mathbf{R}^{5})\simeq J_{3}(S^{2})\). The desired claim now follows from the observation that \(\operatorname{Gr}_{2}^{+}(\mathbf{R}^{5})\) is diffeomorphic to a quadric \(Q\subseteq\mathbf{C}P^{4}\) via the map \(\operatorname{Gr}_{2}^{+}(\mathbf{R}^{5})\to\operatorname{Gr}_{1}(\mathbf{C}^{5})\cong\mathbf{C}P^{4}\) induced by the isomorphism \(\mathbf{R}^{10}\xrightarrow{\sim}\mathbf{C}^{5}\); see [1, Example 10.6, Page 280].

**Remark 2.3.15**.: There is a fibration7 (see (57) for a more general statement) Footnote 7: The fibration (12) is analogous to the "twistor" fibration (see (62)) \(S^{2}\to\mathbf{C}P^{3}\to S^{4}\). \[S^{2}\to J_{3}(S^{2})\to S^{4}, \tag{12}\] which, under the diffeomorphism \[\operatorname{Spin}(4)/\operatorname{U}(2)\cong(\operatorname{SU}(2)\times\operatorname{SU}(2))/\operatorname{U}(2)\cong\operatorname{SU}(2)/\operatorname{U}(1)\cong S^{2},\] can be identified via Lemma 2.3.11 with the fibration \[\operatorname{Spin}(4)/\operatorname{U}(2)\to\operatorname{Sp}(2)/\operatorname{U}(2)\to\operatorname{Sp}(2)/\operatorname{Spin}(4).\] There is also a commutative diagram where each row and column is a fibration; the rightmost vertical fiber sequence is the Hopf fibration. This diagram captures the relationships between \(J(2)\), \(T_{4}(2)\), \(T(1)\), and \(T(2)\).

**Remark 2.3.16**.: The equivalence \(\mathrm{Sp}(2)/\mathrm{U}(2)=\mathrm{Gr}_{2}^{\mathrm{Lag}}(T^{*}\mathbf{C}^{2})\simeq J_{3}(S^{2})\) of Lemma 2.3.11 can be used to understand the relationship between \(T(2)\) and the Mahowald-Ravenel-Shick spectrum \(y(2)\) from [16] (at the prime \(2\)).8 Recall from Remark 2.1.10 that there is an \(\mathbf{E}_{2}\)-map \(\Omega\mathrm{Sp}(2)\to\mathrm{BU}\) whose Thom spectrum is equivalent to \(T(2)\) at \(p=2\). Similarly, recall that \(y(2)\) is the Thom spectrum of the bundle determined by the map \(\mu:\Omega J_{3}(S^{2})\to\Omega^{2}S^{3}\to\mathrm{BO}\), where the second map is the extension of the Möbius bundle \(S^{1}\to\mathrm{BO}\).
Using the equivalence \(\mathrm{Sp}(2)/\mathrm{U}(2)\simeq J_{3}(S^{2})\) of Lemma 2.3.11, the map \(\mu:\Omega J_{3}(S^{2})\to\mathrm{BO}\) can be identified with the composite Footnote 8: A simpler version of this discussion simply states that if \(\Omega S^{2}\to\mathrm{BO}\) is the map extending the Möbius bundle \(S^{1}\to\mathrm{BO}\), then [11, Proposition 2.1.6] along with loops on the fibration \[S^{3}\xrightarrow{\eta}S^{2}\to\mathbf{C}P^{\infty}\] implies that there is a map \(S^{1}\to\mathrm{BGL}_{1}(T(1))\) whose Thom spectrum is the \(\mathbf{E}_{1}\)-quotient \(S/\!\!/2=y(1)\). The map \(S^{1}\to\mathrm{BGL}_{1}(T(1))\) detects \(1-2\in\pi_{0}(T(1))^{\times}\) on the bottom cell of the source, so we recover the fact that \(T(1)/2\simeq y(1)\). In particular, \(\mathrm{HH}(y(1)/T(1))\simeq y(1)[\mathbf{C}P^{\infty}]\). Since \(y(1)\otimes_{T(1)}\mathbf{Z}_{2}\simeq\mathbf{F}_{2}\), this recovers the well-known observation that \(\mathrm{HH}(\mathbf{F}_{2}/\mathbf{Z}_{2})\simeq\mathbf{F}_{2}[\mathbf{C}P^{\infty}]\), at least as _modules_ over \(\mathbf{F}_{2}\). This argument does not give the _\(\mathbf{F}_{2}\)-algebra_ structure, since \(\mathrm{HH}(y(1)/T(1))\) is not a ring. \[\Omega(\mathrm{Sp}(2)/\mathrm{U}(2))\to\Omega(\mathrm{Sp}/\mathrm{U})\to\mathrm{B}^{2}\mathrm{O}\xrightarrow{\eta}\mathrm{BO};\] the middle map is obtained via Bott periodicity. Applying [11, Proposition 2.1.6] to loops on the fibration \[\mathrm{Sp}(2)\to J_{3}(S^{2})\to\mathrm{BU}(2),\] we conclude that \(y(2)=\Omega J_{3}(S^{2})^{\mu}\) is equivalent as an \(\mathbf{E}_{1}\)-ring to the Thom spectrum of an \(\mathbf{E}_{1}\)-map \(\mathrm{U}(2)\to\mathrm{BGL}_{1}(T(2))\). This implies, for instance, that \(\mathrm{THH}(y(2)/T(2))\simeq y(2)[\mathrm{BU}(2)]\). Since \(k(2)\simeq y(2)\otimes_{T(2)}\mathrm{BP}\langle 2\rangle\), this implies that \(\mathrm{THH}(k(2)/\mathrm{BP}\langle 2\rangle)\simeq k(2)[\mathrm{BU}(2)]\). Similarly, since \(y(2)\otimes_{T(2)}\mathrm{ku}\simeq\mathbf{F}_{2}\), we also recover the observation that \(\mathbf{F}_{2}\) is equivalent as an \(\mathbf{E}_{1}\)-ring to the Thom spectrum of an \(\mathbf{E}_{1}\)-map \(\mathrm{U}(2)\to\mathrm{BGL}_{1}(\mathrm{ku})\), and hence that \(\mathrm{HH}(\mathbf{F}_{2}/\mathrm{ku})\simeq\mathbf{F}_{2}[\mathrm{BU}(2)]\) as \(\mathbf{F}_{2}\)-modules.

**Proposition 2.3.17**.: _There is an equivalence \(\mathrm{THH}(T(2)/T_{2}(2))\simeq T(2)[S^{4}]\), as well as an equivalence \(\mathrm{THH}(T(2)/T_{4}(2))\simeq T(2)[J_{3}(S^{2})]\)._

Proof.: Note that \(\eta\) is nullhomotopic in \(T_{4}(2)\) (and hence in \(T_{2}(2)\)), since the inclusion \(\mathrm{SU}(2)\to\mathrm{U}(2)\) defines a map \(S^{2}\to\Omega\mathrm{U}(2)\), which in turn Thomifies to a map \(C\eta\to T_{4}(2)\) which factors the unit. By Lemma 2.3.11, there are fiber sequences of \(\mathbf{E}_{1}\)-spaces \[\Omega\mathrm{Spin}(4)\to\Omega\mathrm{Sp}(2)\to\Omega S^{4},\] \[\Omega\mathrm{U}(2)\to\Omega\mathrm{Sp}(2)\to\Omega J_{3}(S^{2}),\] which by [1, Proposition 2.1.6] (see also [1]) imply that \(T(2)\) is a Thom spectrum of an \(\mathbf{E}_{1}\)-map \(\Omega S^{4}\to\mathrm{BGL}_{1}(T_{2}(2))\) (resp. \(\Omega J_{3}(S^{2})\to\mathrm{BGL}_{1}(T_{4}(2))\)). Together with [1], this implies the desired claim.

**Remark 2.3.18**.: Recall that \(\mathrm{SU}(4)/\mathrm{Sp}(2)\cong S^{5}\). It follows that \(\mathrm{THH}(X(4)/T(2))\simeq X(4)[S^{5}]\).
Similarly, recall that \(\mathrm{SU}(4)\cong\mathrm{Spin}(6)\); therefore, there is a diffeomorphism \[\mathrm{SU}(4)/\mathrm{Spin}(4)\cong\mathrm{Spin}(6)/\mathrm{Spin}(4)\cong \mathrm{SO}(6)/\mathrm{SO}(4)\cong V_{2}(\mathbf{R}^{6}).\] It follows that \(\mathrm{THH}(X(4)/T_{2}(2))\simeq X(4)[V_{2}(\mathbf{R}^{6})]\). (Note also that \(\mathrm{SU}(4)/\mathrm{Spin}(4)\cong\mathrm{SU}(4)/(\mathrm{SU}(2)\times \mathrm{SU}(2))\) can be viewed as an "oriented complex Grassmannian" \(\widetilde{\mathrm{Gr}}_{2}(\mathbf{C}^{4})\).) Finally, \(\mathrm{THH}(X(4)/T_{4}(2))\simeq X(4)[\mathrm{SU}(4)/\mathrm{U}(2)]\). **Corollary 2.3.19**.: _There are \(2\)-complete equivalences of \(\mathrm{ku}\)-modules_ \[\mathrm{THH}(\mathrm{ku}/T_{2}(2)) \simeq\mathrm{ku}[\Omega S^{5}],\] \[\mathrm{THH}(\mathrm{ku}/T_{4}(2)) \simeq\mathrm{ku}[\Omega S^{3}].\] _Under these equivalences, the maps_ \[\mathrm{THH}(\mathrm{ku}/T_{4}(2))\to\mathrm{THH}(\mathrm{ku}/T_{2}(2))\to \mathrm{THH}(\mathrm{ku}/T(2))\] _are induced by taking \(\mathrm{ku}\)-chains of the Hopf maps_ \[\Omega S^{3}\xrightarrow{H}\Omega S^{5}\xrightarrow{H}\Omega S^{9}.\] Proof.: Using Proposition 2.3.17, this follows from Theorem 2.2.4(a) (more precisely, the version with \(p=2\) and \(n=2\) for \(\mathrm{THH}(\mathrm{BP}\langle 1\rangle/T(2))\simeq\mathrm{ku}[\Omega S^{9}]\)), and the fiber sequences of \(\mathbf{E}_{1}\)-spaces \[\Omega S^{4}\simeq\Omega(\mathrm{Sp}(2)/\mathrm{Spin}(4)) \to\Omega^{2}S^{5}\to\Omega^{2}S^{9},\] \[\Omega J_{3}(S^{2})\simeq\Omega(\mathrm{Sp}(2)/\mathrm{U}(2)) \to\Omega^{2}S^{3}\to\Omega^{2}S^{9}\] obtained by looping the \(2\)-local EHP fiber sequences for \(S^{4}\) and \(S^{2}\). The identification of the maps \(\mathrm{THH}(\mathrm{ku}/T_{4}(2))\to\mathrm{THH}(\mathrm{ku}/T_{2}(2))\) and \(\mathrm{THH}(\mathrm{ku}/T_{2}(2))\to\mathrm{THH}(\mathrm{ku}/T(2))\) is an immediate consequence. **Remark 2.3.20**.: Recall from Theorem 2.2.4(a) that the generator \(\theta_{2}\in\pi_{8}\mathrm{THH}(\mathrm{ku}/T(2))\) can be understood as \(\sigma^{2}(v_{2})\) (up to decomposables). Taking \(\mathrm{THH}\) relative to the Thom spectrum \(T_{2}(2)\) over \(\Omega\mathrm{Spin}(4)\) can be regarded as extracting a square root of \(\theta_{2}\in\pi_{8}\mathrm{THH}(\mathrm{ku}/T(2))\). Similarly, taking \(\mathrm{THH}\) relative to the Thom spectrum \(T_{4}(2)\) over \(\Omega\mathrm{U}(2)\) can be regarded as extracting a fourth root of \(\theta_{2}\in\pi_{8}\mathrm{THH}(\mathrm{ku}/T(2))\); hence the subscript \(4\). (Roughly, the generator of \(\pi_{4}\mathrm{THH}(\mathrm{ku}/T_{2}(2))\) can be thought of as \(\sigma^{2}(v_{1})\); and the generator of \(\pi_{2}\mathrm{THH}(\mathrm{ku}/T_{4}(2))\) can be thought of as \(\sigma^{2}(2)\).) In particular, one should regard \(T_{4}(2)=(\Omega\mathrm{U}(2))^{\mu}\) as the appropriate analogue of \(J(p)\) at height \(1\) and \(p=2\). **Remark 2.3.21**.: Corollary 2.3.19 suggests that \(\mathrm{ku}_{2}^{\wedge}\) is equivalent to the Thom spectrum of an \(\mathbf{E}_{1}\)-map \(\Omega^{2}S^{3}\to\mathrm{BGL}_{1}(T_{4}(2))\). This could also be rephrased in a manner similar to the results of [11]: assuming [11, Conjectures D and E], [12, Corollary B] says that \(\mathrm{ku}_{2}^{\wedge}\) is the Thom spectrum of a map \(\Omega^{2}S^{9}\to\mathrm{BGL}_{1}(T(2))\). 
It follows from Proposition 2.3.17 that \(T(2)\simeq\mathrm{colim}_{\Omega J_{3}(S^{2})}\,T_{4}(2)\), so that [11, Corollary B] implies \[\mathrm{ku}_{2}^{\wedge}\simeq\mathrm{colim}_{\Omega^{2}S^{9}}\,T(2)\simeq \mathrm{colim}_{\Omega^{2}S^{9}}\,\mathrm{colim}_{\Omega J_{3}(S^{2})}\,T_{4}( 2)\simeq\mathrm{colim}_{\Omega^{2}S^{3}}\,T_{4}(2),\] where the final equivalence comes from the \(\mathbf{E}_{1}\)-equivalence \(\mathrm{colim}_{\Omega^{2}S^{9}}\,\Omega J_{3}(S^{2})\simeq\Omega^{2}S^{3}\) arising from the EHP sequence. This leads to the following, which we only state for \(T(n)\); there is an analogue for \(X(p^{n})\), too. **Conjecture 2.3.22**.: _Fix a prime \(p\) and \(n\geq 0\). For each \(0\leq j\leq n\), there are \(\mathbf{E}_{2}^{\mathrm{fr}}\)-rings \(T_{p^{j}}(n)\) equipped with \(\mathbf{E}_{2}^{\mathrm{fr}}\)-maps_ \[T_{p^{n}}(n)\to\cdots\to T_{p^{j}}(n)\to T_{p^{j-1}}(n)\to\cdots\to T_{0}(n)=T (n)\] _such that there are \(p\)-complete equivalences_ \[\mathrm{THH}(T(n)/T_{p^{j}}(n))\simeq T(n)[J_{p^{j}-1 }(S^{2p^{n-j}})],\] \[\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T_{p^{j}}(n))\simeq\mathrm{BP} \langle n-1\rangle[\Omega S^{2p^{n-j}+1}].\] _The map \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T_{p^{j}}(n))\to\mathrm{THH}( \mathrm{BP}\langle n-1\rangle/T_{p^{j-1}}(n))\) induced by the \(\mathbf{E}_{2}^{\mathrm{fr}}\)-map \(T_{p^{j}}(n)\to T_{p^{j-1}}(n)\) is given by \(\mathrm{BP}\langle n-1\rangle\)-chains on the Hopf map \(\Omega S^{2p^{n-j}+1}\to\Omega S^{2p^{n-j+1}+1}\). In other words, if \(\theta_{n}^{1/p^{j}}\in\pi_{2p^{n-j}}\mathrm{THH}(\mathrm{BP}\langle n-1 \rangle/T_{p^{j}}(n))\) denotes the generator (roughly, thought of as \(\sigma^{2}(v_{n-j})\)), then_ \[\pi_{2p^{n-j+1}}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T_{p^{j}}(n))\ni (\theta_{n}^{1/p^{j}})^{p}\mapsto\theta_{n}^{1/p^{j-1}}\in\pi_{2p^{n-j+1}} \mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T_{p^{j-1}}(n)).\] In particular, Conjecture 2.3.22 says that for the putative \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring \(T_{p^{n}}(n)\), there is an equivalence \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T_{p^{n}}(n))\simeq\mathrm{BP} \langle n-1\rangle[\sigma]\) with \(|\sigma|=2\). **Example 2.3.23**.: There is an inclusion \(\mathrm{Spin}^{c}(5)\cong\mathrm{Sp}(2)\cdot\mathrm{U}(1)\subseteq\mathrm{Sp}(3)\) (whose quotient is \(\mathbf{C}P^{5}\)), so that composition with the inclusion \(\mathrm{Sp}(3)\subseteq\mathrm{SU}(6)\) defines an inclusion \(\mathrm{Sp}(2)\cdot\mathrm{U}(1)\subseteq\mathrm{SU}(6)\). In particular, we obtain an \(\mathbf{E}_{2}\)-map \(\Omega(\mathrm{Sp}(2)\cdot\mathrm{U}(1))\to\Omega\mathrm{SU}(6)\). The Thom spectrum of the resulting composite \(\mathbf{E}_{2}\)-map \[\Omega(\mathrm{Sp}(2)\cdot\mathrm{U}(1))\to\Omega\mathrm{SU}(6)\to\Omega \mathrm{SU}\simeq\mathrm{BU}\] defines an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring, which we expect can be identified with \(T_{8}(3)\) for \(p=2\). ## 3. The topological Sen operator ### Constructing the topological Sen operator There is a much simpler description of the descent spectral sequence of Remark 2.2.12, following the perspective of Remark 2.2.17 that Theorem 2.2.4(b) is essentially a calculation of a Serre spectral sequence. We will continue to fix \(\mathbf{E}_{3}\)-forms of the truncated Brown-Peterson spectra \(\mathrm{BP}\langle n-1\rangle\) and \(\mathrm{BP}\langle n\rangle\). **Notation 3.1.1**.: Let \(R\) be an \(\mathbf{E}_{\infty}\)-\(\mathbf{Z}_{p}\)-algebra. 
We will write \(\epsilon^{R}\) to denote \(R[\mathrm{BSU}(p-1)]\) and \(\epsilon^{R}_{*}\) to denote \(\pi_{*}\epsilon^{R}\). (The notation is meant to indicate that \(\epsilon\) only plays a "small" role in the below discussion.) **Definition 3.1.2** (Spectral Gysin sequence).: Suppose \(S^{n-1}\to E\to B\) is a fibration. Since \(E\simeq\operatorname{hocolim}_{B}S^{n-1}\) in pointed spaces, we have \(E_{+}\simeq\operatorname{hocolim}_{B}S^{n-1}_{+}\). There is a cofiber sequence \(S^{n-1}_{+}\to S^{0}\to S^{n}\), so we obtain a cofiber sequence \[E_{+}\to\operatorname{hocolim}_{B}(S^{0})\simeq B_{+}\to\operatorname{hocolim }_{B}(S^{n})\simeq\Sigma^{n}(B_{+}).\] If \(R\) is an \(\mathbf{E}_{1}\)-ring, we get a cofiber sequence of left \(R\)-modules: \[R[E]\to R[B]\to\Sigma^{n}R[B].\] **Construction 3.1.3** (Topological Sen operator).: Let \(\mathscr{C}\) be an \(X(n)\)-linear \(\infty\)-category. There is an \(S^{1}\)-equivariant equivalence \[\operatorname{THH}(\mathscr{C}/X(n-1)) \simeq\operatorname{THH}(\mathscr{C})\otimes_{\operatorname{THH}( X(n-1))}X(n-1)\] \[\simeq\operatorname{THH}(\mathscr{C})\otimes_{X(n)\otimes_{X(n-1 )}\operatorname{THH}(X(n-1))}X(n),\] and a tautological \(S^{1}\)-equivariant equivalence \[\operatorname{THH}(\mathscr{C}/X(n))\simeq\operatorname{THH}(\mathscr{C}) \otimes_{\operatorname{THH}(X(n))}X(n).\] Since \(\operatorname{THH}(X(n-1))\simeq X(n-1)[\mathrm{SU}(n-1)]\), there is an equivalence \(X(n)\otimes_{X(n-1)}\operatorname{THH}(X(n-1))\simeq X(n)[\mathrm{SU}(n-1)]\). Note that \(X(n)\otimes_{X(n-1)}\operatorname{THH}(X(n-1))\) admits the structure of an \(\mathbf{E}_{1}\)-ring, and that the \(\mathbf{E}_{1}\)-algebra map \(\operatorname{THH}(X(n-1))\to\operatorname{THH}(X(n))\) induces an \(\mathbf{E}_{1}\)-algebra map \(X(n)\otimes_{X(n-1)}\operatorname{THH}(X(n-1))\to\operatorname{THH}(X(n)) \simeq X(n)[\mathrm{SU}(n)]\). The fiber sequence \[S^{2n-1}\to\mathrm{BSU}(n-1)\to\mathrm{BSU}(n)\] implies: **Theorem 3.1.4**.: _Let \(\mathscr{C}\) be a left \(X(n)\)-linear \(\infty\)-category. Then there is a cofiber sequence_ \[\operatorname{THH}(\mathscr{C}/X(n-1))\xrightarrow{\iota}\operatorname{THH}( \mathscr{C}/X(n))\xrightarrow{\Theta_{\mathscr{C}}}\Sigma^{2n}\operatorname{ THH}(\mathscr{C}/X(n)), \tag{13}\] _where the map \(\iota\) is \(S^{1}\)-equivariant, and the cofiber of \(\iota\) is (at least nonequivariantly) identified with \(\Sigma^{2n}\operatorname{THH}(\mathscr{C}/X(n))\). We will call the map \(\Theta_{\mathscr{C}}:\Sigma^{-2n}\operatorname{THH}(\mathscr{C}/X(n))\to \operatorname{THH}(\mathscr{C}/X(n))\) the topological Sen operator._ **Remark 3.1.5**.: A simpler analogue of Theorem 3.1.4 can be described as follows. Let \(A\) be an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring, and let \(A[t]\) be the flat polynomial ring over \(A\) on a generator in degree \(0\). Suppose \(\mathscr{C}\) is an \(A[t]\)-linear \(\infty\)-category. The nonequivariant equivalence \(\operatorname{HH}(A[t]/A)\simeq A[t][S^{1}]\) defines a cofiber sequence \[\operatorname{HH}(\mathscr{C}/A)\to\operatorname{HH}(\mathscr{C}/A[t]) \xrightarrow{\nabla}\Sigma^{2}\operatorname{HH}(\mathscr{C}/A[t]) \tag{14}\] analogous to Theorem 3.1.4, which exhibits \(\nabla:\operatorname{HH}(\mathscr{C}/A[t])\to\Sigma^{2}\operatorname{HH}(\mathscr{C}/ A[t])\) as a "Gauss-Manin connection". 
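The simplest instance of (14) is perhaps worth spelling out as a sanity check (a triviality, recorded here only for convenience): take \(\mathscr{C}=\operatorname{LMod}_{A[t]}\) itself, so that \(\operatorname{HH}(\mathscr{C}/A[t])\simeq A[t]\). If \(A\) is connective, then any \(A[t]\)-module map \(A[t]\to\Sigma^{2}A[t]\) is classified by an element of \(\pi_{-2}A[t]=0\), so \(\nabla\simeq 0\) and (14) splits: \[\operatorname{HH}(A[t]/A)\simeq A[t]\oplus\Sigma A[t]\simeq A[t][S^{1}],\] recovering the nonequivariant equivalence used to define (14) in the first place.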
This cofiber sequence is often quite useful; for example, if we regard \(\mathbf{Z}_{p}\) as an \(S[\![t]\!]\)-algebra by the \(\mathbf{E}_{\infty}\)-map \(S[\![t]\!]\to\mathbf{Z}_{p}\) sending \(t\mapsto p\), we have \(\pi_{*}\!\operatorname{THH}(\mathbf{Z}_{p}/S[\![t]\!])\simeq\mathbf{Z}_{p}[y]\) with \(|y|=2\) (more precisely, \(y=\sigma^{2}(t-p)\)); see [19]. It is not difficult to show that the map \(\nabla:\operatorname{THH}(\mathbf{Z}_{p}/S[\![t]\!])\to\Sigma^{2}\operatorname{ THH}(\mathbf{Z}_{p}/S[\![t]\!])\) sends \(y^{n}\mapsto ny^{n-1}\), which implies Bökstedt's calculation of \(\pi_{*}\!\operatorname{THH}(\mathbf{Z}_{p})\). Just as in Theorem 3.1.4, the map \(\operatorname{HH}(\mathscr{C}/A)\to\operatorname{HH}(\mathscr{C}/A[t])\) in (14) is \(S^{1}\)-equivariant, but we can only nonequivariantly identify its cofiber with \(\Sigma^{2}\operatorname{HH}(\mathscr{C}/A[t])\). To identify the cofiber equivariantly, observe that if \(\lambda\) denotes the rotation representation of \(S^{1}\), then \(\operatorname{HH}(A/A[t])\simeq A[B^{\lambda}\mathbf{Z}_{\geq 0}]\). Here, \(B^{\lambda}\mathbf{Z}_{\geq 0}\) is the \(\lambda\)-delooping of \(\mathbf{Z}_{\geq 0}\). This implies that there is an _equivariant_ cofiber sequence \[\operatorname{HH}(\mathscr{C}/A)\to\operatorname{HH}(\mathscr{C}/A[t])\xrightarrow{ \nabla}\Sigma^{\lambda}\operatorname{HH}(\mathscr{C}/A[t]). \tag{15}\] See Corollary 3.1.19 for some further discussion. **Remark 3.1.6**.: At the level of homotopy, the map \(\Theta\) in (13) for \(\mathscr{C}=\operatorname{LMod}_{\operatorname{BP}\langle n-1\rangle}\) can be identified using Theorem 2.2.4. Namely, recall that \(\pi_{*}\!\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))\cong \operatorname{BP}\langle n-1\rangle[B\Delta_{n}]_{*}[\theta_{n}]\) by Theorem 2.2.4(a); it then follows from Theorem 2.2.4(b) that \(\Theta\) must send \[\Theta:\theta_{n}^{j}\mapsto jp\theta_{n}^{j-1}.\] Therefore, we may informally write \(\Theta=p\partial_{\theta_{n}}\).9 From the point of view of Remark 2.2.17, the map \(\Theta\) can be interpreted as the \(d^{2p^{n}}\)-differential in the Serre spectral sequence computing the \(\operatorname{BP}\langle n-1\rangle\)-homology of the total space of the fibration (11). Determining the action of \(\Theta\) on \(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{j}))\) for \(j\leq n-1\) can therefore be viewed as an analogue of determining the differentials in the Serre spectral sequence/Gysin sequence of a putative analogue of the Cohen-Moore-Neisenendorfer fibration (11) (where \(p\) is replaced by \(v_{n-j}\)). Footnote 9: This action of \(\Theta\) on \(\theta_{n}=\sigma^{2}(v_{n})\) is related to the observation from [18, Lemma 3.2.8(d)] that there is a choice of \(v_{n}\) such that the right unit \(\eta_{R}:\operatorname{BP}_{*}\to\operatorname{BP}_{*}\!\operatorname{BP} \cong\operatorname{BP}_{*}[t_{1},t_{2},\cdots]\) satisfies \(d(v_{n})=\eta_{R}(v_{n})-v_{n}\equiv pt_{n}\pmod{t_{1},\cdots,t_{n-1}}\). One can make some qualitative observations about the action of \(\Theta\) on \(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{j}))\) for \(j\leq n-1\). 
Indeed, recall from (4) that there is an isomorphism \[\pi_{*}\!\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{j}))/v_{ [0,n-j)}\cong\operatorname{BP}\langle n-1\rangle[B\Delta_{j}]_{*}[\theta_{n} ]/v_{[0,n-j)}\otimes_{\mathbf{F}_{p}}\Lambda_{\mathbf{F}_{p}}(\lambda_{j+1}, \cdots,\lambda_{n}).\] An easy calculation shows that there is an isomorphism \[\pi_{*}\!\operatorname{THH}(X(p^{n})/X(p^{j}))\cong X(p^{n})\left[\prod_{i=j+1 }^{n}\overline{\Delta}_{i}\right]_{*}\otimes_{\mathbf{Z}_{(p)}}\Lambda_{\mathbf{Z}_{(p) }}(\lambda_{j+1},\cdots,\lambda_{n}).\] Therefore, the calculation of \(\pi_{*}\!\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{j}))/v_{ [0,n-j)}\) implies that the image of a class \(y\in\pi_{*}\!\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{j}))\) under \(\Theta:\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{j}))\to \Sigma^{2p^{j}}\!\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{ j}))\) lives in the ideal generated by \(v_{[0,n-j+1)}=(p,\cdots,v_{n-j})\). **Remark 3.1.7**.: The fact that the cofiber of the \(S^{1}\)-equivariant map \(\iota:\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}-1))\to \operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))\) is (at least nonequivariantly) identified with \(\Sigma^{2p^{n}}\!\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{ n}))\) makes it more difficult to determine \(\operatorname{TP}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))\) (even modulo \(v_{n-1}\)) from our calculation of \(\pi_{*}\!\operatorname{TP}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))\) in Theorem 2.2.4 and the preceding description of \(\Theta\) as an endomorphism of \(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/X(p^{n}))\). One fundamental question is therefore to describe the \(S^{1}\)-action on cofib\((\iota)\). This is already complicated modulo \(p\) when \(n=1\), and a description of \(\operatorname{TP}(\mathbf{Z}_{p}/X(p-1))\simeq\operatorname{TP}(\mathbf{Z}_{p})[ \operatorname{BSU}(p-1)]\) from \(\operatorname{TP}(\mathbf{Z}_{p}/X(p))\) was essentially done in [1, Conjecture 4.3] and [10, Theorem 7.4]. Recall from Theorem 2.2.4(a) that there is an isomorphism \[\pi_{*}\operatorname{TP}(\mathbf{Z}_{p}/X(p))/p\cong\mathbf{F}_{p}[v_{1},\hbar ^{\pm 1}]\otimes_{\mathbf{F}_{p}}\epsilon_{*}^{\mathbf{F}_{p}}\cong\pi_{*}k(1)^{tS^{ 1}}[\operatorname{BSU}(p-1)].\] Then, the map \(\pi_{*}\operatorname{TP}(\mathbf{Z}_{p}/X(p))/p\to\pi_{*-2p}\operatorname{TP}( \mathbf{Z}_{p}/X(p))/p\) is given by \[\hbar^{p^{k}}\mapsto\hbar^{p^{k}(p+1)}v_{1}^{\frac{p^{k+1}-p}{p-1}},\ v_{1}^{ k}\mapsto 0.\] This is a direct consequence of [10, Theorem 7.4], once one notes that the formula \(t^{p^{k}+\phi(k+1)}f^{\phi(k)}\) from _loc. cit._ becomes precisely \(\hbar^{p^{k}(p+1)}v_{1}^{\frac{p^{k+1}-p}{p-1}}\), via the translation in notation given by \[t\rightsquigarrow\hbar,\ f\rightsquigarrow\sigma^{2}(v_{1}),\ tf\rightsquigarrow v _{1},\ \phi(k)=\frac{p^{k+1}-p}{p-1}=v_{p}((p^{k})!^{p}).\] One could also prove this using an argument similar to [1, Theorem 6.5.1]. 
Moreover, the image of \(\hbar\) under the boundary map \(\pi_{-2}\operatorname{TP}(\mathbf{Z}_{p}/X(p))/p\to\pi_{2p-3}\operatorname{ TP}(\mathbf{Z}_{p}/X(p-1))/p\) is the class \(\alpha_{1}\in\pi_{2p-3}\operatorname{TP}(\mathbf{Z}_{p})/p\); note that since \(\hbar\) lives in \(\pi_{-2}\operatorname{TP}(\mathbf{Z}_{p}/X(p))\), the class \(\alpha_{1}\) in fact extends to an element of \(\pi_{2p-3}\operatorname{TP}(\mathbf{Z}_{p}/X(p-1))\). The problem of calculating \(\pi_{*}\operatorname{TP}(\mathbf{Z}_{p})\) from \(\operatorname{TP}(\mathbf{Z}_{p}/X(p))\) is very similar to the problem of calculating \(\pi_{*}\operatorname{TP}(\mathbf{Z}_{p})\) from \(\operatorname{TP}(\mathbf{Z}_{p}/S[\![t]\!])\), discussed in [10] (see Remark 3.1.5). If we assume Conjecture 2.1.9, then Theorem 3.1.4 can be refined: namely, if \(\mathscr{C}\) is a left \(T(n)\)-linear \(\infty\)-category, then there is a cofiber sequence \[\operatorname{THH}(\mathscr{C}/T(n-1))\xrightarrow{\iota}\operatorname{THH}( \mathscr{C}/T(n))\xrightarrow{\Theta_{\mathscr{C}}}\Sigma^{2p^{n}}\operatorname {THH}(\mathscr{C}/T(n)). \tag{16}\] **Remark 3.1.8**.: Suppose \(n=1\) and \(\mathscr{C}=\operatorname{Mod}_{\mathbf{Z}_{p}}\) for \(p\) odd. Then there is a map \(\operatorname{TP}(\mathbf{Z}_{p})\to\operatorname{TP}(\mathbf{Z}_{p}/T(1))\), and a trace map \(K(\mathbf{Z}_{p})\to\operatorname{TP}(\mathbf{Z}_{p})\). Let \(j=\tau_{\geq 0}L_{K(1)}S\); upon \(p\)-adic completion, there is an equivalence (see [1, Theorem 9.17]) \[K(\mathbf{Z}_{p})_{p}^{\wedge}\simeq j\vee\Sigma j\vee\Sigma^{3}\text{ku}.\] The summand \(j\) is the unit component, i.e., there is an \(\mathbf{E}_{\infty}\)-ring map \(j\to K(\mathbf{Z}_{p})_{p}^{\wedge}\). It follows that after \(p\)-completion, there is a ring map \(j\to\operatorname{TP}(\mathbf{Z}_{p})\). Assuming the equivalence \(\operatorname{TP}(\mathbf{Z}_{p}/T(1))\simeq\operatorname{BP}\langle 1\rangle^{tS^{1}}\) of Conjecture 2.2.18, the following diagram commutes: Let \(\ell\) be a topological generator of \(\mathbf{Z}_{p}^{\times}\), and let \(\psi^{\ell}:\operatorname{BP}\langle 1\rangle\to\Sigma^{2p-2}\operatorname{BP} \langle 1\rangle\) be the associated Adams operation. Then, the fiber of \(\psi^{\ell}-1\) is \(j\). Based on the above commutative diagram, one expects that under the equivalence \(\operatorname{TP}(\mathbf{Z}_{p}/T(1))\simeq\operatorname{BP}\langle 1\rangle^{tS^{1}}\) of Conjecture 2.2.18, the map \(\psi^{\ell}-1\) is closely related to \(\Theta_{\mathbf{Z}_{p}}^{tS^{1}}\). Note, for example, that if we take \(\ell=p+1\), the map \(\psi^{\ell}-1\) sends \(v_{1}^{j}\mapsto p^{v_{p}(j)+1}v_{1}^{j}\) up to \(p\)-adic units; this should be compared to the fact that \(\Theta_{\mathbf{Z}_{p}}\) sends \(\theta_{1}^{j}\mapsto jp\theta_{1}^{j-1}\) by Remark 3.1.6. This discussion, as well as the classical discussion in [BM94], suggests that \({\rm TP}({\bf Z}_{p})_{p}^{\wedge}\simeq(j^{tS^{1}})_{p}^{\wedge}\). In fact, something stronger is true: in forthcoming work [DR23] with Arpon Raksit, we will show that \({\rm THH}({\bf Z}_{p})=\tau_{\geq 0}(j^{t{\bf Z}/p})\) as cyclotomic \({\bf E}_{\infty}\)-rings. **Example 3.1.9**.: Let \(n=1\), and let \(\mathscr{C}={\rm Mod}_{{\rm BP}\langle 1\rangle}\). 
Then Theorem 3.1.4 gives a cofiber sequence \[{\rm THH}({\rm BP}\langle 1\rangle/X(p-1))\to{\rm THH}({\rm BP}\langle 1 \rangle/X(p))\xrightarrow{\Theta_{{\rm BP}\langle 1\rangle}}\Sigma^{2p}{\rm THH}({ \rm BP}\langle 1\rangle/X(p)).\] Moreover, recall from Theorem 2.2.4(b) that there is a \(p\)-complete equivalence \[{\rm THH}({\rm BP}\langle 1\rangle/X(p))\simeq{\rm BP}\langle 1\rangle[{\rm BSU }(p-1)]\oplus\bigoplus_{j\geq 1}\Sigma^{2jp^{2}-1}{\rm BP}\langle 1\rangle[{\rm BSU }(p-1)]/pj.\] Let \(a_{j}\) denote the \({\rm BP}\langle 1\rangle\)-module generator of the summand \(\Sigma^{2jp^{2}-1}{\rm BP}\langle 1\rangle/pj\). Since \({\rm THH}({\rm BP}\langle 1\rangle/X(p-1))\simeq{\rm THH}({\rm BP} \langle 1\rangle)[{\rm BSU}(p-1)]\), the calculations of [AHL10, Section 6] can be rephrased as follows. For \(0\leq k\leq v_{p}(j)\), \(\Theta_{{\rm BP}\langle 1\rangle}\) is given on homotopy by \[\Theta_{{\rm BP}\langle 1\rangle}:p^{k}a_{j}\mapsto\left(\frac{j}{p^{k}}-1 \right)a_{j-p^{k}}v_{1}^{p\frac{p^{k+1}-1}{p-1}},\] up to \(p\)-adic units. A different perspective on this computation is given in [Lee22]. **Variant 3.1.10**.: One can prove a variant of Theorem 3.1.4 by replacing \(X(p)\) with \(J(p)\). If \(\mathscr{C}\) is a left \(J(p)\)-linear \(\infty\)-category, then Proposition 2.3.2 produces a cofiber sequence: \[{\rm THH}(\mathscr{C})\xrightarrow{\iota}{\rm THH}(\mathscr{C}/J(p)) \xrightarrow{\Theta^{\prime}}\Sigma^{2}{\rm THH}(\mathscr{C}/J(p)). \tag{17}\] Here, the map \(\iota\) is \(S^{1}\)-equivariant, and cofib\((\iota)\) is (at least nonequivariantly) identified with \(\Sigma^{2}{\rm THH}(\mathscr{C}/J(p))\). Proposition 2.3.3 shows that \({\rm THH}({\bf Z}_{p}/J(p))\simeq{\bf Z}_{p}[\Omega S^{3}]\). On homotopy, the map \({\rm THH}({\bf Z}_{p}/J(p))\to\Sigma^{2}{\rm THH}({\bf Z}_{p}/J(p))\) is given by the \(d^{2}\)-differential in the Serre spectral sequence for the fibration \[S^{1}\to\Omega S^{3}\langle 3\rangle\to\Omega S^{3}.\] For example, under the isomorphism \(\pi_{*}{\rm THH}({\bf Z}_{p}/J(p))\cong{\bf Z}_{p}[x]\) with \(|x|=2\), the map \(\Theta^{\prime}\) in the cofiber sequence (17) for \(n=1\) sends \(x^{j}\mapsto jx^{j-1}\). Suppose \(\mathscr{C}\) is in fact a \({\bf Z}_{p}\)-linear \(\infty\)-category. Base-changing (17) along the map \({\bf Z}_{p}\to{\bf F}_{p}\) and using Corollary 2.3.7, we obtain a cofiber sequence \[{\rm THH}(\mathscr{C})\otimes_{{\bf Z}_{p}}{\bf F}_{p}\xrightarrow{\iota}{\rm THH }(\mathscr{C}\otimes_{{\bf Z}_{p}}{\bf F}_{p})\xrightarrow{\Theta^{\prime}} \Sigma^{2}{\rm THH}(\mathscr{C}\otimes_{{\bf Z}_{p}}{\bf F}_{p}). \tag{18}\] Note that the map \(\Theta^{\prime}:{\rm THH}({\bf F}_{p})\to\Sigma^{2}{\rm THH}({\bf F}_{p})\) sends \(\sigma^{j}\mapsto j\sigma^{j-1}\) on homotopy. It follows that upon composition with \(\sigma:\Sigma^{2}{\rm THH}(\mathscr{C}\otimes_{{\bf Z}_{p}}{\bf F}_{p})\to{ \rm THH}(\mathscr{C}\otimes_{{\bf Z}_{p}}{\bf F}_{p})\), \(\Theta^{\prime}\) acts by multiplication by \(j\) on the homotopy of the \(j\)th graded piece \({\rm gr}_{\sigma}^{j}{\rm THH}(\mathscr{C}\otimes_{{\bf Z}_{p}}{\bf F}_{p})\) of the \(\sigma\)-adic filtration on \({\rm THH}(\mathscr{C}\otimes_{{\bf Z}_{p}}{\bf F}_{p})\). **Remark 3.1.11**.: Let \(p=2\). 
Using the fiber sequence \[S^{3}\to{\rm BU}(1)\to{\rm BU}(2),\] one can similarly show that if \(T_{4}(2)\) denotes the \({\bf E}_{2}^{\rm fr}\)-ring from Construction 2.3.9 and \(\mathscr{C}\) is a left \(T_{4}(2)\)-linear \(\infty\)-category, there is a cofiber sequence \[{\rm THH}(\mathscr{C}/J(2))\to{\rm THH}(\mathscr{C}/T_{4}(2))\to\Sigma^{4}{\rm THH }(\mathscr{C}/T_{4}(2)).\] **Remark 3.1.12**.: Let \(R\) be an animated \({\bf Z}_{p}\)-algebra. Let \(\hat{\mathbb{A}}_{R}\) denote the Nygaard-completed prismatic cohomology of \(R\), and \(\mathscr{N}^{i}\hat{\mathbb{A}}_{R}\) denote the \(i\)th graded piece of the Nygaard filtration \(\mathscr{N}^{\geq\star}(\hat{\mathbb{A}}_{R})\). Note that [10, Remark 5.5.15] gives an isomorphism \(\mathscr{N}^{i}(\hat{\mathbb{A}}_{R}\{i\})\cong\mathscr{N}^{i}\hat{\mathbb{A }}_{R}\), where \(\hat{\mathbb{A}}_{R}\{i\}\) denotes the Breuil-Kisin twisted prismatic cohomology of \(R\). Using the methods of [10], one can construct a cofiber sequence \[(\mathscr{N}^{i}\hat{\mathbb{A}}_{R})/p\to{\rm F}_{i}^{\rm conj}{\rm dR}_{(R/p )/{\bf F}_{p}}\cong\mathscr{N}^{i}\hat{\mathbb{A}}_{R/p}\to{\rm F}_{i-1}^{\rm conj }{\rm dR}_{(R/p)/{\bf F}_{p}}. \tag{19}\] As explained in _loc. cit._, the second map is closely related to the Sen operator. Recall (see [10, Example 6.4.17] and [11]) that \({\rm THH}(R/p)\) admits a motivic filtration such that \({\rm gr}_{\rm mot}^{i}{\rm THH}(R/p)=\mathscr{N}^{i}(\hat{\mathbb{A}}_{R/p})[2 i]\). Taking \(\mathscr{C}={\rm Mod}_{R}\), (18) says that there is a self-map \(\Theta^{\prime}:{\rm THH}(R/p)\to\Sigma^{2}{\rm THH}(R/p)\) whose fiber is \({\rm THH}(R)/p\). Presumably, the cofiber sequence (18) can be shown to respect the motivic filtration, so taking graded pieces would recover the cofiber sequence (19). Given this discussion, it is natural to ask if \({\rm THH}(R/J(p))\) admits a motivic filtration such that (17) is a cofiber sequence of motivically-filtered spectra. **Recollection 3.1.13**.: Let \(({\bf Z}_{p}[\widetilde{p}],\widetilde{p})\) denote the prism of [10, Notation 3.8.9], and if \(R\) is a \(p\)-complete animated \({\bf Z}_{p}\)-algebra, let \(\widetilde{p}\Omega_{R}\) denote \(\mathbb{A}_{R/{\bf Z}_{p}[\widetilde{p}]}\). In particular, \(\widetilde{p}\Omega_{R}\simeq\left(q\Omega_{R}\right)^{h{\bf F}_{p}^{\times}}\), via the \({\bf F}_{p}^{\times}\)-action on the prism \(({\bf Z}_{p}[q-1],[p]_{q})\). Let \(\widehat{\Omega}_{R}^{\not{D}}\) denote the diffracted Hodge complex of \(R\), equipped with its conjugate filtration \({\rm F}_{\star}^{\rm conj}\widehat{\Omega}_{R}^{\not{D}}\). **Conjecture 3.1.14**.: _Let \(R\) be a \(p\)-complete animated \({\bf Z}_{p}\)-algebra. Then \({\rm THH}(R/J(p))\) admits a motivic filtration whose graded pieces are given by_ \[{\rm gr}_{\rm mot}^{i}{\rm THH}(R/J(p))\simeq{\rm F}_{i}^{\rm conj}\widehat{ \Omega}_{R}^{\not{D}}[2i].\] **Remark 3.1.15**.: One 
utility of the discussion in Construction 2.3.9 is that although describing a higher chromatic analogue of \(J(p)\) is tricky (see Conjecture 2.3.22), \(\operatorname{THH}(\mathscr{C}/X(p^{n}))\) furnishes a natural higher chromatic and noncommutative analogue of the diffracted Hodge complex when \(\mathscr{C}\) is a left \(\operatorname{BP}\langle n\rangle\)-linear \(\infty\)-category. **Remark 3.1.16**.: We collect some further evidence for Conjecture 3.1.14: 1. Recall that if \(\mathscr{D}\) is an \(\mathbf{F}_{p}\)-linear \(\infty\)-category, then the canonical map \(\operatorname{THH}(\mathscr{D})\to\operatorname{HH}(\mathscr{D}/\mathbf{F}_{p})\) is given by quotienting by \(\sigma\in\pi_{2}\operatorname{THH}(\mathbf{F}_{p})\). Moreover, if \(R\) is an animated \(\mathbf{F}_{p}\)-algebra, then \(\operatorname{gr}_{\operatorname{mot}}^{i}\operatorname{THH}(R)\simeq( \operatorname{F}_{i}^{\operatorname{conj}}\mathrm{dR}_{R/\mathbf{F}_{p}})[2i]\), and \(\operatorname{F}_{\star}^{\sigma}\operatorname{THH}(R)\) is a noncommutative analogue of the conjugate filtration \(\operatorname{F}_{\star}^{\operatorname{conj}}\mathrm{dR}_{R/\mathbf{F}_{p}}\). In particular, the induced motivic filtration on \(\operatorname{THH}(R)/\sigma\) has \(\operatorname{gr}_{\operatorname{mot}}^{i}(\operatorname{THH}(R)/\sigma) \simeq L\Omega_{R/\mathbf{F}_{p}}^{i}[-i]\). This picture admits an analogue over \(J(p)\). Recall from Proposition 2.3.3(a) that \(\pi_{\ast}\operatorname{THH}(\mathbf{Z}_{p}/J(p))\cong\mathbf{Z}_{p}[x]\) with \(|x|=2\). Let \(\mathscr{C}\) be a \(\mathbf{Z}_{p}\)-linear \(\infty\)-category. One could attempt to define the quotient \(\operatorname{THH}(\mathscr{C}/J(p))/x\) as a relative tensor product of \(\operatorname{THH}(\mathscr{C}/J(p))\) with \(\mathbf{Z}_{p}\) over \(\operatorname{THH}(\mathbf{Z}_{p}/J(p))\). Unfortunately, this tensor product does not make sense, since \(\operatorname{THH}(\mathbf{Z}_{p}/J(p))\) does not naturally acquire the structure of an \(\mathbf{E}_{1}\)-algebra. However, were \(J(p)\) to admit the structure of an \(\mathbf{E}_{3}\)-algebra, the above relative tensor product would precisely be computing \(\operatorname{HH}(\mathscr{C}/\mathbf{Z}_{p})=\operatorname{THH}(\mathscr{C} )\otimes_{\operatorname{THH}(\mathbf{Z}_{p})}\mathbf{Z}_{p}\). It is therefore reasonable to view the canonical map \(\operatorname{THH}(\mathscr{C}/J(p))\to\operatorname{HH}(\mathscr{C}/ \mathbf{Z}_{p})\) as a quotient by \(x\). If \(R\) is an animated \(\mathbf{Z}_{p}\)-algebra, then \(\operatorname{HH}(R/\mathbf{Z}_{p})\) is a noncommutative analogue of the Hodge complex \(\bigoplus_{n\geq 0}L\widehat{\Omega}_{R/\mathbf{Z}_{p}}^{n}[-n]\). Under Conjecture 3.1.14, the perspective that the map \(\operatorname{THH}(\mathscr{C}/J(p))\to\operatorname{HH}(\mathscr{C}/ \mathbf{Z}_{p})\) is given by "killing \(x\)" can be regarded as an analogue of [BL22a, Remark 4.7.14], which identifies \(\operatorname{gr}_{i}^{\operatorname{conj}}\widehat{\Omega}_{R}^{\mathscr{D}} \simeq L\widehat{\Omega}_{R/\mathbf{Z}_{p}}^{i}[-i]\). 2. Let \(R\) be a smooth \(\mathbf{Z}_{p}\)-algebra. 
Then the prismatic-crystalline comparison theorem (see [BL22a, Remark 4.7.18]) implies that the base-change \(\mathbf{F}_{p}\otimes_{\mathbf{Z}_{p}}\operatorname{F}_{\star}^{\operatorname {conj}}\widehat{\Omega}_{R}^{\mathscr{D}}\) can be identified with \(\operatorname{Frob}_{\ast}\mathbf{F}_{\star}^{\operatorname{conj}}\Omega_{R/p /\mathbf{F}_{p}}^{\star}\), where \(\operatorname{Frob}:R\to R\) is the absolute Frobenius. Under Conjecture 3.1.14, Corollary 2.3.7 can be viewed as a noncommutative analogue of this result. 3. By Proposition 2.3.3, the class \(x\) is sent to \(\sigma\in\pi_{2}\operatorname{THH}(\mathbf{F}_{p})\) under the map \(\iota:\operatorname{THH}(\mathbf{Z}_{p}/J(p))\to\operatorname{THH}(\mathbf{F}_ {p})\). Since the cyclotomic Frobenius induces an equivalence \(\varphi:\operatorname{THH}(\mathbf{F}_{p})[1/\sigma]\xrightarrow{\sim} \operatorname{THH}(\mathbf{F}_{p})^{t\mathbf{Z}/p}\), the cofiber sequence of (18) predicts a cofiber sequence (20) \[\operatorname{THH}(\mathscr{C})^{t\mathbf{Z}/p}\otimes_{\mathbf{Z}_{p}} \mathbf{F}_{p}\xrightarrow{\iota}\operatorname{THH}(\mathscr{C}\otimes_{ \mathbf{Z}_{p}}\mathbf{F}_{p})^{t\mathbf{Z}/p}\xrightarrow{\Theta^{\prime}} \operatorname{THH}(\mathscr{C}\otimes_{\mathbf{Z}_{p}}\mathbf{F}_{p})^{t \mathbf{Z}/p}.\] Such a cofiber sequence does indeed exist, and we will construct it below in Corollary 3.1.19 (albeit using slightly different methods). Suppose that the cofiber sequence (20) respects the motivic filtration when \(\mathscr{C}=\operatorname{Mod}_{R}\). Since \(\operatorname{THH}(R)^{t\mathbf{Z}/p}\simeq\operatorname{HP}((R/p)/\mathbf{F}_ {p})\) (see [Mat20, Proposition 2.12]) and \(\operatorname{HP}((R/p)/\mathbf{F}_{p})\) has a motivic filtration such that \(\operatorname{gr}_{\operatorname{mot}}^{i}\operatorname{HP}((R/p)/\mathbf{F}_ {p})\simeq\operatorname{dR}_{(R/p)/\mathbf{F}_{p}}[2i]\), the cofiber sequence (20) would presumably be related under Conjecture 3.1.14 to the following cofiber sequence related to (19) (whose existence was told to me by Akhil Mathew): \[\overline{\Delta}_{R}/p\to\operatorname{dR}_{(R/p)/\mathbf{F}_{p}}\to \operatorname{dR}_{(R/p)/\mathbf{F}_{p}}. \tag{21}\] For completeness, we give an argument for (21). Proof of the cofiber sequence (21). Recall from [11, Corollary 3.16] that if \(A\) is an animated \(\mathbf{Z}_{p}[x]\)-algebra, there is a cofiber sequence \[\overline{\mathbb{A}}_{A}\{i\}/x\to\overline{\mathbb{A}}_{A/x}\{i\}\to\overline {\mathbb{A}}_{A/x}\{i-1\}. \tag{22}\] This implies (by setting \(i=0\) and viewing \(R/p\) as the base-change \(R\otimes_{\mathbf{Z}_{p}[x]}\mathbf{Z}_{p}\), where the map \(\mathbf{Z}_{p}[x]\to R\) sends \(x\mapsto p\), and the map \(\mathbf{Z}_{p}[x]\to\mathbf{Z}_{p}\) is the augmentation) that there is a cofiber sequence \[\overline{\mathbb{A}}_{R}/p\to\overline{\mathbb{A}}_{R/p}\to\overline{\mathbb{ A}}_{R/p}.\] The de Rham/crystalline comparison theorems tell us that \(\mathbb{A}_{R/p}\simeq\mathbb{A}_{(R/p)/\mathbf{Z}_{p}}\simeq(\mathrm{dR}_{R}) _{p}^{\wedge}\), where \(\mathbb{A}_{(R/p)/\mathbf{Z}_{p}}\) denotes prismatic cohomology with respect to the crystalline prism \((\mathbf{Z}_{p},(p))\) (i.e., the derived crystalline cohomology of \(R/p\)). But then \(\overline{\mathbb{A}}_{R/p}\simeq\mathrm{dR}_{(R/p)/\mathbf{F}_{p}}\), as desired. Let us remark that (22) can be constructed using \(\mathrm{WCart}_{\mathbf{G}_{a}}^{\mathrm{HT}}\). 
Indeed, we can reduce to the case when \(A\) is the \(p\)-completion of \(\mathbf{Z}_{p}[x]=\mathscr{O}_{\mathbf{G}_{a}}\). Then, [12, Example 9.1] implies that \(\mathrm{Spec}(\mathbf{Z}_{p})\times_{\mathbf{G}_{a}}\mathrm{WCart}_{\mathbf{ G}_{a}}^{\mathrm{HT}}\cong B(\mathbf{G}_{a}^{\sharp}\rtimes\mathbf{G}_{m}^{ \sharp})\). Let \(\alpha:\mathrm{WCart}_{\mathbf{Z}_{p}}^{\mathrm{HT}}\to\mathrm{WCart}_{\mathbf{ G}_{a}}^{\mathrm{HT}}\) be the tautological map, so that it factors through a map \(f:\mathrm{WCart}_{\mathbf{Z}_{p}}^{\mathrm{HT}}\to\mathrm{Spec}(\mathbf{Z}_{p}) \times_{\mathbf{G}_{a}}\mathrm{WCart}_{\mathbf{G}_{a}}^{\mathrm{HT}}\), which can in turn be identified with the map \(B\mathbf{G}_{m}^{\sharp}\to B(\mathbf{G}_{a}^{\sharp}\rtimes\mathbf{G}_{m}^{ \sharp})\). It follows that there is a Cartesian square. Let \(\mathscr{F}\) be a quasicoherent sheaf on \(\mathrm{WCart}_{\mathbf{G}_{a}}^{\mathrm{HT}}\), and let \(\mathscr{F}/x\) be the associated quasicoherent sheaf on \(\mathrm{Spec}(\mathbf{Z}_{p})\times_{\mathbf{G}_{a}}\mathrm{WCart}_{\mathbf{G}_ {a}}^{\mathrm{HT}}\). Our goal is to identify the cofiber of the map \(\mathscr{F}/x\to f_{*}\alpha^{*}\mathscr{F}\simeq f_{*}f^{*}(\mathscr{F}/x)\) in the case when \(\mathscr{F}\) is the Breuil-Kisin twisting line bundle \(\mathscr{O}_{\mathrm{WCart}_{\mathbf{G}_{a}}^{\mathrm{HT}}}\{i\}\) on \(\mathrm{WCart}_{\mathbf{G}_{a}}^{\mathrm{HT}}\). The preceding Cartesian square along with the cofiber sequence11 Footnote 11: Here, we declare \(\gamma_{-1}(x)=0\). implies that \(\mathrm{cofib}(\mathscr{F}/x\to f_{*}\alpha^{*}\mathscr{F})\) can be identified with \(\mathscr{O}_{\mathrm{WCart}_{\mathbf{Z}_{p}}^{\mathrm{HT}}}\{-1\}\otimes f_{*} \alpha^{*}\mathscr{F}\). Setting \(\mathscr{F}=\mathscr{O}_{\mathrm{WCart}_{\mathbf{G}_{a}}^{\mathrm{HT}}}\{i\}\) and taking global sections produces (22). We now construct a more general version of the cofiber sequence (20). We first need the following lemma: **Lemma 3.1.17**.: _Let \(G\subseteq S^{1}\) be a nontrivial finite subgroup of \(S^{1}\), and let \(\lambda\) denote the rotation representation of \(S^{1}\) on \(\mathbf{C}\)._ 1. _Define_ \((S^{\lambda})^{(1)}\) _via the cofiber sequence_ \[G_{+}\to S^{0}\to(S^{\lambda})^{(1)}.\] _Then there is a cofiber sequence_ \[\Sigma(G_{+})\to(S^{\lambda})^{(1)}\to S^{\lambda}.\] 2. _Let_ \(X\) _be a spectrum with_ \(G\)_-action. Then_ \(X^{tG}\xrightarrow{\sim}(\Sigma^{\lambda}X)^{tG}\)_._ Proof.: Part (a) describes an equivariant CW-structure on \(S^{\lambda}\); we leave this as an exercise to the reader. Part (b) follows by observing that the cofiber sequence \[G_{+}\otimes X\to X\to(S^{\lambda})^{(1)}\otimes X\] implies that \(X^{tG}\xrightarrow{\sim}((S^{\lambda})^{(1)}\otimes X)^{tG}\); and the cofiber sequence \[X\otimes\Sigma(G_{+})\to X\otimes(S^{\lambda})^{(1)}\to\Sigma^{\lambda}X\] implies that \((X\otimes(S^{\lambda})^{(1)})^{tG}\xrightarrow{\sim}(\Sigma^{\lambda}X)^{tG}\). **Proposition 3.1.18**.: _Let \(S[\pi]=S[\mathbf{Z}_{\geq 0}]\). For any \(S[\pi]\)-linear \(\infty\)-category \(\mathscr{C}\), there are cofiber sequences_ \[\operatorname{THH}(\mathscr{C})^{t\mathbf{Z}/p}\otimes_{S[\pi]}S \to\operatorname{THH}(\mathscr{C}\otimes_{S[\pi]}S)^{t\mathbf{Z}/p} \xrightarrow{\nabla^{t\mathbf{Z}/p}}\operatorname{THH}(\mathscr{C}\otimes_{S[ \pi]}S)^{t\mathbf{Z}/p}, \tag{23}\] \[\operatorname{TP}(\mathscr{C}) \to\operatorname{TP}(\mathscr{C}/S[\pi])\xrightarrow{\nabla^{tS^{ 1}}}\operatorname{TP}(\mathscr{C}/S[\pi]). 
\tag{24}\] Proof.: We will use (15) with \(A=S\) (here, the variable \(t\) is relabeled as \(\pi\)). This gives us an \(S^{1}\)-equivariant cofiber sequence \[\operatorname{THH}(\mathscr{C})\to\operatorname{THH}(\mathscr{C}/S[\pi])\to \Sigma^{\lambda}\operatorname{THH}(\mathscr{C}/S[\pi]). \tag{25}\] To prove the cofiber sequence (23), we first apply \(t\mathbf{Z}/p\) to the preceding cofiber sequence: \[\operatorname{THH}(\mathscr{C})^{t\mathbf{Z}/p}\to\operatorname{THH}(\mathscr{C }/S[\pi])^{t\mathbf{Z}/p}\to(\Sigma^{\lambda}\operatorname{THH}(\mathscr{C}/S[ \pi]))^{t\mathbf{Z}/p}.\] Observe that base-changing \(\operatorname{THH}(\mathscr{C})^{t\mathbf{Z}/p}\) along the augmentation \(S[\pi]\to S\) sending \(\pi\mapsto 0\) yields precisely the first term \(\operatorname{THH}(\mathscr{C})^{t\mathbf{Z}/p}\otimes_{S[\pi]}S\) of (23). Similarly, \(\operatorname{THH}(\mathscr{C}/S[\pi])^{t\mathbf{Z}/p}\otimes_{S[\pi]}S\simeq \operatorname{THH}(\mathscr{C}\otimes_{S[\pi]}S)^{t\mathbf{Z}/p}\). It therefore suffices to show that \((\Sigma^{\lambda}\operatorname{THH}(\mathscr{C}/S[\pi]))^{t\mathbf{Z}/p}\simeq \operatorname{THH}(\mathscr{C}/S[\pi])^{t\mathbf{Z}/p}\); but this is exactly Lemma 3.1.17. The cofiber sequence (24) is even easier to construct: applying \(tS^{1}\) to (25), we obtain a cofiber sequence \[\operatorname{TP}(\mathscr{C})\to\operatorname{TP}(\mathscr{C}/S[\pi])\to( \Sigma^{\lambda}\operatorname{THH}(\mathscr{C}/S[\pi]))^{tS^{1}}.\] Since there is a cofiber sequence \[S^{1}_{+}\to S^{0}\to S^{\lambda},\] we see that there is an equivalence \(X^{tS^{1}}\xrightarrow{\sim}(\Sigma^{\lambda}X)^{tS^{1}}\) for any \(S^{1}\)-spectrum \(X\). In particular, \((\Sigma^{\lambda}\operatorname{THH}(\mathscr{C}/S[\pi]))^{tS^{1}}\simeq \operatorname{TP}(\mathscr{C}/S[\pi])\), as desired. **Corollary 3.1.19**.: _Let \(K\) be a number field, let \(\mathfrak{p}\subseteq\mathscr{O}_{K}\) be a prime ideal over \(p\), and let \(R\) denote the localization of \(\mathscr{O}_{K}\) at \(\mathfrak{p}\). Denote by \(\pi\in R\) a uniformizer, and let \(k=R/\pi\) be the residue field, so that there is an \(\mathbf{E}_{\infty}\)-map \(S[\pi]\to R\) sending \(\pi\mapsto\pi\). For any \(R\)-linear \(\infty\)-category \(\mathscr{C}\), there are cofiber sequences_ \[\operatorname{THH}(\mathscr{C})^{t\mathbf{Z}/p}\otimes_{R}k \to\operatorname{THH}(\mathscr{C}\otimes_{R}k)^{t\mathbf{Z}/p} \xrightarrow{\nabla^{t\mathbf{Z}/p}}\operatorname{THH}(\mathscr{C}\otimes_{R }k)^{t\mathbf{Z}/p}, \tag{26}\] \[\operatorname{TP}(\mathscr{C}) \to\operatorname{TP}(\mathscr{C}/S[\pi])\xrightarrow{\nabla^{tS^{1 }}}\operatorname{TP}(\mathscr{C}/S[\pi]). \tag{27}\] **Remark 3.1.20**.: The cofiber sequence (27) was used in [10] to calculate \(\operatorname{TP}(\mathscr{O}_{K})\) by computing the resulting endomorphism of \(\operatorname{TP}(\mathscr{O}_{K}/S[\pi])\). ### Some calculations of \(\mathrm{THH}\) relative to \(X(p)\) and \(\Theta\) We now calculate the topological Sen operator for perfectoid rings; these calculations lend further evidence for Conjecture 3.1.14. **Recollection 3.2.1**.: Let \(R\) be a perfectoid ring. Recall that \(A_{\mathrm{inf}}(R)=W(R^{\flat})\), so that \(L_{A_{\mathrm{inf}}(R)/\mathbf{Z}_{p}}\) is \(p\)-completely zero. Let \(A_{\mathrm{inf}}^{+}(R)\) denote the spherical Witt vectors \(W^{+}(R^{\flat})\) of [12, Example 5.2.7]. **Lemma 3.2.2**.: _Let \(\xi\) be a generator of the kernel of Fontaine's map \(\theta:A_{\mathrm{inf}}(R)\to R\). 
Let \(\Omega^{2}S^{3}\to\mathrm{BGL}_{1}(A_{\mathrm{inf}}^{+}(R))\) denote the \(\mathbf{E}_{2}\)-map which detects \(1-\xi\in A_{\mathrm{inf}}(R)^{\times}\) on the bottom cell of the source. Then there is an equivalence of \(\mathbf{E}_{2}\)-\(A_{\mathrm{inf}}^{+}(R)\)-algebras between the \(\xi\)-adic completion of \(A_{\mathrm{inf}}(R)\) and the \(\xi\)-adic completion of the Thom spectrum of the following composite:_ \[g_{\xi}:\Omega^{2}S^{3}\langle 3\rangle\to\Omega^{2}S^{3}\to\mathrm{BGL}_{1}(A_{ \mathrm{inf}}^{+}(R)).\] _In particular, there is an equivalence \(\mathrm{THH}(A_{\mathrm{inf}}(R)^{\wedge}_{\xi}/A_{\mathrm{inf}}^{+}(R)^{ \wedge}_{\xi})\simeq A_{\mathrm{inf}}(R)^{\wedge}_{\xi}[\Omega S^{3}\langle 3\rangle]\) of \(\mathbf{E}_{2}\)-\(A_{\mathrm{inf}}(R)^{\wedge}_{\xi}\)-algebras._ Proof.: Recall from [13, Theorem 1.13] that the Thom spectrum of the map \(\Omega^{2}S^{3}\to\mathrm{BGL}_{1}(A_{\mathrm{inf}}^{+}(R))\) is equivalent to \(R\) as an \(\mathbf{E}_{2}\)-\(A_{\mathrm{inf}}^{+}(R)\)-algebra. The fiber sequence \[\Omega^{2}S^{3}\langle 3\rangle\to\Omega^{2}S^{3}\to S^{1}\] implies that there is a class \(\xi\in\pi_{0}(\Omega^{2}S^{3}\langle 3\rangle)^{g_{\xi}}\) and a map \(S^{1}\to\mathrm{BGL}_{1}((\Omega^{2}S^{3}\langle 3\rangle)^{g_{\xi}})\) detecting \(1-\xi\), such that its Thom spectrum is \(R\). This implies that there is a cofiber sequence \[(\Omega^{2}S^{3}\langle 3\rangle)^{g_{\xi}}\xrightarrow{\xi}(\Omega^{2}S^{3} \langle 3\rangle)^{g_{\xi}}\to R.\] It follows that the \(\xi\)-adic completion of \((\Omega^{2}S^{3}\langle 3\rangle)^{g_{\xi}}\) is equivalent to \(A_{\mathrm{inf}}(R)^{\wedge}_{\xi}\). The claim about \(\mathrm{THH}\) follows in the standard manner using [1]. **Remark 3.2.3**.: In fact, the calculation from [1, Theorem 6.1] that \(\pi_{*}\mathrm{THH}(R)\cong R[\sigma]\) is equivalent to [13, Theorem 1.13] (which constructs \(R\) as the Thom spectrum of the map \(\Omega^{2}S^{3}\to\mathrm{BGL}_{1}(A_{\mathrm{inf}}^{+}(R))\)). The equivalence between these two statements can be proved similarly to [1, Remark 1.5]. **Proposition 3.2.4**.: _Let \(R\) be a \(p\)-complete perfectoid ring. Then there is a \(p\)-complete equivalence_ \[\mathrm{THH}(R/X(p))\simeq R[\mathbf{C}P^{\infty}\times\Omega S^{2p+1}]\otimes _{R}\epsilon^{R}.\] _In particular, if \(\theta\) denotes the "polynomial12" generator in degree \(2p\) arising via the James filtration on \(\Omega S^{2p+1}\) and \(R\langle u\rangle=\pi_{*}R[\mathbf{C}P^{\infty}]\) is (the underlying \(R\)-module of) a divided power algebra on a class \(u\) in degree \(2\), then there is a \(p\)-complete isomorphism_ Footnote 12: Recall that \(\mathrm{THH}(R/X(p))\) is not a ring; the word polynomial simply means the subspace generated by \(R[\Omega S^{2p+1}]_{*}\). 
\[\pi_{*}\mathrm{THH}(R/X(p))\simeq R[\theta]\langle u\rangle\otimes_{R} \epsilon_{*}^{R}.\] Proof.: Let \(X(p)_{\xi}\) denote the \(\xi\)-adic completion of the Thom spectrum of the composite \[\Omega\mathrm{SU}(p)\to\Omega S^{2p-1}\xrightarrow{\alpha_{1}}\Omega^{2}S^{3} \langle 3\rangle\to\mathrm{BGL}_{1}(A_{\mathrm{inf}}^{+}(R)).\] Then, the map \(\operatorname{THH}(X(p))_{\xi}\to\operatorname{THH}(X(p))\otimes A^{+}_{\inf}(R)^{ \wedge}_{\xi}\) is a \((p,\xi)\)-complete equivalence: indeed, the above composite is determined as an \(\mathbf{E}_{1}\)-map by the composite \[\operatorname{SU}(p)\to S^{2p-1}\xrightarrow{(1-\xi)\alpha_{1}}B^{2} \mathrm{GL}_{1}(A^{+}_{\inf}(R)).\] Since \(1-\xi\) is a unit in \(\pi_{0}A^{+}_{\inf}(R)\cong A_{\inf}(R)\), it suffices to prove that the map \(\operatorname{THH}(A^{+}_{\inf}(R)^{\wedge}_{\xi})\to A^{+}_{\inf}(R)^{\wedge} _{\xi}\) is a \((p,\xi)\)-complete equivalence. But this is clear: after killing \(\xi\) and tensoring with \(\mathbf{F}_{p}\), we obtain the map \(\operatorname{HH}(R^{\flat}/\mathbf{F}_{p})\to R^{\flat}\), which is an equivalence since \(R^{\flat}\) is perfect. It then follows from Lemma 3.2.2 and the same argument used to prove Theorem 2.2.4(a) that there are \((p,\xi)\)-complete equivalences \[\operatorname{THH}(A_{\inf}(R)^{\wedge}_{\xi}/X(p))\simeq\operatorname{THH}(A _{\inf}(R)^{\wedge}_{\xi}/X(p)_{\xi})\simeq A_{\inf}(R)[\Omega S^{2p+1}\times \operatorname{BSU}(p-1)].\] Therefore, there are \(p\)-complete equivalences \[\operatorname{THH}(R/X(p)) \simeq\operatorname{THH}(R/X(p)_{\xi})\] \[\simeq\operatorname{THH}(R/A^{+}_{\inf}(R)^{\wedge}_{\xi}) \otimes_{\operatorname{THH}(A_{\inf}(R)^{\wedge}_{\xi}/A^{+}_{\inf}(R)^{ \wedge}_{\xi})}\operatorname{THH}(A_{\inf}(R)^{\wedge}_{\xi}/X(p)_{\xi})\] \[\simeq\operatorname{THH}(R/A^{+}_{\inf}(R)^{\wedge}_{\xi}) \otimes_{A_{\inf}(R)^{\wedge}_{\xi}[\Omega S^{3}(3)]}A_{\inf}(R)^{\wedge}_{\xi }[\Omega S^{2p+1}\times\operatorname{BSU}(p-1)].\] Since \(R\) is perfectoid, [1, Theorem 6.1] implies that \(\operatorname{THH}(R/A^{+}_{\inf}(R))\simeq R[\Omega S^{3}]\). The map \(\operatorname{THH}(W(R^{\flat}))\to\operatorname{THH}(R)\) induced by the unit can be identified with the composite \(W(R^{\flat})[\Omega S^{3}(3)]\to R[\Omega S^{3}]\), induced by Fontaine's map \(\theta:A_{\inf}(R)\to R\). There is a \(p\)-local Cartesian square (28) which implies that \[\operatorname{THH}(R/X(p))\simeq R[\Omega S^{2p+1}\times\mathbf{C}P^{\infty} \times\operatorname{BSU}(p-1)],\] as desired. Alternatively, there are equivalences \[\operatorname{THH}(R/X(p)_{\xi}) \simeq\operatorname{THH}(R/A^{+}_{\inf}(R)^{\wedge}_{\xi}) \otimes_{\operatorname{THH}(X(p)_{\xi}/A^{+}_{\inf}(R)^{\wedge}_{\xi})} X(p)_{\xi}\] \[\simeq R[\Omega S^{3}]\otimes_{R[\operatorname{SU}(p)]}R.\] The desired calculation follows from the observation that there is a \(p\)-local fibration \[\operatorname{SU}(p)\simeq\operatorname{SU}(p-1)\times S^{2p-1}\xrightarrow{* \times\alpha_{1}}\Omega S^{3}\to\Omega S^{2p +1}\times\mathbf{C}P^{\infty}\times\operatorname{BSU}(p-1)\] which is induced by the Cartesian square (28). **Remark 3.2.5**.: Proposition 3.2.4 has the following slight variant: if \(R\) is a \(p\)-complete perfectoid ring, then there is a \(p\)-complete equivalence \(\operatorname{THH}(R/J(p))\simeq R[\Omega S^{3}\times\mathbf{C}P^{\infty}]\). 
The only modification is that one instead has to use the \(p\)-local Cartesian square \[\begin{CD}\Omega S^{3}\langle 3\rangle@>>>\Omega S^{3}\\ @VVV@VVV\\ \Omega S^{3}@>>>\Omega S^{3}\times\mathbf{C}P^{\infty},\end{CD}\] which supplies a fibration \[S^{1}\to\Omega S^{3}\to\Omega S^{3}\times\mathbf{C}P^{\infty}.\] In particular, the above discussion shows that \(\pi_{*}\mathrm{THH}(R/J(p))\cong R[x]\langle u\rangle\). This is compatible with Conjecture 3.1.14: 1. First, \(\pi_{*}\mathrm{THH}(R/J(p))[x^{-1}]\cong R[x^{\pm 1}]\langle\frac{u}{x}\rangle\). Since \(\frac{u}{x}\) lives in degree \(0\), Conjecture 3.1.14 predicts that \(\widehat{\Omega}_{R}^{\not\!\!D}\cong R\langle\frac{u}{x}\rangle\). This is indeed true: [1, Example 4.7.6] implies that the diffracted Hodge complex of a \(p\)-complete perfectoid ring \(R\) is a divided power \(R\)-algebra on a single class in degree zero. 2. Second, \(\tau_{(2n-2,2n]}\mathrm{THH}(R/J(p))\) is equivalent to \(\bigoplus_{0\leq j\leq n}R\cdot\gamma_{j}(u)x^{n-j}\), so that Conjecture 3.1.14 predicts that \(\mathrm{F}_{n}^{\mathrm{conj}}\widehat{\Omega}_{R}^{\not\!\!D}\) is isomorphic to the \(R\)-submodule of \(\widehat{\Omega}_{R}^{\not\!\!D}\) generated by \(\{\gamma_{j}(\frac{u}{x})\}_{0\leq j\leq n}\). This is indeed true: see \((*_{n})\) in the proof of [1, Lemma 5.6.14]. In the same way, \(\tau_{(2(n-1)p,2np]}\mathrm{THH}(R/T(1))\) is a free \(R\)-module spanned by \(\theta^{i}\gamma_{j}(u)\) for \((n-1-i)p<j\leq(n-i)p\). This includes \(\gamma_{j}(u)\) for \((n-1)p<j\leq np\), but also terms such as \(\theta^{n}\) and \(\theta^{n-1}\gamma_{p}(u)\). **Remark 3.2.6**.: We can understand the calculation of Proposition 3.2.4 more algebraically as follows. There is a \(p\)-local fiber sequence \[S^{2p-1}\to\Omega S^{3}\to\mathbf{C}P^{\infty}\times\Omega S^{2p+1}, \tag{29}\] where the second map is given by the product of the canonical map \(\Omega S^{3}\to\mathbf{C}P^{\infty}\) with the James-Hopf map \(\Omega S^{3}\to\Omega S^{2p+1}\). The Serre spectral sequence in \(\mathbf{Z}_{p}\)-homology for (29) is given by \[E_{*,*}^{2}=\mathbf{Z}_{p}\langle u\rangle\otimes_{\mathbf{Z}_{p}}\mathbf{Z}_ {p}[\theta,\epsilon]/\epsilon^{2}\Rightarrow\pi_{*}\mathbf{Z}_{p}[\Omega S^{3} ]\cong\mathbf{Z}_{p}[\sigma],\] where \(\epsilon\) lives in degree \(2p-1\). It is not difficult to show that there is a single family of differentials given by \[d^{2p}(\gamma_{p^{n}}(u))=\epsilon\prod_{j=1}^{n-1}\gamma_{p^{j}}(u)^{p-1},\ d^{2p}(\theta^{j})=jp\theta^{j-1}\epsilon,\] where the equality is to be understood up to \(p\)-adic units. The above description implies that the map \(d^{2p}:E_{2np,0}^{2}\to E_{2np-2p,2p-1}^{2}\) is surjective, and its kernel is a free \(\mathbf{Z}_{p}\)-module of rank \(1\) (for example, one can calculate an explicit \((n+1)\times n\)-matrix with coefficients in \(\mathbf{Z}_{p}\) which describes \(d^{2p}\)). If \(R\) is a perfectoid ring, this discussion determines the Serre spectral sequence in \(R\)-homology for (29). Since the \(d^{2p}\)-differential in this spectral sequence is just the effect of the topological Sen operator \(\Theta_{R}:\mathrm{THH}(R/X(p))\to\Sigma^{2p}\mathrm{THH}(R/X(p))\) on homotopy, we see that \(\Theta_{R}\) is given (up to \(p\)-adic units) by the map \[\gamma_{p^{n}}(u)\mapsto\prod_{j=1}^{n-1}\gamma_{p^{j}}(u)^{p-1}.\] **Proposition 3.2.8**.: _Let \(n\geq 1\), and let \(\Omega^{2}Y_{n}\) denote the pullback of the canonical map \(\Omega S^{3}\to\mathbf{C}P^{\infty}\) along the multiplication-by-\(p^{n-1}\) map on \(\mathbf{C}P^{\infty}\), so that there is a fibration \(B\mathbf{Z}/p^{n-1}\to\Omega^{2}Y_{n}\to\Omega S^{3}\). Then there are \(p\)-complete equivalences_ \[\mathrm{THH}(\mathbf{Z}/p^{n}/X(p))\simeq\mathbf{Z}/p^{n}[B^{2}(p^{n-1} \mathbf{Z})\times\Omega S^{2p+1}]\otimes_{\mathbf{Z}/p^{n}}\epsilon^{\mathbf{Z }/p^{n}},\] \[\mathrm{THH}(\mathbf{Z}/p^{n}/J(p))\simeq\mathbf{Z}/p^{n}[B^{2}(p^{n-1} \mathbf{Z})\times\Omega S^{3}].\] Proof.: Since \(\mathbf{Z}/p^{n}\) is the Thom spectrum of a map \(B(p^{n-1}\mathbf{Z})\simeq S^{1}\to\mathrm{BGL}_{1}(S)\) detecting \((1-p)^{p^{n-1}}\) (see Remark 3.2.9 below), arguing as in the proof of Proposition 3.2.4 produces a \(p\)-local fibration \[S^{2p-1}\to\Omega^{2}Y_{n}\to\Omega S^{2p+1}\times B^{2}(p^{n-1}\mathbf{Z}),\] which implies the calculation of \(\mathrm{THH}(\mathbf{Z}/p^{n}/X(p))\). 
The calculation of \(\mathrm{THH}(\mathbf{Z}/p^{n}/J(p))\) is similar. **Remark 3.2.9**.: One could also deduce Proposition 3.2.8 for \(n\geq 2\) from Proposition 3.2.4 for \({\bf F}_{p}\), using descent and the fact that \({\rm HH}({\bf F}_{p}/{\bf Z}/p^{n})={\bf F}_{p}[K({\bf Z}/p^{n-1},2)]\). Indeed, the composite \(S^{1}\xrightarrow{p^{n-1}}S^{1}\xrightarrow{1-p}{\rm BGL}_{1}(S)\) detects the class \((1-p)^{p^{n-1}}=1-p^{n}u\in{\bf Z}_{p}^{\times}\) for some \(p\)-adic unit \(u\). Therefore, its Thom spectrum is equivalent to \({\bf Z}/p^{n}\). In turn, [10, Proposition 2.1.6] (or [1]) and the fiber sequence \[S^{1}\xrightarrow{p^{n-1}}S^{1}\to B{\bf Z}/p^{n-1}\] imply that \({\bf F}_{p}\) is the Thom spectrum of a map \(B{\bf Z}/p^{n-1}\to{\rm BGL}_{1}({\bf Z}/p^{n})\) which detects \(1-p\in({\bf Z}/p^{n})^{\times}\) on the bottom cell of the source. Applying [1] implies the desired calculation of \({\rm HH}({\bf F}_{p}/{\bf Z}/p^{n})\). **Remark 3.2.10**.: There is a higher chromatic analogue of Proposition 3.2.8. To explain this, recall from [11, Construction 3.5.1] that there is an \({\bf E}_{2}\)-algebra \(S(\!(\hbar)\!)\) over the sphere spectrum with \(|\hbar|=-2\). It follows from [1, Corollary 3.12] that \(S(\!(\hbar)\!)\) can be upgraded to an \({\bf E}_{2}^{\rm fr}\)-algebra. Tensoring with \(X(p^{n})\) therefore defines an \({\bf E}_{2}^{\rm fr}\)-ring \(X(p^{n})(\!(\hbar)\!)\); in particular, one can define THH relative to \(X(p^{n})(\!(\hbar)\!)\). The \({\bf E}_{2}\)-map \(X(p^{n})\to{\rm BP}\langle n-1\rangle\to{\rm BP}\langle n-1\rangle^{tS^{1}}\) factors through an \({\bf E}_{2}\)-map \(X(p^{n})(\!(\hbar)\!)\to{\rm BP}\langle n-1\rangle^{tS^{1}}\), where \(\hbar\) is sent to a complex orientation of \({\rm BP}\langle n-1\rangle\) (viewed as a class in \(\pi_{-2}{\rm BP}\langle n-1\rangle^{tS^{1}}\)). The calculation of Theorem 2.2.4 implies that \[{\rm THH}({\rm BP}\langle n-1\rangle^{tS^{1}}/X(p^{n})(\!(\hbar)\!))\simeq{\rm BP }\langle n-1\rangle^{tS^{1}}[\Omega S^{2p^{n}+1}\times B\Delta_{n}].\] The spectrum \({\rm BP}\langle n-1\rangle^{t{\bf Z}/m}\) is the quotient \({\rm BP}\langle n-1\rangle^{tS^{1}}/\frac{[m](\hbar)}{\hbar}\), where \([m](\hbar)\) denotes the \(m\)-series of the formal group law over \({\rm BP}\langle n-1\rangle_{*}\). This can be viewed as the Thom spectrum of a map \(S^{1}\to{\rm BGL}_{1}({\rm BP}\langle n-1\rangle^{tS^{1}})\) detecting \(1+\frac{[m](\hbar)}{\hbar}\in\pi_{0}({\rm BP}\langle n-1\rangle^{tS^{1}})^{\times}\). It follows that \[{\rm THH}({\rm BP}\langle n-1\rangle^{t{\bf Z}/m}/X(p^{n})(\!(\hbar)\!))\simeq{ \rm BP}\langle n-1\rangle^{t{\bf Z}/m}[BS^{1}\times\Omega S^{2p^{n}+1}\times B \Delta_{n}]. \tag{30}\] When \(n=1\), there is an equivalence \({\rm BP}\langle 0\rangle^{t{\bf Z}/m}\simeq({\bf Z}/m)^{tS^{1}}\), and (30) can be viewed as the equivalence of Proposition 3.2.8, base-changed along \({\bf Z}/m\to({\bf Z}/m)^{tS^{1}}\). Since \(B^{2}(p^{n-1}{\bf Z})\cong{\bf C}P^{\infty}\) (more canonically, it is the total space of the line bundle \(\mathscr{O}(p^{n-1})\) over the standard \({\bf C}P^{\infty}\)), Proposition 3.2.8 implies that \(\pi_{*}{\rm THH}({\bf Z}/p^{n}/J(p))\cong{\bf Z}/p^{n}[x]\langle u_{n}\rangle\) with \(|u_{n}|=|x|=2\). Were Conjecture 3.1.14 to hold, Proposition 3.2.8 would imply that \(\widehat{\Omega}^{\not{D}}_{{\bf Z}/p^{n}}\) is a (discrete) divided power algebra over \({\bf Z}/p^{n}\). 
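To make the divided power phenomenon concrete, here are the first few homotopy groups implicit in the isomorphism \(\pi_{*}\mathrm{THH}(\mathbf{Z}/p^{n}/J(p))\cong\mathbf{Z}/p^{n}[x]\langle u_{n}\rangle\) (a routine unpacking; the relations among divided powers take place in the Pontryagin ring \(\mathrm{H}_{*}(\mathbf{C}P^{\infty};\mathbf{Z}/p^{n})\)): \[\pi_{0}\cong\mathbf{Z}/p^{n},\quad\pi_{2}\cong\mathbf{Z}/p^{n}\{x,u_{n}\},\quad\pi_{4}\cong\mathbf{Z}/p^{n}\{x^{2},xu_{n},\gamma_{2}(u_{n})\},\quad u_{n}^{2}=2\gamma_{2}(u_{n}),\] and more generally \(\gamma_{i}(u_{n})\gamma_{j}(u_{n})=\binom{i+j}{i}\gamma_{i+j}(u_{n})\). Formally inverting \(x\) (as in Remark 3.2.5) leaves the degree-zero divided power algebra \(\mathbf{Z}/p^{n}\langle u_{n}/x\rangle\), which is exactly the shape predicted for \(\widehat{\Omega}^{\not{D}}_{\mathbf{Z}/p^{n}}\).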
In [1, Example 5.15], it is shown that if \({\bf G}^{\sharp}_{a}\) denotes the PD-completion of \({\bf G}_{a}\) at the origin, then \({\rm Spec}({\bf Z}/p^{n})^{\not{D}}\otimes{\bf F}_{p}\cong{\bf G}^{\sharp}_{a} \otimes{\bf F}_{p}\); this implies that \(\widehat{\Omega}^{\not{D}}_{{\bf Z}/p^{n}}\otimes_{{\bf Z}/p^{n}}{\bf F}_{p}\) is isomorphic to the divided power algebra \({\bf F}_{p}\langle t_{n}\rangle\) for \(|t_{n}|=0\). However, as predicted by Conjecture 3.1.14, there is in fact no need to reduce modulo \(p\): Corollary 3.2.15 below says that \(\widehat{\Omega}^{\not{D}}_{{\bf Z}/p^{n}}\) is indeed isomorphic to the divided power algebra \({\bf Z}/p^{n}\langle t_{n}\rangle\) for \(|t_{n}|=0\). I am grateful to Bhargav Bhatt for the statement of the following lemma, which is analogous to the calculation that if \(R\) is a commutative ring and \(x\in R\) is a regular element, then there is a \(p\)-complete equivalence \({\rm dR}_{R/x/R}\simeq R\langle x\rangle/x\) (see [1, Theorem 8.4]). The argument for Lemma 3.2.11 below is my interpretation of Bhatt's explanation. The topological discussion above can be regarded as an analogue of the calculation that \(\operatorname{HH}(R/x/R)\simeq R[\mathbf{C}P^{\infty}]/x\). We will freely use notation from [1, 2] below. **Lemma 3.2.11**.: _Let \((A,I)\) be a transversal prism (i.e., \(A/I\) is \(p\)-torsionfree). Let \(x\in A\) be an element such that \(x\pmod{I}\) is regular in \(\overline{A}:=A/I\), and such that \((x)\subseteq A\) is \(\phi\)-stable. Then \(\operatorname{WCart}_{A/(I,x)/A}^{\operatorname{HT}}\) is \(p\)-completely isomorphic to \(\mathbf{G}_{a}^{\sharp}\times\operatorname{Spf}(A/(I,x))\), so that \(\overline{\Delta}_{A/(I,x)/A}\cong A/(I,x)\langle t\rangle\) with \(|t|=0\)._ Proof.: By [1, Proposition 5.12], the map \(\operatorname{WCart}_{A/(I,x)/A}^{\operatorname{HT}}\to\operatorname{Spf}(A/( I,x))\) is a split gerbe, banded by \(T_{A/(I,x)/\overline{A}}\{1\}^{\sharp}\). In this case, since \(x\pmod{I}\) is a regular element of \(\overline{A}\), we see that \(L_{A/(I,x)/\overline{A}}=(x)/(x^{2})[1]\), so that \(T_{A/(I,x)/\overline{A}}=\operatorname{Spf}\operatorname{Sym}_{A/(I,x)}(L_{A/ (I,x)/\overline{A}})_{p}^{\wedge}\) is isomorphic to \(\Omega\mathbf{G}_{a}\) over \(A/(I,x)\). It follows that \(\operatorname{WCart}_{A/(I,x)/A}^{\operatorname{HT}}\) is isomorphic to a trivial \(\mathbf{G}_{a}^{\sharp}\)-torsor over \(\operatorname{Spf}(A/(I,x))\). Since \(\overline{\Delta}_{A/(I,x)/A}\) is the global sections of the structure sheaf of \(\operatorname{WCart}_{A/(I,x)/A}^{\operatorname{HT}}\), the lemma follows. **Remark 3.2.12**.: In fact, the conjugate filtration \(\operatorname{F}_{i}^{\operatorname{conj}}\overline{\Delta}_{A/(I,x)/A}\) is isomorphic to the divided power filtration on \(A/(I,x)\langle t\rangle\) under Lemma 3.2.11. **Remark 3.2.13**.: Sticking with the assumptions of Lemma 3.2.11, let us mention without proof that Lemma 3.2.11 is also a consequence of [1, Example 7.9], which states that \(\mathbb{A}_{A/(I,x)/A}\cong A\{\frac{x}{I}\}_{(p,I)}^{\wedge}\). 
If \(I=(d)\) is principal, the \(p\)-complete isomorphism \[\beta:A/(I,x)\langle t\rangle_{p}^{\wedge}\xrightarrow{\sim}\overline{ \Delta}_{A/(I,x)/A}\cong A\left\{\frac{x}{I}\right\}_{p}^{\wedge}/I\] leads to an \(I\)-adic Bockstein spectral sequence \[E_{1}^{*,*}=A/(I,x)\langle t\rangle_{p}^{\wedge}[\overline{d}]\cong A\left\langle \frac{x}{d}\right\rangle_{p}^{\wedge}[\overline{d}]/d\Rightarrow A\left\{ \frac{x}{d}\right\}_{(p,d)}^{\wedge},\] where \(\overline{d}\) represents \(d\) on the \(E_{1}\)-page. The map \(\beta\) sends \(\gamma_{p^{n}}(t)\mapsto\delta^{n}(\frac{x}{d})\) (up to \(p\)-adic units). This can be proved by showing that in the setting of Lemma 3.2.11, \(\phi(\delta^{n}(\frac{x}{d}))\in(d)\subseteq A\{\frac{x}{d}\}\) if \(n\geq 0\) (see Lemma 3.2.14 below). The fact that \[\phi\left(\delta^{n}\left(\frac{x}{d}\right)\right)=\delta^{n}\left(\frac{x} {d}\right)^{p}+p\delta^{n+1}\left(\frac{x}{d}\right)\] then implies that \(\delta^{n}(\frac{x}{d})^{p}\equiv-p\delta^{n+1}(\frac{x}{d})\pmod{d}\). Therefore, the elements \(\delta^{n}(\frac{x}{d})\) can be used to define divided powers of \(\frac{x}{d}\pmod{d}\). In particular, we obtain the desired map \(\beta:A/(I,x)\langle t\rangle\to A\{\frac{x}{d}\}_{(p,d)}^{\wedge}/d\), but further work is required to show that it is a \(p\)-complete isomorphism. **Lemma 3.2.14**.: _Fix notation as in Lemma 3.2.11. Then \(\phi(\delta^{n}(\frac{x}{d}))\in(d)\subseteq A\{\frac{x}{d}\}\)._ Proof.: Let \(t=\frac{x}{d}\). The desired claim can be proved by induction on \(n\). For the base case, we need to show that \(\phi(t)\in I\). By reduction to the universal case, we may assume that \((p,d)\) is regular in \(A\). Then [1, Lemma 3.6] implies that the sequence \((d,\phi(d))\) is regular in \(A\). Since \((x)\) is \(\phi\)-stable, we see that \(d\) divides \(\phi(x)\); it then follows from the formula \(\phi(d)\phi(t)=\phi(x)\) that \(d\) divides \(\phi(t)\), as desired. For the inductive step, observe that \[p\phi(\delta^{n+1}(t))=p\delta(\phi(\delta^{n}(t)))=\phi^{2}(\delta^{n}(t))- \phi(\delta^{n}(t))^{p}.\] The inductive hypothesis says that \(\phi(\delta^{k}(t))\in(d)\) for every \(k\leq n\), so that \(d\) divides \(p\phi(\delta^{n+1}(t))\). Since \((p,d)\) is a regular sequence, this implies that \(d\) divides \(\phi(\delta^{n+1}(t))\), as desired. This implies the following result, which is also proved in [23, Lemma 6.13]. **Corollary 3.2.15**.: _There is an isomorphism \(\operatorname{Spec}(\mathbf{Z}/p^{n})^{\not{D}}\cong\mathbf{G}_{a}^{\sharp} \times\operatorname{Spec}(\mathbf{Z}/p^{n})\) of \(\mathbf{Z}/p^{n}\)-schemes. In particular, the scaling action of \(\mathbf{G}_{m}^{\sharp}\) on \(\mathbf{G}_{a}^{\sharp}\) over \(\mathbf{Z}/p^{n}\) gives an isomorphism \(\operatorname{WCart}_{\mathbf{Z}/p^{n}}^{\operatorname{HT}}\cong\mathbf{G}_{a} ^{\sharp}/\mathbf{G}_{m}^{\sharp}\) of \(\mathbf{Z}/p^{n}\)-stacks._ Proof.: Recall that \(\overline{\Delta}_{\mathbf{Z}/p^{n}/\mathbf{Z}_{p}[\overline{p}]}=\widehat{ \Omega}_{\mathbf{Z}/p^{n}}^{\not{D}}\). Lemma 3.2.11 implies that \(\overline{\Delta}_{\mathbf{Z}/p^{n}/\mathbf{Z}_{p}[\overline{p}]}\cong\mathbf{ Z}/p^{n}\langle t\rangle\) with \(|t|=0\); this gives the desired claim. (It is useful to view \(\gamma_{p^{m}}(t)\) as a \(p\)-adic unit multiple of \(\delta^{m}(\frac{p^{n}}{\overline{p}})\), as described in Remark 3.2.13.) Alternatively, consider the transversal prism \((A,I)=(\mathbf{Z}_{p}[q-1],[p]_{q})\), and let \(x=(q-1)^{n(p-1)}\). 
Note that \(\phi(x)\in(x)\), so \((x)\) is \(\phi\)-stable. Then \(A/I\cong\mathbf{Z}_{p}[\zeta_{p}]\), and \(A/(I,x)\) is isomorphic to \(\mathbf{Z}_{p}[\zeta_{p}]/(\zeta_{p}-1)^{n(p-1)}\cong\mathbf{Z}/p^{n}[\zeta_{p}]\) since the \(p\)-adic valuation of \((\zeta_{p}-1)^{n(p-1)}\) is \(n\). It follows from Lemma 3.2.11 that \(\overline{\Delta}_{\mathbf{Z}/p^{n}[\zeta_{p}]/\mathbf{Z}_{p}\llbracket q-1\rrbracket}\cong\mathbf{Z}/p^{n}[\zeta_{p}]\langle t^{\prime}\rangle\) with \(|t^{\prime}|=0\). There is an action of \(\mathbf{Z}_{p}^{\times}\) (and hence \(\mathbf{F}_{p}^{\times}\subseteq\mathbf{Z}_{p}^{\times}\)) on \((A,I)\); taking \(\mathbf{F}_{p}^{\times}\)-fixed points produces an isomorphism \[\overline{\Delta}_{\mathbf{Z}/p^{n}/\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket}\cong(\overline{\Delta}_{\mathbf{Z}/p^{n}[\zeta_{p}]/\mathbf{Z}_{p}\llbracket q-1\rrbracket})^{h\mathbf{F}_{p}^{\times}}\cong\mathbf{Z}/p^{n}\langle t\rangle\] with \(|t|=0\), as desired. Note that as described in Remark 3.2.13, the divided power \(\gamma_{p^{m}}(t^{\prime})\) can be viewed as a \(p\)-adic unit multiple of \(\delta^{m}(\frac{(q-1)^{n(p-1)}}{[p]_{q}})=\delta^{m}(\frac{(q-1)^{np-n+1}}{q^{p}-1})\). An alternative (and more hands-on) proof of Corollary 3.2.15 is given in Appendix B; this alternative argument is also presented as [23, Lemma 6.13]. **Example 3.2.16**.: Let us describe the topological Sen operator on \(\operatorname{THH}(\mathbf{Z}/p^{n}/X(p))\) for \(n\geq 2\) (recall that \(p>2\)). This is equivalent to describing the Serre spectral sequence in \(\mathbf{Z}/p^{n}\)-homology for the fibration \[S^{2p-1}\to\Omega^{2}Y_{n}\to\Omega S^{2p+1}\times B(p^{n-1}\mathbf{Z}).\] Note that this fibration is an analogue of the fibration (5). It will be simpler to analyze the Serre spectral sequence in \(\mathbf{Z}_{p}\)-homology, since all the differentials in the Serre spectral sequence in \(\mathbf{Z}/p^{n}\)-homology arise from the Serre spectral sequence in \(\mathbf{Z}_{p}\)-homology. The analysis is similar to Remark 3.2.6; the Serre spectral sequence runs \[E_{*,*}^{2}=\mathbf{Z}_{p}\langle u_{n}\rangle\otimes_{\mathbf{Z}_{p}}\mathbf{Z}_{p}[\theta,\epsilon]/\epsilon^{2}\Rightarrow\pi_{*}\mathbf{Z}_{p}[\Omega^{2}Y_{n}], \tag{31}\] where \(\epsilon\) lives in degree \(2p-1\), \(\theta\) lives in degree \(2p\), and \(u_{n}\) lives in degree \(2\). There are several ways to determine the differentials in this spectral sequence. Our approach will be to describe the pattern of differentials by first calculating \(\pi_{*}\mathbf{Z}_{p}[\Omega^{2}Y_{n}]\); in turn, we will do this by computing \(\pi_{*}C^{*}(\Omega^{2}Y_{n};\mathbf{Z}_{p})\). For this, we use the Serre spectral sequence for the fibration \[B\mathbf{Z}/p^{n-1}\to\Omega^{2}Y_{n}\to\Omega S^{3}.\] Since \(\operatorname{H}^{*}(B\mathbf{Z}/p^{n-1};\mathbf{Z})\cong\mathbf{Z}[c]/p^{n-1}c\) with \(|c|=2\), the Serre spectral sequence collapses on the \(E_{2}\)-page, and we find that \(\pi_{*}C^{*}(\Omega^{2}Y_{n};\mathbf{Z}_{p})\cong\mathbf{Z}_{p}\langle x\rangle[c]/(x-p^{n-1}c)\) with \(|x|=2\). (If \(n=1\), then \(\Omega^{2}Y_{n}\simeq\Omega S^{3}\), and the cohomology ring is \(\mathbf{Z}_{p}\langle x\rangle\).) For \(n\geq 2\), this is isomorphic to \(\mathbf{Z}_{p}\langle y\rangle[c]/y\), where \(y=x-p^{n-1}c\). 
Indeed, observe that if \(n\geq 2\), then \[\gamma_{j}(y):=\sum_{i=0}^{j}\frac{(-p^{n-1})^{i}}{i!}c^{i}\gamma_{j-i}(x)\] is a well-defined class in \(\mathbf{Z}_{p}\langle x\rangle[c]/(x-p^{n-1}c)\) since \(p\) has divided powers in \(\mathbf{Z}_{p}\), and that these classes form a basis for \(\mathbf{Z}_{p}\langle x\rangle[c]/(x-p^{n-1}c)\) as a \(\mathbf{Z}_{p}[c]\)-module. Recall that in homological grading, there is an equivalence: \[\mathbf{Z}_{p}\langle y\rangle/y\simeq\mathbf{Z}_{p}\oplus\bigoplus_{j\geq 1}\mathbf{Z}_{p}/j\{\gamma_{j}(y)\}[-2j],\] which implies that if \(n\geq 2\), then \[\mathrm{H}^{i}(\Omega^{2}Y_{n};\mathbf{Z}_{p})\cong\begin{cases}\mathbf{Z}_{p}\oplus\bigoplus_{j=1}^{k}\mathbf{Z}_{p}/j\{\gamma_{j}(y)c^{k-j}\}&i=2k\geq 0\text{ even}\\ 0&\text{else}.\end{cases}\] Using the universal coefficients theorem, we find that if \(n\geq 2\), then \[\pi_{i}\mathbf{Z}_{p}[\Omega^{2}Y_{n}]\cong\begin{cases}\mathbf{Z}_{p}&i\in 2\mathbf{Z}_{\geq 0}\\ \bigoplus_{j=1}^{k}\mathbf{Z}_{p}/j&i=2k-1.\end{cases}\] The generator of \(\pi_{2j}\mathbf{Z}_{p}[\Omega^{2}Y_{n}]\) is the linear dual to \(c^{j}\in\mathbf{Z}_{p}\langle y\rangle[c]/y\), while the generator of \(\pi_{2k-1}\mathbf{Z}_{p}[\Omega^{2}Y_{n}]\) which is killed by \(j\) is dual to \(\gamma_{j}(y)c^{k-j}\). Note that the homotopy groups \(\pi_{*}\mathbf{Z}_{p}[\Omega^{2}Y_{n}]\) are _independent_ of \(n\) if \(n\geq 2\) (but the generators of these groups do depend on \(n\)). Let us now return to the Serre spectral sequence (31). Comparison with the Serre spectral sequence for the fibration (5) (i.e., with the topological Sen operator on \(\mathrm{THH}(\mathbf{Z}_{p}/X(p))\); see Remark 3.1.6) forces the differentials in (31) to be given by (up to \(p\)-adic units): \[d^{2p}(\gamma_{p^{k}}(u_{n}))=p^{n-1}\epsilon\prod_{j=1}^{k-1}\gamma_{p^{j}}(u_{n})^{p-1}=p^{n}\epsilon\partial_{u_{n}^{p}}(\gamma_{p^{k}}(u_{n})),\ d^{2p}(\theta^{j})=jp\theta^{j-1}\epsilon.\] Reducing modulo \(p^{n}\), we get the topological Sen operator on \(\mathrm{THH}(\mathbf{Z}/p^{n}/X(p))\) for \(n\geq 2\): \[\Theta:\gamma_{p^{k}}(u_{n})\mapsto p^{n-1}\prod_{j=1}^{k-1}\gamma_{p^{j}}(u_{n})^{p-1},\ \Theta:\theta^{j}\mapsto jp\theta^{j-1}.\] Observe that this acts as "\(p^{n}\partial_{u_{n}^{p}}\)". Of course, one can similarly deduce the action of the topological Sen operator on \(\mathrm{THH}(\mathbf{Z}/p^{n}/J(p))\). This recovers the calculation \[\pi_{j}\mathrm{THH}(\mathbf{Z}/p^{n})=\begin{cases}\bigoplus_{i=0}^{j}\mathbf{Z}/\gcd(j,p^{n})&j\geq 0,\\ 0&j<0.\end{cases}\] Another example of the topological Sen operator comes from studying complete DVRs, where the relationship between THH relative to \(J(p)\) and the diffracted Hodge complex predicted by Conjecture 3.1.14 can be seen directly. **Example 3.2.17**.: Let \(R\) be a \(p\)-torsionfree complete DVR of mixed characteristic \((0,p>0)\) whose residue field \(k\) is perfect. Then we have \[\pi_{*}\mathrm{THH}(R/X(p))\cong\mathrm{HH}_{*}(R/\mathbf{Z}_{p})[\theta]\otimes_{\mathbf{Z}_{p}}\epsilon_{*}^{\mathbf{Z}_{p}},\] and the map \(\Theta:\pi_{*}\mathrm{THH}(R/X(p))\to\pi_{*-2p}\mathrm{THH}(R/X(p))\) sends \(\theta^{j}\mapsto jp\theta^{j-1}\). 
To compute the action of the topological Sen operator on the remainder of \(\mathrm{THH}(R/X(p))\), it will be simpler to assume that \(T(1)\) is an \(\mathbf{E}_{2}\)-ring and work instead with \(\mathrm{THH}(R/T(1))\); this is merely cosmetic, and it is not difficult to modify the below argument to use \(\mathrm{THH}(R/X(p))\) instead. Then, we have \(\pi_{*}\mathrm{THH}(R/T(1))\cong\mathrm{HH}_{*}(R/\mathbf{Z}_{p})[\theta]\). We will compute \(\mathrm{THH}(R)\) using the topological Sen operator on \(\mathrm{THH}(R/T(1))\) and (16). Let \(\pi\in R\) be a uniformizer, let \(E(u)\in W(k)[\![u]\!]\) be its minimal polynomial, and let \(E^{\prime}(u)\in W(k)[\![u]\!]\) denote its derivative with respect to \(u\). Recall that \(R=W(k)[\![u]\!]/E(u)\), that \(W(k)\) is étale over \(\mathbf{Z}_{p}\), \(\pi_{*}\mathrm{HH}(W(k)[\![u]\!]/W(k))\cong\Lambda_{W(k)[\![u]\!]}(du)\) with \(|du|=1\), and \(\pi_{*}\mathrm{HH}(R/W(k)[\![u]\!])\cong R\langle\sigma_{E}\rangle\), where \(\sigma_{E}:=\sigma^{2}(E(u))\). The transitivity sequence for the composite \(W(k)\to W(k)[\![u]\!]\to R\) implies that \(\mathrm{HH}(R/W(k))\simeq\mathrm{HH}(R/\mathbf{Z}_{p})\) is the fiber of a map \(R\langle\sigma_{E}\rangle\to\Sigma^{2}R\langle\sigma_{E}\rangle\) sending \(\gamma_{n}(\sigma_{E})\mapsto E^{\prime}(\pi)\gamma_{n-1}(\sigma_{E})\). In particular, \[\pi_{n}\mathrm{HH}(R/\mathbf{Z}_{p})\cong\begin{cases}R&n=0,\\ R/E^{\prime}(\pi)&n=2j+1,\ j\geq 0,\\ 0&\text{else}.\end{cases}\] Let us denote the generator of \(\pi_{2j-1}\mathrm{HH}(R/\mathbf{Z}_{p})\) by \(z_{j}\), so that \(\gamma_{j-1}(\sigma_{E})\in\pi_{2j}\Sigma^{2}R\langle\sigma_{E}\rangle\) is sent to \(z_{j}\) under the boundary map \(\Sigma^{2}R\langle\sigma_{E}\rangle\to\Sigma\mathrm{HH}(R/\mathbf{Z}_{p})\). We then have \[\pi_{n}\mathrm{THH}(R/T(1))\cong\begin{cases}R\cdot\theta^{j}&n=2pj,j\geq 0\\ \bigoplus_{0\leq i<j/p}R/E^{\prime}(\pi)\cdot z_{j-pi}\theta^{i}&n=2j-1,j\geq 1,\\ 0&\text{else}.\end{cases} \tag{32}\] From this, we can describe the topological Sen operator on \(\mathrm{THH}(R/T(1))\). For this, it will be useful to rephrase the above calculations somewhat, and use \(J(p)\) instead of \(T(1)\). It is easy to compute that \(\pi_{*}\mathrm{THH}(R/J(p))\cong\mathrm{HH}_{*}(R/\mathbf{Z}_{p})[x]\), where \(x\) is the class in degree \(2\) from Proposition 2.3.3. In other words, \[\pi_{n}\mathrm{THH}(R/J(p))\cong\begin{cases}R\cdot x^{j}&n=2j\text{ for }j\geq 0,\\ \bigoplus_{0\leq i<j}R/E^{\prime}(\pi)\cdot z_{j-i}x^{i}&n=2j-1,j\geq 1.\end{cases}\] Since \(\mathrm{HH}(R/\mathbf{Z}_{p})\) is the fiber of a map \(R\langle\sigma_{E}\rangle\to\Sigma^{2}R\langle\sigma_{E}\rangle\), it follows that there is a cofiber sequence \[\mathrm{THH}(R/J(p))\to R\langle\sigma_{E}\rangle[x]\xrightarrow{\nabla}\Sigma^{2}R\langle\sigma_{E}\rangle[x], \tag{33}\] where we have denoted the second map by \(\nabla\). The map \(\nabla\) is given on homotopy by a derivation, sending \(\sigma_{E}\mapsto E^{\prime}(\pi)\). Informally, \(\mathrm{THH}(R/J(p))\) can be written as \(R\langle\sigma_{E}\rangle[x]^{\nabla=0}\). The topological Sen operator \(\Theta:\mathrm{THH}(R/J(p))\to\Sigma^{2}\mathrm{THH}(R/J(p))\) is described on homotopy by the operator on \(R\langle\sigma_{E}\rangle[x]\) sending \(x^{n}\mapsto nx^{n-1}\). Note that this operator commutes with \(\nabla\) (so that it does indeed define an operator on \(\pi_{*}\mathrm{THH}(R/J(p))\)). 
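Concretely, on the basis elements \(x^{n}\gamma_{m}(\sigma_{E})\) of \(\pi_{*}R\langle\sigma_{E}\rangle[x]\), the commutation \(\Theta\nabla=\nabla\Theta\) can be checked directly (assuming, as the description above suggests, that \(\Theta\) acts \(R\langle\sigma_{E}\rangle\)-linearly): \[\Theta(\nabla(x^{n}\gamma_{m}(\sigma_{E})))=\Theta(E^{\prime}(\pi)x^{n}\gamma_{m-1}(\sigma_{E}))=nE^{\prime}(\pi)x^{n-1}\gamma_{m-1}(\sigma_{E})=\nabla(nx^{n-1}\gamma_{m}(\sigma_{E}))=\nabla(\Theta(x^{n}\gamma_{m}(\sigma_{E}))).\]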
Observe that since \(\mathrm{THH}(R)\) is the fiber of \(\Theta:\mathrm{THH}(R/J(p))\to\Sigma^{2}\mathrm{THH}(R/J(p))\), and \(\mathrm{THH}(R/J(p))\) is the fiber of \(\nabla:R\langle\sigma_{E}\rangle[x]\to\Sigma^{2}R\langle\sigma_{E}\rangle[x]\), we can write \(\mathrm{THH}(R)\) as the total fiber of the square \[\begin{array}{ccc}R\langle\sigma_{E}\rangle[x]&\xrightarrow{\ \nabla\ }&\Sigma^{2}R\langle\sigma_{E}\rangle[x]\\ \downarrow{\scriptstyle\Theta}&&\downarrow{\scriptstyle\Theta}\\ \Sigma^{2}R\langle\sigma_{E}\rangle[x]&\xrightarrow{\ \nabla\ }&\Sigma^{4}R\langle\sigma_{E}\rangle[x]\end{array} \tag{34}\] where the map denoted \(\Theta\) sends \(x^{n}\mapsto nx^{n-1}\). In turn, it follows that \(\mathrm{THH}(R)\) is also the total fiber of the square \[\begin{array}{ccc}R\langle\sigma_{E}\rangle[x]&\xrightarrow{\ \nabla\ }&\Sigma^{2}R\langle\sigma_{E}\rangle[x]\\ \downarrow{\scriptstyle\nabla+\Theta}&&\downarrow{\scriptstyle\nabla+\Theta}\\ \Sigma^{2}R\langle\sigma_{E}\rangle[x]&\xrightarrow{\ \nabla\ }&\Sigma^{4}R\langle\sigma_{E}\rangle[x]\end{array} \tag{35}\] The operator \(\nabla+\Theta\) acts on \(R\langle\sigma_{E}\rangle[x]\) by \[\nabla+\Theta:x^{n}\gamma_{m}(\sigma_{E})\mapsto nx^{n-1}\gamma_{m}(\sigma_{E})+E^{\prime}(\pi)x^{n}\gamma_{m-1}(\sigma_{E}). \tag{36}\] Let us now invert \(x\), and write \(y=\sigma_{E}x^{-1}\) in \(R\langle\sigma_{E}\rangle[x^{\pm 1}]\). Then \(y\) has divided powers and lives in degree \(0\), and there is an isomorphism \(R\langle\sigma_{E}\rangle[x^{\pm 1}]\cong R\langle y\rangle_{(y)}^{\wedge}[x^{\pm 1}]\) on homotopy. We can formally define \(\Theta\) on \(x^{n}\) for \(n\leq 0\) by the same formula: \(\Theta(x^{n})=nx^{n-1}\). It follows from (36) that \(\Psi:=x(\nabla+\Theta)\) sends \[\Psi:\gamma_{n}(y)\mapsto-nx^{-n}\gamma_{n}(\sigma_{E})+E^{\prime}(\pi)x^{-n+1}\gamma_{n-1}(\sigma_{E})=E^{\prime}(\pi)\gamma_{n-1}(y)-n\gamma_{n}(y).\] We claim that the action of \(-t\partial_{t}\) on \(R\langle(1-t)E^{\prime}(\pi)\rangle\) agrees with the action of \(\Psi\) on \(R\langle y\rangle_{(y)}^{\wedge}\), if we identify \(y=(1-t)E^{\prime}(\pi)\). Indeed: \[\begin{split}\gamma_{n}(y)&=\gamma_{n}((1-t)E^{\prime}(\pi))=E^{\prime}(\pi)^{n}\frac{(1-t)^{n}}{n!}\\ &\xmapsto{-t\partial_{t}}E^{\prime}(\pi)^{n}\frac{t(1-t)^{n-1}}{(n-1)!}=E^{\prime}(\pi)^{n}\left(\frac{(1-t)^{n-1}}{(n-1)!}-n\frac{(1-t)^{n}}{n!}\right)\\ &=E^{\prime}(\pi)\gamma_{n-1}((1-t)E^{\prime}(\pi))-n\gamma_{n}((1-t)E^{\prime}(\pi))\\ &=E^{\prime}(\pi)\gamma_{n-1}(y)-n\gamma_{n}(y).\end{split}\] In particular, after inverting \(x\), we can rewrite the square (35) as a square of copies of \(R\langle(1-t)E^{\prime}(\pi)\rangle[x^{\pm 1}]\) whose vertical maps are given by \(-t\partial_{t}\), and whose horizontal maps act by sending \(\gamma_{n}((1-t)E^{\prime}(\pi))\mapsto E^{\prime}(\pi)\gamma_{n-1}((1-t)E^{\prime}(\pi))\). The fiber of either of the horizontal maps in this square can be identified with \(\mathrm{THH}(R/J(p))[x^{-1}]\). Using the above description of \(\Theta\), one can calculate (with some tedium) that \[\pi_{n}\mathrm{THH}(R)=\begin{cases}R&n=0,\\ R/jE^{\prime}(\pi)&n=2j-1\geq 0,\\ 0&\text{else}.\end{cases}\] This is exactly the calculation of \(\pi_{*}\mathrm{THH}(R)\) from [11, Theorem 5.1] (reproved in [19, Theorem 4.4]). The above discussion can be compared to [1, Remark 9.7], which says that if \(X=\mathrm{Spec}\,R\), then \(\operatorname{WCart}_{X}^{\operatorname{HT}}\cong X\times BG\), where \(G=\{(a,t)\in\mathbf{G}_{a}^{\sharp}\rtimes\mathbf{G}_{m}^{\sharp}\,|\,t-1=E^{\prime}(\pi)a\}\). The canonical map \(\operatorname{WCart}_{X}^{\operatorname{HT}}\to X\times\operatorname{WCart}^{\operatorname{HT}}\cong(B\mathbf{G}_{m}^{\sharp})_{X}\) can be identified with the map induced on classifying stacks by the quotient map \(G\to\mathbf{G}_{m}^{\sharp}\) of group schemes over \(X\). Recall that the diffracted Hodge stack \(X^{\not\!\!D}\) can be identified with \(\operatorname{WCart}_{X}^{\operatorname{HT}}\times_{\operatorname{WCart}^{\operatorname{HT}}}\operatorname{Spec}(\mathbf{Z}_{p})\cong\operatorname{WCart}_{X}^{\operatorname{HT}}\times_{\operatorname{WCart}^{\operatorname{HT}}\times X}X\). 
In particular, \(X^{\not\!\!D}\cong(\mathbf{G}_{m}^{\sharp})_{X}/G\), i.e., \(X^{\not\!\!D}\) is the classifying stack of the group scheme \(\mathbf{G}_{a}^{\sharp}[E^{\prime}(\pi)]=\{a\in\mathbf{G}_{a}^{\sharp}|E^{\prime}(\pi)a=0\}\). One can show from this description that the \(2\)-periodification of the cohomology of the diffracted Hodge complex \(\widehat{\Omega}_{R}^{\not\!\!D}\cong\Gamma(B\mathbf{G}_{a}^{\sharp}[E^{\prime}(\pi)];\mathscr{O})\) can be identified with \(\pi_{*}\mathrm{THH}(R/J(p))[x^{-1}]\) (which is additively the \(2\)-periodification of \(\pi_{*}\mathrm{HH}(R/\mathbf{Z}_{p})\)), as predicted by Conjecture 3.1.14. Note that the extensions in the following long exact sequence in homotopy for (16) are _always_ nontrivial: \[\cdots\to\pi_{2j}\mathrm{THH}(R/T(1))\to\pi_{2(j-p)}\mathrm{THH}(R/T(1))\to\pi_{2j-1}\mathrm{THH}(R)\to\] \[\pi_{2j-1}\mathrm{THH}(R/T(1))\to\pi_{2(j-p)-1}\mathrm{THH}(R/T(1))\to\pi_{2j-2}\mathrm{THH}(R)\to\cdots \tag{37}\] For example, when \(j=p\), there is a long exact sequence \[\pi_{2p}\mathrm{THH}(R/T(1))\cong R\cdot\theta\to\pi_{0}\mathrm{THH}(R/T(1))\cong R\to\pi_{2p-1}\mathrm{THH}(R)\to\pi_{2p-1}\mathrm{THH}(R/T(1))\to 0,\] which in particular gives a short exact sequence \[0\to R/p\to\pi_{2p-1}\mathrm{THH}(R)\to R/E^{\prime}(\pi)\cong\pi_{2p-1}\mathrm{THH}(R/T(1))\to 0.\] Since \(\pi_{2p-1}\mathrm{THH}(R)\cong R/pE^{\prime}(\pi)\), this extension must be nontrivial. ### Relation to the \(\widetilde{p}\)-de Rham complex We now describe some additional calculations which give further evidence for Conjecture 3.1.14. **Remark 3.3.1**.: Assume that \(R\) is the \(p\)-completion of \(\mathbf{Z}_{p}[t]\). Forthcoming work of Arpon Raksit ([10]) shows that (a completion of) \(q\Omega_{R}\) arises as the associated graded of a motivic filtration on \(\mathrm{HP}(\mathrm{ku}[t]/\mathrm{ku})\). In fact, Raksit studies \(\mathrm{HP}(A[t]/A)\) for a general \(\mathbf{E}_{\infty}\)-ring \(A\) with even homotopy groups. Using Remark 3.3.1, one can show that (a completion of) \(\widetilde{p}\Omega_{R}\) arises as the associated graded of a motivic filtration on \(\mathrm{HP}(\mathrm{BP}\langle 1\rangle[t]/\mathrm{BP}\langle 1\rangle)\). Moreover, the class \(\widetilde{p}\) is identified as the image of \(v_{1}\hbar^{p-1}\) in the associated graded. For the sake of completeness, let us explicitly compute \(\pi_{*}\mathrm{TP}(\mathbf{Z}_{p}[t]/X(p))\). As in Example 3.2.17, it will be convenient to assume that \(T(1)\) is an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring and work instead with \(\mathrm{THH}(R/T(1))\); again, this is merely cosmetic. We first need the following result, which is a special case of [11, Proposition 3.1.1] and \(S^{1}\)-equivariant Poincaré duality for \(S^{1}/\mu_{n}\). **Lemma 3.3.2**.: _Let \(X\) be a bounded-below spectrum equipped with an action of \(S^{1}\). Then there is an equivalence_ \[\left(\bigoplus_{n\geq 1}X\otimes(S^{1}/\mu_{n})_{+}\right)^{tS^{1}}\simeq\lim_{k\to\infty}\bigoplus_{n\geq 1}\left(\Sigma\tau_{\leq k}X\right)^{t\mathbf{Z}/n}.\] **Example 3.3.3**.: Let \(S\) be the sphere spectrum. Recall that \(\mathbf{Z}_{p}[t]\simeq\mathbf{Z}_{p}\otimes S[t]\), so that \(\mathrm{THH}(\mathbf{Z}_{p}[t]/T(1))\simeq\mathrm{THH}(\mathbf{Z}_{p}/T(1))\otimes\mathrm{THH}(S[t])\). Let \(\mathrm{THH}(S[t],(t))\) denote the fiber of the map \(\mathrm{THH}(S[t])\to\mathrm{THH}(S)\simeq S\) induced by the augmentation \(S[t]\to S\) sending \(t\mapsto 0\); note that the map \(\mathrm{THH}(S[t])\to S\) admits an \(S^{1}\)-equivariant splitting. 
Similarly, we write \(\mathrm{THH}(\mathbf{Z}_{p}[t]/T(1),(t))\) to denote the fiber of the map \(\mathrm{THH}(\mathbf{Z}_{p}[t]/T(1))\to\mathrm{THH}(\mathbf{Z}_{p}/T(1))\) induced by the augmentation \(\mathbf{Z}_{p}[t]\to\mathbf{Z}_{p}\). Then \(\mathrm{THH}(S[t],(t))\simeq\bigoplus_{n\geq 1}(S^{1}/\mu_{n})_{+}\), so that \[\mathrm{THH}(\mathbf{Z}_{p}[t]/T(1),(t))\simeq\mathrm{THH}(\mathbf{Z}_{p}/T(1))\otimes\mathrm{THH}(S[t],(t))\simeq\bigoplus_{n\geq 1}(S^{1}/\mu_{n})_{+}\otimes\mathrm{THH}(\mathbf{Z}_{p}/T(1)). \tag{38}\] It follows from Lemma 3.3.2 that there is an equivalence \[\mathrm{THH}(\mathbf{Z}_{p}[t]/T(1),(t))^{tS^{1}}\simeq\lim_{k\to\infty}\bigoplus_{n\geq 1}\Sigma(\tau_{\leq k}\mathrm{THH}(\mathbf{Z}_{p}/T(1)))^{t\mathbf{Z}/n}.\] Using Theorem 2.2.4(a), we have \(\tau_{\leq 2kp}\mathrm{THH}(\mathbf{Z}_{p}/T(1))\simeq\mathbf{Z}_{p}[J_{k}(S^{2p})]\). A simple calculation using Theorem 2.2.4(a) shows that there is an isomorphism \[\pi_{*}(\tau_{\leq 2kp}\mathrm{THH}(\mathbf{Z}_{p}/T(1)))^{t\mathbf{Z}/n}\cong\pi_{*}(\mathrm{BP}\langle 1\rangle/v_{1}^{k+1})^{t\mathbf{Z}/n}\cong\pi_{*}(\tau_{\leq 2k(p-1)}\mathrm{BP}\langle 1\rangle)^{t\mathbf{Z}/n}.\] Let \(\langle n\rangle(\hbar):=\frac{[n](\hbar)}{\hbar}\), so that \(\pi_{*}\mathrm{BP}\langle 1\rangle^{t\mathbf{Z}/n}\cong\mathbf{Z}_{p}[v_{1}]((\hbar))/\langle n\rangle(\hbar)\). In analogy to \(q=\beta\hbar+1\), if we define \(\widetilde{p}=v_{1}\hbar^{p-1}\), then \(\langle n\rangle(\hbar)\) defines an element of \(\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket\) which we will denote \(\langle n\rangle_{\widetilde{p}}\). We conclude that \[\pi_{*}(\tau_{\leq 2kp}\mathrm{THH}(\mathbf{Z}_{p}/T(1)))^{t\mathbf{Z}/n}\cong\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket((\hbar))/(\widetilde{p}^{k+1},\langle n\rangle(\hbar)).\] It follows that \[\pi_{*}\mathrm{THH}(\mathbf{Z}_{p}[t]/T(1),(t))^{tS^{1}}\cong\lim_{k\to\infty}\bigoplus_{n\geq 1}\Sigma\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket((\hbar))/(\widetilde{p}^{k+1},\langle n\rangle_{\widetilde{p}}),\] i.e., that \[\pi_{*}\mathrm{TP}(\mathbf{Z}_{p}[t]/T(1))\cong\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket((\hbar))\times\lim_{k\to\infty}\bigoplus_{n\geq 1}\Sigma\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket((\hbar))/\langle n\rangle_{\widetilde{p}}.\] In a manner similar to Example 3.3.3, one calculates that if we write \(\mathbf{Z}_{p}[\beta]((\hbar))=\mathbf{Z}_{p}\llbracket q-1\rrbracket((\hbar))\) by setting \(q=1+\beta\hbar\), and \(\langle n\rangle_{\mathbf{G}_{m}}(\hbar)=\frac{[n]_{\mathbf{G}_{m}}(\hbar)}{\hbar}\) is the divided \(n\)-series of the rescaled multiplicative formal group law \(x+y+(q-1)xy\), then \[\pi_{*}\mathrm{HP}(\mathrm{ku}_{p}^{\wedge}[t]/\mathrm{ku}_{p}^{\wedge})\cong\mathbf{Z}_{p}\llbracket q-1\rrbracket((\hbar))\times\lim_{k\to\infty}\bigoplus_{n\geq 1}\Sigma\mathbf{Z}_{p}\llbracket q-1\rrbracket((\hbar))/((q-1)^{k+1},\langle n\rangle_{\mathbf{G}_{m}}(\hbar)). \tag{39}\] Moreover, \(\pi_{*}\mathrm{HP}(\mathrm{BP}\langle 1\rangle[t]/\mathrm{BP}\langle 1\rangle)\cong\pi_{*}\mathrm{HP}(\mathrm{ku}_{p}^{\wedge}[t]/\mathrm{ku}_{p}^{\wedge})^{\mathbf{F}_{p}^{\times}}\), where \(\mathbf{F}_{p}^{\times}\) acts on \(\mathrm{HP}(\mathrm{ku}_{p}^{\wedge}[t]/\mathrm{ku}_{p}^{\wedge})\) via its action by Adams operations on \(\mathrm{ku}_{p}^{\wedge}\). Note that since \(\mathbf{F}_{p}^{\times}\) has order coprime to \(p\), taking \(\mathbf{F}_{p}^{\times}\)-invariants preserves small limits and colimits after \(p\)-localization. 
In particular, \(\pi_{*}\mathrm{HP}(\mathrm{BP}\langle 1\rangle[t]/\mathrm{BP}\langle 1\rangle)\) is isomorphic to \(\pi_{*}\mathrm{TP}(\mathbf{Z}_{p}[t]/T(1))\). The following is also a consequence of the forthcoming work of Arpon Raksit ([10]) mentioned above. **Lemma 3.3.4**.: _There is a \(\mathbf{Z}_{p}^{\times}\)-equivariant isomorphism_ \[\mathrm{H}^{*}(q\Omega_{\mathbf{Z}_{p}[t]})((\hbar))\cong\mathbf{Z}_{p}\llbracket q-1\rrbracket((\hbar))\times\bigoplus_{n\geq 1}\Sigma\mathbf{Z}_{p}\llbracket q-1\rrbracket((\hbar))/\langle n\rangle_{\mathbf{G}_{m}}(\hbar).\] Proof.: For the formal group law over \(\operatorname{ku}_{p}^{\wedge}\), we have \[\langle n\rangle_{\mathbf{G}_{m}}(\hbar)=\sum_{i=1}^{n}\binom{n}{i}\hbar^{i-1}\beta^{i-1}=[n]_{q}\in\mathbf{Z}_{p}[\beta][\hbar],\] where \(q:=1+\beta\hbar\). The claim now follows from the fact that the differential \(\nabla_{q}\) in \(q\Omega_{\mathbf{Z}_{p}[t]}\) sends \(t^{n}\mapsto[n]_{q}t^{n-1}dt\). In particular, \(\pi_{*}\mathrm{HP}(\mathrm{BP}\langle 1\rangle[t]/\mathrm{BP}\langle 1\rangle)\cong\pi_{*}\mathrm{TP}(\mathbf{Z}_{p}[t]/T(1))\) is a \(2\)-periodification of a completion of \(\mathrm{H}^{*}(\widetilde{p}\Omega_{\mathbf{Z}_{p}[t]})\). This calculation leads to the following expectation related to Conjecture 3.1.14: **Conjecture 3.3.5**.: _Let \(R\) be an animated \(\mathbf{Z}_{p}\)-algebra. Then \(\mathrm{TP}(R/X(p))\) admits a motivic filtration \(\mathrm{F}_{\mathrm{mot}}^{*}\mathrm{TP}(R/X(p))\) such that \(\mathrm{gr}_{\mathrm{mot}}^{i}\mathrm{TP}(R/X(p))\simeq\hat{\mathbb{A}}_{R/\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket}[2i]\otimes_{R}\epsilon^{R}\), where \(\hat{\mathbb{A}}_{R/\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket}\) is the Nygaard completion of \(\widetilde{p}\Omega_{R}\)._ We now turn to a higher chromatic analogue of (part of) this picture. **Definition 3.3.6**.: Let \(R\) be an \(\mathbf{E}_{2}\)-ring, and equip \(\mathrm{HH}(R[t]/R):=\mathrm{THH}(S[t])\otimes R\) with the \(S^{1}\)-action inherited from \(\mathrm{THH}(S[t])\) and the trivial action on \(R\). **Warning 3.3.7**.: If \(R\) is only an \(\mathbf{E}_{2}\)-ring, one _cannot_ define Hochschild homology relative to \(R\); in particular, the notation \(\mathrm{HH}(R[t]/R)\) is rather abusive. As explained in [1, Corollary 2.9], if \(R^{\prime}\) is an \(\mathbf{E}_{1}\)-\(R\)-algebra, then \(\mathrm{HH}(R^{\prime}/R)\) only exists (and has a natural \(S^{1}\)-action) when \(R\) is a _framed_ \(\mathbf{E}_{2}\)-ring14. In other words, if \(R\) is merely an \(\mathbf{E}_{2}\)-ring, it would not be clear how to define \(\mathrm{HH}(R[t]/R)\), had we not known that \(R[t]\) admits a lift to the sphere spectrum. This leads to the following unfortunate warning: if \(R\) is an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring with a nontrivial \(S^{1}\)-action, then the (more natural) circle action on \(\mathrm{HH}(R[t]/R)\) arising via the \(S^{1}\)-action on \(R\) _cannot_ necessarily be identified with the circle action from Definition 3.3.6. However, for this article, we will only use the circle action from Definition 3.3.6. Footnote 14: Suppose that \(R\) is an \(\mathbf{E}_{n}\)-algebra for some \(n\geq 3\), and \(\mathscr{C}\) is an \(R\)-linear \(\infty\)-category. The choice of a framed knot in \(\mathbf{R}^{3}\) also defines an \(\mathbf{E}_{n-3}\)-map \(\int_{S^{1}}R\to R\), and hence allows one to define relative Hochschild homology \(\mathrm{HH}(\mathscr{C}/R)\). However, this does not define an \(S^{1}\)-action on \(\mathrm{HH}(\mathscr{C}/R)\)! Thanks to Robert Burklund for this point. 
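Before making the graded definitions below precise, it may help to recall the weight decomposition that underlies them; as in (38), it comes from the \(S^{1}\)-equivariant splitting of \(\operatorname{THH}(S[t])\), and for a single polynomial variable it reads \[\operatorname{HH}(\mathrm{BP}\langle n\rangle[t]/\mathrm{BP}\langle n\rangle)\simeq\mathrm{BP}\langle n\rangle(0)\oplus\bigoplus_{m\geq 1}\mathrm{BP}\langle n\rangle\otimes(S^{1}/\mu_{m})_{+}(m),\] where \((m)\) denotes the weight-\(m\) graded piece. The graded Tate constructions that follow are then computed weight by weight against this splitting.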
View \(\mathrm{BP}\langle n-1\rangle[t_{1},\cdots,t_{j}]\) (and likewise \(\mathrm{BP}\langle n\rangle[t_{1},\cdots,t_{j}]\)) as a \(\mathbf{Z}_{\geq 0}^{j}\)-graded ring, where \(t_{i}\) has weight \((0,\cdots,1,\cdots,0)\). Then, define \(\mathrm{HP}^{\mathrm{gr}}(\mathrm{BP}\langle n\rangle[t_{1},\cdots,t_{j}]/\mathrm{BP}\langle n\rangle)\) to be the \(S^{1}\)-Tate construction of \(\mathrm{HH}(\mathrm{BP}\langle n\rangle[t_{1},\cdots,t_{j}]/\mathrm{BP}\langle n\rangle)\), taken internally to \(\mathbf{Z}_{\geq 0}^{j}\)-graded \(\mathrm{BP}\langle n\rangle\)-modules. Similarly, define \(\mathrm{TP}^{\mathrm{gr}}(\mathrm{BP}\langle n-1\rangle[t_{1},\cdots,t_{j}]/X(p^{n}))\) to be the \(S^{1}\)-Tate construction of \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle[t_{1},\cdots,t_{j}]/X(p^{n}))\) taken internally to \(\mathbf{Z}_{\geq 0}^{j}\)-graded \(\mathrm{BP}\langle n-1\rangle\)-modules. Then, related to Conjecture 3.3.5, we have the following result (which, when \(n=0\), is a very special case of the main result of [19]): **Proposition 3.3.8**.: _There is a \(p\)-complete isomorphism of \(\mathbf{Z}_{\geq 0}^{j}\)-graded modules equipped with a map from \(\pi_{*}\mathrm{BP}\langle n\rangle^{tS^{1}}[B\Delta_{n}]\cong\pi_{*}\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\):_ \[\pi_{*}\mathrm{HP}^{\mathrm{gr}}(\mathrm{BP}\langle n\rangle[t_{1},\cdots,t_{j}]/\mathrm{BP}\langle n\rangle)[B\Delta_{n}]\cong\pi_{*}\mathrm{TP}^{\mathrm{gr}}(\mathrm{BP}\langle n-1\rangle[t_{1},\cdots,t_{j}]/X(p^{n})).\] _The map \(\mathrm{TP}^{\mathrm{gr}}(\mathrm{BP}\langle n-1\rangle[t]/X(p^{n}))\to\mathrm{TP}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))\) is an equivalence after \(K(n)\)-localization._ Proof.: For simplicity, we assume that \(j=1\) and write \(t\) instead of \(t_{1}\). In the graded setting, we may commute the \(S^{1}\)-Tate construction with the infinite direct sum (i.e., Lemma 3.3.2 is not necessary). It follows that there are _graded_ equivalences \[\mathrm{TP}^{\mathrm{gr}}(\mathrm{BP}\langle n-1\rangle[t]/X(p^{n}),(t)) \simeq\bigoplus_{m\geq 1}\Sigma\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/X(p^{n}))^{t\mathbf{Z}/m}(m),\] \[\mathrm{HP}^{\mathrm{gr}}(\mathrm{BP}\langle n\rangle[t]/\mathrm{BP}\langle n\rangle,(t)) \simeq\bigoplus_{m\geq 1}\Sigma\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/m}(m).\] The desired result now follows from Theorem 2.2.4(a). The second statement follows from the above equivalences and the fact that \(L_{K(n)}(\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p^{m}})=0\) by Lemma 3.3.9. The proof above used the following (well-known) fact. **Lemma 3.3.9**.: _Let \(\mathrm{BP}\langle n\rangle\) denote any form of the truncated Brown-Peterson spectrum. Then we have \(L_{K(n)}(\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p^{m}})=0\)._ Proof.: We first observe that \(\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p^{m}}\) depends only on the \(p\)-completion of \(\mathrm{BP}\langle n\rangle\); indeed, the obvious variant of [11, Lemma I.2.9] shows that if \(X\) is a bounded-below spectrum with \(\mathbf{Z}/p^{m}\)-action, then \(X^{t\mathbf{Z}/p^{m}}\) is \(p\)-complete, and the map \(X^{t\mathbf{Z}/p^{m}}\to(X^{\wedge}_{p})^{t\mathbf{Z}/p^{m}}\) is an equivalence. Since all forms of \(\mathrm{BP}\langle n\rangle\) are equivalent after \(p\)-completion by [1], we may therefore reduce to proving the claim for a single form of \(\mathrm{BP}\langle n\rangle\). 
To show that \(L_{K(n)}(\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p^{m}})=0\), it suffices to show (since \(\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p^{m}}\) is an MU-module) that \((\pi_{*}\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p^{m}}[\frac{1}{v_{n}}])/(p,\cdots,v_{n-1})\cong\pi_{*}k(n)^{t\mathbf{Z}/p^{m}}[\frac{1}{v_{n}}]=0\). Recall that \(\pi_{*}\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p^{m}}\cong\mathrm{BP}\langle n\rangle_{*}((\hbar))/[p^{m}](\hbar)\). We will work with the form of \(\mathrm{BP}\langle n\rangle\) such that the associated formal group law over \(\pi_{*}\mathrm{BP}\langle n\rangle\) induces the Honda formal group law over \(\pi_{*}k(n)\). Then, the \(p^{m}\)-series of the formal group law over \(\pi_{*}k(n)\) satisfies \([p^{m}](\hbar)=v_{n}^{\frac{p^{nm}-1}{p^{n}-1}}\hbar^{p^{nm}}\); so \(\pi_{*}k(n)^{t\mathbf{Z}/p^{m}}\cong\mathbf{F}_{p}[v_{n}]((\hbar))/v_{n}^{\frac{p^{nm}-1}{p^{n}-1}}\). In particular, \(v_{n}\) is nilpotent in \(k(n)^{t\mathbf{Z}/p^{m}}\), so that \(\pi_{*}k(n)^{t\mathbf{Z}/p^{m}}[\frac{1}{v_{n}}]=0\). **Remark 3.3.10**.: In general, \(\pi_{*}\mathrm{HP}(\mathrm{BP}\langle n\rangle[t]/\mathrm{BP}\langle n\rangle)\) looks like a completion of the \(2\)-periodification of the cohomology of the following two-term complex: \[\mathrm{BP}\langle n\rangle_{*}[\![\hbar]\!][t]\xrightarrow{\nabla}\mathrm{BP}\langle n\rangle_{*}[\![\hbar]\!][t]dt,\ \nabla:t^{m}\mapsto\tfrac{[m]_{\mathrm{BP}\langle n\rangle}(\hbar)}{\hbar}t^{m-1}dt. \tag{40}\] This is a variant of the \(q\)-de Rham complex, and was first considered by Arpon Raksit (in forthcoming work). Note that an analogue of (40) can be defined for a formal group law \(F(x,y)\) over any commutative ring \(A\): \[F\Omega_{A[t]/A}:=\left(A[\![\hbar]\!][t]\xrightarrow{\nabla}A[\![\hbar]\!][t]dt\right),\ \nabla:t^{m}\mapsto\tfrac{[m](\hbar)}{\hbar}t^{m-1}dt; \tag{41}\] we will study basic combinatorial properties of such complexes in [1]. After base-changing to \(\mathbf{Q}\), the operator \(\nabla\) can be characterized by the formula \(\hbar t\nabla=\exp_{F}(t\partial_{t}\log_{F}(\hbar))\). We also have: **Proposition 3.3.11**.: _If \(\mathscr{C}\) is a left \(\mathrm{BP}\langle n-1\rangle\)-linear \(\infty\)-category, and \(\mathscr{C}[t]\) denotes \(\mathscr{C}\otimes_{\mathrm{BP}\langle n-1\rangle}\mathrm{BP}\langle n-1\rangle[t]\), then Conjecture 2.2.18 implies that the map \(L_{K(n)}\mathrm{TP}^{\mathrm{gr}}(\mathscr{C}[t]/X(p^{n}))\to L_{K(n)}\mathrm{TP}(\mathscr{C}/X(p^{n}))\) is an equivalence._ Proof.: Observe that \[\operatorname{THH}(\mathscr{C}[t]/X(p^{n}))\simeq\operatorname{THH}(\mathscr{C}/X(p^{n}))(0)\oplus\bigoplus_{m\geq 1}(S^{1}/\mu_{m})_{+}\otimes\operatorname{THH}(\mathscr{C}/X(p^{n}))(m),\] so that \[\operatorname{TP}^{\operatorname{gr}}(\mathscr{C}[t]/X(p^{n}))\simeq\operatorname{TP}(\mathscr{C}/X(p^{n}))(0)\oplus\bigoplus_{m\geq 1}\Sigma\operatorname{THH}(\mathscr{C}/X(p^{n}))^{t\mathbf{Z}/m}(m).\] Now, Conjecture 2.2.18 implies that \(\operatorname{THH}(\mathscr{C}/X(p^{n}))^{t\mathbf{Z}/m}\) is a \(\operatorname{BP}\langle n\rangle^{t\mathbf{Z}/m}\)-module. But \(L_{K(n)}(\operatorname{BP}\langle n\rangle^{t\mathbf{Z}/m})=0\) for every \(m\geq 1\) by (the argument of) Lemma 3.3.9, so that \(L_{K(n)}\operatorname{TP}^{\operatorname{gr}}(\mathscr{C}[t]/X(p^{n}))\simeq L_{K(n)}\operatorname{TP}(\mathscr{C}/X(p^{n}))\), as desired. **Example 3.3.12**.: Let \(n=0\), and suppose \(\mathscr{C}\) is the \(\infty\)-category of quasicoherent sheaves on an \(\mathbf{F}_{p}\)-scheme \(X\). 
Then Proposition 3.3.11 says that the map \(\operatorname{TP}^{\operatorname{gr}}(\mathbf{A}^{1}\times X)\to\operatorname{TP}(X)\) is a rational equivalence. This is generally not true in the non-graded setting. **Remark 3.3.13**.: Note that the functor \(L_{K(0)}\mathrm{TP}\) is _not_ nil-invariant; the same is true of the functor \(L_{K(n)}\mathrm{TP}(-/T(n))\) on \(\operatorname{BP}\langle n-1\rangle\)-algebras. Indeed, [10, Theorem 1.1] says that the map \(L_{K(0)}\mathrm{TP}(\mathbf{F}_{p}[t]/t^{k})\to L_{K(0)}\mathrm{TP}(\mathbf{F}_{p})\simeq\mathbf{Q}_{p}^{tS^{1}}\) is an isomorphism if and only if \(k\) is a power of \(p\). We can also see this at the level of algebra by calculating the crystalline cohomology of \(\mathbf{F}_{p}[t]/t^{k}\). If \(R\) denotes the \(p\)-completion of the PD-envelope of the quotient map \(\mathbf{Z}_{p}[t]\to\mathbf{F}_{p}[t]/t^{k}\) (so that \(R=\mathbf{Z}_{p}\left[t,\frac{t^{kj}}{j!}|j\geq 1\right]_{p}^{\wedge}\)), then [1, Theorem 7.23] implies that \(\Gamma_{\mathrm{crys}}((\mathbf{F}_{p}[t]/t^{k})/\mathbf{Z}_{p})\) is quasi-isomorphic to the de Rham complex \(\Omega_{R/\mathbf{Z}_{p}}^{\bullet}\). Note that \(R\) is additively isomorphic to the \(p\)-completion of \(\bigoplus_{0\leq i\leq k-1}\bigoplus_{j\geq 0}\mathbf{Z}_{p}\{\frac{t^{kj+i}}{j!}\}\). Since the derivative of \(\frac{t^{kj+i}}{j!}\) is \((kj+i)\frac{t^{kj+i-1}}{j!}\), which simplifies to \(k\frac{t^{k(j-1)+k-1}}{(j-1)!}\) when \(i=0\), we find that \(\pi_{0}\Gamma_{\mathrm{crys}}((\mathbf{F}_{p}[t]/t^{k})/\mathbf{Z}_{p})\cong\mathbf{Z}_{p}\), and \[\pi_{-1}\Gamma_{\mathrm{crys}}((\mathbf{F}_{p}[t]/t^{k})/\mathbf{Z}_{p})\cong \left(\bigoplus_{j\geq 0}\mathbf{Z}_{p}/k\cdot\left\{\frac{t^{k(j+1)-1}}{j!}\right\}\right)_{p}^{\wedge}\] \[\oplus\left(\bigoplus_{0\leq i<k-1}\bigoplus_{j\geq 0}\mathbf{Z}_{p}/(kj+i+1)\cdot\left\{\frac{t^{kj+i}}{j!}\right\}\right)_{p}^{\wedge}.\] For instance, suppose \(k=p\). Then \(pj+i+1\equiv i+1\pmod{p}\), which is never zero since \(0\leq i<p-1\). Therefore, the second summand is zero since \(pj+i+1\) is a \(p\)-adic unit, and we find that \(\pi_{-1}\Gamma_{\mathrm{crys}}((\mathbf{F}_{p}[t]/t^{p})/\mathbf{Z}_{p})\cong\left(\bigoplus_{j\geq 0}\mathbf{Z}/p\cdot\left\{\frac{t^{p(j+1)-1}}{j!}\right\}\right)_{p}^{\wedge}\). However, if \(k\) is not a power of \(p\), the second summand contains a non-torsion piece; for example, if \(k=2\) and \(p\) is odd, the second summand contains the \(p\)-completion of \(\bigoplus_{m\geq 0}\mathbf{Z}/p^{m}\), which is non-torsion. **Example 3.3.14**.: Let \(n=1\); then, Proposition 3.3.11 says that Conjecture 2.2.18 implies that up to a Nygaard-type completion, \(L_{K(1)}\mathrm{TP}^{\operatorname{gr}}(R[t]/X(p))\xrightarrow{\sim}L_{K(1)}\mathrm{TP}(R/X(p))\) for any \(\mathbf{E}_{1}\)-\(\mathbf{Z}_{p}\)-algebra \(R\). In the non-graded setting, this is generally not true; this is in contrast to [1, Corollary 4.24] (for instance), which says that \(K(1)\)-local algebraic K-theory is \(\mathbf{A}^{1}\)-invariant on connective \(K(1)\)-acyclic ring spectra (in particular, on connective \(\mathbf{E}_{1}\)-\(\mathbf{Z}_{p}\)-algebras). Let us now pivot to a slightly different topic, working at the famed prime \(p=2\). Then \(\widetilde{p}\Omega_{R}=q\Omega_{R}\), and there is an interesting action of \(\mathbf{Z}/2\subseteq\mathbf{Z}_{2}^{\times}\) on \(q\Omega_{R}\) sending \(q\mapsto q^{-1}\). 
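For instance, the involution already acts on \(q\)-integers by a unit twist: \[[n]_{q^{-1}}=\frac{q^{-n}-1}{q^{-1}-1}=q^{1-n}[n]_{q},\] so each \(q\)-integer is carried to a unit multiple of itself. This simple identity is consistent with (though of course does not by itself construct) an action of \(q\mapsto q^{-1}\) on the \(q\)-de Rham complex.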
If we view \(q\) as the Chern class (in K-theory) of the tautological line bundle on \(\mathbf{C}P^{\infty}\), this corresponds to the action of \(\mathbf{Z}/2\) on \(\mathbf{C}P^{\infty}\) given by complex conjugation. This motivates the following discussion: **Remark 3.3.15**.: We expect that most of the results and conjectures in this article continue to hold with \(\mathbf{Z}/2\)-equivariance, where "real" topological Hochschild homology \(\mathrm{THH}_{\mathbf{R}}\) is interpreted to mean the construction described in [1]. Recall that \(\mathbf{Z}/2\) acts on \(\mathrm{SU}(n)\) by complex conjugation; we will denote this \(\mathbf{Z}/2\)-space by \(\mathrm{SU}(n)_{\mathbf{R}}\). Let \(\sigma\) (resp. \(\rho=\sigma+1\)) denote the sign representation (resp. regular representation), and if \(X\) is a \(\mathbf{Z}/2\)-space, let \(\Omega^{\sigma}X\) denote the space of maps \(\mathrm{Map}(S^{\sigma},X)\). There is a \(\mathbf{Z}/2\)-equivariant \(\mathbf{E}_{\rho}\)-map \[\Omega^{\sigma}\mathrm{SU}(n)_{\mathbf{R}}\simeq\Omega^{\rho}\mathrm{BSU}(n)_{\mathbf{R}}\to\Omega^{\rho}\mathrm{BSU}_{\mathbf{R}}\simeq\mathrm{BU}_{\mathbf{R}},\] which equips its Thom spectrum \(X(n)_{\mathbf{R}}\) with the structure of an \(\mathbf{E}_{\rho}\)-ring. One can show that the equivariant Quillen idempotent on \(\mathrm{MU}_{\mathbf{R}}\) restricts to an idempotent on \(X(n)_{\mathbf{R}}\), and we will write \(T(n)_{\mathbf{R}}\) to denote the resulting summand of \(X(2^{n})_{\mathbf{R}}\). Moreover, \(\Phi^{\mathbf{Z}/2}T(n)_{\mathbf{R}}\simeq y(n)\) as \(2\)-local \(\mathbf{E}_{1}\)-algebras. We then expect: **Conjecture 3.3.16**.: _The following are true:_ (a) _\(T(n)_{\mathbf{R}}\) admits the structure of an \(\mathbf{E}_{\rho}\rtimes\mathrm{U}(1)_{\mathbf{R}}\)-algebra, and \(X(2^{n})_{\mathbf{R}}\) splits as a direct sum of shifts of \(T(n)_{\mathbf{R}}\) such that the inclusion \(T(n)_{\mathbf{R}}\to X(2^{n})_{\mathbf{R}}\) of the unit summand is a map of \(\mathbf{E}_{\rho}\)-algebras. In particular, \(\mathrm{THH}_{\mathbf{R}}(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}/T(n)_{\mathbf{R}})\) exists and admits a \(\mathrm{U}(1)_{\mathbf{R}}\)-action_15_._ Footnote 15: Note that \(\mathrm{U}(1)_{\mathbf{R}}=S^{\sigma}\). (b) _Let \(\mathrm{BP}\langle n\rangle_{\mathbf{R}}\) denote the Real truncated Brown-Peterson spectrum. Then there are equivalences_ \[\mathrm{THH}_{\mathbf{R}}(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}/T(n)_{\mathbf{R}}) \simeq\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}[\Omega S^{2^{n}\rho+1}], \tag{42}\] \[\mathrm{THH}_{\mathbf{R}}(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}/T(n-1)_{\mathbf{R}}) \simeq\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}\oplus\bigoplus_{j\geq 1}\Sigma^{2^{n-1}j\rho-1}\mathrm{BP}\langle n\rangle_{\mathbf{R}}/j \tag{43}\] _of \(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}\)-modules. The second equivalence requires \(n\geq 1\). Furthermore, the class in \(\pi_{2^{n}\rho}\mathrm{THH}_{\mathbf{R}}(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}/T(n)_{\mathbf{R}})\) induced by the map \(E:S^{2^{n}\rho}\to\Omega S^{2^{n}\rho+1}\) detects \(\sigma^{2}(\underline{v}_{n})\)._ (c) _There is a \(\mathbf{Z}/2\)-equivariant space \(\widetilde{K}_{n}\) and an equivariant fibration_ \[S^{2^{n}\rho-1}\to\widetilde{K}_{n}\to\Omega S^{2^{n}\rho+1}\] _such that \(\widetilde{K}_{0}=\Omega S^{\rho}\) and \(\widetilde{K}_{1}=\Omega S^{\rho+1}\langle\rho+1\rangle\). For \(n=0\), this is simply the EHP sequence for \(S^{\sigma}\). 
The boundary map \(\Omega^{2}S^{2^{n+1}+1}\to S^{2^{n+1}-1}\) of the underlying fibration is degree \(2\) on the bottom cell of the source, and \((\widetilde{K}_{n})^{\mathbf{Z}/2}=K_{n-1}\) as \((S^{2^{n}\rho-1})^{\mathbf{Z}/2}=S^{2n-1}\)-fibrations over \((\Omega S^{2^{n}\rho+1})^{\mathbf{Z}/2}=\Omega S^{2^{n}+1}\)._ (d) _For any \(\mathbf{Z}/2\)-equivariant \(\mathbf{E}_{\sigma}\)-\(T(n)_{\mathbf{R}}\)-algebra \(R\), there is an equivariant cofiber sequence_ \[\mathrm{THH}_{\mathbf{R}}(R/T(n-1)_{\mathbf{R}})\to\mathrm{THH}_{\mathbf{R}}(R/T(n)_{\mathbf{R}})\to\Sigma^{2^{n}\rho}\mathrm{THH}_{\mathbf{R}}(R/T(n)_{\mathbf{R}}),\] _where the second map is a \(\mathbf{Z}/2\)-equivariant analogue of the topological Sen operator._ (e) _Let_ \[\mathrm{TP}_{\mathbf{R}}(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}/T(n)_{\mathbf{R}}):=\mathrm{THH}_{\mathbf{R}}(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}/T(n)_{\mathbf{R}})^{t_{C_{2}}\mathrm{U}(1)_{\mathbf{R}}},\] _where the notation "\(t_{C_{2}}\mathrm{U}(1)_{\mathbf{R}}\)" means the parametrized Tate construction from [1, Remark 1.17]. Then there is a \(\mathbf{Z}/2\)-equivariant equivalence_ \[\mathrm{TP}_{\mathbf{R}}(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}/T(n)_{\mathbf{R}})\simeq\mathrm{BP}\langle n\rangle_{\mathbf{R}}^{t_{C_{2}}\mathrm{U}(1)_{\mathbf{R}}},\] _where \(\mathrm{U}(1)_{\mathbf{R}}\) acts trivially on \(\mathrm{BP}\langle n\rangle_{\mathbf{R}}\)._ (f) _Let \(R\) be a \(2\)-complete animated commutative ring, equipped with the trivial \(\mathbf{Z}/2\)-action. Then there is a \(\mathbf{Z}/2\)-equivariant filtration on \(\mathrm{TP}_{\mathbf{R}}(R/T(1)_{\mathbf{R}})\) such that_ \[\mathrm{gr}_{\mathrm{mot}}^{i}\mathrm{TP}_{\mathbf{R}}(R/T(1)_{\mathbf{R}})\simeq(\hat{\mathbb{A}}_{R/\mathbf{Z}\llbracket q-1\rrbracket})_{2}^{\wedge}[2i]=(q\hat{\Omega}_{R})_{2}^{\wedge}[2i],\] _where the \(\mathbf{Z}/2\)-action on the right-hand side is obtained by viewing \(\mathbf{Z}/2\subseteq\mathbf{Z}_{2}^{\times}\cong\mathbf{Z}/2\times(1+4\mathbf{Z}_{2})\) and using the \(\mathbf{Z}_{2}^{\times}\)-action on the \(2\)-completed \(q\)-de Rham complex._ **Example 3.3.17**.: Note that [1, Theorem 5.18] and [1, Theorem A.1] prove (42) for \(n=0\) and (43) for \(n=1\), respectively. **Remark 3.3.18**.: We expect that \(\mathrm{BP}\langle n\rangle_{\mathbf{R}}^{\mathbf{Z}/2}\otimes T(n)\) is concentrated in even degrees. **Proposition 3.3.19**.: _The equivalence (42) is true for \(n=1\)._ Proof sketch.: This can be proved analogously to Theorem 2.2.4(a) for \(n=1\) using the equivariant Toda fiber sequence \[S^{\rho+\sigma}\to\Omega S^{\rho+1}\langle\rho+1\rangle\to\Omega S^{2\rho+1}\] of [1, Equation 7.1].16 Indeed, recall from [1, Example 7.1.3] that \(X(2)_{\mathbf{R}}\) is the Thom spectrum of the map \(\Omega^{\sigma}S^{\rho+\sigma}\to\mathrm{BGL}_{1}(S)\) detecting \(\widetilde{\eta}\in\pi_{\sigma}S\). By an argument similar to [1], this implies that \(\mathrm{THH}_{\mathbf{R}}(X(2)_{\mathbf{R}})\simeq X(2)_{\mathbf{R}}[S^{\rho+\sigma}]\). Therefore: Footnote 16: See also [1, Theorem 7.2.1], which says that \(\underline{\mathbf{Z}}\) is the Thom spectrum of a map \(\Omega^{\rho}S^{2\rho+1}\to\mathrm{BGL}_{1}(X(2)_{\mathbf{R}})\) whose bottom cell detects \(\underline{v}_{1}\in\pi_{\rho}X(2)_{\mathbf{R}}\). 
\[\mathrm{THH}_{\mathbf{R}}(\underline{\mathbf{Z}}/X(2)_{\mathbf{R}}) \simeq\underline{\mathbf{Z}}[\Omega S^{\rho+1}\langle\rho+1\rangle]\otimes_{X(2)_{\mathbf{R}}[S^{\rho+\sigma}]}X(2)_{\mathbf{R}}\] \[\simeq\underline{\mathbf{Z}}[\Omega S^{\rho+1}\langle\rho+1\rangle]\otimes_{\underline{\mathbf{Z}}[S^{\rho+\sigma}]}\underline{\mathbf{Z}}\simeq\underline{\mathbf{Z}}[\Omega S^{2\rho+1}],\] where the last equivalence uses the equivariant Toda fiber sequence. **Example 3.3.20**.: Let us note some additional evidence for Conjecture 3.3.16(a): if \(X\) is a \(\mathbf{Z}/2\)-space, then the cofiber sequence \((\mathbf{Z}/2)_{+}\to S^{0}\to S^{\sigma}\) of spaces implies that \((\Omega^{\sigma}X)^{\mathbf{Z}/2}\) is equivalent to the fiber of the canonical map \(X^{\mathbf{Z}/2}\to X\). In particular, since \((\mathrm{SU}(n)_{\mathbf{R}})^{\mathbf{Z}/2}=\mathrm{SO}(n)\), we see that \((\Omega^{\sigma}\mathrm{SU}(n)_{\mathbf{R}})^{\mathbf{Z}/2}\simeq\Omega(\mathrm{SU}(n)/\mathrm{SO}(n))\). Since the geometric fixed points functor preserves colimits, this implies that \(\Phi^{\mathbf{Z}/2}X(n)_{\mathbf{R}}\) is the Thom spectrum of the map \[(\Omega^{\sigma}\mathrm{SU}(n)_{\mathbf{R}})^{\mathbf{Z}/2}\simeq\Omega(\mathrm{SU}(n)/\mathrm{SO}(n))\to\Omega(\mathrm{SU}/\mathrm{SO})\simeq\mathrm{BO}\simeq\mathrm{BU}_{\mathbf{R}}^{\mathbf{Z}/2}.\] Since \(\Phi^{\mathbf{Z}/2}T(n)_{\mathbf{R}}\simeq y(n)\) as \(2\)-local \(\mathbf{E}_{1}\)-algebras, Conjecture 3.3.16(a) would imply that \(\Phi^{\mathbf{Z}/2}X(2^{n})_{\mathbf{R}}\) (i.e., the Thom spectrum of the map \(\Omega(\mathrm{SU}(2^{n})/\mathrm{SO}(2^{n}))\to\mathrm{BO}\)) is a direct sum of shifts of \(y(n)\) such that the inclusion \(y(n)\to\Phi^{\mathbf{Z}/2}X(2^{n})_{\mathbf{R}}\) of the unit summand is an \(\mathbf{E}_{1}\)-map. This is indeed true, and was proved in [22]. **Example 3.3.21**.: The strongest evidence for Conjecture 3.3.16(d) is the following. It follows from [16, Construction 7.1.1] that there is a map \(\Omega S^{n\rho-1}\to\mathrm{BGL}_{1}(X(n-1)_{\mathbf{R}})\) whose Thom spectrum is \(X(n)_{\mathbf{R}}\). The same construction used to prove Theorem 3.1.4 then shows that for any \(\mathbf{Z}/2\)-equivariant \(\mathbf{E}_{\sigma}\)-\(X(n)_{\mathbf{R}}\)-algebra \(R\), there is an equivariant cofiber sequence \[\mathrm{THH}_{\mathbf{R}}(R/X(n-1)_{\mathbf{R}})\to\mathrm{THH}_{\mathbf{R}}(R/X(n)_{\mathbf{R}})\to\Sigma^{n\rho}\mathrm{THH}_{\mathbf{R}}(R/X(n)_{\mathbf{R}}), \tag{44}\] where the second map is a \(\mathbf{Z}/2\)-equivariant analogue of the topological Sen operator. It is not difficult to see that given the first half of Conjecture 3.3.16(a), Conjecture 3.3.16(d) can be easily proved using the construction of Theorem 3.1.4. For example, we have \(X(2)_{\mathbf{R}}=T(1)_{\mathbf{R}}\), and the cofiber sequence (44) is precisely Conjecture 3.3.16(d). For \(R=\underline{\mathbf{Z}}\), (44) becomes a cofiber sequence \[\underline{\mathbf{Z}}[\Omega S^{\rho+1}\langle\rho+1\rangle]\simeq\mathrm{THH}_{\mathbf{R}}(\underline{\mathbf{Z}})\to\mathrm{THH}_{\mathbf{R}}(\underline{\mathbf{Z}}/X(2)_{\mathbf{R}})\simeq\underline{\mathbf{Z}}[\Omega S^{2\rho+1}]\simeq\bigoplus_{n\geq 0}\Sigma^{2n\rho}\underline{\mathbf{Z}}\to\bigoplus_{m\geq 1}\Sigma^{2m\rho}\underline{\mathbf{Z}}.\] A version of this fiber sequence was in fact already studied in [11, Lemma A.3]. 
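On underlying spectra (i.e., after forgetting the \(\mathbf{Z}/2\)-action, so that \(\rho\) becomes \(2\)), this recovers, \(2\)-locally, the non-equivariant cofiber sequence of Theorem 3.1.4 for \(n=2\): \[\operatorname{THH}(\mathbf{Z})\to\operatorname{THH}(\mathbf{Z}/X(2))\simeq\mathbf{Z}[\Omega S^{5}]\simeq\bigoplus_{n\geq 0}\Sigma^{4n}\mathbf{Z}\to\bigoplus_{m\geq 1}\Sigma^{4m}\mathbf{Z}.\]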
**Remark 3.3.22**.: Conjecture 3.3.16(a) and Conjecture 3.3.16(d) together imply that \[\pi_{*}\Phi^{\mathbf{Z}/2}\mathrm{THH}_{\mathbf{R}}(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}})\simeq\mathbf{F}_{2}[t,\sigma^{2}(v_{n-1}),\sigma(v_{j-1})|1\leq j\leq n]/(\sigma(v_{j-1})^{2}),\] where \(|t|=2^{n}=|\sigma^{2}(v_{n-1})|\) and \(|\sigma(v_{j-1})|=2^{j}-1\). This can also be proved unconditionally using methods similar to that of [13, Theorem 5.23], by writing \[\Phi^{\mathbf{Z}/2}\mathrm{THH}_{\mathbf{R}}(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}})\simeq\Phi^{\mathbf{Z}/2}\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}\otimes_{\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}}\Phi^{\mathbf{Z}/2}\mathrm{BP}\langle n-1\rangle_{\mathbf{R}},\] and using that \(\pi_{*}\Phi^{\mathbf{Z}/2}\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}\cong\mathbf{F}_{2}[t]\). Note that if we assume Conjecture 3.3.16(c), then \(\mathrm{THH}_{\mathbf{R}}(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}/T(n-1)_{\mathbf{R}})\simeq\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}[\widetilde{K}_{n}]\); the conjectural equivalence \(\widetilde{K}_{n}^{\mathbf{Z}/2}=K_{n-1}\) then gives an equivalence \[\Phi^{\mathbf{Z}/2}\mathrm{THH}_{\mathbf{R}}(\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}/T(n-1)_{\mathbf{R}})\simeq\Phi^{\mathbf{Z}/2}\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}[K_{n-1}]. \tag{45}\] Observe that \[\pi_{*}\Phi^{\mathbf{Z}/2}\mathrm{BP}\langle n-1\rangle_{\mathbf{R}}[K_{n-1}]\cong\mathbf{F}_{2}[t,x,e]/e^{2},\] where \(|x|=2^{n}\) and \(|e|=2^{n}-1\). For instance, when \(n=1\), there is an equivalence \((\Omega S^{\rho+1}\langle\rho+1\rangle)^{\mathbf{Z}/2}\simeq\Omega S^{2}\), and (45) reduces to the equivalence \[\Phi^{\mathbf{Z}/2}\mathrm{THH}_{\mathbf{R}}(\underline{\mathbf{Z}})\simeq\Phi^{\mathbf{Z}/2}\underline{\mathbf{Z}}[\Omega S^{2}]\simeq(\tau_{\geq 0}\mathbf{F}_{2}^{tS^{1}})[\Omega S^{2}].\] ### Aside: the Segal conjecture In this section, we make some brief remarks regarding the Segal conjecture; the reader is referred to [11, Section 4] and [12, Section 5] for a discussion of its algebraic interpretation and a review of the literature on this topic. **Definition 3.4.1**.: An \(\mathbf{E}_{\infty}\)-ring \(R\) is said to satisfy the Segal conjecture if the cyclotomic Frobenius \(\mathrm{THH}(R)\to\mathrm{THH}(R)^{t\mathbf{Z}/p}\) is an equivalence in large degrees. **Example 3.4.2**.: Let \(R\) be a commutative \(\mathbf{F}_{p}\)-algebra. If \(R\) is Cartier smooth in the sense of [11, Section 2] and \(\Omega_{R/\mathbf{F}_{p}}^{n}=0\) for \(n\gg 0\), then \(R\) satisfies the Segal conjecture in the sense of Definition 3.4.1 (see [10, Corollary 9.5]). For instance, suppose \(R=k\) is a field of characteristic \(p>0\). Then \(\operatorname{THH}(k)\simeq\operatorname{HH}(k/\mathbf{F}_{p})[\sigma]\) as a module over \(\operatorname{THH}(\mathbf{F}_{p})\simeq\mathbf{F}_{p}[\sigma]\), and \(\pi_{i}\operatorname{HH}(k/\mathbf{F}_{p})=0\) for \(i>\log_{p}[k:k^{p}]=\dim_{k}\Omega_{k/\mathbf{F}_{p}}^{1}\). This implies that the localization map \(\operatorname{THH}(k)\to\operatorname{THH}(k)[\frac{1}{\sigma}]\simeq_{\varphi}\operatorname{THH}(k)^{t\mathbf{Z}/p}\) is an equivalence in degrees \(>\log_{p}[k:k^{p}]-2\). 
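For example, if \(k\) is perfect, then \([k:k^{p}]=1\) and the bound above says that the localization map is an equivalence in degrees \(>-2\). This is sharp: under the identifications of the previous paragraph, the map \[\pi_{*}\operatorname{THH}(k)\cong k[\sigma]\to\pi_{*}\operatorname{THH}(k)[\tfrac{1}{\sigma}]\cong k[\sigma^{\pm 1}]\] is an isomorphism in degrees \(\geq-1\), injective in all degrees, and fails to be surjective in every negative even degree.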
**Example 3.4.3**.: The proof of [10, Theorem 4.3.1 and Corollary 4.2.3] can be used to show that the map \(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle)\otimes_{\operatorname {BP}\langle n-1\rangle}\mathbf{F}_{p}\to\operatorname{THH}(\operatorname{BP} \langle n-1\rangle)^{t\mathbf{Z}/p}\otimes_{\operatorname{BP}\langle n-1\rangle} \mathbf{F}_{p}\) is an equivalence in degrees \(>n+\sum_{i=0}^{n-1}|v_{i}|=\sum_{i=0}^{n-1}(2p^{i}-1)=2\frac{p^{n}-1}{p-1}-n\). Note that \(2\frac{p^{n}-1}{p-1}-n\) is also precisely the shift appearing in Mahowald-Rezk duality for \(\operatorname{BP}\langle n\rangle\) (see [11, Corollary 9.3]). **Remark 3.4.4**.: Assume Conjecture 2.1.9, so that we can define the THH of a left \(T(n)\)-linear \(\infty\)-category relative to \(T(n)\). Since we do not know if THH relative to \(T(n)\) admits the structure of a cyclotomic spectrum (presumably it does not), it does not seem possible to state a direct analogue of Definition 3.4.1 in this context. However, recall that if \(k\) is a perfect field of characteristic \(p>0\) and \(R\) is an animated \(k\)-algebra, the cyclotomic Frobenius \(\varphi:\operatorname{THH}(R)\to\operatorname{THH}(R)^{t\mathbf{Z}/p}\) is the Frobenius-linear map given by inverting \(\sigma\in\pi_{2}\operatorname{THH}(k)\): this is a consequence of the observation that the map \(\varphi:\operatorname{THH}(k)\to\operatorname{THH}(k)^{t\mathbf{Z}/p}\) is given by composing the localization \(\operatorname{THH}(k)\to\operatorname{THH}(k)[\sigma^{-1}]\) with a Frobenius-linear equivalence \(\operatorname{THH}(k)[\sigma^{-1}]\simeq_{\operatorname{Frob}}\operatorname{ THH}(k)^{t\mathbf{Z}/p}\). This observation motivates the following terminology: we say that an \(\mathbf{E}_{1}\)-\(\operatorname{BP}\langle n-1\rangle\)-algebra \(R\) satisfies the "\(T(n)\)-Segal conjecture" if the base-change of the localization map \(\operatorname{THH}(R/T(n))\to\operatorname{THH}(R/T(n))[\theta_{n}^{-1}]\) along \(\operatorname{BP}\langle n-1\rangle\to\mathbf{F}_{p}=\operatorname{BP}\langle n -1\rangle/(p,\cdots,v_{n-1})\) is an equivalence in large degrees. Note that if \(n=1\), this is equivalent to saying that the \(p\)-completion of the map \(\operatorname{THH}(R/T(1))\to\operatorname{THH}(R/T(1))[\theta^{-1}]\) is an equivalence in large degrees. One can similarly say that an \(\mathbf{E}_{1}\)-\(\mathbf{Z}_{p}\)-algebra \(R\) satisfies the "\(J(p)\)-Segal conjecture" if the map \(\operatorname{THH}(R/J(p))\to\operatorname{THH}(R/J(p))[x^{-1}]\) is an equivalence in large degrees. **Proposition 3.4.5**.: _If we assume Conjecture 2.1.9, the localization map_ \[\gamma:\operatorname{THH}(\operatorname{BP}\langle n-1\rangle[x_{1},\cdots,x_{ d}]/T(n))\to\operatorname{THH}(\operatorname{BP}\langle n-1\rangle[x_{1}, \cdots,x_{d}]/T(n))[\theta_{n}^{-1}]\] _is an equivalence in degrees \(>d-2p^{n}\) after base-changing along \(\operatorname{BP}\langle n-1\rangle\to\mathbf{F}_{p}\). In particular, the flat polynomial algebra \(\operatorname{BP}\langle n-1\rangle[x_{1},\cdots,x_{d}]\) satisfies the \(T(n)\)-Segal conjecture._ Proof.: Write \(T:=\operatorname{THH}(\operatorname{BP}\langle n-1\rangle/T(n))\) for notational simplicity. 
Using (38), we have \[\operatorname{THH}(\operatorname{BP}\langle n-1\rangle[t]/T(n))[\theta^{-1}]\simeq T[\theta^{-1}]\oplus\bigoplus_{m\geq 1}T[\theta^{-1}]\otimes(S^{1}/\mu_{m})_{+}.\] Since the map \(T\to T[\theta^{-1}]\) is an equivalence in degrees \(>-2p^{n}\) after base-changing along \(\operatorname{BP}\langle n-1\rangle\to\mathbf{F}_{p}\), the map \(T\otimes(S^{1}/\mu_{m})_{+}\to T[\theta^{-1}]\otimes(S^{1}/\mu_{m})_{+}\) is an equivalence in degrees \(>-2p^{n}+1\) after base-changing along \(\operatorname{BP}\langle n-1\rangle\to\mathbf{F}_{p}\). Because the map \(\gamma:\operatorname{THH}(\operatorname{BP}\langle n-1\rangle[t]/T(n))\to\operatorname{THH}(\operatorname{BP}\langle n-1\rangle[t]/T(n))[\theta^{-1}]\) preserves the summands, we see that \(\gamma\) is an equivalence in degrees \(>-2p^{n}+1\) after base-changing along \(\operatorname{BP}\langle n-1\rangle\to\mathbf{F}_{p}\). Inducting on the number of variables, we find that the map \(\gamma\) is an equivalence in degrees \(>d-2p^{n}\) after base-changing along \(\operatorname{BP}\langle n-1\rangle\to\mathbf{F}_{p}\), as desired. **Remark 3.4.6**.: When \(d=0\), Proposition 3.4.5 should be compared to [10, Theorem 4.3.1]. In fact, we expect it is possible to recover their result using Proposition 3.4.5. We also note the following variant. Let \(R:=\operatorname{BP}\langle n-1\rangle[t_{1},\cdots,t_{d}]\) denote the flat polynomial \(\mathbf{E}_{2}\)-\(\operatorname{BP}\langle n-1\rangle\)-algebra on classes \(t_{i}\) in even degrees (i.e., the base-change of the \(\mathbf{E}_{\infty}\)-MU-algebra \(\operatorname{MU}[t_{1},\cdots,t_{d}]\) along the \(\mathbf{E}_{3}\)-map \(\operatorname{MU}\to\operatorname{BP}\langle n-1\rangle\)). The argument of Proposition 3.4.5 then shows that after base-change along the composite \(R\to\operatorname{BP}\langle n-1\rangle\to\mathbf{F}_{p}\), the localization map \(\gamma:\operatorname{THH}(R/T(n))\to\operatorname{THH}(R/T(n))[\theta_{n}^{-1}]\) is an equivalence in degrees \(>-2p^{n}+\sum_{j=1}^{d}(|t_{j}|+1)\). When \(n=0\), this is [10, Corollary 4.2.3]. **Proposition 3.4.7**.: _Let \(R\) be a \(p\)-torsionfree discrete commutative ring such that \(R/p\) is regular Noetherian. Suppose \(L\Omega_{R}^{n}=0\) for \(n>d\). Then Conjecture 3.1.14 implies that \(R\) satisfies the \(J(p)\)-Segal conjecture: in fact, the map \(\operatorname{THH}(R/J(p))\to\operatorname{THH}(R/J(p))[x^{-1}]\) is an equivalence in degrees \(>d-2\)._ Proof.: Recall that Conjecture 3.1.14 asserts that \(\operatorname{THH}(R/J(p))\) has a filtration such that \(\operatorname{gr}_{\operatorname{mot}}^{i}\operatorname{THH}(R/J(p))\simeq(\operatorname{F}_{i}^{\operatorname{conj}}\widehat{\Omega}_{R}^{\not{D}})[2i]\), and such that the map \(\gamma:\operatorname{THH}(R/J(p))\to\operatorname{THH}(R/J(p))[x^{-1}]\) induces the map \(\operatorname{F}_{i}^{\operatorname{conj}}\widehat{\Omega}_{R}^{\not{D}}\to\widehat{\Omega}_{R}^{\not{D}}\) on \(\operatorname{gr}_{\operatorname{mot}}^{i}[-2i]\). By [1, Remark 4.7.4], \(\widehat{\Omega}_{R}^{\not{D}}/\operatorname{F}_{i}^{\operatorname{conj}}\widehat{\Omega}_{R}^{\not{D}}\) is concentrated in cohomological degrees \(\geq i+1\), so that the cofiber of \(\operatorname{gr}^{i}(\gamma)\) is concentrated in degrees \(\leq 2i-(i+1)=i-1\). Moreover, the hypothesis that \(L\Omega_{R}^{n}=0\) for \(n>d\) implies that \(\gamma\) induces an equivalence on \(\operatorname{gr}_{\operatorname{mot}}^{i}\) for \(i\geq d\). 
Combining these observations gives the desired statement (see also the proof of [1, Corollary 9.5]). ### Aside: Cartier isomorphism In this section, we study a topological analogue of the Cartier isomorphism for the two-term complexes from Remark 3.3.10; we will study basic algebraic properties of these complexes in future work. To avoid dealing with completion issues, we use the following (see Warning 3.3.7 for a remark about the notation \(\operatorname{HH}(R[t]/R)\)): **Definition 3.5.1**.: Let \(R\) be an \(\mathbf{E}_{2}\)-ring. The polynomial \(\mathbf{E}_{1}\)-\(R\)-algebra \(R[t]=R[\mathbf{N}]\) acquires a natural \(\mathbf{Z}_{\geq 0}\)-grading, and we will write \(\operatorname{HH}(R[t]/R)_{\leq m}\) to denote the graded left \(R\)-module given by truncating \(\operatorname{HH}(R[t]/R):=R\otimes\operatorname{THH}(S[t])\) in weights \(\geq m+1\). Explicitly, \(\operatorname{HH}(R[t]/R)_{\leq m}\) is equivalent to \(R\oplus\left(\bigoplus_{1\leq n\leq m}R\otimes(S^{1}/\mu_{n})_{+}\right)\). **Lemma 3.5.2**.: _If \(X\in\operatorname{Sp}^{BS^{1}}\), the following composite is an equivalence:_ \[X^{t\mathbf{Z}/p}\otimes(S^{1}/\mu_{n})_{+}\xrightarrow{\sim}(X\otimes(S^{1}/\mu_{n})_{+})^{t\mathbf{Z}/p}\xrightarrow{\psi}(X\otimes(S^{1}/\mu_{np})_{+})^{t\mathbf{Z}/p}.\] _Moreover, if \(p\nmid m\), then \((X\otimes(S^{1}/\mu_{m})_{+})^{t\mathbf{Z}/p}=0\)._ Proof.: If \(\varphi:(S^{1}/\mu_{n})_{+}\to((S^{1}/\mu_{np})_{+})^{t\mathbf{Z}/p}\) denotes the unstable Frobenius (sending \(x\mapsto x^{1/p}\)), the cofiber of the composite \[\psi:(S^{1}/\mu_{n})_{+}\to((S^{1}/\mu_{np})_{+})^{t\mathbf{Z}/p}\to(S^{1}/\mu_{np})_{+}\] has induced \(\mathbf{Z}/p\)-action, where \((S^{1}/\mu_{n})_{+}\) and \(((S^{1}/\mu_{np})_{+})^{t\mathbf{Z}/p}\) are equipped with the trivial \(\mathbf{Z}/p\)-action. Therefore, the map \(\psi\) induces an equivalence \((X\otimes(S^{1}/\mu_{n})_{+})^{t\mathbf{Z}/p}\xrightarrow{\sim}(X\otimes(S^{1}/\mu_{np})_{+})^{t\mathbf{Z}/p}\), and the canonical map \(X^{t\mathbf{Z}/p}\otimes(S^{1}/\mu_{n})_{+}\to(X\otimes(S^{1}/\mu_{n})_{+})^{t\mathbf{Z}/p}\) is an equivalence (since \((S^{1}/\mu_{n})_{+}\) is a finite spectrum with trivial \(\mathbf{Z}/p\)-action). This gives the first claim. Finally, if \(p\nmid m\), then the \(\mathbf{Z}/p\)-action on \(S^{1}/\mu_{m}\) is free, so that \((X\otimes(S^{1}/\mu_{m})_{+})^{t\mathbf{Z}/p}=0\), as desired. **Proposition 3.5.3** (Cartier isomorphism).: _Let \(R\) be an \(\mathbf{E}_{2}\)-ring. Then:_ (a) _There is an \(S^{1}\)-equivariant map \(\mathfrak{C}:\operatorname{HH}(R^{t\mathbf{Z}/p}[t]/R^{t\mathbf{Z}/p})\to\operatorname{HH}(R[t]/R)^{t\mathbf{Z}/p}\), where \(\operatorname{HH}(R[t]/R)^{t\mathbf{Z}/p}\) is endowed with the residual \(S^{1}/\mu_{p}\)-action and \(\operatorname{HH}(R^{t\mathbf{Z}/p}[t]/R^{t\mathbf{Z}/p})\) is endowed with the diagonal \(S^{1}\)-action arising from the \(S^{1}\)-action on \(\operatorname{HH}\) and the residual \(S^{1}/\mu_{p}\)-action on \(R^{t\mathbf{Z}/p}\). Moreover, the map \(\mathfrak{C}\) sends \(t\mapsto t^{p}\)._ (b) _For each \(m\geq 1\), the map \(\mathfrak{C}\) induces an equivalence \(\mathfrak{C}_{\leq m}:\operatorname{HH}(R^{t\mathbf{Z}/p}[t]/R^{t\mathbf{Z}/p})_{\leq m}\xrightarrow{\sim}(\operatorname{HH}(R[t]/R)_{\leq mp})^{t\mathbf{Z}/p}\)._ Proof.: Recall that there is an equivalence \(\operatorname{HH}(R[t]/R)\simeq R\otimes\operatorname{THH}(S[t])\). 
Since the \(\mathbf{Z}/p\)-Tate construction is lax symmetric monoidal, we obtain the map \(\mathfrak{C}\) via the composite \[\operatorname{HH}(R^{t\mathbf{Z}/p}[t]/R^{t\mathbf{Z}/p}) \simeq R^{t\mathbf{Z}/p}\otimes\operatorname{THH}(S[t])\] \[\xrightarrow{\operatorname{id}\otimes\varphi}R^{t\mathbf{Z}/p} \otimes\operatorname{THH}(S[t])^{t\mathbf{Z}/p}\] \[\to(R\otimes\operatorname{THH}(S[t]))^{t\mathbf{Z}/p}\simeq \operatorname{HH}(R[t]/R)^{t\mathbf{Z}/p}.\] For each \(m\geq 1\), there is an equivalence \[(\operatorname{HH}(R[t]/R)_{\leq m})^{t\mathbf{Z}/p}\simeq R^{t\mathbf{Z}/p} \oplus\left(\bigoplus_{1\leq n\leq m}R\otimes(S^{1}/\mu_{n})_{+}\right)^{t \mathbf{Z}/p}.\] Since the maps \(\varphi:(S^{1}/\mu_{n})_{+}\to((S^{1}/\mu_{np})_{+})^{t\mathbf{Z}/p}\) define the Frobenius on \(\operatorname{THH}(S[t])\simeq S\oplus\bigoplus_{n\geq 1}(S^{1}/\mu_{n})_{+}\), we see from Lemma 3.5.2 that for each \(m\geq 1\), the map \(\mathfrak{C}_{\leq m}\) defines an equivalence \[\bigoplus_{1\leq j\leq m}R^{t\mathbf{Z}/p}\otimes(S^{1}/\mu_{j})_{+} \xrightarrow{\sim}\left(\bigoplus_{1\leq n\leq mp}R\otimes(S^{1}/\mu_{n})_{+} \right)^{t\mathbf{Z}/p}.\] The left-hand side is \(\operatorname{HH}(R^{t\mathbf{Z}/p}[t]/R^{t\mathbf{Z}/p})_{\leq m}\), and the right-hand side is \((\operatorname{HH}(R[t]/R)_{\leq mp})^{t\mathbf{Z}/p}\). **Remark 3.5.4**.: When \(R\) is an \(\mathbf{E}_{\infty}\)-ring, the map \(\mathfrak{C}:\operatorname{HH}(R^{t\mathbf{Z}/p}[t]/R^{t\mathbf{Z}/p})\to \operatorname{HH}(R[t]/R)^{t\mathbf{Z}/p}\) of Proposition 3.5.3 can also be constructed using (a simple case of) [1, Theorem 1.3]. The cited result says the following. Suppose \(k\) is an \(\mathbf{E}_{\infty}\)-ring, so that the Tate-valued Frobenius \(k\to k^{t\mathbf{Z}/p}\) admits an extension \(\operatorname{THH}(k)\to k^{t\mathbf{Z}/p}\) to an \(S^{1}\)-equivariant map of \(\mathbf{E}_{\infty}\)-rings. If \(A\) is an \(\mathbf{E}_{1}\)-\(k\)-algebra, and \(M\) is an \(A\)-bimodule in \(\operatorname{Mod}_{k}\), then there is a relative Tate diagonal \[k^{t\mathbf{Z}/p}\otimes_{\operatorname{THH}(k)}\operatorname{THH}(A,M)\to \operatorname{THH}^{k}(A,M^{\otimes_{AP}})^{t\mathbf{Z}/p},\] where \(\operatorname{THH}^{k}\) denotes THH relative to \(k\). To construct \(\mathfrak{C}\), take \(k=R\) and \(A=M=k[t]\). Then \[k^{t\mathbf{Z}/p}\otimes_{\operatorname{THH}(k)}\operatorname{THH}(A,M)\simeq k ^{t\mathbf{Z}/p}\otimes\operatorname{THH}(S^{0}[t])\simeq\operatorname{HH}(k^{ t\mathbf{Z}/p}[t]/k^{t\mathbf{Z}/p}),\] since \(\operatorname{THH}(A,M)\simeq\operatorname{THH}(A)\simeq\operatorname{THH}(S^{0}[t]) \otimes\operatorname{THH}(k)\). Similarly, \(\operatorname{THH}^{k}(A,M^{\otimes_{AP}})\simeq\operatorname{HH}(A/k)\), and it is straightforward to check that Lawson's relative Tate diagonal agrees with the map \(\mathfrak{C}\). One advantage of the construction of \(\mathfrak{C}\) in Proposition 3.5.3 is that it is manifestly \(S^{1}\)-equivariant, and does not rely on \(R\) being an \(\mathbf{E}_{\infty}\)-ring. More generally, one finds that if \(\mathscr{C}\) is a stable \(\infty\)-category and \(R\) is any \(\mathbf{E}_{2}\)-ring, the cyclotomic Frobenius on \(\operatorname{THH}(\mathscr{C})\) defines an \(S^{1}\)-equivariant map \(\mathfrak{C}:\operatorname{HH}(\mathscr{C}\otimes R^{t\mathbf{Z}/p}/R^{t \mathbf{Z}/p})\to\operatorname{HH}(\mathscr{C}\otimes R/R)^{t\mathbf{Z}/p}\) which generalizes the map of Proposition 3.5.3. This map is furthermore an equivalence if \(\mathscr{C}\) is smooth and proper. 
**Remark 3.5.5**.: In Proposition 3.5.3, the map \(\mathfrak{C}:\operatorname{HH}(R^{t\mathbf{Z}/p}[t]/R^{t\mathbf{Z}/p})\to \operatorname{HH}(R[t]/R)^{t\mathbf{Z}/p}\) is itself almost an equivalence: the main issue is that the canonical map \[\operatorname{colim}_{m}\left(\operatorname{HH}(R[t]/R)_{\leq mp}\right)^{t \mathbf{Z}/p}\to\left(\operatorname{colim}_{m}\operatorname{HH}(R[t]/R)_{\leq mp }\right)^{t\mathbf{Z}/p}\simeq\operatorname{HH}(R[t]/R)^{t\mathbf{Z}/p}\] may not be an equivalence. However, Proposition 3.5.3 implies that the _graded_ map \(\mathfrak{C}^{\operatorname{gr}}:\operatorname{HH}(R[t]/R)^{t\mathbf{Z}/p}\to \operatorname{HH}(R[t]/R)^{t\mathbf{Z}/p}\) is itself an equivalence. **Remark 3.5.6**.: If \(R\) is a complex-oriented \(\mathbf{E}_{2}\)-ring, let \([p](\hbar)\in\pi_{-2}R^{hS^{1}}\) denote the \(p\)-series of the formal group law over \(R\). If \(M\in\operatorname{LMod}_{R}^{BS^{1}}\), then it is not difficult to show that there is an equivalence \(M^{tS^{1}}/\frac{[p](\hbar)}{\hbar}\xrightarrow{\sim}M^{t\mathbf{Z}/p}\). (Although certainly well-known, the only reference in the literature for a statement in this generality seems to be [10], Lemma 6.2.2].) In particular, \(\operatorname{HH}(R[t]/R)^{t\mathbf{Z}/p}\simeq\operatorname{HP}(R[t]/R)/\frac{[ p](\hbar)}{\hbar}\), so that Proposition 3.5.3 and Remark 3.5.5 imply that there is an \(S^{1}\)-equivariant _graded_ equivalence \[\mathfrak{C}:\operatorname{HH}(R^{t\mathbf{Z}/p}[t]/R^{t\mathbf{Z}/p})\to \operatorname{HP}(R[t]/R)/\frac{[p](\hbar)}{\hbar}\simeq\operatorname{HH}(R[t]/ R)^{t\mathbf{Z}/p}.\] In future work, we will show that if \(R\) is further assumed to be an \(\mathbf{E}_{\infty}\)-ring and \(\mathscr{C}\) is a \(R\)-linear \(\infty\)-category, then the \((R^{t\mathbf{Z}/p})^{hS^{1}}\simeq(R^{tS^{1}})^{\wedge}_{p}\)-module \((\operatorname{THH}(\mathscr{C})\otimes_{\operatorname{THH}(R)}R^{t\mathbf{Z}/ p})^{hS^{1}}\) behaves as a noncommutative analogue of \(L\eta_{[p](\hbar)/\hbar}\) applied to \(\operatorname{HP}(\mathscr{C}/R)\). Here, \(\hbar\in\pi_{-2}R^{hS^{1}}\) is the complex-orientation of \(R\), and \([p](\hbar)/\hbar\in\pi_{0}R^{tS^{1}}\) is the quotient of the \(p\)-series of the associated formal group law. **Remark 3.5.7**.: There is no reason to restrict to polynomial rings in a single variable in the equivalence of Proposition 3.5.3(b); we leave the details of the resulting statement to the reader. **Example 3.5.8**.: Let \(R=\mathbf{Z}\). Then \(\mathbf{Z}^{t\mathbf{Z}/p}\) is an \(\mathbf{E}_{\infty}\)-\(\mathbf{Z}\)-algebra, and has homotopy groups given by \(\mathbf{F}_{p}(\!(\hbar)\!)\) with \(|\hbar|=2\). Therefore, \(\mathbf{Z}^{t\mathbf{Z}/p}\simeq\mathbf{F}_{p}^{tS^{1}}\) as \(\mathbf{E}_{2}\)-rings17, and Proposition 3.5.3 (combined with Remark 3.5.5) specializes to the statement that there is a Frobenius-linear equivalence \[\mathfrak{C}:\operatorname{HH}(\mathbf{F}_{p}[t]/\mathbf{F}_{p})(\!(\hbar)\!)_{ \leq m}\simeq\operatorname{HH}(\mathbf{F}_{p}^{tS^{1}}[t]/\mathbf{F}_{p}^{tS^{ 1}})_{\leq m}\xrightarrow{\sim}(\operatorname{HH}(\mathbf{Z}[t]/\mathbf{Z})_{ \leq mp})^{t\mathbf{Z}/p}.\] Note that \(\operatorname{HH}(\mathbf{Z}[t]/\mathbf{Z})^{t\mathbf{Z}/p}\simeq\operatorname {HP}(\mathbf{Z}[t]/\mathbf{Z})/p\simeq\operatorname{TP}(\mathbf{F}_{p}[t])/p\). 
Since the HKR theorem implies that \(\operatorname{HH}(\mathbf{F}_{p}[t]/\mathbf{F}_{p})^{tS^{1}}\) is a \(2\)-periodification of the Hodge cohomology of \(\mathbf{A}_{\mathbf{F}_{p}}^{1}\), and \(\operatorname{HP}(\mathbf{Z}[t]/\mathbf{Z})/p\) is a \(2\)-periodification of the de Rham cohomology of \(\mathbf{A}_{\mathbf{Z}_{p}}^{1}\) modulo \(p\) (which is the de Rham cohomology of \(\mathbf{A}_{\mathbf{F}_{p}}^{1}\)), one can view \(\mathfrak{C}\) as a topological analogue of the Cartier isomorphism for the affine line. It reduces to the usual Cartier isomorphism on graded pieces. In this case, the statement of Proposition 3.5.3 should also be compared to [12, 12, 13]. Taking homotopy fixed points for the \(S^{1}\)-equivariance of \(\mathfrak{C}\) from Proposition 3.5.3(a), we obtain a Frobenius-linear equivalence \[\mathfrak{C}^{hS^{1}}:(\operatorname{HH}(\mathbf{Z}^{t\mathbf{Z}/p}[t]/\mathbf{ Z}^{t\mathbf{Z}/p})^{hS^{1}})_{\leq m}\xrightarrow{\sim}((\operatorname{HH}( \mathbf{Z}[t]/\mathbf{Z})_{\leq mp})^{tS^{1}})_{p}^{\wedge}. \tag{46}\] More succinctly, there is a _graded_ equivalence \[(\mathfrak{C}^{\mathrm{gr}})^{hS^{1}}:\operatorname{HH}(\mathbf{Z}^{t\mathbf{ Z}/p}[t]/\mathbf{Z}^{t\mathbf{Z}/p})^{hS^{1}}\xrightarrow{\sim}\operatorname{HP}^{ \mathrm{gr}}(\mathbf{Z}[t]/\mathbf{Z})_{p}^{\wedge}.\] Using the HKR filtration on \(\operatorname{HH}(\mathbf{Z}[t]/\mathbf{Z})\), one can prove that \(\operatorname{HH}(\mathbf{Z}^{t\mathbf{Z}/p}[t]/\mathbf{Z}^{t\mathbf{Z}/p})^{ hS^{1}}\) admits a filtration whose graded pieces are given by even shifts of \(L\eta_{p}\Gamma_{\operatorname{dR}}(\mathbf{Z}_{p}[t]/\mathbf{Z}_{p})\simeq L \eta_{p}\Gamma_{\operatorname{crys}}(\mathbf{F}_{p}[t]/\mathbf{Z}_{p})\). We will explain this in greater detail in a future article. Since \(\operatorname{HP}(\mathbf{Z}_{p}[t]/\mathbf{Z}_{p})\simeq\operatorname{TP}( \mathbf{F}_{p})\), (46) can be regarded as a \(2\)-periodification of the "Cartier isomorphism" \(L\eta_{p}\Gamma_{\operatorname{crys}}(\mathbf{F}_{p}[t]/\mathbf{Z}_{p})\simeq \Gamma_{\operatorname{crys}}(\mathbf{F}_{p}[t]/\mathbf{Z}_{p})\) for the crystalline cohomology of \(\mathbf{F}_{p}[t]\) (see [10, Theorem 8.20] for the general case). **Example 3.5.9**.: Let \(R=\operatorname{ku}\). Then \(\pi_{*}\mathrm{ku}^{t\mathbf{Z}/p}\cong\mathbf{Z}[\zeta_{p}](\!(\hbar)\!)\), and it is expected that this lifts to an equivalence \(\mathrm{ku}^{t\mathbf{Z}/p}\simeq\mathbf{Z}[\zeta_{p}]^{tS^{1}}\) of \(\mathbf{E}_{\infty}\)-rings (see also Example 3.5.11 below). Nevertheless, there _is_ an equivalence \(\mathrm{ku}^{t\mathbf{Z}/p}\simeq\mathbf{Z}[\zeta_{p}]^{tS^{1}}\) of \(\mathbf{E}_{2}\)-rings (one can show this by using [13, Theorem 1.19]; thanks to Arpon Raksit for pointing this out). Therefore, Proposition 3.5.3 and Remark 3.5.5 give an equivalence \[\mathfrak{C}:\operatorname{HH}(\mathbf{Z}[\zeta_{p}][t]/\mathbf{Z}[\zeta_{p}]) (\!(\hbar)\!)_{\leq m}\simeq\operatorname{HH}(\mathbf{Z}[\zeta_{p}]^{tS^{1}}[ t]/\mathbf{Z}[\zeta_{p}]^{tS^{1}})_{\leq m}\xrightarrow{\sim}(\operatorname{HH}( \operatorname{ku}[t]/\mathrm{ku})_{\leq mp})^{t\mathbf{Z}/p}.\] Note that \(\operatorname{HH}(\operatorname{ku}[t]/\mathrm{ku})^{t\mathbf{Z}/p}\simeq \operatorname{HP}(\operatorname{ku}[t]/\mathrm{ku})/[p]_{q}\). Here, we identify \(\frac{[p]_{\mathbf{G}_{m}}(\hbar)}{\hbar}\in\pi_{0}\mathrm{ku}^{tS^{1}} \cong\mathbf{Z}[\![q-1]\!]\) with the \(q\)-integer \([p]_{q}\). 
By the HKR theorem, one can view \(\operatorname{HH}(\mathbf{Z}[\zeta_{p}][t]/\mathbf{Z}[\zeta_{p}])(\!(\hbar)\!)\) as a \(2\)-periodification of the Hodge cohomology of \(\mathbf{A}_{\mathbf{Z}}^{1}\) base-changed along the map \(\mathbf{Z}\to\mathbf{Z}[\zeta_{p}]\). Similarly, the aforementioned work of Raksit (see [13] and Remark 3.3.1, as well as Lemma 3.3.4) implies that \(\operatorname{HP}(\operatorname{ku}[t]/\mathrm{ku})\) can be viewed as a \(2\)-periodification of the \(q\)-de Rham complex of \(\mathbf{Z}[t]\). Since killing \([p]_{q}\in\mathbf{Z}[\![q-1]\!]\) amounts to specializing \(q\) to a primitive \(p\)th root of unity, one can view Proposition 3.5.3 as a topological analogue of the Cartier isomorphism for the \(q\)-de Rham complex of the affine line (see, e.g., [12, Proposition 3.4]). Taking homotopy fixed points for the \(S^{1}\)-equivariance of \(\mathfrak{C}\) from Proposition 3.5.3(a), we obtain an equivalence \[\mathfrak{C}^{hS^{1}}:(\operatorname{HH}(\mathrm{ku}^{t\mathbf{Z}/p}[t]/\mathrm{ ku}^{t\mathbf{Z}/p})^{hS^{1}})_{\leq m}\xrightarrow{\sim}((\operatorname{HH}( \operatorname{ku}[t]/\mathrm{ku})_{\leq mp})^{tS^{1}})_{p}^{\wedge}. \tag{47}\] More succinctly, there is a _graded_ equivalence \[(\mathfrak{C}^{\mathrm{gr}})^{hS^{1}}:\operatorname{HH}(\mathrm{ku}^{t\mathbf{Z}/p }[t]/\mathrm{ku}^{t\mathbf{Z}/p})^{hS^{1}}\xrightarrow{\sim}\operatorname{HP}^{ \mathrm{gr}}(\operatorname{ku}[t]/\mathrm{ku})_{p}^{\wedge}.\] In future work, we show that Raksit's filtration on \(\mathrm{HP}(\mathrm{ku}[t]/\mathrm{ku})\) can be refined to construct a filtration on \(\mathrm{HH}(\mathrm{ku}^{t\mathbf{Z}/p}[t]/\mathrm{ku}^{t\mathbf{Z}/p})^{hS^{1}}\) whose graded pieces are given by even shifts of \(L\eta_{[p]_{q}}q\Omega_{\mathbf{Z}_{p}[t]}\). Then, (47) can be regarded as a \(2\)-periodification of the "Cartier isomorphism" \(\phi_{\mathbf{Z}_{p}[q-1]}^{*}q\Omega_{\mathbf{Z}_{p}[t]}\simeq L\eta_{[p]_{q}} q\Omega_{\mathbf{Z}_{p}[t]}\) for the \(q\)-de Rham cohomology of \(\mathbf{Z}_{p}[t]\). (See [10, Theorem 1.16(4)] applied to the \(q\)-crystalline prism \((\mathbf{Z}_{p}[\![q-1]\!],[p]_{q})\).) **Remark 3.5.10**.: Example 3.5.9 admits a mild generalization. Namely, if \(\mathrm{ku}^{\mathbf{Z}/p^{n-1}}\) denotes the strict fixed points (so \(\mathrm{ku}^{\mathbf{Z}/p^{n-1}}=\tau_{\geq 0}(\mathrm{ku}^{h\mathbf{Z}/p^{n-1}})\)), then one can calculate \(\pi_{*}(\mathrm{ku}^{\mathbf{Z}/p^{n-1}})^{t\mathbf{Z}/p}\cong\pi_{*}\mathbf{ Z}_{p}[\zeta_{p^{n}}]^{tS^{1}}\). One can show that this can be extended to an equivalence \((\mathrm{ku}^{\mathbf{Z}/p^{n-1}})^{t\mathbf{Z}/p}\simeq\mathbf{Z}_{p}[\zeta_ {p^{n}}]^{tS^{1}}\) of \(\mathbf{E}_{2}\)-rings. Proposition 3.5.3 and Remark 3.5.5 give a graded equivalence \[\mathfrak{C}:\mathrm{HH}(\mathbf{Z}[\zeta_{p^{n}}][t]/\mathbf{Z}[\zeta_{p^{n}} ])(\!(\hbar)\!)\stackrel{{\sim}}{{\to}}\mathrm{HH}(\mathrm{ku}^{ \mathbf{Z}/p^{n-1}}[t]/\mathrm{ku}^{\mathbf{Z}/p^{n-1}})^{t\mathbf{Z}/p}.\] Here, the action of \(\mathbf{Z}/p\) on \(\mathrm{HH}(\mathrm{ku}^{\mathbf{Z}/p^{n-1}}[t]/\mathrm{ku}^{\mathbf{Z}/p^{n- 1}})=\mathrm{THH}(S[t])\otimes\mathrm{ku}^{\mathbf{Z}/p^{n-1}}\) is via the _diagonal_ action on \(\mathrm{THH}\) and \(\mathrm{ku}^{\mathbf{Z}/p^{n-1}}\). In this case, one can therefore view Proposition 3.5.3 as a topological analogue of the Cartier isomorphism for Hodge-Tate cohomology relative to the prism \((\mathbf{Z}_{p}[\![q^{1/p^{n-1}}-1]\!],[p]_{q})\) of the affine line. 
**Example 3.5.11**.: More generally, let \(R=\mathrm{BP}\langle n\rangle\). As recalled in Remark 2.2.16, [1, Proposition 2.3] proved that there is an isomorphism \(\pi_{*}\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p}\cong\pi_{*}\mathrm{BP} \langle n-1\rangle^{tS^{1}}\), and this was conjectured to lift to an equivalence of spectra in [1, Conjecture 1.2]. If we assume that there is in fact an equivalence \(\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p}\simeq\mathrm{BP}\langle n-1\rangle ^{tS^{1}}\) of \(\mathbf{E}_{2}\)-rings, Proposition 3.5.3 and Remark 3.5.5 give an equivalence \[\mathfrak{C}:\mathrm{HH}(\mathrm{BP}\langle n-1\rangle[t]/\mathrm{BP}\langle n -1\rangle)(\!(\hbar)\!)_{\leq m}\stackrel{{\sim}}{{\to}}( \mathrm{HH}(\mathrm{BP}\langle n\rangle[t]/\mathrm{BP}\langle n\rangle)_{\leq mp })^{t\mathbf{Z}/p}.\] Therefore, Proposition 3.5.3 in this case can be viewed as an analogue of the Cartier isomorphism for the affine line in the setting of "\(v_{n}\)-adic Hodge theory". Taking homotopy fixed points for the \(S^{1}\)-equivariance of \(\mathfrak{C}\) from Proposition 3.5.3(a), we obtain an equivalence \[\mathfrak{C}^{hS^{1}}:(\mathrm{HH}(\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p} [t]/\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p})^{hS^{1}})_{\leq m}\stackrel{{ \sim}}{{\to}}((\mathrm{HH}(\mathrm{BP}\langle n\rangle[t]/\mathrm{BP}\langle n \rangle)_{\leq mp})^{tS^{1}})_{p}^{\wedge}. \tag{48}\] More succinctly, there is a _graded_ equivalence \[(\mathfrak{C}^{\mathrm{gr}})^{hS^{1}}:\mathrm{HH}(\mathrm{BP}\langle n\rangle^{t \mathbf{Z}/p}[t]/\mathrm{BP}\langle n\rangle^{t\mathbf{Z}/p})^{hS^{1}} \stackrel{{\sim}}{{\to}}\mathrm{HP}^{\mathrm{gr}}(\mathrm{BP} \langle n\rangle[t]/\mathrm{BP}\langle n\rangle)_{p}^{\wedge}.\] Note that Conjecture 2.2.18 in particular implies that if \(T(n)\) admits the structure of an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring, then \(\mathrm{HP}(\mathrm{BP}\langle n\rangle[t]/\mathrm{BP}\langle n\rangle)\) is closely related to \(\mathrm{TP}(\mathrm{BP}\langle n-1\rangle[t]/T(n))\) by Proposition 3.3.8. In this form, (48) holds when \(\mathrm{BP}\langle n\rangle\) is replaced by any complex-oriented \(\mathbf{E}_{2}\)-ring \(R\). As in the preceding examples, we believe that when \(R\) is connective, this can be regarded as a \(2\)-periodification of a "Cartier isomorphism" for the two-term complex (41). See [1] for further discussion. ## 4. Relationship to the moduli stack of formal groups ### Incarnation of the topological Sen operator over \(\mathcal{M}_{\mathrm{FG}}\) In Section 3, we showed that the descent spectral sequence of Remark 2.2.12 admits a generalization given by the topological Sen operator (Theorem 3.1.4). This has an incarnation over \(\mathcal{M}_{\mathrm{FG}}\), as we now explain. The analogues of Theorem 2.2.4, Theorem 3.1.4, etc., that we discuss in this section are useful for making topological predictions since the calculations involved are easier. **Recollection 4.1.1** (Even filtration).: Let \(\mathrm{F}_{\mathrm{ev}}^{\star}:\mathrm{CAlg}\to\mathrm{CAlg}(\mathrm{Sp}^{ \mathrm{fil}})\) be the _even filtration_ of [10]: if \(\mathrm{CAlg}^{\mathrm{ev}}\) denotes the full subcategory of \(\mathrm{CAlg}\) spanned by the \(\mathbf{E}_{\infty}\)-rings with even homotopy, then \(\mathrm{F}_{\mathrm{ev}}^{\star}\) is the right Kan extension of the functor \(\tau_{\geq 2\star}:\mathrm{CAlg}^{\mathrm{ev}}\to\mathrm{CAlg}(\mathrm{Sp}^{ \mathrm{fil}})\) along the inclusion \(\mathrm{CAlg}^{\mathrm{ev}}\hookrightarrow\mathrm{CAlg}\). 
Note that since \(\tau_{\geq 2\star}\) is lax symmetric monoidal and \(\mathrm{F}_{\mathrm{ev}}^{\star}\) is defined by a right Kan extension, it is also a lax symmetric monoidal functor. We will need the following result from [10]: if \(R\) is an \(\mathbf{E}_{\infty}\)-ring such that \(\mathrm{MU}\otimes R\in\mathrm{CAlg}^{\mathrm{ev}}\), then \(\mathrm{F}_{\mathrm{ev}}^{\star}R\) is \(p\)-completely equivalent to the underlying filtered \(\mathbf{E}_{\infty}\)-ring of its Adams-Novikov tower \(\nu(R)\in\mathrm{Syn}_{\mathrm{MU}}^{\mathrm{ev}}(\mathrm{Sp})=\mathrm{Mod}_{ \mathrm{Tot}(\tau_{\geq 2\star}\mathrm{MU}^{\otimes\bullet+1})}(\mathrm{Sp}^{ \mathrm{fil}})\). (Also see [10, 11].) In this case, the associated graded Hopf algebroid \((\mathrm{MU}_{*}(R),\mathrm{MU}_{*}(\mathrm{MU}\otimes R))\) defines a stack over \(B\mathbf{G}_{m}\). If \(R\) is complex-oriented, then this stack is isomorphic to \(\mathrm{Spec}(\pi_{*}R)/\mathbf{G}_{m}\), where the \(\mathbf{G}_{m}\)-action encodes the grading on \(\pi_{*}R\). **Observation 4.1.2**.: In order to define the stack \(\widetilde{\mathcal{M}}_{R}\) associated to the graded Hopf algebroid \((\mathrm{MU}_{*}(R),\mathrm{MU}_{*}(\mathrm{MU}\otimes R))\), one does not need \(R\) to be an \(\mathbf{E}_{\infty}\)-ring: it only needs to admit the structure of a homotopy commutative ring such that \(\mathrm{MU}_{*}(R)\) is concentrated in even degrees. This perspective is explained in Hopkins' lecture in [12, Chapter 9]. In particular, one can define the stack associated to \(X(n)\): this is the moduli stack of graded formal groups equipped with a coordinate of order \(\leq n\), and strict isomorphisms between them. (See, e.g., [11, Section 2].) **Variant 4.1.3**.: We will find it convenient to work with the \(p\)-typical variant of the graded Hopf algebroid \((\mathrm{MU}_{*}(R),\mathrm{MU}_{*}(\mathrm{MU}\otimes R))\). Namely, if \(R\) is a \(p\)-local homotopy commutative ring such that \(\mathrm{BP}_{*}(R)\) is concentrated in even degrees, then we will write \(\mathcal{M}_{R}\) to denote the graded stack associated to the graded Hopf algebroid \((\mathrm{BP}_{*}(R),\mathrm{BP}_{*}(\mathrm{BP}\otimes R))\). For example, \(\mathcal{M}_{T(n)}\) is the moduli stack of \(p\)-typical graded formal groups equipped with a coordinate up to order \(\leq p^{n+1}-1\); by \(p\)-typicality, this is further isomorphic to the moduli stack of \(p\)-typical graded formal groups equipped with a coordinate up to order \(\leq p^{n}\). In particular, \(\mathcal{M}_{S^{0}}\) is isomorphic to the moduli stack \(\mathcal{M}_{\mathrm{FG}}\) of \(p\)-typical graded formal groups. Similarly, if \(R\) is a \(p\)-local complex-oriented homotopy commutative ring, then \(\mathcal{M}_{R}\) is isomorphic to \(\mathrm{Spec}(\pi_{*}R)/\mathbf{G}_{m}\). **Example 4.1.4**.: The unit map \(S^{0}\to\mathrm{MU}\) induces the map \(\widetilde{\mathcal{M}}_{\mathrm{MU}}\cong\mathrm{Spec}(\mathrm{MU}_{*})/ \mathbf{G}_{m}\to\widetilde{\mathcal{M}}_{S^{0}}\) which describes the flat cover of the moduli stack of graded formal groups given by the graded Lazard ring. This map exhibits \(\widetilde{\mathcal{M}}_{S^{0}}\) as the quotient of \(\mathrm{Spec}(\mathrm{MU}_{*})/\mathbf{G}_{m}\) by the group scheme \(\mathrm{Spec}(\pi_{*}(\mathrm{MU}\otimes\mathrm{MU}))/\mathbf{G}_{m}\). 
Note that \(\mathrm{MU}\otimes\mathrm{MU}\simeq\mathrm{MU}[\mathrm{BU}]\); since \(\pi_{*}\mathbf{Z}[\mathrm{BU}]\) is the coordinate ring of the big Witt ring scheme, we see that \(\mathrm{Spec}(\pi_{*}(\mathrm{MU}\otimes\mathrm{MU}))/\mathbf{G}_{m}\) is a lift of the big Witt ring scheme to \(\mathrm{Spec}(\mathrm{MU}_{*})/\mathbf{G}_{m}\). Similarly, \(\mathcal{M}_{S^{0}}=\mathcal{M}_{\mathrm{FG}}\) is the quotient of \(\mathcal{M}_{\mathrm{BP}}=\mathrm{Spec}(\mathrm{BP}_{*})/\mathbf{G}_{m}\) by a lift of the \(p\)-typical Witt ring scheme \(W\) to \(\mathrm{Spec}(\mathrm{BP}_{*})/\mathbf{G}_{m}\). **Remark 4.1.5**.: If \(A\to B\) is a map of \(p\)-local \(\mathbf{E}_{\infty}\)-rings such that \(\operatorname{MU}_{*}(A)\) and \(\operatorname{MU}_{*}(B)\) are even, then there is an induced map \(\mathcal{M}_{B}\to\mathcal{M}_{A}\) of graded stacks. Recall that \(\operatorname{THH}(B/A)\) is the geometric realization of the simplicial \(A\)-algebra \(B^{\otimes_{A}\bullet+1}\). Applying \(\operatorname{F}_{\mathrm{ev}}^{\star}\) levelwise to \(B^{\otimes_{A}\bullet+1}\in\operatorname{Fun}(\mathbf{\Delta}^{\mathrm{op}}, \operatorname{CAlg}_{A})\) produces an Adams-Novikov analogue of the Bokstedt spectral sequence: \[\pi_{*}\operatorname{HH}(\mathcal{M}_{B}/\mathcal{M}_{A})\Rightarrow\pi_{*} \mathrm{gr}_{\mathrm{ev}}^{\bullet}\mathrm{THH}(B/A).\] In particular, note that \(\operatorname{HH}(\mathcal{M}_{B}/\mathcal{M}_{\mathrm{FG}})\) is an approximation to \(\mathrm{gr}_{\mathrm{ev}}^{\bullet}\mathrm{THH}(B)\). For this spectral sequence to exist, it is not necessary that \(A\) and \(B\) be \(\mathbf{E}_{\infty}\)-rings: for example, it suffices that \(A\to B\) be a map of \(p\)-local \(\mathbf{E}_{2}\)-rings such that \(\operatorname{MU}_{*}(A)\) and \(\operatorname{MU}_{*}(B)\) are even, and such that \(\operatorname{THH}(B/A)\) is bounded below and has even MU-homology. Then, \(\mathrm{gr}_{\mathrm{ev}}^{\bullet}\mathrm{THH}(B/A)\) must be interpreted as the associated graded of the Adams-Novikov filtration on \(\operatorname{THH}(B/A)\); see [10], Corollary 1.1.6]. **Example 4.1.6**.: Let \(\mathcal{M}_{\mathrm{FG}}^{s}\) denote the total space of the canonical line bundle over \(\mathcal{M}_{\mathrm{FG}}\) (determined by the map \(\mathcal{M}_{\mathrm{FG}}\to B\mathbf{G}_{m}\)). If \(R\) is a \(p\)-quasisynotmic ring, then [1, Theorem 1.12] and Remark 4.1.5 give a spectral sequence \[\pi_{*}\mathrm{HH}(\mathrm{Spec}(R)/\mathcal{M}_{\mathrm{FG}}^{s})\Rightarrow \pi_{*}\mathscr{N}^{*}\hat{\Delta}_{R}[2*].\] Indeed, \(\mathcal{M}_{R}=\mathrm{Spec}(R)/\mathbf{G}_{m}\cong\mathrm{Spec}(R)\times B \mathbf{G}_{m}\), so that the underlying \(R\)-algebra of \(\operatorname{HH}(\mathcal{M}_{R}/\mathcal{M}_{\mathrm{FG}})\) is \(\operatorname{HH}(\mathrm{Spec}(R)/\mathcal{M}_{\mathrm{FG}}^{s})\). **Example 4.1.7**.: The complex orientation \(\mathrm{BP}\to\mathrm{BP}\langle n\rangle\) induces a map \(\mathcal{M}_{\mathrm{BP}\langle n\rangle}\to\mathcal{M}_{\mathrm{BP}}\) which factors the structure map \(\mathcal{M}_{\mathrm{BP}\langle n\rangle}\to\mathcal{M}_{\mathrm{FG}}\). 
Explicitly, we have the following composite map of stacks over \(B\mathbf{G}_{m}\): \[\mathrm{Spec}(\mathrm{BP}\langle n\rangle_{*})/\mathbf{G}_{m}\to\mathrm{Spec}( \mathrm{BP}_{*})/\mathbf{G}_{m}\to\mathcal{M}_{\mathrm{FG}}.\] Taking cotangent complexes gives the following transitivity cofiber sequence in \(\mathrm{Mod}_{\mathrm{BP}\langle n\rangle_{*}}^{\mathrm{gr}}\): \[\mathrm{BP}\langle n\rangle_{*}\otimes_{\mathrm{BP}_{*}}L_{\mathrm{Spec}( \mathrm{BP}_{*})/\mathcal{M}_{\mathrm{FG}}}\to L_{\mathrm{BP}\langle n\rangle _{*}/\mathcal{M}_{\mathrm{FG}}}\to L_{\mathrm{BP}\langle n\rangle_{*}/ \mathrm{BP}_{*}}.\] Since \(\mathrm{BP}_{*}/(v_{n+1},v_{n+2},\cdots)\cong\mathrm{BP}\langle n\rangle_{*}\), observe that \(L_{\mathrm{BP}\langle n\rangle_{*}/\mathrm{BP}_{*}}\) is a free \(\mathrm{BP}\langle n\rangle_{*}\)-module generated by classes \(\sigma(v_{n+1}),\sigma(v_{n+2}),\cdots\). Similarly, the discussion in Example 4.1.4 implies that \(L_{\mathrm{Spec}(\mathrm{BP}_{*})/\mathcal{M}_{\mathrm{FG}}}\) is a free \(\mathrm{BP}_{*}\)-module generated by classes \(d(t_{i})\). From this, one can deduce that \(L_{\mathrm{Spec}(\mathrm{BP}\langle n\rangle_{*})/\mathbf{G}_{m}/\mathcal{M}_{ \mathrm{FG}}}\) is a free \(\mathrm{BP}\langle n\rangle_{*}\)-module generated by classes \(\sigma(v_{j})\) with \(j\geq n+1\) and \(d(t_{i})\) with \(i\geq 1\). By the HKR theorem, \(\pi_{*}\mathrm{HH}(\mathrm{Spec}(\mathrm{BP}\langle n\rangle_{*})/\mathbf{G}_{m }/\mathcal{M}_{\mathrm{FG}})\) is isomorphic to \(\mathrm{Sym}_{\mathrm{BP}\langle n\rangle_{*}}(L_{\mathrm{BP}\langle n \rangle_{*}/\mathcal{M}_{\mathrm{FG}}}[1])\), which can be identified as \[\pi_{*}\mathrm{HH}(\mathrm{Spec}(\mathrm{BP}\langle n\rangle_{*})/\mathbf{G}_ {m}/\mathcal{M}_{\mathrm{FG}})\cong\mathrm{BP}\langle n\rangle_{*}\langle \sigma^{2}v_{j}|j\geq n+1\rangle\otimes_{\mathrm{BP}\langle n\rangle_{*}} \Lambda_{\mathrm{BP}\langle n\rangle_{*}}(dt_{i}|i\geq 1).\] Since \(v_{j}\) lives in degree \(2p^{j}-2\) and weight \(p^{j}-1\), the class \(\sigma^{2}v_{j}\) lives in degree \(2p^{j}=|v_{j}|+2\) and weight \(p^{j}\); similarly, since \(t_{i}\) lives in degree \(2p^{i}-2\) and weight \(p^{j}-1\), the class \(dt_{i}\) lives in degree \(2p^{i}-1\) and weight \(p^{j}\). **Example 4.1.8**.: The same discussion for the following composite map of stacks over \(B\mathbf{G}_{m}\) \[\mathrm{Spec}(\mathrm{BP}\langle n-1\rangle_{*})/\mathbf{G}_{m}\to\mathrm{Spec}( \mathrm{BP}_{*})/\mathbf{G}_{m}\to\mathcal{M}_{T(n)}\] shows that \(L_{\mathrm{Spec}(\mathrm{BP}\langle n-1\rangle_{*})/\mathbf{G}_{m}/\mathcal{M}_{T (n)}}\) is a free \(\mathrm{BP}\langle n-1\rangle_{*}\)-module generated by classes \(\sigma(v_{j})\) with \(j\geq n\) and \(d(t_{i})\) with \(i\geq n+1\). Therefore, the HKR theorem implies that \(\pi_{*}\mathrm{HH}(\mathrm{Spec}(\mathrm{BP}\langle n-1\rangle_{*})/\mathbf{G}_ {m}/\mathcal{M}_{T(n)})\) is isomorphic to a symmetric algebra over \(\mathrm{BP}\langle n-1\rangle_{*}\) on classes \(\sigma^{2}(v_{i})\) for \(i\geq n\), and \(d(t_{i})\) for \(i\geq n+1\). Explicitly, \[\pi_{*}\mathrm{HH}(\mathrm{Spec}(\mathrm{BP}\langle n-1\rangle_{*})/\mathbf{G}_{ m}/\mathcal{M}_{T(n)})\cong\mathrm{BP}\langle n-1\rangle_{*}\langle\sigma^{2}v_{j}|j \geq n\rangle\otimes_{\mathrm{BP}\langle n-1\rangle_{*}}\Lambda_{\mathrm{BP} \langle n-1\rangle_{*}}(dt_{i}|i\geq n+1).\] The class \(\sigma^{2}v_{j}\) lives in degree \(2p^{j}=|v_{j}|+2\) and weight \(p^{j}\), and the class \(dt_{i}\) lives in degree \(2p^{i}-1\) and weight \(p^{j}\). 
This mirrors the calculation of the \(E^{2}\)-term of the Bokstedt spectral sequence in Proposition 2.2.14. In fact, one can recover Theorem 2.2.4 in this way by running the Adams-Novikov-Bokstedt spectral sequence (Remark 4.1.5) and using the \(\mathbf{E}_{2}\)-Dyer-Lashof argument of Proposition 2.2.14 to resolve the extension problems on the \(E^{\infty}\)-page. We use the term "recover" in a very weak sense here: the differentials in the Adams-Novikov-Bokstedt spectral sequence are forced by the differentials in the usual Bokstedt spectral sequence (Proposition 2.2.14). Explicitly, we have \[d^{p-1}(\gamma_{j}(\sigma^{2}v_{m}))=\gamma_{j-p}(\sigma^{2}v_{m})dt_{m}\] modulo decomposables, and the spectral sequence collapses on the \(E_{p}\)-page. There are topologically determined extensions \((\sigma^{2}v_{m})^{p}=\sigma^{2}v_{m+1}\) modulo decomposables, which give an isomorphism (as implied by Theorem 2.2.4) \[\pi_{*}\mathrm{gr}_{\mathrm{ev}}^{\bullet}\mathrm{THH}(\mathrm{BP}\langle n-1 \rangle/T(n))\cong\mathrm{BP}\langle n-1\rangle_{*}[\sigma^{2}(v_{n})].\] **Recollection 4.1.9**.: Let \(Y\) be a scheme, and let \(q:X\to\mathbf{A}^{1}\times Y\) be a morphism, so that \(X\) is a scheme over \(Y\) via the projection \(\mathrm{pr}:\mathbf{A}^{1}\times Y\to Y\). Then the transitivity cofiber sequence in \(\mathrm{QCoh}(X)\) runs \[q^{*}L_{\mathbf{A}^{1}\times Y/Y}\to L_{X/Y}\to L_{X/\mathbf{A}^{1}\times Y}.\] Since \(q^{*}L_{\mathbf{A}^{1}\times Y/Y}\) is a free \(\mathscr{O}_{X}\)-module of rank \(1\) generated by \(dt\) (where \(t\) is a coordinate on \(\mathbf{A}^{1}\)), we obtain a cofiber sequence \[\mathrm{dR}^{*}_{X/Y}\to\mathrm{dR}^{*}_{X/\mathbf{A}^{1}\times Y}\xrightarrow {\nabla}\mathrm{dR}^{*}_{X/\mathbf{A}^{1}\times Y}dt,\] where \(\mathrm{dR}^{*}_{X/Y}=\bigoplus_{i\geq 0}(\wedge^{i}L_{X/Y})[-i]\) denotes the underlying derived commutative algebra of the downwards-shearing of \(\mathrm{Sym}_{\mathscr{O}_{X}}(L_{X/Y}[1](1))\). The map \(\nabla\) is the Gauss-Manin connection for the morphism \(q\). Note that \(\nabla\) satisfies Griffiths transversality: it sends the \(n\)th piece of the Hodge filtration to the \((n-1)\)st piece. **Remark 4.1.10**.: Observe that if \(q\) is taken to be the morphism \(Y\to\mathbf{A}^{1}\times Y\) given by the inclusion of the origin into \(\mathbf{A}^{1}\), then \(\mathrm{dR}_{Y/\mathbf{A}^{1}\times Y}\) is \(p\)-completely isomorphic to the divided power algebra \(\mathscr{O}_{Y}\langle t\rangle\). Using the fact that \(\mathrm{dR}_{Y/Y}\cong\mathscr{O}_{Y}\), it is not difficult to see that the Gauss-Manin connection \(\nabla\) must send \(\gamma_{j}(t)\mapsto\gamma_{j-1}(t)dt\). Here, we set \(\gamma_{-1}(t)=0\). In particular, \(\nabla\) is a PD-derivation. **Example 4.1.11** (The topological Sen operator and \(\mathcal{M}_{\mathrm{FG}}\)).: The map \(T(n-1)\to T(n)\) of homotopy commutative rings induces a map \(\mathcal{M}_{T(n)}\to\mathcal{M}_{T(n-1)}\) of graded stacks, which sends a \(p\)-typical graded formal group equipped with a coordinate up to order \(\leq p^{n}\) to the underlying \(p\)-typical graded formal group equipped with a coordinate up to order \(\leq p^{n}-1\). 
The map \(\mathcal{M}_{T(n)}\to\mathcal{M}_{T(n-1)}\) is an affine bundle: in other words, it exhibits \(\mathcal{M}_{T(n-1)}\) as the quotient of \(\mathcal{M}_{T(n)}\) by the group scheme \(\mathbf{G}_{a}^{(p^{n}-1)}/\mathbf{G}_{m}\) over \(B\mathbf{G}_{m}\), where \(\mathbf{G}_{a}^{(p^{n}-1)}\) denotes the affine line with \(\mathbf{G}_{m}\)-action of weight \(p^{n}-1\). This follows, for instance, from [Pet17, Reduction of Lemma 3.2.3 to Lemma 3.2.7]. If \(f:X\to\mathcal{M}_{T(n)}\) is a stack over \(\mathcal{M}_{T(n)}\), the transitivity cofiber sequence in \(\operatorname{QCoh}(X)\) is given by \[f^{*}L_{\mathcal{M}_{T(n)}/\mathcal{M}_{T(n-1)}}\to L_{X/\mathcal{M}_{T(n-1)}} \to L_{X/\mathcal{M}_{T(n)}}.\] Since \(\mathcal{M}_{T(n)}\to\mathcal{M}_{T(n-1)}\) is a \(\mathbf{G}_{a}\)-bundle, we see that \(L_{\mathcal{M}_{T(n)}/\mathcal{M}_{T(n-1)}}\) is a free \(\mathscr{O}_{\mathcal{M}_{T(n)}}\)-module of rank \(1\) generated by the class \(dt_{i}\). It follows that there is a cofiber sequence \[\operatorname{HH}(X/\mathcal{M}_{T(n-1)})\to\operatorname{HH}(X/\mathcal{M}_{ T(n)})\xrightarrow{\Theta_{\operatorname{mot}}}\Sigma^{2p^{n},p^{n}} \operatorname{HH}(X/\mathcal{M}_{T(n)}) \tag{49}\] of quasicoherent sheaves on \(X\), where \(\Sigma^{n,w}\) denotes a shift by degree \(n\) and weight \(w\). As indicated by the notation, the map \(\Theta_{\operatorname{mot}}\) behaves as an analogue on \(\mathcal{M}_{\operatorname{FG}}\) of the topological Sen operator of Theorem 3.1.4; more precisely, it is the effect of the topological Sen operator at the level of the \(E^{2}\)-page of the Adams-Novikov-Bokstedt spectral sequence of Remark 4.1.5. Moreover, the discussion in Recollection 4.1.9 says that \(\Theta_{\operatorname{mot}}\) can be understood as an analogue of the Gauss-Manin connection. **Example 4.1.12**.: The topological Sen operator on \(\operatorname{THH}(\mathbf{Z}_{p}/J(p))\cong\mathbf{Z}_{p}[x]\) sends \(x^{j}\mapsto jx^{j-1}\), so that the action of the Sen operator is precisely the action of \(\mathbf{G}_{a}^{\sharp}\) on \(\mathbf{G}_{a}=\operatorname{Spec}\mathbf{Z}_{p}[x]\) given by \(\partial_{x}:\mathbf{Z}_{p}[x]\to\mathbf{Z}_{p}[x]\). Therefore, there is a \(p\)-complete graded isomorphism \(\operatorname{gr}_{\operatorname{ev}}^{\bullet}\operatorname{THH}(\mathbf{Z} _{p})\cong\Gamma(\mathbf{G}_{a}/\mathbf{G}_{a}^{\sharp};\mathscr{O})\). In the same way, one can argue that there is a \(p\)-complete isomorphism \(\operatorname{gr}_{\operatorname{ev}}^{\bullet}\operatorname{THH}(\mathbf{Z} _{p})^{t\mathbf{Z}/p}\cong\Gamma(\mathbf{G}_{m}/\mathbf{G}_{m}^{\sharp}; \mathscr{O})\). This perspective is related to the stacky approach to Hodge-Tate cohomology a la [10, 11] in the following way. By [10, Proposition 3.5.1], there is an isomorphism \(\mathbf{G}_{a}/\mathbf{G}_{a}^{\sharp}\cong\mathbf{G}_{a}^{\operatorname{dR}}\); similarly, \(\mathbf{G}_{m}/\mathbf{G}_{m}^{\sharp}\cong\mathbf{G}_{m}^{\operatorname{dR}}\). Therefore: \[\operatorname{gr}_{\operatorname{ev}}^{\bullet}\operatorname{THH}( \mathbf{Z}_{p}) \cong\Gamma(\mathbf{G}_{a}^{\operatorname{dR}};\mathscr{O}), \tag{51}\] \[\operatorname{gr}_{\operatorname{ev}}^{\bullet}\operatorname{THH}( \mathbf{Z}_{p})^{t\mathbf{Z}/p} \cong\Gamma(\mathbf{G}_{m}^{\operatorname{dR}};\mathscr{O}). 
\tag{50}\] Since \(\operatorname{gr}_{\operatorname{ev}}^{\bullet}\operatorname{THH}(\mathbf{Z} _{p})^{t\mathbf{Z}/p}\) is supposed to arise as the cohomology of the total space \(\operatorname{Tot}(\mathscr{O}_{\operatorname{WCarl}^{\operatorname{HT}}}\{1\})\) of the Breuil-Kisin twisting line bundle \(\mathscr{O}_{\operatorname{WCarl}^{\operatorname{HT}}}\{1\}\) over \(\operatorname{WCarl}^{\operatorname{HT}}\), the isomorphism (51) suggests that \(\operatorname{Tot}(\mathscr{O}_{\operatorname{WCarl}^{\operatorname{HT}}}\{1\}) \cong\mathbf{G}_{m}^{\operatorname{dR}}\). In turn, this suggests that \(\operatorname{WCarl}^{\operatorname{HT}}\) should be \(\mathbf{G}_{m}^{\operatorname{dR}}/\mathbf{G}_{m}=B\mathbf{G}_{m}^{\sharp}\). This is indeed true: it is precisely [11, Theorem 3.4.13]. Similarly, \(\operatorname{gr}_{\operatorname{ev}}^{\bullet}\operatorname{THH}(\mathbf{Z} _{p})\) is supposed to arise as the cohomology of the total space of the Breuil-Kisin twisting line bundle over the "extended Hodge-Tate locus" \(\Delta_{0}^{\prime}\) in Drinfeld's \(\Sigma^{\prime}\). (The stack \(\Delta_{0}^{\prime}\) is defined in [10, Section 5.10.1].) In [11], the stack \(\Sigma^{\prime}\) is denoted by \(\operatorname{Spf}(\mathbf{Z}_{p})^{\mathscr{N}}\), and one might therefore denote \(\Delta_{0}^{\prime}\) by \(\operatorname{Spf}(\mathbf{Z}_{p})^{\mathscr{N},\operatorname{HT}}\). The isomorphism (50) then suggests that the total space of the Breuil-Kisin line bundle over \(\operatorname{Spf}(\mathbf{Z}_{p})^{\mathscr{N},\operatorname{HT}}\) is \(\mathbf{G}_{a}^{\operatorname{dR}}\), which in turn suggests that \(\operatorname{Spf}(\mathbf{Z}_{p})^{\mathscr{N},\operatorname{HT}}\) should be \(\mathbf{G}_{a}^{\operatorname{dR}}/\mathbf{G}_{m}\cong\mathbf{G}_{a}/( \mathbf{G}_{a}^{\sharp}\rtimes\mathbf{G}_{m})\). This is indeed true: it is precisely [10, Lemma 5.12.4]. Had we worked with the evenly faithfully flat cover \(\operatorname{gr}_{\operatorname{ev}}^{\bullet}\operatorname{THH}(\mathbf{Z} _{p})\to\operatorname{gr}_{\operatorname{ev}}^{\bullet}\operatorname{THH}( \mathbf{Z}_{p}/S[t])\) (where \(t\mapsto p\)) instead, the stack associated to the even filtration on \(\operatorname{THH}(\mathbf{Z}_{p})\) would in fact be presented by (and is therefore isomorphic to) \(\mathbf{G}_{a}^{\operatorname{dR}}/\mathbf{G}_{m}\). **Variant 4.1.13**.: One can also study the stack \(\mathcal{M}_{J(p)}\) associated to the \(\mathbf{E}_{2}^{\operatorname{fr}}\)-ring \(J(p)\). It is not difficult to show that the morphism \(\mathcal{M}_{J(p)}\to\mathcal{M}_{\operatorname{FG}}\) exhibits \(\mathcal{M}_{J(p)}\) as a \(\mathbf{G}_{m}\)-bundle over \(\mathcal{M}_{\operatorname{FG}}\); for example, the fiber product \(\operatorname{Spec}(\operatorname{MU}_{*})/\mathbf{G}_{m}\times_{\mathcal{M}_{ \operatorname{FG}}}\mathcal{M}_{J(p)}\) is isomorphic to \(\operatorname{Spec}(\pi_{*}(\operatorname{MU}\otimes J(p)))/\mathbf{G}_{m}\), but there is an equivalence of \(\mathbf{E}_{2}\)-MU-algebras \(\operatorname{MU}\otimes J(p)\simeq\operatorname{MU}[t^{\pm 1}]\) with \(|t|=0\). Since \(\mathcal{M}_{J(p)}\) is a \(\mathbf{G}_{m}\)-bundle over \(\mathcal{M}_{\mathrm{FG}}\), descent in Hochschild homology is controlled by a Gauss-Manin connection. 
If \(Y\) is a scheme and \(q:X\to\mathbf{G}_{m}\times Y\) is a morphism, then there is a cofiber sequence \[\mathrm{dR}^{*}_{X/Y}\to\mathrm{dR}^{*}_{X/\mathbf{G}_{m}\times Y}\xrightarrow{ \nabla}\mathrm{dR}^{*}_{X/\mathbf{G}_{m}\times Y}d\mathrm{log}(t).\] If \(X\) is a stack over \(\mathcal{M}_{J(p)}\), we then obtain a cofiber sequence \[\mathrm{HH}(X/\mathcal{M}_{\mathrm{FG}})\to\mathrm{HH}(X/\mathcal{M}_{J(p)}) \xrightarrow{\Theta_{\mathrm{mot}}}\Sigma^{2,1}\mathrm{HH}(X/\mathcal{M}_{J(p)})\] of quasicoherent sheaves on \(X\). This is an analogue on \(\mathcal{M}_{\mathrm{FG}}\) of the topological Sen operator of (17). **Remark 4.1.14**.: Suppose that \(T(1)\) admits the structure of an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring (this is true at \(p=2\)). The unit map on \(T(1)\) defines a map \(\mathrm{TP}(\mathbf{Z}_{p})\to\mathrm{TP}(\mathbf{Z}_{p}/T(1))\). Since \(\mathrm{TP}(\mathbf{Z}_{p}/T(1))\) is concentrated in even degrees by Theorem 2.2.4, one can define the motivic filtration on \(\mathrm{TP}(\mathbf{Z}_{p}/T(1))\) using the double-speed Postnikov filtration. Under the isomorphism \(\pi_{*}\mathrm{TP}(\mathbf{Z}_{p}/T(1))\cong\pi_{*}\mathrm{BP}\langle 1\rangle^{tS^{ 1}}\cong\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket^{tS^{1}}\), one can view \(\mathrm{gr}^{0}\) of the motivic filtration \(\mathrm{TP}(\mathbf{Z}_{p}/T(1))\) as \(\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket\). Recall that \(\mathrm{TP}(\mathbf{Z}_{p})\) is a homotopical analogue of the Cartier-Witt stack \(\mathrm{WCart}_{\mathbf{Z}_{p}}\) from [1]. One can then view the map \(\mathrm{TP}(\mathbf{Z}_{p})\to\mathrm{TP}(\mathbf{Z}_{p}/T(1))\) as an analogue of the following map induced by the \(q\)-de Rham point: \[\rho_{\widetilde{p}\mathrm{dR}}:\mathrm{Spf}\,\mathbf{Z}_{p}\llbracket \widetilde{p}\rrbracket\cong(\mathrm{Spf}\,\mathbf{Z}_{p}\llbracket q-1 \rrbracket)/\mathbf{F}_{p}^{\times}\to(\mathrm{Spf}\,\mathbf{Z}_{p}\llbracket q -1\rrbracket)/\mathbf{Z}_{p}^{\times}\xrightarrow{\rho_{\mathrm{qdR}}}\mathrm{WCart }_{\mathbf{Z}_{p}}.\] This map classifies the prism \((\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket,(\widetilde{p}))\), and can reasonably be called the \(\widetilde{p}\)-de Rham point. As explained in the end of the introduction to [11], one hopes that the unit map \(S^{0}\to\mathrm{TP}(\mathbf{Z}_{p})\) induces the map \(\mathrm{WCart}_{\mathbf{Z}_{p}}\to\mathcal{M}_{\mathrm{FG}}\) classifying Drinfeld's formal group over \(\mathrm{WCart}_{\mathbf{Z}_{p}}=\Sigma\) from [1] on the associated graded of the motivic filtration. If Conjecture 2.2.18 were true (i.e., there is an equivalence \(\mathrm{TP}(\mathbf{Z}_{p}/T(1))\simeq\mathrm{BP}\langle 1\rangle^{tS^{1}}\) of spectra), the resulting unit map \(S^{0}\to\mathrm{TP}(\mathbf{Z}_{p}/T(1))\to\mathrm{BP}\langle 1\rangle^{tS^{1}}\) would just be the unit of the \(\mathbf{E}_{\infty}\)-ring \(\mathrm{BP}\langle 1\rangle^{tS^{1}}\). Since \(\mathrm{BP}\langle 1\rangle\) is complex-oriented, the formal group over \(\pi_{0}\mathrm{BP}\langle 1\rangle^{tS^{1}}\cong\mathbf{Z}_{p}\llbracket \widetilde{p}\rrbracket\) must be isomorphic to the formal group of \(\mathrm{BP}\langle 1\rangle\), i.e., the \(p\)-typification of the multiplicative formal group. 
In particular, the aforementioned expectation about the formal group over \(\mathrm{WCart}_{\mathbf{Z}_{p}}\) and its relation to \(\mathrm{TP}(\mathbf{Z}_{p})\) would predict that the pullback of Drinfeld's formal group over \(\mathrm{WCart}_{\mathbf{Z}_{p}}\) along the map \(\rho_{\widetilde{p}\mathrm{dR}}\) is the \(p\)-typification of the multiplicative formal group over \(\mathbf{Z}_{p}\llbracket\widetilde{p}\rrbracket\). This is indeed true, and was proved in [1, Section 2.10.6]. This lends further evidence to the idea that the map \(\mathrm{TP}(\mathbf{Z}_{p})\to\mathrm{TP}(\mathbf{Z}_{p}/T(1))\) is a homotopical analogue of the \(\widetilde{p}\)-de Rham point of \(\mathrm{WCart}_{\mathbf{Z}_{p}}\). ### Comparing \(\mathrm{THH}\) relative to \(T(n)\) and \(T(n+1)\) Recall from Theorem 2.2.4 that \(\pi_{*}(\mathrm{THH}(\mathrm{BP}\langle n\rangle/T(n+1))\otimes_{\mathrm{BP} \langle n\rangle}\mathrm{BP}\langle n-1\rangle)\) is (additively) equivalent to the "subalgebra" \(\mathrm{BP}\langle n-1\rangle_{*}[\theta_{n-1}^{p}]\) of \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T(n))\cong\mathrm{BP} \langle n-1\rangle_{*}[\theta_{n-1}]\). This picture has an analogue over \(\mathcal{M}_{\mathrm{FG}}\), as we now explain. We first need a simple calculation. **Remark 4.2.1**.: Let \(R\) be a commutative ring, and let \(x\in R\) be a regular element. Then there is a \(p\)-completed equivalence \(\mathrm{dR}^{*}_{R[t]/x/R}\simeq R[t]\langle x^{\prime}\rangle/x\otimes_{R[t]/ x}\Lambda_{R[t]/x}(dt)\) with \(|x^{\prime}|=0\). Indeed, this follows from combining the observation that \(R[t]/x\cong R[t]\otimes_{R}R/x\) with the following \(p\)-completed equivalences: \(\mathrm{dR}^{*}_{R[t]/R}\simeq\Lambda_{R[t]}(dt)\) \(\mathrm{dH}^{*}_{R/x/R}\simeq R\langle x^{\prime}\rangle/x\). Similarly, there is an equivalence \(\mathrm{HH}(R[t]/x/R)\simeq R[t][S^{1}\times\mathbf{C}P^{\infty}]/x\). **Example 4.2.2**.: Let \(i_{n-1}:\mathscr{Z}(v_{n-1})\hookrightarrow\mathcal{M}_{T(n)}\) denote the closed substack cut out by the global section \(v_{n-1}\in\mathrm{H}^{0}(\mathcal{M}_{T(n)};\mathscr{O}_{\mathcal{M}_{T(n)}})\). If \(f:X\rightarrow\mathcal{M}_{T(n)}\) is a stack over \(\mathcal{M}_{T(n)}\), let \(X^{v_{n-1}=0}\) denote the pullback of \(X\) along \(i_{n-1}\), and let \(f:X^{v_{n-1}=0}\rightarrow\mathscr{Z}(v_{n-1})\) denote the structure morphism. Then \(i_{n-1}^{*}\mathrm{HH}(X/\mathcal{M}_{T(n)})=\mathrm{HH}(X^{v_{n-1}=0}/ \mathscr{Z}(v_{n-1}))\). In the case \(X=\mathrm{Spec}(\mathrm{BP}\langle n\rangle_{*})/\mathbf{G}_{m}\), there is an isomorphism \(X^{v_{n-1}=0}=\mathrm{Spec}(\mathrm{BP}\langle n-1\rangle_{*})/\mathbf{G}_{m}\). We will now relate \(\mathrm{HH}(X^{v_{n-1}=0}/\mathscr{Z}(v_{n-1}))\) to \(\mathrm{HH}(X^{v_{n-1}=0}/\mathcal{M}_{T(n-1)})\) by calculating \(\mathrm{HH}(\mathscr{Z}(v_{n-1})/\mathcal{M}_{T(n-1)})\). Recall from Example 4.1.11 that there is a \(\mathbf{G}_{a}\)-bundle \(\mathcal{M}_{T(n)}\rightarrow\mathcal{M}_{T(n-1)}\). Note that \(L_{\mathcal{M}_{T(n)}/\mathcal{M}_{T(n-1)}}\) is a free \(\mathscr{O}_{\mathcal{M}_{T(n)}}\)-module of rank 1 generated by a class \(d(t_{n})\), and that \(L_{\mathscr{Z}(v_{n-1})/\mathcal{M}_{T(n)}}\) is a free \(\mathscr{O}_{\mathscr{Z}(v_{n-1})}\)-module of rank 1 generated by a class \(\sigma^{2}(v_{n-1})\). 
Applying Remark 4.2.1, we find that \[\pi_{*}\mathrm{HH}(\mathscr{Z}(v_{n-1})/\mathcal{M}_{T(n-1)})\cong\mathscr{O} _{\mathscr{Z}(v_{n-1})}\langle\sigma^{2}(v_{n-1})\rangle\otimes_{\mathscr{O}_ {\mathscr{Z}(v_{n-1})}}\Lambda_{\mathscr{O}_{\mathscr{Z}(v_{n-1})}}(dt_{n}). \tag{52}\] We therefore see that \(\mathrm{HH}(X^{v_{n-1}=0}/\mathcal{M}_{T(n-1)})\) a subquotient of the tensor product of \(\mathrm{HH}(X^{v_{n-1}=0}/\mathscr{Z}(v_{n-1}))\) and \(f^{*}\mathrm{HH}(\mathscr{Z}(v_{n-1})/\mathcal{M}_{T(n-1)})\cong\mathscr{O}_ {X^{v_{n-1}=0}}\langle\sigma(v_{n-1})\rangle[dt_{n}]/(dt_{n})^{2}\). Let us now take \(f\) to be the morphism \(\mathrm{Spec}(\mathrm{BP}\langle n-1\rangle_{*})/\mathbf{G}_{m}\rightarrow \mathcal{M}_{T(n)}\). The \(E^{2}\)-page of the Adams-Novikov-Bokstedt spectral sequence for \(\mathrm{THH}(\mathrm{BP}\langle n-2\rangle/T(n-1))\) is given by \[E^{2}_{*,*}=\mathrm{BP}\langle n-2\rangle_{*}\langle\sigma^{2}v_{j}|j\geq n-1 \rangle\otimes_{\mathrm{BP}(n-1)_{*}}\Lambda_{\mathrm{BP}\langle n-1\rangle_{* }}(dt_{j}|j\geq n),\] and the extensions on the \(E^{\infty}\)-page are given by \((\sigma^{2}v_{j})^{p^{n-j}}=\sigma^{2}v_{n}\). The above discussion therefore shows that \(f^{*}\mathrm{HH}(\mathscr{Z}(v_{n-1})/\mathcal{M}_{T(n-1)})\) precisely detects the "bottom piece" of this \(E^{2}\)-page, i.e., the subalgebra \(\mathrm{BP}\langle n-2\rangle_{*}\langle\sigma^{2}v_{n-1}\rangle\otimes_{ \mathrm{BP}\langle n-2\rangle_{*}}\Lambda_{\mathrm{BP}\langle n-2\rangle_{*} }(dt_{n})\). Therefore, the preceding calculation of \(\mathrm{HH}(\mathscr{Z}(v_{n-1})/\mathcal{M}_{T(n-1)})\) gives one explanation for why \(\pi_{*}(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T(n))\otimes_{\mathrm{BP} \langle n-1\rangle}\mathrm{BP}\langle n-2\rangle)\) is (additively) equivalent to the "subalgebra" \(\mathrm{BP}\langle n-2\rangle_{*}[\theta_{n-2}^{p}]\) of \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-2\rangle/T(n-1))\cong\mathrm{BP} \langle n-2\rangle_{*}[\theta_{n-2}]\). **Remark 4.2.3**.: We can extend the analysis of Example 4.2.2 further. Let \(0\leq j\leq n-1\), and let \(i_{j,\cdots,n-1}:\mathscr{Z}(v_{[j,n)})\hookrightarrow\mathcal{M}_{T(n)}\) denote the closed substack cut out by the global sections \(v_{j},\cdots,v_{n-1}\in\mathrm{H}^{0}(\mathcal{M}_{T(n)};\mathscr{O}_{ \mathcal{M}_{T(n)}})\). If \(f:X\rightarrow\mathcal{M}_{T(n)}\) is a stack over \(\mathcal{M}_{T(n)}\), let \(X^{v_{j},\cdots,v_{n-1}=0}\) denote the pullback of \(X\) along \(i\), and let \(f:X^{v_{j},\cdots,v_{n-1}=0}\rightarrow\mathscr{Z}(v_{[j,n)})\) denote the structure morphism. Then \(i_{j,\cdots,n-1}^{*}\mathrm{HH}(X/\mathcal{M}_{T(n)})\) is equivalent to \(\mathrm{HH}(X^{v_{j},\cdots,v_{n-1}=0}/\mathscr{Z}(v_{[j,n)}))\). In the case \(X=\mathrm{Spec}(\mathrm{BP}\langle n-1\rangle_{*})/\mathbf{G}_{m}\), there is an isomorphism \(X^{v_{j},\cdots,v_{n-1}=0}=\mathrm{Spec}(\mathrm{BP}\langle j-1\rangle_{*})/ \mathbf{G}_{m}\). We can now relate \(\mathrm{HH}(X^{v_{j},\cdots,v_{n-1}=0}/\mathscr{Z}(v_{[j,n)}))\) to \(\mathrm{HH}(X^{v_{j},\cdots,v_{n-1}=0}/\mathcal{M}_{T(j)})\) by calculating \(\mathrm{HH}(\mathscr{Z}(v_{[j,n)})/\mathcal{M}_{T(j)})\). 
We claim that there is an isomorphism \[\mathrm{HH}(\mathscr{Z}(v_{[j,n)})/\mathcal{M}_{T(j)})\cong\mathscr{O}_{\mathscr{Z} (v_{[j,n)})}\langle\sigma(v_{i})|j\leq i\leq n-1\rangle\otimes_{\mathscr{O}_{ \mathscr{Z}(v_{[j,n)})}}\Lambda_{\mathscr{O}_{\mathscr{Z}(v_{[j,n)})}}(dt_{i}|j +1\leq i\leq n)\] To prove this, we will use descending induction on \(j\); the base case \(j=n-1\) was studied in Example 4.2.2. For the inductive step, suppose we know the result for \(j+1\). Let \(i_{j}:\mathscr{Z}(v_{[j,n)})\rightarrow\mathscr{Z}(v_{j+1},\cdots,v_{n-1})\) denote the closed substack cut out by \(v_{j}\). Then there are isomorphisms \[\operatorname{HH}(\mathscr{Z}(v_{[j,n)})/\mathcal{M}_{T(j+1)}^{v_{j} =0} \cong i_{j}^{*}\mathrm{HH}(\mathscr{Z}(v_{j+1},\cdots,v_{n-1})/ \mathcal{M}_{T(j+1)})\] \[\cong\mathscr{O}_{\mathscr{Z}(v_{[j,n)})}\langle\sigma^{2}(v_{i}) |j+1\leq i\leq n-1\rangle\otimes_{\mathscr{O}_{\mathscr{Z}(v_{[j,n)})}}\Lambda_{ \mathscr{O}_{\mathscr{Z}(v_{[j,n)})}}(dt_{i}|j+2\leq i\leq n)\] Recall that Example 4.2.2 gives an isomorphism between \(\operatorname{HH}(\mathcal{M}_{T(j+1)}^{v_{j}=0}/\mathcal{M}_{T(j)})\) and \(\mathscr{O}_{\mathcal{M}_{T(j+1)}^{v_{j}=0}}\langle\sigma^{2}(v_{j})\rangle \otimes_{\mathscr{O}_{\mathcal{M}_{T(j+1)}^{v_{j}=0}}}\Lambda_{\mathscr{O}_{ \mathcal{M}_{T(j+1)}^{v_{j}=0}}}(dt_{j+1})\). The desired calculation of \(\operatorname{HH}(\mathscr{Z}(v_{[j,n)})/\mathcal{M}_{T(j)})\) is now a simple computation with the transitivity sequence for the composite \[\mathscr{Z}(v_{[j,n)})\to\mathcal{M}_{T(j+1)}^{v_{j}=0}\to\mathcal{M}_{T(j)}.\] Let \(X=\operatorname{Spec}(\mathrm{BP}(j-1)_{*})/\mathbf{G}_{m}\), and let \(f:X\to\mathscr{Z}(v_{[j,n)})\) be the structure map. Then the above discussion implies that \(\operatorname{HH}(\operatorname{Spec}(\mathrm{BP}\langle j-1\rangle_{*})/ \mathbf{G}_{m}/\mathcal{M}_{T(j)})\) is isomorphic to the tensor product of \(f^{*}\mathrm{HH}(\mathscr{Z}(v_{[j,n)})/\mathcal{M}_{T(j)})\) and \(\operatorname{HH}(\operatorname{Spec}(\mathrm{BP}\langle j-1\rangle_{*})/ \mathbf{G}_{m}/\mathscr{Z}(v_{[j,n)}))\). This gives the \(E^{2}\)-page of the Adams-Novikov-Bokstedt spectral sequence computing \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle j-1\rangle/T(j))\) (see Remark 4.1.5), and one can run this spectral sequence as in Proposition 2.2.14. If \(\mathrm{BP}\langle j-1\rangle\) admits the structure of an \(\mathbf{E}_{3}\)-algebra, there are extensions \(\sigma^{2}(v_{i})^{p}=\sigma^{2}(v_{i+1})\) modulo decomposables on the \(E^{\infty}\)-page of this spectral sequence. Let \(T(n)/v_{[j,n)}\) denote \(T(n)/(v_{j},\cdots,v_{n-1})\). Since \(\theta_{j}\in\pi_{*}\mathrm{THH}(\mathrm{BP}\langle j-1\rangle/T(j))\) is represented by \(\sigma^{2}(v_{j})\), we find that \(\operatorname{THH}(T(n)/v_{[j,n)}/T(j))\) is (additively) equivalent to as \(T(n)[\theta_{j}]/(v_{[j,n)},\theta_{j}^{p^{n-j}})\). (See Remark 4.2.5 for a more topological perspective on this observation.) This discussion provides an algebraic perspective on why \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T(n))/v_{[j,n)}\) is (additively) equivalent to as the "subalgebra" of \(\pi_{*}\mathrm{THH}(\mathrm{BP}\langle j-1\rangle/T(j))\) generated by \(\theta_{j}^{p^{n-j}}\). **Remark 4.2.4**.: In topology, Example 4.2.2 plays out as follows, if we assume18 Conjecture 2.1.9. Let \(n\geq 1\). 
We begin by observing that \(T(n)/v_{n-1}\) is the Thom spectrum of an \(\mathbf{E}_{1}\) map \(\mu:\Omega J_{p-1}(S^{2p^{n-1}})\to\mathrm{BGL}_{1}(T(n-1))\); in particular, \(T(n)/v_{n-1}\) admits the structure of an \(\mathbf{E}_{1}\)-ring. To see this, we first define the map \(\mu\) as follows. There is a map \(S^{2p^{n-1}}\to B^{2}\mathrm{GL}_{1}(X(p^{n}-1))\) which detects the class \(v_{n-1}\in\pi_{2p^{n-1}-2}X(p^{n}-1)\), which naturally extends to a map \(J_{p-1}(S^{2p^{n-1}})\to B^{2}\mathrm{GL}_{1}(X(p^{n}-1))\) since we are working \(p\)-locally. Therefore, we obtain an \(\mathbf{E}_{1}\)-map \(\Omega J_{p-1}(S^{2p^{n-1}})\to\mathrm{BGL}_{1}(X(p^{n}-1))\). The projection \(X(p^{n}-1)\to T(n-1)\) is a map of \(\mathbf{E}_{2}\)-rings by Conjecture 2.1.9, and therefore induces an \(\mathbf{E}_{1}\)-map \(\mathrm{BGL}_{1}(X(p^{n}-1))\to\mathrm{BGL}_{1}(T(n-1))\). Composition with the \(\mathbf{E}_{1}\)-map \(\Omega J_{p-1}(S^{2p^{n-1}})\to\mathrm{BGL}_{1}(X(p^{n}-1))\) produces the desired map \(\mu\). The fact that the Thom spectrum of \(\mu\) can be identified with \(T(n)/v_{n-1}\) can be proved directly using (54) below. It follows from this discussion that there is an equivalence Footnote 18: There is an unconditional variant of the following discussion, obtained by replacing \(T(n)\) with \(X(p^{n+1}-1)\). However, this comes at the cost of adding the spaces \(\Delta_{n}\) into the mix. \[\mathrm{THH}(T(n)/v_{n-1}/T(n-1))\simeq T(n)[J_{p-1}(S^{2p^{n-1}})]/v_{n-1}.\] Moreover, under the equivalence \(\mathrm{THH}(\mathrm{BP}\langle n-2\rangle/T(n-1))\simeq\mathrm{BP}\langle n-2 \rangle[\Omega S^{2p^{n-1}+1}]\) of Theorem 2.2.4(a), the map \(\mathrm{THH}(T(n)/v_{n-1}/T(n-1))\to\mathrm{THH}(\mathrm{BP}\langle n-2 \rangle/T(n-1))\) induced by the map \(T(n)/v_{n-1}\to\mathrm{BP}\langle n-2\rangle\) is given by the skeletal inclusion of \(J_{p-1}(S^{2p^{n-1}})\to\Omega S^{2p^{n-1}+1}\). The projection \(\mathrm{THH}(\mathrm{BP}\langle n-2\rangle/T(n-1))\to\Omega S^{2p^{n-1}+1}\) \({\rm THH}({\rm BP}\langle n-1\rangle/T(n))/v_{n-1}\) can be identified with the effect on \({\rm BP}\langle n-2\rangle\)-chains of the James-Hopf map \(\Omega S^{2p^{n-1}+1}\to\Omega S^{2p^{n}+1}\). Therefore, the EHP sequence \[J_{p-1}(S^{2p^{n-1}})\to\Omega S^{2p^{n-1}+1}\to\Omega S^{2p^{n}+1}\] shows that \({\rm THH}({\rm BP}\langle n-1\rangle/T(n))/v_{n-1}\) is (additively) equivalent to precisely as the "subalgebra" of \({\rm THH}({\rm BP}\langle n-2\rangle/T(n-1))\) generated by \(\theta_{n-1}^{p}\). The above calculation of \({\rm THH}(T(n)/v_{n-1}/T(n-1))\) is a topological incarnation of the calculation of \({\rm HH}({\mathscr{Z}}(v_{n-1})/{\mathcal{M}}_{T(n-1)})\) in Example 4.2.2. Indeed, the Adams-Novikov-Bokstedt spectral sequence (see Remark 4.1.5) runs \[E_{*,*}^{2}=\pi_{*}{\rm HH}({\mathscr{Z}}(v_{n-1})/{\mathcal{M}}_{T(n-1)}) \Rightarrow\pi_{*}{\rm gr}_{\rm ev}^{\bullet}{\rm THH}(T(n)/v_{n-1}/T(n-1)), \tag{53}\] and the \(E^{2}\)-page is given by (52). Again, one can establish analogues of the Bokstedt differentials (6) in the Adams-Novikov-Bokstedt spectral sequence, and thereby obtain an alternative approach to the above calculation of \({\rm THH}(T(n)/v_{n-1}/T(n-1))\). **Remark 4.2.5**.: Let us continue to assume Conjecture 2.1.9, and let \(0\leq j\leq n-1\). Recall that \(T(n)/v_{[j,n)}\) denote \(T(n)/(v_{j},\cdots,v_{n-1})\). 
Recall that \[{\rm H}_{*}(T(n)/v_{[j,n)};{\bf F}_{p})=\begin{cases}{\bf F}_{2}[\zeta_{1}^{2},\cdots,\zeta_{j}^{2},\zeta_{j+1},\cdots,\zeta_{n}]&p=2,\\ {\bf F}_{p}[\zeta_{1},\cdots,\zeta_{n}]\otimes\Lambda_{{\bf F}_{p}}(\tau_{j}, \cdots,\tau_{n-1})&p>2.\end{cases} \tag{54}\] It is natural to ask if the discussion in Remark 4.2.4 extends to a description of \({\rm THH}(T(n)/v_{[j,n)}/T(j))\), paralleling Remark 4.2.3. This is an ill-posed question, since it is not clear that \(T(n)/v_{[j,n)}\) admits the structure of an \({\bf E}_{1}\)-algebra. Nevertheless, if \(T(n)/v_{[j,n)}\) did admit the structure of an \({\bf E}_{1}\)-\(T(j)\)-algebra, then an analysis similar to Theorem 2.2.4 shows that \[{\rm THH}(T(n)/v_{[j,n)}/T(j))\simeq T(n)[J_{p^{n-j}-1}(S^{2p^{j}})]/v_{[j,n)}.\] This is the topological analogue of the calculation of Remark 4.2.3. Under the equivalence \({\rm THH}({\rm BP}\langle j-1\rangle/T(j))\simeq{\rm BP}\langle j-1\rangle[ \Omega S^{2p^{j}+1}]\) of Theorem 2.2.4(a), the map \({\rm THH}(T(n)/v_{[j,n)}/T(j))\to{\rm THH}({\rm BP}\langle j-1\rangle/T(j))\) induced by the map \(T(n)/v_{[j,n)}\to{\rm BP}\langle j-1\rangle\) is given by the skeletal inclusion of \(J_{p^{n-j}-1}(S^{2p^{j}})\to\Omega S^{2p^{j}+1}\). The projection \[{\rm THH}({\rm BP}\langle j-1\rangle/T(j))\to{\rm THH}({\rm BP}\langle n-1 \rangle/T(n))/v_{[j,n)}\] can be identified with the effect on \({\rm BP}\langle j-1\rangle\)-chains of the James-Hopf map \(\Omega S^{2p^{j}+1}\to\Omega S^{2p^{n}+1}\). Therefore, the EHP sequence \[J_{p^{n-j}-1}(S^{2p^{j}})\to\Omega S^{2p^{j}+1}\to\Omega S^{2p^{n}+1}\] shows that \(\pi_{*}{\rm THH}({\rm BP}\langle n-1\rangle/T(n))/v_{[j,n)}\) is (additively) equivalent to precisely the "subalgebra" of \(\pi_{*}{\rm THH}({\rm BP}\langle j-1\rangle/T(j))\) generated by \(\theta_{j}^{p^{n-j}}\). Since \({\rm THH}(T(n)/v_{[j,n)}/T(j))\simeq T(n)[J_{p^{n-j}-1}(S^{2p^{j}})]/v_{[j,n)}\), one expects \(T(n)/v_{[j,n)}\) to have an \({\bf E}_{1}\)-cell structure over \(T(j)\) described by the cell structure of \(J_{p^{n-j}-1}(S^{2p^{j}})\). Although we do not know how to prove this unconditionally, it is not difficult to show if we further assume [10, Conjectures D and E]. In this case, [10, Corollary B] says that there is a map \(f:\Omega^{2}S^{2p^{j}+1}\to{\rm BGL}_{1}(T(j))\) which detects \(v_{j}\in\pi_{2p^{j}-2}T(j)\) on the bottom cell of the source, such that the Thom spectrum of \(f\) is a form of \(\mathrm{BP}\langle j-1\rangle\). Let19\(f_{n,j}:\Omega J_{p^{n-j}-1}(S^{2p^{j}})\to\mathrm{BGL}_{1}(T(j))\) denote the composite of \(f\) with the \(\mathbf{E}_{1}\)-map \(\Omega J_{p^{n-j}-1}(S^{2p^{j}})\to\Omega^{2}S^{2p^{j}+1}\). Then the Thom spectrum of \(f_{n,j}\) is equivalent to \(T(n)/v_{[j,n)}\) as a \(T(j)\)-module. This is not quite an "\(\mathbf{E}_{1}\)-cell structure" for \(T(n)/v_{[j,n)}\), since \(f_{n,j}\) is not an \(\mathbf{E}_{1}\)-map; nevertheless, this construction of \(T(n)/v_{[j,n)}\) suffices to calculate \(\mathrm{THH}(T(n)/v_{[j,n)}/T(j))\). Footnote 19: In Remark 4.2.4, we described the map \(f_{n,n-1}\) without assuming [4, Conjectures D and E]. It is generally not possible to describe \(f_{n,j}\) similarly if \(j<n-1\): although there is a map \(S^{2p^{j}}\to B^{2}\mathrm{GL}_{1}(T(j))\) which detects \(v_{j}\in\pi_{2p^{j}-2}T(j)\), there are \(p\)-local obstructions to extending along \(S^{2p^{j}}\to J_{p^{n-j}-1}(S^{2p^{j}})\) if \(n-j>1\). 
These obstructions can be viewed as the \(\mathbf{E}_{2}\)-Browder brackets on \(v_{j}\); [4, Conjecture E] implies that these Browder brackets can be compatibly trivialized.

**Example 4.2.6**.: If \(R\) is an \(\mathbf{E}_{2}\)-algebra which is an \(\mathbf{E}_{1}\)-\(T(n)\)-algebra, one can loosely interpret the above discussion as saying that the square (55) exhibits the top-right corner as the "tensor product of the top-left and bottom-right corners". Note that the homotopy of the top-left corner is \(R[\theta_{j}]/\theta_{j}^{p^{n-j}}\). The bottom-right corner should be thought of as \(\mathrm{THH}(R/v_{[j,n)}/T(n)/v_{[j,n)})\), although it is difficult to make this picture precise (since \(T(n)/v_{[j,n)}\) does not admit the structure of an \(\mathbf{E}_{2}\)-algebra). For instance, if \(R=\mathrm{BP}\langle n-1\rangle\), then the square (55) says that the square (56) exhibits the top-right corner as the tensor product of the top-left and bottom-right corners. This is essentially the observation that the map \(\mathrm{THH}(\mathrm{BP}\langle j-1\rangle/T(j))\to\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T(n))/v_{[j,n)}\) sends \(\theta_{j}^{m}\mapsto 0\) unless \(p^{n-j}|m\), in which case \(\theta_{j}^{m}\mapsto\theta_{n}^{m/p^{n-j}}\). This therefore explains the similarity between \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle/T(n))\) and \(\mathrm{THH}(\mathrm{BP}\langle j-1\rangle/T(j))\) given by Theorem 2.2.4(a).

**Remark 4.2.7**.: As mentioned before, the lack of structure on the objects involved above makes it difficult to use the above picture to understand the multiplicative structure on \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle)\); but it does point to a plan of attack. Namely, one can attempt to understand the even filtration on \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle)/v_{[0,n)}\) by considering the natural map \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle)/v_{[0,n)}\to\mathrm{THH}(\mathbf{F}_{p})\). It is not hard to see that this map is an eff cover, so that the stack associated to the even filtration on \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle)/v_{[0,n)}\) is the quotient of the scheme associated to the even filtration on \(\mathrm{THH}(\mathbf{F}_{p})\) by a certain group scheme. The scheme associated to the even filtration on \(\mathrm{THH}(\mathbf{F}_{p})\) is precisely \(\mathbf{G}_{a}\), and the above discussion suggests that the stack associated to the even filtration on \(\mathrm{THH}(\mathrm{BP}\langle n-1\rangle)/v_{[0,n)}\) is isomorphic to \({\bf G}_{a}/W[F^{n}]\); this is also suggested by work of Lee in [11]. We hope to study this in future joint work with Jeremy Hahn and Arpon Raksit. To this end, we set up some groundwork for future investigation of this stack in Appendix C, where we study some basic properties of \(W[F^{n}]\).

**Remark 4.2.8**.: The calculation of \({\rm THH}(T(n)/v_{[j,n)}/T(j))\) in Remark 4.2.5 shows that more is true: if \(n\geq k-1\), the structure of \({\rm BP}\langle n\rangle\) as an \({\bf E}_{1}\)-\(X(p^{k})\)-algebra (i.e., \({\rm THH}({\rm BP}\langle n\rangle/T(k))\)) mirrors the structure of \({\rm BP}\langle n-k\rangle\) as an \({\bf E}_{1}\)-algebra over the sphere (i.e., \({\rm THH}({\rm BP}\langle n-k\rangle)\)).

**Remark 4.2.9**.: Note that the Thom spectrum of the map \(f_{n,0}\) has been studied in [10], where it was denoted \(y(n)\).
Just as the \(y(n)\) describe a filtration of \(y(\infty)={\bf F}_{p}\) by \({\bf E}_{1}\)-algebras, the spectra \(T(n)/v_{[j,n)}\) describe a filtration of \({\rm BP}\langle j-1\rangle\). For instance, it is not difficult to show that for \(j\leq k\leq n-1\), the spectrum \(T(n)/v_{[j,n)}\) is \({\rm Tel}(k)\)-acyclic. Therefore, if \(T(n)/v_{[j,n)}\) admits the structure of an \({\bf E}_{1}\)-ring, the same argument as [12, Corollary 4.15] implies that the map \(K(T(n)/v_{[j,n)})\to K({\rm BP}\langle j-1\rangle)\) is a \({\rm Tel}(k)\)-equivalence for \(j\leq k\leq n-1\). Since \(K({\rm BP}\langle j-1\rangle)\) is \({\rm Tel}(k)\)-locally contractible for \(k\geq j+1\), the only interesting case is \(k=j\); in this case, we find that the maps \[K(T(j+1)/v_{j})\to K(T(j+2)/(v_{j},v_{j+1}))\to\cdots\to K({\rm BP}\langle j-1\rangle)\] are all \({\rm Tel}(j)\)-equivalences.

**Remark 4.2.10**.: Since \(T(n)/v_{[j,n)}\) is closely related to \(T(j)\) by Remark 4.2.5, it is natural to wonder if there is a relationship between \(T(n)/v_{[j,n)}\) and \(T(n+k)/v_{[j,n+k)}\), in a manner compatible with their relationship to \(T(j)\). By Remark 4.2.4, \(T(n+k)/v_{[n,n+k)}\) is the Thom spectrum of a map \(\Omega J_{p^{k}-1}(S^{2p^{n}})\to{\rm BGL}_{1}(T(n))\). It follows that if \(T(n)/v_{[j,n)}\) admits the structure of an \({\bf E}_{1}\)-ring, then \(T(n+k)/v_{[j,n+k)}\) is the Thom spectrum of a map \(\Omega J_{p^{k}-1}(S^{2p^{n}})\to{\rm BGL}_{1}(T(n)/v_{[j,n)})\). As mentioned in Remark 4.2.5, if we further assume [13, Conjectures D and E], the spectrum \(T(n)/v_{[j,n)}\) (resp. \(T(n+k)/v_{[j,n+k)}\)) is the Thom spectrum of a map \(\Omega J_{p^{n-j}-1}(S^{2p^{j}})\to{\rm BGL}_{1}(T(j))\) (resp. \(\Omega J_{p^{n+k-j}-1}(S^{2p^{j}})\to{\rm BGL}_{1}(T(j))\)). The relationship between the two presentations of \(T(n+k)/v_{[j,n+k)}\) (as a Thom spectrum over \(T(n)/v_{[j,n)}\) and over \(T(j)\)) is explained by the following observation in unstable homotopy theory: there is a fibration\({}^{20}\) \[J_{p^{m}-1}(S^{2d})\to J_{p^{m+k}-1}(S^{2d})\xrightarrow{H}J_{p^{k}-1}(S^{2dp^{m}}). \tag{57}\] Footnote 20: To construct the fibration (57), recall that there is an EHP sequence \[J_{p^{m}-1}(S^{2d})\to\Omega S^{2d+1}\xrightarrow{H}\Omega S^{2dp^{m}+1}.\] By dimension considerations, the canonical map \(J_{p^{m+k}-1}(S^{2d})\to\Omega S^{2d+1}\) factors through \(J_{p^{m+k}-1}(S^{2d})\to J_{p^{k}-1}(S^{2dp^{m}})\times_{\Omega S^{2dp^{m}+1}}\Omega S^{2d+1}\), and one can easily check that this map is an equivalence. This implies the desired fiber sequence (57).

Indeed, applying (57) when \(m=n-j\) and \(d=p^{j}\), we obtain a fibration of \({\bf E}_{1}\)-spaces: \[\Omega J_{p^{n-j}-1}(S^{2p^{j}})\to\Omega J_{p^{n+k-j}-1}(S^{2p^{j}})\xrightarrow{H}\Omega J_{p^{k}-1}(S^{2p^{n}}).\] The composite of \(f_{n+k,j}:\Omega J_{p^{n+k-j}-1}(S^{2p^{j}})\to{\rm BGL}_{1}(T(j))\) with the map \(\Omega J_{p^{n-j}-1}(S^{2p^{j}})\to\Omega J_{p^{n+k-j}-1}(S^{2p^{j}})\) is \(f_{n,j}:\Omega J_{p^{n-j}-1}(S^{2p^{j}})\to{\rm BGL}_{1}(T(j))\). Therefore, [13, Proposition 2.1.6] implies that there is a map \(f_{n+k,j}:\Omega J_{p^{k}-1}(S^{2p^{n}})\to{\rm BGL}_{1}(T(n)/v_{[j,n)})\) whose Thom spectrum is \(T(n+k)/v_{[j,n+k)}\). This is the desired relationship between the various presentations of \(T(n+k)/v_{[j,n+k)}\).
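As a quick consistency check on the fibration (57): the mod \(p\) homology of the truncated James construction \(J_{r}(S^{2d})\) is free on the classes \(1,x,\cdots,x^{r}\) with \(|x|=2d\), so the fiber, total space, and base of (57) have homologies \[\mathbf{F}_{p}[x]/x^{p^{m}},\qquad\mathbf{F}_{p}[x]/x^{p^{m+k}},\qquad\mathbf{F}_{p}[y]/y^{p^{k}}\qquad(|x|=2d,\ |y|=2dp^{m}),\] and the evident isomorphism of graded \(\mathbf{F}_{p}\)-modules \[\mathbf{F}_{p}[x]/x^{p^{m+k}}\cong\mathbf{F}_{p}[x]/x^{p^{m}}\otimes_{\mathbf{F}_{p}}\mathbf{F}_{p}[y]/y^{p^{k}},\qquad y\mapsto x^{p^{m}},\] is exactly the decomposition predicted by the Serre spectral sequence of (57).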
**Remark 4.2.11**.: Observe that the preceding discussion implies, in particular, that there is a map \(q_{k}:\Omega J_{p^{k}-1}(S^{2p^{n}})\to\operatorname{BGL}_{1}(y(n))\) such that the composite \(S^{2p^{n}-1}\to\Omega J_{p^{k}-1}(S^{2p^{n}})\to\operatorname{BGL}_{1}(y(n))\) detects \(v_{n}\in\pi_{2p^{n}-2}y(n)\), and such that the Thom spectrum of the map \(q_{k}\) is \(y(n+k)\). Taking \(k\to\infty\), this implies that there is a map \(q_{\infty}:\Omega^{2}S^{2p^{n}+1}\to\operatorname{BGL}_{1}(y(n))\) whose Thom spectrum is \(y(\infty)=\mathbf{F}_{p}\). The map \(q_{\infty}\) is adjoint to the \(\mathbf{E}_{1}\)-map \(\Omega^{3}S^{2p^{n}+1}_{+}\to y(n)\) from [11, Section 4.1] which detects \(v_{n}\) on the bottom cell of the source.

## Appendix A Analogues for \(\mathrm{ko}\) and \(\mathrm{tmf}\)

Many of the results from the body of this article extend to the case of \(\mathrm{ko}\) and \(\mathrm{tmf}\). In this section, we will state these results; since the proofs are essentially the same, we will not give arguments unless the situation is substantially different. We will specialize to the case \(p=2\) for simplicity. One of the main observations in Theorem 2.2.4 is that the structure of \(\mathrm{BP}\langle n\rangle\) as an \(\mathbf{E}_{1}\)-algebra over \(X(p^{n})\) (or rather, \(T(n)\)) mirrors the structure of \(\mathbf{Z}_{p}\) as an \(\mathbf{E}_{1}\)-algebra over \(S^{0}\). For \(\mathrm{ko}\) and \(\mathrm{tmf}\), there are analogues of \(X(p^{n})\), which we studied in [20].

**Recollection A.1**.: Let \(A\) denote the free \(\mathbf{E}_{1}\)-algebra \(S/\!\!/\nu\) with a nullhomotopy of \(\nu\), i.e., the Thom spectrum of the \(\mathbf{E}_{1}\)-map \(\Omega S^{5}\to\mathrm{BGL}_{1}(S)\) which detects \(\nu\in\pi_{3}(S)\) on the bottom cell21. This spectrum has the property that \(\mathrm{H}_{*}(A;\mathbf{F}_{2})\cong\mathbf{F}_{2}[\zeta_{1}^{4}]\) (in fact, \(\mathrm{BP}_{*}(A)\cong\mathrm{BP}_{*}[\frac{\eta_{R}(v_{1}^{2})-v_{1}^{2}}{4}]\cong\mathrm{BP}_{*}[t_{1}^{2}+v_{1}t_{1}]\)). There is an \(\mathbf{E}_{1}\)-map \(i:A\to\mathrm{ko}\) such that under the isomorphism \(\mathrm{H}_{*}(\mathrm{ko};\mathbf{F}_{2})\cong\mathbf{F}_{2}[\zeta_{1}^{4},\zeta_{2}^{2},\zeta_{3},\cdots]\), the map \(i\) corresponds to the inclusion of \(\mathbf{F}_{2}[\zeta_{1}^{4}]\). In particular, the map \(A\to\mathrm{ko}\) is an equivalence in dimensions \(\leq 4\). There is in fact an \(\mathbf{E}_{1}\)-map \(A\to\mathrm{MSpin}\), induced from an \(\mathbf{E}_{1}\)-map \(\Omega S^{5}\to\mathrm{BSpin}\). There is also an \(\mathbf{E}_{1}\)-map \(A\to X(2)=T(1)\), such that \(T(1)\simeq A\otimes C\eta\). We note that the "\(Q_{0}\)-Margolis homology" of \(\mathrm{H}_{*}(\mathrm{ko};\mathbf{F}_{2})\) (i.e., the homology of \(\mathrm{Sq}^{1}\) viewed as a differential acting on \(\mathrm{H}_{*}(\mathrm{ko};\mathbf{F}_{2})\)) is precisely \(\mathrm{H}_{*}(A;\mathbf{F}_{2})\). Footnote 21: The spectrum \(A\) has been studied before by Mahowald and his coauthors in [16, 17, 18, 19, 20, 21], where it is often denoted \(X_{5}\).

Similarly, let \(B\) denote the \(\mathbf{E}_{1}\)-algebra of [20, Definition 3.2.17]22, so that there is an \(\mathbf{E}_{1}\)-space \(N\) such that \(B\) is the Thom spectrum of an \(\mathbf{E}_{1}\)-map \(N\to\mathrm{BString}\).
We will not recall the construction of \(N\) here; we only say that \(B\) is obtained from the \(\mathbf{E}_{1}\)-quotient \(S/\!\!/\sigma\) by further taking an "\(\mathbf{E}_{1}\)-quotient" by the class in \(\pi_{11}(S/\!\!/\sigma)\) constructed from a nullhomotopy of \(\nu\sigma\in\pi_{10}(S)\). This spectrum has the property that \(\mathrm{H}_{*}(B;\mathbf{F}_{2})\cong\mathbf{F}_{2}[\zeta_{1}^{8},\zeta_{2}^{4 }]\) (in fact, \(\mathrm{BP}_{*}(B)\cong\mathrm{BP}_{*}[b_{4},y_{6}]\), where \(b_{4}\equiv t_{1}^{4}\) and \(y_{6}\equiv t_{2}^{2}\) modulo \((2,v_{1},\cdots)\))23. There is an \(\mathbf{E}_{1}\)-map \(i:B\to\mathrm{tmf}\) such that under the isomorphism \(\mathrm{H}_{*}(\mathrm{tmf};\mathbf{F}_{2})\cong\mathbf{F}_{2}[\zeta_{1}^{8}, \zeta_{2}^{4},\zeta_{3}^{2},\zeta_{4},\cdots]\), the map \(i\) corresponds to the inclusion of \(\mathbf{F}_{2}[\zeta_{1}^{8},\zeta_{2}^{4}]\). In particular, the map \(B\to\mathrm{tmf}\) is an equivalence in dimensions \(\leq 12\). There is in fact an \(\mathbf{E}_{1}\)-map \(B\to\mathrm{MString}\). There is also an \(\mathbf{E}_{1}\)-map \(B\to T(2)\) such that \(T(2)\simeq B\otimes DA_{1}\), where \(DA_{1}\) is an 8-cell complex whose mod 2 cohomology is isomorphic to the subalgebra of the Steenrod algebra generated by \(\mathrm{Sq}^{2}\) and \(\mathrm{Sq}^{4}\). We note that the "\(Q_{0}\)-Margolis homology" of \(\mathrm{H}_{*}(\mathrm{tmf};\mathbf{F}_{2})\) (i.e., the homology of \(\mathrm{Sq}^{1}\) viewed as a differential acting on \(\mathrm{H}_{*}(\mathrm{tmf};\mathbf{F}_{2})\)) is precisely \(\mathrm{H}_{*}(B;\mathbf{F}_{2})\). Footnote 22: The \(\mathbf{E}_{1}\)-ring \(B\) has been briefly studied under the name \(\overline{X}\) in [19]. Footnote 23: For the sake of illustration, we remark that if \(p=2\), then \(b_{4}\) can be taken to be the following cobar representative for \(\sigma=\alpha_{4/4}\), where the \(v_{i}\)s are Hazewinkel’s generators: \[b_{4} =\frac{1}{2}\left(\frac{\eta_{R}(v_{1}^{4})-v_{1}^{4}}{8}-(\eta_{ R}(v_{1}v_{2})-v_{1}v_{2})\right)\] \[=5t_{1}^{4}+9t_{1}^{3}v_{1}+7t_{1}^{2}v_{1}^{2}-2t_{1}t_{2}+2t_{1 }v_{1}^{3}-t_{1}v_{2}-t_{2}v_{1}.\] Here, we used the formula \[\eta_{R}(v_{2})=v_{2}-5v_{1}t_{1}^{2}-3v_{1}^{2}t_{1}+2t_{2}-4t_{1}^{3}.\] **Conjecture A.2**.: _The \(\mathbf{E}_{1}\)-algebra structures on \(A\) and \(B\) admit extensions to \(\mathbf{E}_{2}^{\mathrm{fr}}\)-algebra structures such that the maps \(A\to X(2)\), \(B\to X(4)_{(2)}\), \(A\to\mathrm{ko}\), and \(B\to\mathrm{tmf}\) admit the structure of \(\mathbf{E}_{2}^{\mathrm{fr}}\)-maps._ A calculation paralleling Proposition 2.2.14 shows: **Proposition A.3**.: _Assume Conjecture A.2. There are isomorphisms_ \[\mathrm{H}_{*}(\mathrm{THH}(\mathrm{ko}/A);\mathbf{F}_{2})\cong\mathrm{H}_{*} (\mathrm{ko};\mathbf{F}_{2})[\sigma(\zeta_{3})]\otimes_{\mathbf{F}_{2}}\Lambda _{\mathbf{F}_{2}}(\sigma(\zeta_{2}^{2})),\] \[\mathrm{H}_{*}(\mathrm{THH}(\mathrm{tmf}/B);\mathbf{F}_{2})\cong\mathrm{H}_{*} (\mathrm{tmf};\mathbf{F}_{2})[\sigma(\zeta_{4})]\otimes_{\mathbf{F}_{2}}\Lambda _{\mathbf{F}_{2}}(\sigma(\zeta_{3}^{2})).\] _Here, \(|\sigma(\zeta_{3})|=8\), \(|\sigma(\zeta_{2}^{2})|=7\), \(|\sigma(\zeta_{4})|=16\), and \(|\sigma(\zeta_{3}^{2})|=15\)._ Using the Adams spectral sequence for \(\pi_{*}\mathrm{THH}(\mathrm{ko}/A)\) and \(\pi_{*}\mathrm{THH}(\mathrm{tmf}/B)\) as in Theorem 2.2.4(b) (and using \(\mathrm{ko}\)- and \(\mathrm{tmf}\)-linearity), one finds: **Theorem A.4**.: _Assume Conjecture A.2. 
Upon \(2\)-completion, there are equivalences_ \[\mathrm{THH}(\mathrm{ko}/A)\simeq\mathrm{ko}\oplus\bigoplus_{j\geq 1} \Sigma^{8j-1}\mathrm{ko}/2j,\] \[\mathrm{THH}(\mathrm{tmf}/B)\simeq\mathrm{tmf}\oplus\bigoplus_{j\geq 1} \Sigma^{16j-1}\mathrm{tmf}/2j.\] **Remark A.5**.: Since \(\mathrm{ko}\otimes C\eta\simeq\mathrm{ku}\), Theorem A.4 implies that \(\mathrm{THH}(\mathrm{ko}/A)\otimes C\eta\simeq\mathrm{THH}(\mathrm{ku}/T(1))\). Relatedly, there is an equivalence \(\mathrm{ko}\otimes T(1)\simeq\mathrm{ku}[\Omega S^{5}]\) of \(\mathbf{E}_{1}\)-\(T(1)\)-algebras, which implies that \[\mathrm{THH}(\mathrm{ko})\otimes T(1)\simeq\mathrm{THH}(\mathrm{ko}\otimes T( 1)/T(1))\simeq\mathrm{ku}[S^{5}]\oplus\bigoplus_{j\geq 1}\Sigma^{8j-1} \mathrm{ku}[S^{5}]/2j.\] Along similar lines, Theorem A.4 implies that \(\mathrm{THH}(\mathrm{tmf}/B)\otimes DA_{1}\simeq\mathrm{THH}(\mathrm{BP} \langle 2\rangle/T(2))\). There is also a \(2\)-local equivalence \(\mathrm{tmf}\otimes T(2)\simeq\mathrm{BP}\langle 2\rangle[N]\) of \(\mathbf{E}_{1}\)-\(T(2)\)-algebras, so that \[\mathrm{THH}(\mathrm{tmf})\otimes T(2)\simeq\mathrm{BP}\langle 2\rangle[BN] \oplus\bigoplus_{j\geq 1}\Sigma^{16j-1}\mathrm{BP}\langle 2\rangle[BN]/2j.\] Note that \(\mathrm{BP}\langle 2\rangle[N]\simeq\mathrm{BP}\langle 2\rangle[\Omega S^{9} \times\Omega S^{13}]\), so that \(\pi_{*}(\mathrm{tmf}\otimes T(2))\cong\mathbf{Z}_{(2)}[v_{1},v_{2},x_{8},y_{12}]\), where \(|v_{1}|=2\), \(|v_{2}|=6\), \(|x_{8}|=8\), and \(|y_{12}|=12\). This gives a potential approach to calculating \(\mathrm{THH}(\mathrm{ko})\) (resp. \(\mathrm{THH}(\mathrm{tmf})\)) via the \(T(1)\)-based (resp. \(T(2)\)-based) Adams-Novikov spectral sequence. Describing this spectral sequence is essentially equivalent to calculating the analogue of the topological Sen operator for \(\mathrm{THH}(\mathrm{ko}/A)\), whose construction is described below in Construction A.9. **Remark A.6**.: Recall from Figure 1 that the structure of \(\mathrm{ko}\) over \(A\) mirrors the structure of \(\mathrm{tmf}\) over \(B\), which in turn mirrors the structure of \(\mathrm{BP}\langle n\rangle\) over \(T(n)\); in other words, the calculation of Theorem A.4 is along the diagonal line \((n,n)\) in Figure 1. It is natural to wonder whether there is an \(\mathbf{E}_{1}\)-ring \(\widetilde{A}\) equipped with an \(\mathbf{E}_{1}\)-map \(A\to\widetilde{A}\) and an \(\mathbf{E}_{1}\)-map \(\widetilde{A}\to\mathrm{ko}\) such that the structure of \(\mathrm{ko}\) over \(\widetilde{A}\) mirrors the structure of \(\mathrm{BP}\langle n-1\rangle\) over \(T(n)\). (This is the "off-diagonal line" \((n,n-1)\) in Figure 1.) This question is only interesting when \(p=2\), since \(\mathrm{ko}_{(p)}\) splits as a direct sum of even shifts of \(\mathrm{BP}\langle 1\rangle\) if \(p>2\). Let us localize at \(2\) for the remainder of this discussion. Examining the argument establishing Theorem A.4 when \(p=2\), one finds that the mod \(2\) homology of \(\widetilde{A}\) must be \(\mathrm{H}_{*}(\widetilde{A};\mathbf{F}_{2})\cong\mathbf{F}_{2}[\zeta_{1}^{4}, \zeta_{2}^{2}]\). If \(A\) admits the structure of an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring, then \(\widetilde{A}\) can be constructed as follows. The class \(\sigma_{1}\in\pi_{5}(A)\) determined by a nullhomotopy of \(\eta\nu\) (see [10, Remark 3.2.17]) defines a map \(S^{6}\to\mathrm{BGL}_{1}(A)\), which, thanks to our assumption on \(A\), extends to an \(\mathbf{E}_{1}\)-map \(\Omega S^{7}\to\mathrm{BGL}_{1}(A)\). 
The desired \(\mathbf{E}_{1}\)-\(A\)-algebra \(\widetilde{A}\) can be defined as the Thom spectrum of this map. (According to [10, Remark 5.1.5], one should not expect \(\widetilde{A}\) to admit a natural construction as a Thom spectrum over the sphere.) Note that \(\widetilde{A}\otimes C\eta\simeq T(2)\). The same argument as in Theorem 2.2.4(a) shows:

**Proposition A.7**.: _If both \(A\) and \(\widetilde{A}\) admit the structure of \(\mathbf{E}_{2}^{\mathrm{fr}}\)-rings and \(\mathrm{ko}\) admits the structure of an \(\mathbf{E}_{1}\)-\(\widetilde{A}\)-algebra, then there is a \(2\)-complete equivalence_ \[\mathrm{THH}(\mathrm{ko}/\widetilde{A})\simeq\mathrm{ko}[\Omega S^{9}],\] _where the generator in \(\pi_{8}\mathrm{THH}(\mathrm{ko}/\widetilde{A})\) is \(\sigma^{2}(v_{2})\). Moreover,_ \[\mathrm{H}_{*}^{c}(\mathrm{TP}(\mathrm{ko}/\widetilde{A});\mathbf{F}_{2})\cong\mathbf{F}_{2}[\zeta_{1}^{4},\zeta_{2}^{2},\zeta_{3}^{2},\zeta_{4},\cdots](\hbar). \tag{58}\]

**Remark A.8**.: Similarly, if \(B\) admits the structure of an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring, the class \(\sigma_{2}\in\pi_{13}(B)\) from [10, Remark 3.2.24] defines a map \(S^{14}\to\mathrm{BGL}_{1}(B)\). Thanks to our assumption on \(B\), this extends to an \(\mathbf{E}_{1}\)-map \(\Omega S^{15}\to\mathrm{BGL}_{1}(B)\). Define \(\widetilde{B}\) to be the Thom spectrum of this map, so that \(\mathrm{H}_{*}(\widetilde{B};\mathbf{F}_{2})\cong\mathbf{F}_{2}[\zeta_{1}^{8},\zeta_{2}^{4},\zeta_{3}^{2}]\). Note that \(\widetilde{B}\otimes DA_{1}\simeq T(3)\). If \(\widetilde{B}\) admits the structure of an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring and \(\mathrm{tmf}\) admits the structure of an \(\mathbf{E}_{1}\)-\(\widetilde{B}\)-algebra, then the same argument as in Theorem 2.2.4(a) shows that there is a \(2\)-complete equivalence \[\mathrm{THH}(\mathrm{tmf}/\widetilde{B})\simeq\mathrm{tmf}[\Omega S^{17}],\] where the generator in \(\pi_{16}\mathrm{THH}(\mathrm{tmf}/\widetilde{B})\) is \(\sigma^{2}(v_{3})\). Moreover, \[\mathrm{H}_{*}^{c}(\mathrm{TP}(\mathrm{tmf}/\widetilde{B});\mathbf{F}_{2})\cong\mathbf{F}_{2}[\zeta_{1}^{8},\zeta_{2}^{4},\zeta_{3}^{2},\zeta_{4}^{2},\zeta_{5},\cdots](\hbar). \tag{59}\]

**Construction A.9** (Topological Sen operator for THH relative to \(A\)).: By [1, Theorem 1], \(\mathrm{THH}(A)\) is equivalent to the Thom spectrum of the composite \[\mathscr{L}S^{5}\stackrel{{\mathscr{L}\nu}}{{\longrightarrow}}\mathscr{L}B^{2}\mathrm{GL}_{1}(S)\simeq B^{2}\mathrm{GL}_{1}(S)\times\mathrm{BGL}_{1}(S)\stackrel{{\mathrm{id}\times\eta}}{{\longrightarrow}}\mathrm{BGL}_{1}(S).\] There is a _nonsplit_ fiber sequence \[\Omega S^{5}\to\mathscr{L}S^{5}\to S^{5}, \tag{60}\] and the restriction of the above composite along the map \(\Omega S^{5}\to\mathscr{L}S^{5}\) is the map \(\Omega S^{5}\to\mathrm{BGL}_{1}(S)\) which defines \(A\). It follows from the fiber sequence (60) that \(\mathrm{THH}(A)\) is the Thom spectrum of a map \(S^{5}\to\mathrm{BGL}_{1}(A)\) which detects a class \(x\in\pi_{4}(A)\cong\pi_{4}(\mathrm{ko})\). In particular, \(\mathrm{THH}(A)\) is an \(A\)-module with two cells. Now assume Conjecture A.2; then [1, Corollary 2.8] gives a splitting \(\mathrm{THH}(A)\to A\), which implies that the class \(x\in\pi_{4}(A)\) must be trivial. In other words, \(\mathrm{THH}(A)\simeq A[S^{5}]\).
If \(\mathscr{C}\) is an \(A\)-linear \(\infty\)-category, this implies the existence of a cofiber sequence \[\mathrm{THH}(\mathscr{C})\to\mathrm{THH}(\mathscr{C}/A)\stackrel{{\Theta^{\prime}}}{{\longrightarrow}}\Sigma^{6}\mathrm{THH}(\mathscr{C}/A). \tag{61}\] One should be able to recover the calculation of \(\mathrm{THH}(\mathrm{ko})\) from [1] using (61) in the case \(\mathscr{C}=\mathrm{Mod}_{\mathrm{ko}}\) and the calculation of Theorem A.4. Similarly to Remark 2.2.5, Theorem A.4 and (61) imply that \[\operatorname{THH}(\operatorname{ko}/A)/2\simeq\operatorname{ko}[S^{7}\times\Omega S^{9}]/2,\] \[\operatorname{THH}(\operatorname{ko})\otimes_{\operatorname{ko}}\mathbf{F}_{2}\simeq\mathbf{F}_{2}[S^{5}\times S^{7}\times\Omega S^{9}].\] The latter of these has been proven by Angeltveit-Rognes in [1, Theorem 6.2].

**Remark A.10**.: Recall from [11, Corollary 9.3] that Mahowald-Rezk duality gives an equivalence \(W\!\operatorname{ko}\simeq\Sigma^{6}\!\operatorname{ko}\) (resp. \(W\!\operatorname{BP}\langle 1\rangle\simeq\Sigma^{2p}\!\operatorname{BP}\langle 1\rangle\)); the shift of \(6\) (resp. \(2p\)) in this equivalence arises for the same reason as in (61) (resp. (16) with \(n=1\)): both correspond to the class \(\sigma^{2}(t_{1}^{2})\) (resp. \(\sigma^{2}(t_{1})\)). We hope to explore this further in future work.

**Remark A.11**.: There is also an analogue of the topological Sen operator for \(B\). To describe it, one observes using an argument similar to Construction A.9 that \(\operatorname{THH}(B/S/\!\!/\sigma)\simeq B[S^{13}]\) and that \(\operatorname{THH}(S/\!\!/\sigma)\simeq(S/\!\!/\sigma)[S^{9}]\). This implies that if \(\mathscr{C}\) is a \(B\)-linear \(\infty\)-category, there are cofiber sequences \[\operatorname{THH}(\mathscr{C}/S/\!\!/\sigma)\to\operatorname{THH}(\mathscr{C}/B)\xrightarrow{\Theta_{B}}\Sigma^{14}\operatorname{THH}(\mathscr{C}/B),\] \[\operatorname{THH}(\mathscr{C})\to\operatorname{THH}(\mathscr{C}/S/\!\!/\sigma)\xrightarrow{\Theta_{B}^{\prime}}\Sigma^{10}\operatorname{THH}(\mathscr{C}/S/\!\!/\sigma).\] However, it is significantly more complicated to describe these cofiber sequences in almost any nontrivial example, so we omit further discussion. Nevertheless, one can use Theorem A.4 to show the following equivalences analogous to Remark 2.2.5: \[\operatorname{THH}(\operatorname{tmf}/B)/2\simeq\operatorname{tmf}[S^{15}\times\Omega S^{17}]/2,\] \[\operatorname{THH}(\operatorname{tmf})\otimes_{\operatorname{tmf}}\mathbf{F}_{2}\simeq\mathbf{F}_{2}[S^{9}\times S^{13}\times S^{15}\times\Omega S^{17}];\] note that \(\mathbf{F}_{2}[S^{9}\times S^{13}]\simeq\mathbf{F}_{2}[BN]\). The latter of these has been proven by Angeltveit-Rognes in [1, Theorem 6.2].

Assume Conjecture A.2, and let \(p=2\). Then there is a map \(\mathcal{M}_{T(1)}\to\mathcal{M}_{A}\) of stacks over \(\mathcal{M}_{\operatorname{FG}}\), which exhibits \(\mathcal{M}_{T(1)}\) as a \(2\)-fold fppf cover of \(\mathcal{M}_{A}\). Recall that \(\mathcal{M}_{T(1)}\) is isomorphic to the moduli stack of graded formal groups equipped with a coordinate up to order \(\leq 2\) (equivalently, order \(\leq 3\) for \(2\)-typical formal groups).
Similarly, we have:

**Proposition A.12**.: _The stack \(\mathcal{M}_{A}\) is isomorphic to the moduli stack of graded formal groups equipped with an even coordinate up to order \(\leq 5\)._

Proof.: Recall that there is a fiber sequence \[\operatorname{SU}(2)/\!\operatorname{U}(1)\cong S^{2}\to\operatorname{BU}(1)\simeq\mathbf{C}P^{\infty}\to\operatorname{BSU}(2)\simeq\mathbf{H}P^{\infty}.\] Let \(n\geq 1\). There is a homotopy equivalence \(\mathbf{H}P^{n}\times_{\mathbf{H}P^{\infty}}\mathbf{C}P^{\infty}\simeq\mathbf{C}P^{2n+1}\) (since \(S^{4n+3}/\!\operatorname{SU}(2)=\mathbf{H}P^{n}\) and \(S^{4n+3}/\!\operatorname{U}(1)=\mathbf{C}P^{2n+1}\)), which produces the "twistor fibration", i.e., the fiber sequence \[S^{2}\to\mathbf{C}P^{2n+1}\to\mathbf{H}P^{n}. \tag{62}\] The map \(\mathbf{C}P^{2n+1}\to\mathbf{H}P^{n}\) is given in coordinates by the map \([z_{1}:\cdots:z_{2n+2}]\mapsto[z_{1}+z_{2}\mathbf{j}:\cdots:z_{2n+1}+z_{2n+2}\mathbf{j}]\). Note that \(\operatorname{SU}(2)/\!\operatorname{U}(1)=\mathbf{C}P^{1}\) is the unit sphere \(S(\mathfrak{su}(2))\) in the adjoint representation of \(\operatorname{SU}(2)\), so (62) equivalently says that \(\mathbf{C}P^{2n+1}\) is the sphere bundle of the adjoint bundle of rank \(3\) over \(\mathbf{H}P^{n}\). Let \(R\) be a complex-oriented homotopy commutative ring with associated formal group \(\mathbf{G}\) over \(\pi_{*}(R)\); we will assume for simplicity that \(2\) is not a zero-divisor in \(\pi_{*}R\). Then \(R^{*}({\bf C}P^{5})\) is isomorphic to the ring of functions on \({\bf G}\) modulo those which vanish to order \(\geq 6\). The Serre spectral sequence associated to the fiber sequence (62) implies that \(R^{*}({\bf H}P^{2})\) is isomorphic to \(R^{*}({\bf C}P^{5})^{{\bf Z}/2}\), where \({\bf Z}/2\) acts by inversion on the formal group. This implies the desired claim, since there is an equivalence \({\bf H}P^{2}\simeq\Sigma^{4}C\nu\) of spectra, and \(A\) is the free \({\bf E}_{1}\)-ring whose unit factors through the inclusion \(S^{0}\to C\nu\).

**Remark A.13**.: The description of \({\mathcal{M}}_{A}\) in Proposition A.12 has concrete applications; for instance, in [10], we show that \({\mathcal{M}}_{\rm tmf\otimes A}={\mathcal{M}}_{A}\times_{{\mathcal{M}}_{\rm FG}}{\mathcal{M}}_{\rm ell}\) can be identified with the moduli stack of elliptic curves \({\mathscr{E}}\) equipped with a splitting of the Hodge filtration on \({\rm H}^{1}_{\rm dR}({\mathscr{E}})\), and use this to describe a topological analogue of the integral ring of quasimodular forms.

**Remark A.14**.: As explained in [10, Remark 7.1.7], there is a \({\bf Z}/2\)-equivariant \({\bf E}_{1}\)-algebra \(A_{{\bf Z}/2}\) whose underlying \({\bf E}_{1}\)-algebra is \(A\), and such that \(\Phi^{{\bf Z}/2}A_{{\bf Z}/2}=X(2)_{(2)}\) as \({\bf E}_{1}\)-algebras. This is a topological interpretation of the following observation suggested by Proposition A.12: \({\mathcal{M}}_{A}\) is "half" of \({\mathcal{M}}_{X(2)}\); more precisely, there is a two-fold fppf cover \({\mathcal{M}}_{X(2)}\to{\mathcal{M}}_{A}\). This is an algebraic analogue of the equivalence \(A\otimes C\eta\simeq X(2)\). We also note that there is a \({\bf Z}/2\)-equivariant analogue of the fiber sequence (62): namely, there is a \({\bf Z}/2\)-equivariant twistor fibration where \({\bf Z}/2\) acts on \({\bf H}P^{n}\) via the action of \({\bf Z}/2\subseteq S^{1}\subseteq{\rm SO}(3)\) on \({\bf H}\).
The underlying fibration is (62), while the \({\bf Z}/2\)-fixed points give the fibration \[S^{1}\to{\bf R}P^{2n-1}\to{\bf C}P^{n-1}\] which exhibits \({\bf R}P^{2n-1}\) as the sphere bundle of the complex line bundle \({\mathscr{O}}(2)\) on \({\bf C}P^{n-1}\).

**Construction A.15**.: One consequence of the identification of \({\mathcal{M}}_{A}\) in Proposition A.12 is that \({\mathcal{M}}_{A}\to{\mathcal{M}}_{\rm FG}\) is an affine bundle, so that the pullback of the cotangent complex \(L_{{\mathcal{M}}_{A}/{\mathcal{M}}_{\rm FG}}\) to \({\rm Spec}({\rm BP}_{*}(A))/{\bf G}_{m}\cong{\rm Spec}({\rm BP}_{*}[t_{1}^{2}+v_{1}t_{1}])/{\bf G}_{m}\) can be identified with a free \({\rm BP}_{*}[t_{1}^{2}+v_{1}t_{1}]\)-module of rank \(1\) generated by the class \(d(t_{1}^{2}+v_{1}t_{1})\) in weight \(2\). Using Recollection 4.1.9, we obtain the algebraic analogue of (61): if \(X\) is a stack over \({\mathcal{M}}_{A}\), there is a cofiber sequence \[{\rm HH}(X/{\mathcal{M}}_{\rm FG})\to{\rm HH}(X/{\mathcal{M}}_{A})\xrightarrow{\Theta_{\rm mot}}\Sigma^{6,3}{\rm HH}(X/{\mathcal{M}}_{A}).\] The stack \({\mathcal{M}}_{\rm ko}\) can be identified with the moduli stack of curves of the form \(y=x^{2}+bx+c\) with change of coordinate \(x\mapsto x+r\), and \({\rm HH}({\mathcal{M}}_{\rm ko}/{\mathcal{M}}_{\rm FG})\) describes the \(E_{2}\)-page of the Adams-Novikov-Bokstedt spectral sequence calculating THH(ko) (see Remark 4.1.5). It would be interesting to explicitly describe \({\rm HH}({\mathcal{M}}_{\rm ko}/{\mathcal{M}}_{\rm FG})\); note that \[\pi_{*}{\rm HH}({\mathcal{M}}_{\rm ko}/{\mathcal{M}}_{A})\cong{\mathscr{O}}_{{\mathcal{M}}_{\rm ko}}(\sigma^{2}v_{j}|j\geq 2)\otimes_{\mathscr{O}_{{\mathcal{M}}_{\rm ko}}}\Lambda_{\mathscr{O}_{{\mathcal{M}}_{\rm ko}}}(dt_{i}|i\geq 2),\] where \(\sigma^{2}(v_{j})\) lives in degree \(2^{j+1}\) and weight \(2^{j}\), and \(dt_{i}\) lives in degree \(2^{i+1}-1\) and weight \(2^{i}\). This can be proved exactly as in Example 4.1.8; weight considerations presumably allow one to fully describe \(\Theta_{\mathrm{mot}}:\mathrm{HH}(X/\mathcal{M}_{A})\to\Sigma^{6,3}\mathrm{HH}(X/\mathcal{M}_{A})\), and hence \(\mathrm{HH}(\mathcal{M}_{\mathrm{ko}}/\mathcal{M}_{\mathrm{FG}})\).

**Remark A.16**.: Assume Conjecture A.2, and let \(p=2\). It is trickier to describe the stack \(\mathcal{M}_{B}\) in a manner analogous to Proposition A.12. As a first approximation, if we assume that \(S/\!\!/\sigma\) admits the structure of a homotopy commutative ring, one can attempt to describe the moduli stack \(\mathcal{M}_{S/\!\!/\sigma}\). However, it is provably impossible to construct a Hurewicz fibration \[S^{4}\to\mathbf{H}P^{5}\to\mathbf{O}P^{2}\] in the point-set category. This is a consequence of [1, Theorem 5.1], which states more generally that if \(F\to E\to X\) is a Hurewicz fibration where \(E\) is homotopy equivalent to \(\mathbf{H}P^{2n+1}\) and \(F\) and \(X\) are homotopy equivalent to finite CW-complexes, then either \(F\) or \(X\) must be contractible. Note that this result implies that there cannot even be a Hurewicz fibration \[S^{4}\to\mathbf{H}P^{3}\to S^{8}.\] Similarly, there cannot be Hurewicz fibrations \[\mathbf{C}P^{3}\to\mathbf{C}P^{7}\to S^{8},\] \[\mathbf{C}P^{3}\to\mathbf{C}P^{11}\to\mathbf{O}P^{2};\] see [10] for the impossibility of the first Hurewicz fibration (which implies the impossibility of the second Hurewicz fibration).
These no-go results make it difficult to give a formal group-theoretic description of \(R^{*}(\mathbf{O}P^{2})\) (and hence of \(\mathcal{M}_{S/\!\!/\sigma}\), since \(\mathbf{O}P^{2}\simeq\Sigma^{8}C\sigma\)) where \(R\) is a complex-oriented homotopy commutative ring. The story for ko admits a slightly different generalization to higher heights.

**Example A.17**.: Observe that \(S^{5}=\mathrm{SU}(4)/\mathrm{Sp}(2)\), and that the map \(\Omega S^{5}\to\mathrm{BU}\) (whose Thom spectrum is \(A\)) can be viewed as the composite \[\Omega(\mathrm{SU}(4)/\mathrm{Sp}(2))\to\Omega(\mathrm{SU}/\mathrm{Sp})\simeq\mathrm{BSp}\to\mathrm{BU}.\] The equivalence \(\Omega(\mathrm{SU}/\mathrm{Sp})\simeq\mathrm{BSp}\) is given by Bott periodicity, and the map \(\mathrm{BSp}\to\mathrm{BU}\) takes a symplectic bundle to its underlying unitary bundle.

Motivated by Example A.17, we are led to the following definition:

**Definition A.18**.: Define an \(\mathbf{E}_{1}\)-algebra \(X_{\mathbf{H}}(n)\) via the Thom spectrum of the composite \[\Omega(\mathrm{SU}(2n)/\mathrm{Sp}(n))\to\Omega(\mathrm{SU}/\mathrm{Sp})\simeq\mathrm{BSp}\to\mathrm{BU}.\] There is a canonical \(\mathbf{E}_{1}\)-map \(X_{\mathbf{H}}(n)\to X_{\mathbf{H}}(\infty)=\mathrm{MSp}\). The spectrum \(X_{\mathbf{H}}(n)\) has been studied by Andy Baker.

**Remark A.19**.: See [11] for a detailed study of the space \(\Omega(\mathrm{SU}(2n)/\mathrm{Sp}(n))\). Let us note that if \(\mathrm{SU}(2n)_{\mathbf{H}}\) denotes the \(\mathbf{Z}/2\)-equivariant space with the \(\mathbf{Z}/2\)-action given by the symplectic involution \[A\mapsto J\overline{A}J^{-1},\ J=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}^{\oplus n},\] then \(\Omega(\mathrm{SU}(2n)/\mathrm{Sp}(n))\simeq(\Omega^{\sigma}\mathrm{SU}(2n)_{\mathbf{H}})^{\mathbf{Z}/2}\). Indeed, the fixed points of \(\mathrm{SU}(2n)_{\mathbf{H}}\) are \(\mathrm{Sp}(n)\) (by definition), so we can apply the first sentence of Example 3.3.20 to conclude.

**Remark A.20**.: As the notation indicates, \(X_{\mathbf{H}}(n)\) should be viewed as a quaternionic analogue of the \(X(n)\) spectra from [12]; see Table 2. Note that there are isomorphisms of algebras \[\mathrm{H}_{*}(\Phi^{\mathbf{Z}/2}X(n)_{\mathbf{R}};\mathbf{F}_{2})\cong\mathrm{H}_{*}(\Omega(\mathrm{SU}(n)/\mathrm{SO}(n));\mathbf{F}_{2})\cong\mathbf{F}_{2}[x_{1},\cdots,x_{n-1}],\] \[\mathrm{H}_{*}(X_{\mathbf{H}}(n);\mathbf{Z})\cong\mathrm{H}_{*}(\Omega(\mathrm{SU}(2n)/\mathrm{Sp}(n));\mathbf{Z})\cong\mathbf{Z}[y_{1},\cdots,y_{n-1}],\] where \(|x_{j}|=j\) and \(|y_{j}|=4j\).

**Example A.21**.: By construction, \(X_{\mathbf{H}}(2)\simeq A=S/\!\!/\nu\).

**Construction A.22**.: Suppose that \(X_{\mathbf{H}}(n)\) admits the structure of a homotopy commutative ring. One can then also ask for an interpretation of the stack \(\mathcal{M}_{X_{\mathbf{H}}(n)}\) analogous to Proposition A.12. It turns out that the difficulties of Remark A.16 are no longer an issue for \(X_{\mathbf{H}}(n)\). Indeed, the analogue of the map \(S^{4}\to\Omega S^{5}\) (whose Thomification is the map \(C\nu\to A\) used in the proof of Proposition A.12) is given by a map \(\iota:\mathbf{H}P^{n-1}\to\Omega(\mathrm{SU}(2n)/\mathrm{Sp}(n))\) which exhibits \(\mathbf{H}P^{n-1}\) as the generating complex of \(\Omega(\mathrm{SU}(2n)/\mathrm{Sp}(n))\). (See, e.g., [13, Proposition 1.4].)
Moreover, the composite \[\mathbf{H}P^{n-1}\xrightarrow{\iota}\Omega(\mathrm{SU}(2n)/\mathrm{Sp}(n))\to\Omega(\mathrm{SU}/\mathrm{Sp})\simeq\mathrm{BSp}\] factors as \(\mathbf{H}P^{n-1}\hookrightarrow\mathbf{H}P^{\infty}\simeq\mathrm{BSp}(1)\to\mathrm{BSp}\). Since the Thom spectrum of the tautological quaternionic line bundle over \(\mathbf{H}P^{n-1}\) is \(\Sigma^{-4}\mathbf{H}P^{n}\), the map \(\iota\) Thomifies to a map \(\Sigma^{-4}\mathbf{H}P^{n}\to X_{\mathbf{H}}(n)\).

Using the twistor fibration (62) and the map \(\Sigma^{-4}\mathbf{H}P^{n}\to X_{\mathbf{H}}(n)\) of Construction A.22, one can argue as in Proposition A.12 to show:

**Proposition A.23**.: _The stack \(\mathcal{M}_{X_{\mathbf{H}}(n)}\) is isomorphic to the moduli stack of graded formal groups equipped with an even coordinate up to order \(\leq 2n+1\)._

**Remark A.24**.: Suppose that \(X_{\mathbf{H}}(n)\) admits the structure of an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring. There is also a canonical map \(\mathcal{M}_{X_{\mathbf{H}}(n-1)}\to\mathcal{M}_{X_{\mathbf{H}}(n)}\) which exhibits \(\mathcal{M}_{X_{\mathbf{H}}(n)}\) as the quotient of \(\mathcal{M}_{X_{\mathbf{H}}(n-1)}\) by the group scheme \(\mathbf{G}_{a}^{(2n-2)}/\mathbf{G}_{m}\) over \(B\mathbf{G}_{m}\), where \(\mathbf{G}_{a}^{(2n-2)}\) denotes the affine line with \(\mathbf{G}_{m}\)-action of weight \(2n-2\). This is the algebraic analogue of the following:

**Lemma A.25**.: _The spectrum \(X_{\mathbf{H}}(n)\) is equivalent to the Thom spectrum of a map \(\Omega S^{4n-3}\to\mathrm{BGL}_{1}(X_{\mathbf{H}}(n-1))\)._

Proof.: By [11, Proposition 2.1.6] (see also [1]), it suffices to establish that there is a fiber sequence of \(\mathbf{E}_{1}\)-spaces \[\Omega(\operatorname{SU}(2n-2)/\mathrm{Sp}(n-1))\to\Omega(\operatorname{SU}(2n)/\mathrm{Sp}(n))\to\Omega S^{4n-3}.\] To see this, observe that there is a diffeomorphism \(\operatorname{SU}(2n)/\mathrm{Sp}(n)\cong\operatorname{SU}(2n-1)/\mathrm{Sp}(n-1)\), and hence a fibration \[\operatorname{SU}(2n-2)/\mathrm{Sp}(n-1)\to\operatorname{SU}(2n-1)/\mathrm{Sp}(n-1)\to\operatorname{SU}(2n-1)/\operatorname{SU}(2n-2)\cong S^{4n-3}.\] The desired fiber sequence is obtained by looping this fibration.

**Remark A.26**.: On the bottom cell of the source, the map \(\Omega S^{4n-3}\to\operatorname{BGL}_{1}(X_{\mathbf{H}}(n-1))\) defines a class \(\chi_{n}^{\mathbf{H}}\in\pi_{4n-5}X_{\mathbf{H}}(n-1)\), and \(\chi_{2^{n}}^{\mathbf{H}}\) is detected in the \(E_{2}\)-page of the Adams-Novikov spectral sequence for \(X_{\mathbf{H}}(2^{n}-1)\) by \([t_{n}^{2}]\). Moreover, if \(X_{\mathbf{H}}(n-1)\) admits the structure of an \(\mathbf{E}_{2}^{\mathrm{fr}}\)-ring and \(X_{\mathbf{H}}(n)\) admits the structure of an \(\mathbf{E}_{1}\)-\(X_{\mathbf{H}}(n-1)\)-algebra, then \(\operatorname{THH}(X_{\mathbf{H}}(n)/X_{\mathbf{H}}(n-1))\simeq X_{\mathbf{H}}(n)[\Omega S^{4n-3}]\).
We can then conclude (as in Theorem 3.1.4 and Example 4.1.11) that if \(\mathscr{C}\) is an \(X_{\mathbf{H}}(n)\)-linear \(\infty\)-category and \(X\) is a stack over \(\mathcal{M}_{X_{\mathbf{H}}(n)}\), then there are cofiber sequences \[\operatorname{THH}(\mathscr{C}/X_{\mathbf{H}}(n-1))\to\operatorname{THH}(\mathscr{C}/X_{\mathbf{H}}(n))\xrightarrow{\Theta^{\prime}}\Sigma^{4n-2}\operatorname{THH}(\mathscr{C}/X_{\mathbf{H}}(n)),\] \[\operatorname{HH}(X/\mathcal{M}_{X_{\mathbf{H}}(n-1)})\to\operatorname{HH}(X/\mathcal{M}_{X_{\mathbf{H}}(n)})\xrightarrow{\Theta_{\mathrm{mot}}}\Sigma^{4n-2,2n-1}\operatorname{HH}(X/\mathcal{M}_{X_{\mathbf{H}}(n)}).\] Only the first cofiber sequence requires that \(X_{\mathbf{H}}(n-1)\) and \(X_{\mathbf{H}}(n)\) admit the structure of \(\mathbf{E}_{2}^{\mathrm{fr}}\)-rings, and that \(X_{\mathbf{H}}(n)\) admits the structure of an \(\mathbf{E}_{1}\)-\(X_{\mathbf{H}}(n-1)\)-algebra; the second cofiber sequence only requires that \(X_{\mathbf{H}}(n)\) admit the structure of a homotopy commutative ring.

## Appendix B Alternative calculation of \(\widehat{\Omega}^{\not{D}}_{\mathbf{Z}/p^{n}}\)

In this brief section, we give an alternative algebraic argument for Corollary 3.2.15 following [1, Example 5.15]. I am very grateful to Sasha Petrov for an illuminating discussion about this entire appendix; see also [1, Lemma 6.13].

Alternative proof of Corollary 3.2.15. Let \(R\) be a (discrete) commutative \(\mathbf{Z}/p^{n}\)-algebra. Then [1, Construction 3.8] implies that \[\operatorname{Spec}(\mathbf{Z}/p^{n})^{\not{D}}(R)\simeq\operatorname{Map}_{\operatorname{Calg}}(\mathbf{Z}/p^{n},W(R)/V(1))\simeq\{z\in W(R)|zV(1)=p^{n}\}=\{z\in W(R)|V(Fz)=p^{n}\}.\] Since \(V\) is injective, this is a torsor for \(W[F](R)=\mathbf{G}^{\sharp}_{a}(R)\). Moreover, this torsor is trivializable, i.e., \(p^{n}\) is in the image of \(VF\). In fact, we claim that \[p^{n}=V(p^{n-1})=VF(V(p^{n-2}))\in W(\mathbf{Z}/p^{n}). \tag{63}\] To see this, let us compute in ghost coordinates. Recall that if \(w(x)=(w_{0}(x),w_{1}(x),\cdots)\) are the ghost coordinates of \(x\in W(R)\), then \(w_{n+1}(Vx)=pw_{n}(x)\). Since \(w(p^{n})=(p^{n},p^{n},\cdots)\) and \(w(V(p^{n-1}))=(0,p^{n},p^{n},\cdots)\), we see that \[w(p^{n}-V(p^{n-1}))=(p^{n},0,0,\cdots).\] Since the map \(\mathbf{G}^{\sharp}_{a}\cong W[F]\to W\) sends \(x\in\mathbf{G}^{\sharp}_{a}\) to the Witt vector whose ghost coordinates are \((x,0,0,\cdots)\), the claim (63) follows from the observation that \(p^{n}\in W[F](\mathbf{Z}_{p})\) is sent to zero in \(W[F](\mathbf{Z}/p^{n})\).

As pointed out by Sasha Petrov, the preceding calculation also determines the \(\mathbf{G}^{\sharp}_{m}\)-action on \(\operatorname{Spec}(\mathbf{Z}/p^{n})^{\not{D}}\) as follows. The above discussion says that the isomorphism \(\mathbf{G}^{\sharp}_{a}\xrightarrow{\sim}\operatorname{Spec}(\mathbf{Z}/p^{n})^{\not{D}}\) sends \(x\mapsto x+V(p^{n-2})\). Under this isomorphism, the action of \(g\in\mathbf{G}^{\sharp}_{m}\) on \(x+V(p^{n-2})\in\operatorname{Spec}(\mathbf{Z}/p^{n})^{\not{D}}\) is given by \[g(x+V(p^{n-2}))=gx+gV(p^{n-2})=gx+V(F(g)p^{n-2});\] but \(F(g)=1\) since \(\mathbf{G}^{\sharp}_{m}=W^{\times}[F]\), so that this can be identified with \(gx+V(p^{n-2})\). In other words, the isomorphism \(\mathbf{G}^{\sharp}_{a}\xrightarrow{\sim}\operatorname{Spec}(\mathbf{Z}/p^{n})^{\not{D}}\) is equivariant for the scaling action of \(\mathbf{G}^{\sharp}_{m}\) on \(\mathbf{G}^{\sharp}_{a}\).
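The observation used in the last step above (that \(p^{n}\), viewed as a point of \(W[F]\cong\mathbf{G}^{\sharp}_{a}\), is sent to zero in \(\mathbf{G}^{\sharp}_{a}(\mathbf{Z}/p^{n})\)) amounts to the elementary estimate \(v_{p}(p^{nk}/k!)\geq n\) for all \(k\geq 1\). A minimal numerical sketch in Python (the helper function and the tested ranges are ours, chosen purely for illustration):

```python
def vp_factorial(k, p):
    """v_p(k!) via Legendre's formula: sum of floor(k/p^i) over i >= 1."""
    v, q = 0, p
    while q <= k:
        v += k // q
        q *= p
    return v

# The divided powers gamma_k(p^n) = p^{nk}/k! must all vanish in Z/p^n,
# i.e. v_p(p^{nk}/k!) = n*k - v_p(k!) >= n for every k >= 1.
for p in (2, 3, 5):
    for n in (1, 2, 3, 4):
        assert all(n * k - vp_factorial(k, p) >= n for k in range(1, 200))
print("gamma_k(p^n) vanishes in Z/p^n for all tested p, n, k")
```

(Indeed, \(v_{p}(k!)\leq\frac{k-1}{p-1}\leq k-1\), so \(nk-v_{p}(k!)\geq nk-k+1\geq n\) for all \(n,k\geq 1\).)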
One can get a formula which is more "accurate" than (63) via the following (see also [1, Page 56], where part of this statement is attributed to Gabber)24. Footnote 24: Our understanding is that this result is quite well-known; some form is heavily used in [1, 2].

**Lemma B.2**.: _Let \(y\) denote the element of \(W(\mathbf{Z}_{p})\) associated to the ghost coordinates \((1-p^{p-1},1-p^{p^{2}-1},\cdots)\). Then \([p]+V(y)=p\). Moreover, \(y=Fx\) for some \(x\in W(\mathbf{Z}_{p})\) if and only if \(p>2\); in this case, \(x\in W(\mathbf{Z}_{p})^{\times}\) (and hence \(y\in W(\mathbf{Z}_{p})^{\times}\)). If \(p=2\), then \(y[2^{m}]\) is in the image of \(F\) for any \(m\geq 2\)._

Let us assume \(p\) is odd for simplicity. Then Lemma B.2 implies that \(p-[p]\in W(\mathbf{Z}_{p})\) is a unit multiple of \(V(1)\), since \(p-[p]=V(y)=xV(1)\) and \(x\in W(\mathbf{Z}_{p})^{\times}\).25 It follows from [1, Construction 3.8] that if \(X=\operatorname{Spf}(R)\) is a bounded \(p\)-adic formal scheme, then the diffracted Hodge complex \(X^{\not{D}}\) is given on \(p\)-nilpotent rings \(S\) by \(X^{\not{D}}(S)=X(W(S)/(p-[p]))\).

**Remark B.4**.: Applying \(F\) to the identity \([p]+V(y)=p\), we see that \([p^{p}]=p(1-y)\). In particular, the element \(a\in W(\mathbf{Z}_{p})\) of [1, Lemma 4.7.3] can be identified with \(1-y\).

**Remark B.5**.: Using Lemma B.2, we can give an "alternative" formula for a preimage of \(p^{n}\) under \(VF\). Indeed, we have \(p=[p]+V(y)\) for some \(y\in W(\mathbf{Z}_{p})\), so that \(p^{n}=[p^{n}]+\sum_{i=0}^{n-1}{n\choose i}[p^{i}]V(y)^{n-i}\) in \(W(\mathbf{Z}_{p})\). Because \(V(a)b=V(aFb)\) and \(FV=p\), we have \(V(a)^{n}=V(p^{n-1}a^{n})\) by an easy induction on \(n\). Moreover, \([p^{i}]V(a)=V(aF[p^{i}])=V([p^{pi}]a)\). Since \([p^{n}]=0\in W(\mathbf{Z}/p^{n})\) (and hence in \(W(R)\)), we have \[p^{n}=\sum_{i=0}^{n-1}{n\choose i}[p^{i}]V(y)^{n-i}=\sum_{i=0}^{n-1}{n\choose i}V(p^{n-i-1}y^{n-i}F[p^{i}])\in W(\mathbf{Z}/p^{n}).\] Assume \(p>2\), so that Lemma B.2 implies that \(y=Fx\) for some \(x\in W(\mathbf{Z}_{p})\). The multiplicativity of \(F\) now lets us conclude that \[p^{n}=VF\left(\sum_{i=0}^{n-1}{n\choose i}p^{n-i-1}x^{n-i}[p^{i}]\right)\in W(\mathbf{Z}/p^{n}),\] so that \(p^{n}\in W(R)\) is in the image of \(VF\), as desired. One can check that \[\sum_{i=0}^{n-1}{n\choose i}p^{n-i-1}y^{n-i}[p^{pi}]=p^{n-1}\in W(\mathbf{Z}/p^{n}).\] This is essentially an elaboration on the proof of Lemma B.2. Indeed, applying \(w_{j}\), we have \[w_{j}\left(\sum_{i=0}^{n-1}{n\choose i}p^{n-i-1}y^{n-i}[p^{pi}]\right)=\frac{1}{p}\sum_{i=0}^{n-1}{n\choose i}(p-p^{p^{j+1}})^{n-i}p^{p^{j+1}i}=p^{n-1}-p^{p^{j+1}n-1}.\] It therefore suffices to show that the Witt vector \(a\in W(\mathbf{Z}_{p})\) with coordinates \(w_{j}(a)=p^{p^{j+1}n-1}\) vanishes in \(W(\mathbf{Z}/p^{n})\), which follows from a direct calculation.

Let us end with a proof of Lemma B.2; the explicit formulas below are unnecessary for any conceptual development, but we have included them since the computation was rather fun.

Proof of Lemma B.2.: First, it is easy to see that \(y\) is well-defined. Let us now check that \(p=[p]+V(y)\). If \(w(x)=(w_{0}(x),w_{1}(x),\cdots)\) are the ghost coordinates of \(x\in W(R)\), then \(w_{n+1}(Vx)=pw_{n}(x)\). It follows that \(w_{n}(Vy)=p-p^{p^{n}}\). Since \(w([p])=(p,p^{p},p^{p^{2}},\cdots)\), we have \[w([p]+Vy)=w([p])+w(Vy)=(p,p,\cdots)=w(p),\] so that \(p=[p]+V(y)\), as claimed.
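The computations in this proof are easy to check numerically. The following sketch (in Python, with exact rational arithmetic; the function names and the precision tested are ours) recovers the Witt components of \(y\) from its ghost coordinates and confirms that they are \(p\)-integral (so that \(y\) is indeed well-defined), and then verifies that the congruences which any solution of \(Fx=y\) at \(p=2\) would have to satisfy already fail modulo \(8\), which is the obstruction exploited in the next step of the proof:

```python
from fractions import Fraction
from itertools import product

def witt_components(ghosts, p):
    """Solve w_n = sum_{i<=n} p^i x_i^(p^(n-i)) for the Witt
    components x_i, given the ghost components w_n."""
    xs = []
    for n, w in enumerate(ghosts):
        acc = sum(Fraction(p**i) * xs[i]**(p**(n - i)) for i in range(n))
        xs.append((Fraction(w) - acc) / Fraction(p**n))
    return xs

# y has ghost coordinates (1 - p^{p-1}, 1 - p^{p^2-1}, ...); its
# Witt components should be p-integral.
for p in (2, 3, 5):
    comps = witt_components([1 - p**(p**(n + 1) - 1) for n in range(4)], p)
    assert all(c.denominator % p != 0 for c in comps)

# At p = 2, a solution x of Fx = y would satisfy x0^2 + 2*x1 = -1 and
# x0^4 + 2*x1^2 + 4*x2 = -7; these congruences have no solution mod 8.
assert not [t for t in product(range(8), repeat=3)
            if (t[0]**2 + 2*t[1] + 1) % 8 == 0
            and (t[0]**4 + 2*t[1]**2 + 4*t[2] + 7) % 8 == 0]
print("y is p-integral for p = 2, 3, 5; Fx = y is obstructed mod 8 at p = 2")
```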
To prove the claim about \(y\) being in the image of \(F\), recall that if \(x\in W(R)\), then the ghost coordinates of \(Fx\) are given by \(w_{n}(Fx)=w_{n+1}(x)\). In particular, \(y=Fx\) for some \(x\in W(\mathbf{Z}_{p})\) if and only if we can solve \[1-p^{p^{n}-1}=x_{0}^{p^{n}}+px_{1}^{p^{n-1}}+\cdots+p^{n}x_{n}\] for some \(x_{0},\cdots,x_{n}\in{\bf Z}_{p}\) and all \(n\geq 1\). This is impossible for \(p=2\). Indeed, first note that we need \(x_{0}^{2}+2x_{1}=1-p^{p-1}=-1\), so that \(x_{0}^{2}\equiv 1\pmod{2}\) (and hence \(x_{0}\equiv 1\pmod{2}\)). Write \(x_{0}=1+2s\), so that \(x_{0}^{2}+2x_{1}=1+4s(1+s)+2x_{1}\). In order for this to equal \(1-p^{p-1}=-1\), we need \(4s(1+s)+2x_{1}=-2\), i.e., \(x_{1}\equiv 1\pmod{2}\). This implies that \(x_{0}^{4}\equiv 1\pmod{8}\) and \(x_{1}^{2}\equiv 1\pmod{4}\) (so \(2x_{1}^{2}\equiv 2\pmod{8}\)). Since \(1-p^{p^{2}-1}=-7=x_{0}^{4}+2x_{1}^{2}+4x_{2}\), we can reduce modulo 8 to find that \(1\equiv 1+2+4x_{2}\pmod{8}\). But then \(x_{2}\) would solve \(4x_{2}\equiv-2\pmod{8}\), which is impossible. Now assume \(p>2\). Since \(x_{0}^{p}+px_{1}=1-p^{p-1}\), we have \(x_{0}^{p}\equiv 1\pmod{p}\); this implies that \(x_{0}^{p^{n}}\equiv 1\pmod{p^{n+1}}\). Writing \(x_{0}^{p^{n}}=1-p^{n+1}s_{n}\) for some \(s_{n}\in{\bf Z}_{p}\), we have \(x_{1}=ps_{1}-p^{p-2}\). Since \(p>2\), we see that \(x_{1}=p(s_{1}-p^{p-3})\in p{\bf Z}_{p}\). We claim that \(x_{n}\) exists and is an element of \(p{\bf Z}_{p}\) for \(n\geq 1\). We established the base case \(n=1\) above, so assume that \(x_{1},\cdots,x_{n-1}\in p{\bf Z}_{p}\), and let \(x_{i}=pt_{i}\). We then have \[p^{n}x_{n} =1-p^{p^{n}-1}-(x_{0}^{p^{n}}+px_{1}^{p^{n-1}}+\cdots+p^{n-1}x_{n- 1}^{p})\] \[=p^{n+1}s_{n}-p^{p^{n}-1}-p^{p^{n-1}+1}t_{1}^{p^{n-1}}-\cdots-p^{p +n-1}t_{n-1}^{p},\] so that \[x_{n}=ps_{n}-p^{p^{n}-1-n}-p^{p^{n-1}+1-n}t_{1}^{p^{n-1}}-\cdots-p^{p-1}t_{n-1 }^{p}.\] This is clearly divisible by \(p\) since \(p>2\) (so that \(p^{n}-1-n\geq 1\) for \(n\geq 1\)). Therefore, \(x_{n}\) exists and lives in \(p{\bf Z}_{p}\), as desired. (Note that if \(p=2\) and \(n=1\), then \(p^{n}-1-n=0\), so \(x_{1}\not\in 2{\bf Z}_{2}\).) If one prefers an explicit formula, the above argument shows that once one writes \(x_{0}=1-ps_{0}\), then \(x_{j}=pt_{j}\) for \(j\geq 1\) can be defined inductively by \[t_{n}=\sum_{i=1}^{p^{n}}\frac{(-1)^{i+1}}{p^{n+1-i}}\binom{p^{n}}{i}s_{0}^{i}- p^{p^{n}-2-n}-\sum_{k=1}^{n-1}p^{p^{k}-k-1}t_{n-k}^{p^{k}}.\] The first term is \(s_{n}\); note that \(\frac{1}{p^{n+1-i}}\binom{p^{n}}{i}\in{\bf Z}\). Since \(x_{0}\equiv 1\pmod{p}\) and \(x_{i}\equiv 0\pmod{p}\) for \(i\geq 1\), it is easy to see that all the ghost components of \(x\) lie in \(1+p{\bf Z}_{p}\subseteq{\bf Z}_{p}^{\times}\); this implies that \(x\in W({\bf Z}_{p})\) is invertible, as claimed. Let us now assume that \(p=2\), and show that \(y[2^{m}]\) is in the image of \(F\) for any \(m\geq 2\). To see this, observe that the ghost components of \(y[2^{m}]\) are given by \[w_{n}(y[2^{m}])=w_{n}(y)w_{n}([2^{m}])=2^{m2^{n}}(1-2^{2^{n+1}-1}).\] We therefore need to solve \[2^{m2^{n-1}}(1-2^{2^{n}-1})=x_{0}^{2^{n}}+2x_{1}^{2^{n-1}}+\cdots+2^{n}x_{n}\] for some \(x_{0},\cdots,x_{n}\in{\bf Z}_{2}\) and all \(n\geq 1\). When \(n=1\), we have \(x_{0}^{2}+2x_{1}=-2^{m}\), so that \(x_{0}^{2}\equiv 0\pmod{2}\) since \(m>0\). It follows that \(x_{0}=2t_{0}\) for some \(t_{0}\in{\bf Z}_{2}\). We now claim that \(x_{n}\) exists for \(n\geq 0\) and lives in \(2{\bf Z}_{2}\). 
We established the base case \(n=0\) above, so assume \(x_{0},x_{1},\cdots,x_{n-1}\in 2{\bf Z}_{2}\), and write \(x_{i}=2t_{i}\). Then \[2^{n}x_{n}=2^{m2^{n-1}}(1-2^{2^{n}-1})-(x_{0}^{2^{n}}+2x_{1}^{2^{n-1}}+\cdots+2^{n-1}x_{n-1}^{2})=2^{m2^{n-1}}(1-2^{2^{n}-1})-(2^{2^{n}}t_{0}^{2^{n}}+2^{2^{n-1}+1}t_{1}^{2^{n-1}}+\cdots+2^{n+1}t_{n-1}^{2}),\] so that \[x_{n}=2^{m2^{n-1}-n}(1-2^{2^{n}-1})-(2^{2^{n}-n}t_{0}^{2^{n}}+2^{2^{n-1}+1-n}t_{1}^{2^{n-1}}+\cdots+2t_{n-1}^{2}).\] Because \(m\geq 2\) and \(2^{j}-j\geq 1\) for every \(j\geq 0\), we see that \(x_{n}\in 2{\bf Z}_{2}\), as desired. (Of course, the key case is \(m=2\); when \(m=1\) and \(n=1\), the term \(2^{m2^{n-1}-n}(1-2^{2^{n}-1})=-1\not\in 2\mathbf{Z}_{2}\).) If one prefers an explicit formula, note that the above argument shows that once one writes \(x_{0}=2t_{0}\), then \(x_{j}=2t_{j}\) can be defined inductively by \[t_{n}=2^{m2^{n-1}-n-1}(1-2^{2^{n}-1})-\sum_{i=1}^{n}2^{2^{i}-i-1}t_{n-i}^{2^{i}}.\] Note that \(x\) is _not_ invertible in \(W(\mathbf{Z}_{2})\); instead, since \(x_{j}\in 2\mathbf{Z}_{2}\), the \(n\)th ghost component \(w_{n}(x)\in 2^{n+1}\mathbf{Z}_{2}\).

## Appendix C Cartier duals of \(W[F^{n}]\) and \(W^{\times}[F^{n}]\)

This section was inspired by the results proved above, but it does not play an essential role in the body of this article. Corollary C.12 below can be viewed as an algebraic way to bookkeep the structure possessed by the topological Sen operators; and, as we hope to show in future work, it sits as an intermediary between the topological and algebraic Sen operators of Theorem 3.1.4 and Example 4.1.11 (see Remark C.19). We begin with the following (presumably well-known) result. I am (again) grateful to Sasha Petrov for a relevant discussion on it.

**Proposition C.1**.: _There is an isomorphism of group schemes over \(\mathbf{Z}_{(p)}\) between \(W[F^{n}]:=\ker(F^{n}:W\to W)\) and the Cartier dual of the completion of \(W_{n}=W/V^{n}\) at the origin._

Proof.: Let us model \(W\) by the \(p\)-typical big Witt vectors. Given \(f(t)\in W\), let \(a_{0},a_{1},a_{2},\cdots\) denote the ghost components of \(f\), so that \(td\text{log}(f(t))=\sum_{m\geq 0}a_{m}t^{p^{m}}\). Then \(f(t)\in W[F^{n}]\) if and only if \(a_{m}=0\) for \(m\geq n\). Let us first prove the claim of the proposition when \(n=1\). Then, \(d\text{log}(f(t))\) is a constant, and \(f(0)=1\); we claim that this is equivalent to the condition that \(f\) defines a homomorphism \(\hat{\mathbf{G}}_{a}\to\mathbf{G}_{m}\), i.e., that \(f(x+y)=f(x)f(y)\). To check this, first suppose that \(f(x+y)=f(x)f(y)\). Then \(\partial_{x}f(x+y)=f(y)f^{\prime}(x)\), so that \[\frac{\partial_{x}f(x+y)}{f(x+y)}=\frac{f^{\prime}(x)}{f(x)}=d\text{log}(f(x))\] is independent of \(y\). Taking \(x=0\), we see that \(d\text{log}(f(x))\) is constant, as desired. The reverse direction (that \(d\text{log}(f(x))\) being constant and \(f(0)=1\) implies that \(f(x+y)=f(x)f(y)\)) is similar. In the general case, note that since the Frobenius on \(W\) shifts the ghost components by \(F:(a_{0},a_{1},a_{2},\cdots)\mapsto(a_{1},a_{2},a_{3},\cdots)\), the Frobenius \(F\) applied to \(f\) satisfies: \[td\text{log}(F^{j}(f)(t))=\sum_{m=0}^{n-j-1}a_{m+j}t^{p^{m}},\] so that there is an equality of power series \[F^{j}(f)(t)=\exp\left(\sum_{m=0}^{n-j-1}\frac{a_{m+j}}{p^{m}}t^{p^{m}}\right).\] Note that this is a slight variant of the Artin-Hasse exponential. Define a map \(g:\hat{W}_{n}\to\mathbf{G}_{m}\) on Witt components \((x_{0},\cdots,x_{n-1})\) (not ghost components!)
as follows: \[g(x_{0},\cdots,x_{n-1})=\prod_{j=0}^{n-1}F^{j}(f)(x_{j})=\exp\left(\sum_{j=0}^{n-1}\sum_{m=0}^{n-j-1}\frac{a_{m+j}}{p^{m}}x_{j}^{p^{m}}\right)=\exp\left(\sum_{m=0}^{n-1}\frac{a_{m}}{p^{m}}\left(\sum_{j=0}^{m}p^{j}x_{j}^{p^{m-j}}\right)\right).\] The coefficient of \(\frac{a_{m}}{p^{m}}\) is precisely the \(m\)th Witt polynomial, so that the function \(g\) is indeed additive on \(\hat{W}_{n}\). Moreover, the assignment \(f\mapsto g\) indeed gives an isomorphism \(W[F^{n}]\xrightarrow{\sim}\operatorname{Hom}(\hat{W}_{n},\mathbf{G}_{m})\), as one can check inductively using the case \(n=1\) and the fact that it induces an isomorphism over \(\mathbf{Q}\).

**Remark C.2** (Integral case).: One does not need \(p\)-typicality for the above statement to hold. Namely, if \(\mathbf{W}\) denotes the big Witt ring scheme, it is a classical fact that the Cartier dual of \(\mathbf{W}\) over \(\mathbf{Z}\) is canonically identified with \(\hat{\mathbf{W}}\). As in the \(p\)-typical case above, the pairing \(\mathbf{W}\times\hat{\mathbf{W}}\to\mathbf{G}_{m}\) sends \[(a,b)\mapsto\exp\left(\sum_{n\geq 1}\frac{w_{n}(a)w_{n}(b)}{n}\right).\] One only needs to check that this expression is in fact defined over \(\mathbf{Z}\). To see this, first observe that if the Witt components of \(b\) are \((b_{1},b_{2},\cdots)\), we have \[\exp\left(\sum_{n\geq 1}\frac{w_{n}(a)w_{n}(b)}{n}\right)=\prod_{j\geq 1}\exp\left(\sum_{n\geq 1}\frac{w_{nj}(a)b_{j}^{n}}{n}\right).\] Note that \(w_{nj}(a)=w_{n}(F_{j}a)\), so that if \((F_{j}a)_{d}\) denote the Witt components of \(F_{j}a\), we have \[\exp\left(\sum_{n\geq 1}\frac{w_{n}(a)w_{n}(b)}{n}\right)=\prod_{j,d\geq 1}(1-(F_{j}a)_{d}b_{j}^{d}),\] giving the desired integral representation. In fact, the last step can be generalized via the following rephrasing of the Dwork lemma:

**Lemma C.3**.: _Let \(R\) be a torsionfree ring equipped with ring maps \(\phi_{p}:R\to R\) for each prime \(p\) such that \(\phi_{p}(r)\equiv r^{p}\pmod{p}\) for all \(r\in R\). Let \((x_{n})_{n\geq 1}\) be a sequence of elements of \(R\) such that \(x_{n}\equiv\phi_{p}(x_{n/p})\pmod{p^{v_{p}(n)}}\) for each prime \(p\) and every \(n\in p\mathbf{Z}_{\geq 0}\). Then \(f(t):=\exp\left(\sum_{n\geq 1}\frac{x_{n}t^{n}}{n}\right)\) lies in \(1+tR[\![t]\!]\subseteq 1+t(R\otimes\mathbf{Q})[\![t]\!]\)._

Proof.: Let \(g(t)=1-t\in R[\![t]\!]\), so that there is an identity \[g(t)=\exp\left(\log(1-t)\right)=\exp\left(-\sum_{n\geq 1}\frac{t^{n}}{n}\right).\] Because \(f(0)=1\), we can write \(f(t)=\prod_{j\geq 1}(1-r_{j}t^{j})=\prod_{j\geq 1}g(r_{j}t^{j})\) for unique \(r_{j}\in R\otimes\mathbf{Q}\). Since \(g(t)\) is integral, it is sufficient to show that the elements \(r_{j}\) are also integral. Applying \(d\!\log\), we find that \[\sum_{n\geq 1}\frac{x_{n}t^{n}}{n}=d\!\log(f)(t)=\sum_{j\geq 1}d\!\log(g)(r_{j}t^{j})=-\sum_{j,m\geq 1}\frac{r_{j}^{m}t^{jm}}{m}.\] It follows that \(x_{n}=-\sum_{j\mid n}jr_{j}^{n/j}\). One can now argue in exactly the same way as the usual Dwork lemma (i.e., by induction on \(r_{j}\in R\) for \(j|n\) with \(j\neq n\)) to argue that each \(r_{j}\) is integral.

**Corollary C.4**.: _Write the underlying scheme of \(W_{n}\) as \(\prod_{i=0}^{n-1}\mathbf{G}_{a}\) (where the \(i\)th copy of \(\mathbf{G}_{a}\) has coordinate \(\Phi_{i}\))._
_There is a fully faithful functor \(\operatorname{QCoh}(BW[F^{n}])\hookrightarrow\operatorname{QCoh}(W_{n})\) whose essential image consists of those \(p\)-complete \(M\in\operatorname{QCoh}(W_{n})\) such that \(\Phi_{i}\) acts locally nilpotently on \(\operatorname{H}^{*}(M/p)\) for each \(0\leq i\leq n-1\). Furthermore, this functor is symmetric monoidal for the convolution tensor product on \(\operatorname{QCoh}(W_{n})\)._

_If \(\mathscr{F}\in\mathrm{QCoh}(BW[F^{n}])\) is sent to \(M\in\mathrm{QCoh}(W_{n})\) under this functor, one obtains a cube\({}^{26}\) \(\Phi_{\bullet}:2^{[n-1]}\to\mathrm{Mod}_{\mathbb{Z}_{p}}\) whose vertices are all \(M\) and such that the edge from the subset \(\{i_{1},\cdots,i_{j-1}\}\) to \(\{i_{1},\cdots,i_{j}\}\) is given by the operator \(\Phi_{i_{j}}:M\to M\). Then, the global sections \(\Gamma(BW[F^{n}];\mathscr{F})\) can be identified with the total fiber of the cube \(\Phi_{\bullet}\)._ Footnote 26: Recall that \([n]\) denotes the set \(\{0,\cdots,n\}\).

**Remark C.5**.: More generally, the argument of Proposition C.1 shows that there is an isomorphism of group schemes over \(\mathbb{Z}_{(p)}\) between \(W_{m}[F^{n}]:=\ker(F^{n}:W_{m}\to W_{m})\) and the Cartier dual of \(W_{n}[F^{m}]\). One can give a simpler proof of this fact over a perfect field \(k\) of characteristic \(p>0\) using the theory of Dieudonne modules: the Dieudonne module of \(W_{m}[F^{n}]\) over \(k\) is \(W(k)[F,V]/(F^{n},V^{m})\), while the Dieudonne module of \(W_{n}[F^{m}]\) over \(k\) is \(W(k)[F,V]/(F^{m},V^{n})\).

The argument of Proposition C.1 also shows the following (which is already discussed in [1, Appendix D]):

**Proposition C.6** ([1, Appendix D]).: _Let \(\hat{\mathbf{G}}_{\lambda}\) be the degeneration of \(\hat{\mathbf{G}}_{m}\) to \(\hat{\mathbf{G}}_{a}\) given by \(\mathrm{Spf}\,\mathbb{Z}_{(p)}[t,\lambda,\frac{1}{1+t\lambda}]^{\wedge}_{t}\) with group law \(x+y+\lambda xy\). Then the \(\mathbb{Z}_{(p)}[\lambda]\)-linear Cartier dual of \(\hat{\mathbf{G}}_{\lambda}\) is isomorphic to the group scheme \(\mathbf{D}(\hat{\mathbf{G}}_{\lambda})=\mathrm{Spec}\,\mathbb{Z}_{(p)}[\lambda,z,\frac{\prod_{j=0}^{n-1}(z-j\lambda)}{n!}]\) over \(\mathbf{A}^{1}_{\lambda}=\mathrm{Spec}\,\mathbb{Z}_{(p)}[\lambda]\) with coproduct \(z\mapsto z\otimes 1+1\otimes z\)._

Proof.: A homomorphism \(f:\hat{\mathbf{G}}_{\lambda}\to\mathbf{G}_{m}\times\mathbf{A}^{1}_{\lambda}\) is an element of \(\mathbb{Z}_{(p)}[t,\lambda,\frac{1}{1+t\lambda}]^{\wedge}_{t}\) such that \[f(x+y+\lambda xy)=f(x)f(y).\] This condition implies that \[(1+\lambda y)f^{\prime}(x+y+\lambda xy)=f(y)f^{\prime}(x),\] so that dividing both sides by \(f(x+y+\lambda xy)\), we have \[(1+\lambda y)\cdot d\mathrm{log}(f)(x+y+\lambda xy)=d\mathrm{log}(f)(x).\] Taking \(x=0\), we see that \(d\mathrm{log}(f)(y)\) is a constant multiple of \(\frac{1}{1+\lambda y}\) (where the constant is given by \(\frac{f^{\prime}(0)}{f(0)}\)), and hence \[f(y)=(1+\lambda y)^{z/\lambda}=\sum_{n\geq 0}y^{n}\frac{\prod_{j=0}^{n-1}(z-j\lambda)}{n!}\] for some fixed \(z\). This gives the desired claim, similarly to Proposition C.1. Note that \(f(y)=\exp(z\mathrm{log}_{F}(y))\), where \(\mathrm{log}_{F}\) is the logarithm of the formal group law \(x+y+\lambda xy\) over \(\mathbf{A}^{1}_{\lambda}\).
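The homomorphism identity at the heart of this proof can also be verified symbolically. The following sketch (in Python with sympy; the truncation degree \(N\) below is an arbitrary choice of ours) checks that the displayed series satisfies \(f(x+y+\lambda xy)=f(x)f(y)\) modulo terms of total degree greater than \(N\):

```python
import sympy as sp
from math import prod

x, y, z, lam = sp.symbols('x y z lam')
N = 5  # truncation degree for the check

def f(t):
    # truncation of f(t) = (1 + lam*t)^(z/lam)
    #                    = sum_n t^n * prod_{j<n}(z - j*lam)/n!
    return sum(t**n * prod([z - j*lam for j in range(n)], start=sp.Integer(1))
               / sp.factorial(n) for n in range(N + 1))

diff = sp.expand(f(x + y + lam*x*y) - f(x)*f(y))
# only terms of total (x, y)-degree > N may survive the truncation
poly = sp.Poly(diff, x, y)
assert all(sp.expand(c) == 0
           for (i, j), c in zip(poly.monoms(), poly.coeffs()) if i + j <= N)
print("f(x + y + lam*x*y) = f(x)*f(y) modulo degree", N + 1)
```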
**Remark C.7**.: Observe that \(\mathbf{D}(\hat{\mathbf{G}}_{\lambda})\) is isomorphic to the subgroup \((W\times\mathbf{A}^{1}_{\lambda})[F+[-\lambda]^{p-1}]\) of \(W\times\mathbf{A}^{1}_{\lambda}\) cut out by \(\{x|Fx=[-\lambda]^{p-1}x\}\); see [19, Proposition 6.3.3] and [1, Proposition D.4.10]. The key point is that if \(f(x)\in W\) and \(xd\mathrm{log}(f(x))=\sum_{m\geq 1}a_{m}x^{m}\), then \(f(t)\in(W\times\mathbf{A}^{1}_{\lambda})[F+[-\lambda]^{p-1}]\) if and only if \[a_{p^{n+1}}=((-\lambda)^{p-1})^{p^{n}}a_{p^{n}}=(-\lambda)^{p^{n+1}-p^{n}}a_{p^ {n}}.\] To check this, note that \[xd\mathrm{log}(f)(x)=\frac{zx}{1+\lambda x}=\sum_{n\geq 0}(-\lambda)^{n}zx^{n+1},\] so that \(a_{m}=(-\lambda)^{m-1}z\), and \(a_{m}=(-\lambda)^{m-n}a_{n}\) if \(m\geq n\). **Remark C.8**.: A similar argument shows that if \(\mathbf{G}\) denotes the group scheme \(\operatorname{Spec}\mathbf{Z}/p^{N}[\lambda]\langle x\rangle\) with group law \(x+y+\lambda xy\) (so that when \(\lambda=0\), we get \(\mathbf{G}_{a}^{\sharp}\)), then the \(\mathbf{Z}/p^{N}[\lambda]\)-linear Cartier dual of \(\mathbf{G}\) is isomorphic to the completion of \(\operatorname{Spec}\mathbf{Z}/p^{N}[\lambda,z]\) at the locus \(\prod_{j=0}^{p-1}(z-j\lambda)=z(z^{p-1}-\lambda^{p-1})\) (see, e.g., [1, Section B.4]). It follows that the \(\infty\)-category of \(\mathbf{G}\)-representations is equivalent to the \(\infty\)-category of \(\mathbf{Z}/p^{N}\)-modules \(M\) equipped with an operator \(z:M\to M\) such that \(z(z^{p-1}-\lambda^{p-1})\) acts locally nilpotently on \(\operatorname{H}^{*}(M\otimes_{\mathbf{Z}/p^{N}}\mathbf{F}_{p})\). **Recollection C.9**.: In [1, Lemma 3.5.18], Bhatt and Lurie show that the following is a Cartesian square of group schemes over \(\mathbf{Z}/p^{k}\): (64) We will generalize this below in Corollary C.10. In [1], we prove another generalization of this square, albeit in a different direction: \(\mathbf{G}_{a}^{\sharp}\) is replaced by the Cartier dual of a formal group \(\hat{\mathbf{G}}\), and \(\mathbf{G}_{m}^{\sharp}\) is replaced by an appropriate \(\hat{\mathbf{G}}\)-analogue of the divided power completion. **Corollary C.10**.: _Let \(k\geq 0\). There is an isomorphism of group schemes over \(\mathbf{Z}/p^{k}\) between the Cartier dual of \(W^{\times}[F^{n}]:=\ker(F^{n}:W^{\times}\to W^{\times})\) and the completion of \(W_{n}\) at its \(\mathbf{F}_{p}\)-rational points \(W_{n}(\mathbf{F}_{p})\cong\mathbf{Z}/p^{n}\)._ Proof.: Following [1, Remark 3.5.17], it suffices to prove the following analogue of [1, Lemma 3.5.18]: there is a Cartesian diagram of flat group schemes over \(\mathbf{Z}/p^{k}\) given by (65) Here, the left vertical map \(W^{\times}[F^{n}]\to\mathbf{G}_{m}\) is the composite \[W^{\times}[F^{n}]\to W^{\times}\to W^{\times}/V\cong\mathbf{G}_{m}.\] Indeed, taking the Cartier dual of (65) and using Proposition C.1, we obtain a pushout diagram of formal group schemes This implies that \(\mathbf{D}(W^{\times}[F^{n}])\) is the completion of \(W_{n}\) at its \(\mathbf{F}_{p}\)-rational points \(W_{n}(\mathbf{F}_{p})\cong\mathbf{Z}/p^{n}\), as desired. The proof that the square (65) is Cartesian is in fact a consequence of [1, Lemma 3.5.18]. As in [1, Lemma 3.5.18], since all group schemes involved are flat over \({\bf Z}/p^{k}\), it suffices to prove that the diagram is Cartesian after base-changing to \({\bf F}_{p}\) (i.e., assume that \(k=1\)). 
We begin by noting that there is an isomorphism \(V:W^{\times}\xrightarrow{\sim}\{x\in W|x_{0}=0\}\) of \(p\)-adic formal schemes sending \(x\mapsto Vx\). This gives an isomorphism \(W[F^{n}]\times\mu_{p^{n}}\cong W^{\times}[F^{n}]\) of group schemes over \({\bf F}_{p}\), sending \((x,a)\mapsto[a]+Vx\). It therefore suffices to show that the composite \[W[F^{n}]\xrightarrow{x\mapsto 1+Vx}W^{\times}[F^{n}]\xrightarrow{\log}W[F^{n}]\] is an isomorphism. But there is a commutative diagram where the columns exhibit the third term as the quotient of the first two. By induction on \(n\) (with the base case being provided by [1, Lemma 3.5.18]), we may conclude that the middle horizontal composite is an isomorphism. **Remark C.11**.: One could have alternatively/equivalently proved Corollary C.10 by observing that the square (65) for \(n-1\) maps to (65) for \(n\); all components of this map of squares are the canonical ones, except on the bottom-right \({\bf G}_{m}\) (where it is given by the \(p\)th power map \({\bf G}_{m}\to{\bf G}_{m}^{(1)}\)). Diagrammatically: (66) Taking Cartier duals, we obtain a map of pushout squares: (67) Again, Corollary C.10 follows by induction on \(n\), using [1, Lemma 3.5.18] for the base case. **Corollary C.12**.: _Write the underlying scheme of \(W_{n}\) as \(\prod_{i=0}^{n-1}\mathbf{G}_{a}\) (where the \(i\)th copy of \(\mathbf{G}_{a}\) has coordinate \(\Psi_{i}\)). There is a fully faithful functor \(\operatorname{QCoh}(BW^{\times}[F^{n}])\hookrightarrow\operatorname{QCoh}(W_{n})\) whose essential image consists of those \(p\)-complete \(M\in\operatorname{QCoh}(W_{n})\) such that \(\Psi_{i}^{p}-\Psi_{i}\) acts locally nilpotently on \(\operatorname{H}^{*}(M/p)\) for each \(0\leq i\leq n-1\). Furthermore, this functor is symmetric monoidal for the convolution tensor product on \(\operatorname{QCoh}(W_{n})\)._ _If \(\mathscr{F}\in\operatorname{QCoh}(BW^{\times}[F^{n}])\) is sent to \(M\in\operatorname{QCoh}(W_{n})\) under this functor, one obtains a cube \(\Psi_{\bullet}:2^{[n-1]}\to\operatorname{Mod}_{\mathbf{Z}_{p}}\) whose vertices are all \(M\) and such that the edge from the subset \(\{i_{1},\cdots,i_{j-1}\}\) to \(\{i_{1},\cdots,i_{j}\}\) is given by the operator \(\Psi_{j}:M\to M\). Then, the global sections \(\Gamma(BW^{\times}[F^{n}];\mathscr{F})\) can be identified with the total fiber of the cube \(\Psi_{\bullet}\)._ **Example C.13**.: If \(\mathscr{F},\mathscr{G}\in\operatorname{QCoh}(BW^{\times}[F^{2}])\) correspond to tuples \((M,\Psi_{0}^{M},\Psi_{1}^{M})\) and \((M^{\prime},\Psi_{0}^{M^{\prime}},\Psi_{1}^{M^{\prime}})\), then the global sections \(\Gamma(BW^{\times}[F^{2}];\mathscr{F})\) can be identified with the total fiber of the square with horizontal maps \(\Psi_{0}^{M}\) and vertical maps \(\Psi_{1}^{M}\). Moreover, \(\mathscr{F}\otimes\mathscr{G}\) corresponds to the module \(M\otimes M^{\prime}\), where \[\Psi_{0}^{M\otimes M^{\prime}} =\Psi_{0}^{M}\otimes 1+1\otimes\Psi_{0}^{M^{\prime}},\] \[\Psi_{1}^{M\otimes M^{\prime}} =\Psi_{1}^{M}\otimes 1+1\otimes\Psi_{1}^{M^{\prime}}- \frac{1}{p}\sum_{i=1}^{p-1}\binom{p}{i}(\Psi_{0}^{M})^{i}\otimes(\Psi_{0}^{M^{ \prime}})^{p-i}.\] More generally, if \(\mathscr{F},\mathscr{G}\in\operatorname{QCoh}(BW^{\times}[F^{n}])\) correspond to tuples \((M,\Psi_{0}^{M},\cdots,\Psi_{n-1}^{M})\) and \((M^{\prime},\Psi_{0}^{M^{\prime}},\cdots,\Psi_{n-1}^{M^{\prime}})\), let us write \(\Psi:=(\Psi_{0},\Psi_{1},\cdots)\). 
Let \(w_{j}(\Psi)=\sum_{i=0}^{j}p^{i}\Psi_{i}^{p^{j-i}}\) denote the corresponding Witt polynomial; then \[w_{j}(\Psi^{M\otimes M^{\prime}})=w_{j}(\Psi^{M})\otimes 1+1\otimes w_{j}(\Psi^{M^{ \prime}}). \tag{68}\] **Proposition C.14**.: _Define a homomorphism \(W^{\times}[F^{n}]\to\mathbf{G}_{m}\) via the composite_ \[W^{\times}[F^{n}]\to W^{\times}\to(W/V)^{\times}\cong\mathbf{G}_{m}.\] _Let \(\mathscr{O}\{1\}\) denote the line bundle over \(BW^{\times}[F^{n}]\) determined by the resulting map \(BW^{\times}[F^{n}]\to B\mathbf{G}_{m}\). Under the functor of Corollary C.12, the total space of the line bundle \(\mathscr{O}\{1\}\) corresponds to the \(p\)-completion of \(\mathbf{Z}_{p}[x^{\pm 1}]\) with the action of \(\Psi_{j}\) determined by the following requirement on Witt polynomials:_ \[w(\Psi)=(w_{0}(\Psi),w_{1}(\Psi),w_{2}(\Psi),\cdots)=(x\partial_{x},x\partial_ {x},x\partial_{x},\cdots). \tag{69}\] Proof.: The map \(BW^{\times}[F^{n}]\to B\mathbf{G}_{m}\) determines the stack \(\mathbf{G}_{m}/W^{\times}[F^{n}]\) over \(BW^{\times}[F^{n}]\), so that the corresponding object in \(\operatorname{QCoh}(W_{n})\) under the functor of Corollary C.12 has underlying module given by \(\mathscr{O}_{\mathbf{G}_{m}}=\mathbf{Z}[x^{\pm 1}]\). It is not too hard to show from the definition of the map \(BW^{\times}[F^{n}]\to B\mathbf{G}_{m}\) that under the functor of Corollary C.12, the line bundle \(\mathscr{O}\{1\}\) over \(BW^{\times}[F^{n}]\) corresponds to the \(p\)-complete module \(\mathbf{Z}_{p}\) (with generator \(x\)) where \(\Psi_{0}\) acts on \(x\) by \(1\), and \(\Psi_{j}\) acts on \(x\) by zero for \(j\geq 1\). The action of \(w_{j}(\Psi)\) on \(\mathscr{O}\{m\}=\mathbf{Z}_{p}\cdot x^{m}\) then follows from (68). **Example C.15**.: For instance, it follows from (69) that \[\Psi_{0} =x\partial_{x},\] \[\Psi_{1} =\frac{x\partial_{x}}{p}\left(1-(x\partial_{x})^{p-1}\right),\] \[\Psi_{2} =\frac{x\partial_{x}}{p^{2}}\left(1-(x\partial_{x})^{p^{2}-1}- \frac{1}{p^{p-1}}\sum_{j=0}^{p}(-1)^{j}\binom{p}{j}(x\partial_{x})^{(p-1)(j+1) }\right).\] **Remark C.16**.: Using Corollary C.4, a similar calculation can be used to describe the \(\mathbf{G}_{a}\)-bundle \(\mathbf{G}_{a}/W[F^{n}]\) over \(BW[F^{n}]\); and, in particular, the \(\infty\)-category \(\operatorname{QCoh}((\mathbf{G}_{a}/W[F^{n}])/\mathbf{G}_{m})\). Let us summarize this calculation as follows. Recall that \(\mathbf{G}_{a}/W[F]\cong\mathbf{G}_{a}/\mathbf{G}_{a}^{\sharp}\) is isomorphic to \(\mathbf{G}_{a}^{\mathrm{dR}}\), so that if \(\mathscr{A}_{1}:=\mathbf{Z}_{p}\{x,\partial_{x}\}/([\partial_{x},x]=1)\) is the Weyl algebra, then \(\operatorname{QCoh}(\mathbf{G}_{a}/W[F])\simeq\operatorname{LMod}_{\mathscr{A }_{1}}^{\partial_{x}\text{-}\operatorname{nilp}}\). Similarly, if we write \(\frac{1}{m!}\partial_{x}^{m}=\partial_{x}^{[m]}\) (so that \(\partial_{x}^{[m]}(x^{k})=\binom{k}{m}x^{k-m}\), define \(\mathscr{A}_{1}^{[n]}\) via \[\mathscr{A}_{1}^{[n]}=\mathbf{Z}_{p}\left\{x,\partial_{x},\cdots,\partial_{x }^{[p^{n-1}]}\right\}/([\partial_{x}^{[p^{j}]},x]=\partial_{x}^{[p^{j}-1]}).\] Note that \(\partial_{x}^{[p^{j}-1]}\) is a \(p\)-adic unit multiple of \(\prod_{k=0}^{j-1}(\partial_{x}^{[p^{k}]})^{p-1}\). 
Then, the action of \(W[F^{n}]\) on \(\mathbf{G}_{a}\) implies that there is an equivalence \[\operatorname{QCoh}(\mathbf{G}_{a}/W[F^{n}])\simeq\operatorname{LMod}_{ \mathscr{A}_{1}^{[n]}}^{\partial_{x}^{[p^{j}]}\text{-}\operatorname{nilp}};\] this can be extended to an equivalence between \(\operatorname{QCoh}((\mathbf{G}_{a}/W[F^{n}])/\mathbf{G}_{m})\) and the \(\infty\)-category of graded \(\mathscr{A}_{1}^{[n]}\)-modules such that \(\partial_{x}^{[p^{j}]}\) acts nilpotently for \(0\leq j\leq n-1\), where \(x\in\mathscr{A}_{1}^{[n]}\) has weight \(1\) and \(\partial_{x}^{[p^{j}]}\in\mathscr{A}_{1}^{[n]}\) has weight \(-p^{j}\). Algebras of divided power differential operators such as \(\mathscr{A}_{1}^{[n]}\) were initially studied by Berthelot in [1]. **Warning C.17**.: Note that \(\mathbf{G}_{a}/W[F^{n}]\) is not a ring stack. Indeed, the map \(W[F^{n}]\to\mathbf{G}_{a}\) is not a quasi-ideal: the \(W\)-module structure on \(W[F^{n}]\) does not factor through \(W\twoheadrightarrow W_{1}=\mathbf{G}_{a}\) (if \(F^{n}(x)=0\), then \(xV(y)=V(F(x)y)\) need not vanish). However, it _does_ factor through \(W\twoheadrightarrow W_{n}\) (if \(F^{n}(x)=0\), then \(xV^{n}(y)=V^{n}(F^{n}(x)y)=0\)); indeed, \(W_{n}/W[F^{n}]\cong W/p^{n}\) admits the structure of a ring stack. **Remark C.18**.: The proof of Corollary C.10 showed that there is an isomorphism \[W^{\times}[F^{n}]\cong W[F^{n}]\times\mu_{p^{n}}\] over \({\bf F}_{p}\). Let \(\mathfrak{X}\) be a smooth \(p\)-adic formal scheme over \({\bf Z}_{p}\), and let \(X=\mathfrak{X}\otimes_{{\bf Z}_{p}}{\bf F}_{p}\). Suppose that the \({\bf G}_{m}^{\sharp}\)-action on \(\widehat{\Omega}_{\mathfrak{X}}^{\not{D}}\otimes_{{\bf Z}_{p}}{\bf F}_{p}=F_{X, *}\Omega_{X/{\bf F}_{p}}^{\bullet}\) refines to a \(W^{\times}[F^{n}]\)-action. (For instance, let \((\widehat{\Omega}_{\mathfrak{X}}^{\not{D}})_{0}\) denote the weight \(0\) piece of the \({\bf Z}/p\)-grading on \(\widehat{\Omega}_{\mathfrak{X}}^{\not{D}}\) inherited from the \({\bf G}_{m}^{\sharp}\)-action. The datum of a refinement to a \(W^{\times}[F^{2}]\)-action leads to an operator on \((\widehat{\Omega}_{\mathfrak{X}}^{\not{D}})_{0}\) which acts on \(\operatorname{gr}_{\operatorname{conj}}^{pi}\widehat{\Omega}_{\mathfrak{X}}^ {\not{D}}\) by multiplication by \(-i\).) In this case, the \({\bf Z}/p\)-grading on \(F_{X,*}\Omega_{X/{\bf F}_{p}}^{\bullet}\) from [1, Remark 4.7.20] would refine to a \({\bf Z}/p^{n}\)-grading; this would imply a refinement of the Deligne-Illusie theorem [1], stating that \(\tau_{\geq-p^{n}+1}F_{X,*}\Omega_{X/{\bf F}_{p}}^{\bullet}\) would be decomposable. **Remark C.19** ("Witty" interpretation of [1]).: The work of [1] suggests that the base-change along \(\operatorname{BP}\langle n-1\rangle_{*}\to{\bf F}_{p}\) (even along \(\operatorname{BP}\langle n-1\rangle_{*}\to{\bf Z}_{p}\)) of the stack constructed from the associated graded of the motivic filtration [1] on \(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle)^{t{\bf Z}/p}\) (resp. \(\operatorname{THH}(\operatorname{BP}\langle n-1\rangle)\)) is isomorphic to the stack \(({\bf G}_{m}/W^{\times}[F^{n}])/{\bf G}_{m}\cong BW^{\times}[F^{n}]\) (resp. \(({\bf G}_{a}/W[F^{n}])/{\bf G}_{m}\cong(F_{*}W/pF^{n-1})/{\bf G}_{m}\)). We are currently investigating this and its consequences with Jeremy Hahn and Arpon Raksit. 
In particular, this suggests that if a \({\bf Z}_{p}\)-scheme "lifts to \(\operatorname{BP}\langle n-1\rangle\)", the \({\bf G}_{m}^{\sharp}\)-action on \(\widehat{\Omega}_{\mathfrak{X}}^{\not{D}}\) refines to a \(W^{\times}[F^{n}]\)-action. From this perspective, the operators \(\Psi_{j}\) from above are closely related to the topological Sen operators \(\Theta_{j}\) from the body of this article: roughly, \(\Theta_{j}\) can be understood as \(w_{j-1}(\Psi)\). Given Remark C.18, one is therefore naturally led to the following question: if \(X\) is a smooth and proper \({\bf F}_{p}\)-scheme which "lifts to \(\operatorname{BP}\langle n-1\rangle\)" and \(\dim(X)<p^{n}\), does the Hodge-de Rham spectral sequence for \(X\) degenerate at the \(E_{1}\)-page? This question need not make sense, since \(\operatorname{BP}\langle n-1\rangle\) is generally not an \({\bf E}_{\infty}\)-ring [1, 2]. However, since \(\operatorname{BP}\langle n-1\rangle\) admits the structure of an \({\bf E}_{3}\)-ring, one can nevertheless ask whether such a degeneration statement holds noncommutatively if \(\operatorname{QCoh}(X)\) admits a lift to a left \(\operatorname{BP}\langle n-1\rangle\)-linear \(\infty\)-category. This line of thinking was motivation for the following result (see [1]): if \(\operatorname{QCoh}(X)\) lifts to a left \(\operatorname{BP}\langle n-1\rangle\)-linear \(\infty\)-category, and \(\dim(X)<p^{n}\), then the Tate spectral sequence for \(\operatorname{HP}(X/{\bf F}_{p})\) degenerates at the \(E_{2}\)-page.
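As a concrete sanity check of the Dwork-style Lemma C.3 from earlier in this appendix, the recursion \(x_{n}=-\sum_{j\mid n}jr_{j}^{n/j}\) from its proof can be run in exact arithmetic. The sketch below is ours (the function name and the test sequence are illustrative choices, not from the original): it takes \(R=\mathbf{Z}\) with \(\phi_{p}=\mathrm{id}\) (so \(\phi_{p}(r)\equiv r^{p}\pmod{p}\) by Fermat) and the constant sequence \(x_{n}=c\), for which \(f(t)=\exp(-c\log(1-t))=(1-t)^{-c}\in 1+t\mathbf{Z}[\![t]\!]\); the recovered exponents \(r_{j}\) come out integral, as the lemma predicts.

```python
from fractions import Fraction

def witt_product_exponents(x, N):
    """Solve x_n = -sum_{j|n} j * r_j^(n/j) for r_1..r_N by induction on n.

    `x` maps n -> x_n (the logarithmic coefficients of
    f(t) = exp(sum_n x_n t^n / n)); returns the unique r_j with
    f(t) = prod_j (1 - r_j t^j), using exact rational arithmetic.
    """
    r = {}
    for n in range(1, N + 1):
        # contribution of the proper divisors j < n of n
        s = sum(j * r[j] ** (n // j) for j in range(1, n) if n % j == 0)
        r[n] = Fraction(-(x(n) + s)) / n
    return r

# constant sequence x_n = c satisfies the congruences for every prime,
# so every r_j should be an integer (here f(t) = (1 - t)^{-3})
c = 3
r = witt_product_exponents(lambda n: c, 12)
assert all(v.denominator == 1 for v in r.values()), "integrality check failed"
print({n: int(v) for n, v in r.items()})
```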
2309.02074
Some log-convexity theorems on quantum entropies
In this paper, we prove log-convexity of some parametrized versions of the relative entropy and fidelity. We also look at a Rényi generalization of relative entropy difference introduced by Seshadreesan et. al. in J. Phys. A: Math. Theor. 48 (2015) and give a counterexample to one of their conjectures.
Saptak Bhattacharya
2023-09-05T09:19:55Z
http://arxiv.org/abs/2309.02074v2
# Some log-convexity theorems on quantum entropies ###### Abstract. In this paper, we prove log-convexity of some parametrized versions of the relative entropy and fidelity. We also look at a Renyi generalization of relative entropy difference introduced by Seshadreesan et. al. in J. Phys. A: Math. Theor. 48 (2015) and give a counterexample to one of their conjectures. Key words and phrases: log-convexity, relative entropy, fidelity, recoverability 2010 Mathematics Subject Classification: 94A17, 15A45 ## 1. Introduction Let \(I\subset\mathbb{R}\) be an interval. A function \(f:I\to(0,\infty)\) is said to be _log-convex_ if \(\ln f\) is convex, or equivalently, \[f\big{(}\theta x+(1-\theta)y\big{)}\leq f(x)^{\theta}f(y)^{1-\theta}\] for all \(x,y\in I\), \(\theta\in[0,1]\). It follows from the weighted A.M.-G.M. inequality that every log-convex function is convex. If \(f:I\to(0,\infty)\) is log-convex, then for every \(y\in I\), the function \[x\to\frac{\ln f(x)-\ln f(y)}{x-y}\] is monotonically increasing in \(I\setminus\{y\}\). Let \(\mathbb{H}(n)\) be the real vector space of all \(n\times n\) Hermitian matrices and let \(K\) be a convex subset of \(\mathbb{H}(n)\). A function \(G:K\times K\to\mathbb{R}\) is said to be _jointly convex_ (_concave_) if for all \(A,B,C,D\in K\), \(\theta\in[0,1]\), \[G\big{(}\theta A+(1-\theta)C,\theta B+(1-\theta)D\big{)}\underset{(\geq)}{ \leq}\theta G(A,B)+(1-\theta)G(C,D).\] Convexity problems have been of general interest in quantum information theory (see e.g. [16, 28, 10, 18, 20, 6, 2]). In this paper, we study some parametrized quantum entropies and discuss log-convexity with respect to the parameter. Let \(A\) and \(B\) be positive definite density matrices. The _relative entropy_, or the _Kullback-Leibler divergence_ of \(A\) with respect to \(B\) is defined as \[D(A|B)=\operatorname{tr}A(\ln A-\ln B). \tag{1}\] The data processing inequality, originally proved by Lindblad in [16], is a result of fundamental importance in quantum information theory. It states that for density matrices \(A,B\in M_{n}(\mathbb{C})\) and a completely positive, trace preserving (henceforth denoted by CPTP) map \(\phi:M_{n}(\mathbb{C})\to M_{k}(\mathbb{C})\), \[D(\phi(A)|\phi(B))\leq D(A|B). \tag{2}\] Given a parameter \(\theta\in[0,1]\), the \(\theta\)-_divergence_ of \(A\) with respect to \(B\) is defined to be \(\operatorname{tr}(A^{\theta}B^{1-\theta})\). Lieb's famous concavity theorem ([14, 1]) asserts that this is jointly concave in \(A\) and \(B\). _The Renyi \(\theta\)-relative entropy_ of \(A\) with respect to \(B\) is defined as \[D_{\theta}(A|B)=\frac{\ln\operatorname{tr}(A^{\theta}B^{1-\theta})}{\theta-1}. \tag{3}\] This has been discussed in [17], [20] and [18]. It is known that \(D_{\theta}(A|B)\) is monotonically increasing in \(\theta\) and \[\lim_{\theta\to 1-}D_{\theta}(A|B)=D(A|B).\] In the first section of this paper, we strengthen this monotonicity result and prove that the function \(\theta\to\operatorname{tr}(A^{\theta}B^{1-\theta})\) is log-convex. Two different proofs are given, the first one using majorization techniques, and the second one by proving a more general log-convexity result on \(C^{*}\)-algebras. We also give a refinement of Jensen's inequality for the function \(t\to t\ln t\). Given density matrices \(A\) and \(B\), the fidelity between \(A\) and \(B\) is defined as \[F(A|B)=\operatorname{tr}(A^{1/2}BA^{1/2})^{1/2}.\] In physical terms, this measures how close the states \(A\) and \(B\) are. 
It is known that \(F(A|B)\) is symmetric in \(A\) and \(B\), \(0\leq F(A|B)\leq 1\) and that it is jointly concave, a proof of which can be found in [28]. In section 2 we study a parametrized version of the fidelity defined by \[F_{\theta}(A|B)=\operatorname{tr}\lvert A^{\theta}B^{1-\theta}\rvert\] where \(\theta\in[0,1]\). This is a modified version of the \(\theta\)-divergence and hence, natural questions regarding its joint concavity, log-convexity and convergence to \(D(A|B)\) arise. We prove log-convexity, and show that just like \(D_{\theta}(A|B)\), \[\frac{\ln\mathrm{tr}|A^{\theta}B^{1-\theta}|}{\theta-1}\] converges to \(D(A|B)\) as \(\theta\to 1-\). However, a crucial difference from the \(\theta\)-divergence emerges when we look at joint concavity, as we show that \(\mathrm{tr}|A^{\theta}B^{1-\theta}|\) is jointly concave in density matrices \(A\) and \(B\) if and only if \(\theta=\frac{1}{2}\). In section 3 we look at another parametrized version of the fidelity known as the _sandwiched quasi-relative entropy_. This has been defined and studied in [18] and [30]. Here we show that this quantity is log-convex on \([\frac{1}{2},1]\) using interpolation techniques. The next focus is on a refinement of the data processing inequality conjectured by Seshadreesan et. al. ([23]) in 2015. Our analysis of log-convexity has been helpful in finding a counterexample to it. Let \(A,B\) be \(n\times n\) density matrices with \(\mathrm{supp}(A)\subset\mathrm{supp}(B)\). Let \(\phi:M_{n}(\mathbb{C})\to M_{k}(\mathbb{C})\) be a CPTP map (also called a quantum channel) with the Stinespring representation \[\phi(X)=\mathrm{tr}_{2}VXV^{*}\] for all \(X\in M_{n}(\mathbb{C})\) where \(V:\mathbb{C}^{n}\to\mathbb{C}^{m}\otimes\mathbb{C}^{k}\) is an isometry for some \(m\in\mathbb{N}\). Let \(\mathcal{R}_{\phi,B}:M_{k}(\mathbb{C})\to M_{n}(\mathbb{C})\) be the Petz recovery map ([11], [21]) given by \[\mathcal{R}_{\phi,B}(Y) =B^{1/2}\phi^{*}\big{(}\phi(B)^{-1/2}Y\phi(B)^{-1/2}\big{)}B^{1/2}\] \[=B^{1/2}V^{*}(I_{m}\otimes\phi(B)^{-1/2}Y\phi(B)^{-1/2})VB^{1/2}\] for all \(Y\in M_{k}(\mathbb{C})\). Seshadreesan et. al. conjectured in [23] that \[-2\ln\big{[}F(A|\mathcal{R}_{\phi,B}(\phi(A)))\big{]}\leq D(A|B)-D(\phi(A)| \phi(B)). \tag{4}\] This arose from their study of a Renyi generalization of the relative entropy difference, which is defined in [23] and [29] as \[\tilde{\Delta}_{t}(A,B,\phi)=\frac{2t}{t-1}\ln\big{|}\big{|}(I_{m}\otimes\phi (A)^{\frac{1-t}{2t}}\phi(B)^{\frac{t-1}{2t}})VB^{\frac{1-t}{2t}}A^{\frac{1}{2 }}\big{|}\big{|}_{2t}\] for \(t\in[\frac{1}{2},1)\cup(1,\infty)\). Seshadreesan et. al. ([23]) proved that \(\tilde{\Delta}_{t}(A,B,\phi)\) converges to \(D(A|B)-D(\phi(A)|\phi(B))\) as \(t\to 1\) and conjectured that \(\tilde{\Delta}_{t}(A,B,\phi)\) is monotonically increasing in \(t\), thereby giving inequality (4) as a natural consequence. Though this has remained open, subsequent research has led to other significant refinements of the data processing inequality. Wilde in [29] used interpolation techniques to prove that \[-2\ln\big{[}\sup_{t\in\mathbb{R}}F(A|\mathcal{R}^{t}_{\phi,B}(\phi(A)))\big{]} \leq D(A|B)-D(\phi(A)|\phi(B)) \tag{5}\] where \(\mathcal{R}^{t}_{\phi,B}\) is a _rotated_ Petz recovery map given by \[\mathcal{R}^{t}_{\phi,B}(Y)=B^{\frac{1}{2}+it}\phi^{*}\big{(}\phi(B)^{-\frac{1}{2}-it }Y\phi(B)^{-\frac{1}{2}-it}\big{)}B^{\frac{1}{2}+it} \tag{6}\] for each \(t\in\mathbb{R}\). At \(t=0\), this coincides with the usual Petz recovery map \(\mathcal{R}_{\phi,B}\). Sutter et. al. 
([26], [27]) and Junge et. al. ([12]) have shown the existence of recovery maps \(\mathcal{R}^{\prime}\) depending only on \(\phi\) and \(B\) such that \[-2\ln\big{[}F(A|\mathcal{R}^{\prime}(\phi(A)))\big{]}\leq D(A|B)-D(\phi(A)| \phi(B)).\] This has recently been generalized to infinite dimensions in [8] and [9]. An approach to resolve the conjecture can be made by asking whether the map \[t\to 2t\,\ln\big{|}\big{|}(I_{m}\otimes\phi(A)^{\frac{1-t}{2t}}\phi(B)^{\frac{t- 1}{2t}})VB^{\frac{1-t}{2t}}A^{\frac{1}{2}}\big{|}\big{|}_{2t}\] is convex on \([\frac{1}{2},1]\). If true, this would imply the monotonicity of \(\tilde{\Delta}_{t}(A,B,\phi)\) and inequality (4) as its consequence. However, none of this holds, as we give an explicit counterexample with \(2\times 2\) matrices to disprove inequality (4). It should be noted, though, that its classical counterpart is true, a proof of which can be found in [13]. ## 2. The \(\theta\)-divergence Let us begin with a lemma which will be useful in the proof of the main result of this section. **Lemma 1**.: _Let \(X,Y\) be bounded random variables such that \(X,Y\geq\epsilon\) for some \(\epsilon{>}0\). Then the map \(\theta\to\ln\mathbb{E}(X^{\theta}Y^{1-\theta})\) is convex in \([0,1]\)._ Proof.: Let \(g(\theta)=\ln\mathbb{E}(X^{\theta}Y^{1-\theta})\). A straightforward computation gives \[g^{\prime\prime}(\theta)=\frac{\mathbb{E}(X^{\theta}Y^{1-\theta})\mathbb{E}( X^{\theta}Y^{1-\theta}(\ln X-\ln Y)^{2})-[\mathbb{E}(X^{\theta}Y^{1-\theta}( \ln X-\ln Y))]^{2}}{[\mathbb{E}(X^{\theta}Y^{1-\theta})]^{2}}\] for all \(\theta\in(0,1)\). By the Cauchy-Schwarz inequality, \[[\mathbb{E}(X^{\theta}Y^{1-\theta}(\ln X-\ln Y))]^{2}\] \[=[\mathbb{E}(\sqrt{X^{\theta}Y^{1-\theta}}\sqrt{X^{\theta}Y^{1-\theta}}(\ln X -\ln Y))]^{2}\] \[\leq\mathbb{E}(X^{\theta}Y^{1-\theta})\mathbb{E}(X^{\theta}Y^{1-\theta}(\ln X -\ln Y)^{2}).\] This implies \(g^{\prime\prime}\geq 0\) and hence, \(g\) is convex. **Remark**.: The boundedness assumptions are only there to ensure that \(\mathbb{E}(X^{\theta}Y^{1-\theta})\) is well defined for all \(\theta\) and \(g\) is smooth. **Theorem 1**.: _Let \(A,B\) be positive definite density matrices. Then the function \(\theta\to\text{tr}(A^{\theta}B^{1-\theta})\) is log-convex on \([0,1]\)._ Proof.: Let \(f:[0,1]\to\mathbb{R}\) be given by \(f(\theta)=\ln\text{tr}(A^{\theta}B^{1-\theta})\). By unitary congruence, we may assume without loss of generality that \[A=\begin{pmatrix}\lambda_{1}&&\\ &\ddots&\\ &&\lambda_{n}\end{pmatrix}\] where \(\{\lambda_{i}\}_{i=1}^{n}\) are positive scalars such that \(\sum_{i}\lambda_{i}=1\). Let \(\{\sigma_{j}\}_{j=1}^{n}\) be the eigenvalues of \(B\) and \(\{d_{j}(\theta)\}_{j=1}^{n}\) denote the diagonal elements of \(B^{1-\theta}\). Since \(\{B^{1-\theta}\}_{\theta}\) is a commuting family of positive matrices, there exists a doubly stochastic matrix \(M=[p_{ij}]\) such that \[\begin{pmatrix}d_{1}(\theta)\\ \vdots\\ d_{n}(\theta)\end{pmatrix}=M\begin{pmatrix}\sigma_{1}^{1-\theta}\\ \vdots\\ \sigma_{n}^{1-\theta}\end{pmatrix}.\] It follows from here that \[f(\theta)=\ln\sum_{i,j}p_{ij}\lambda_{i}^{\theta}\sigma_{j}^{1-\theta}.\] Consider the probability measure \(\mu\) on \(\Omega=\{(i,j)\in\mathbb{N}\times\mathbb{N}:1\leq i,j\leq n\}\) given by \[\mu(i,j)=\frac{p_{ij}}{n}\] for all \((i,j)\in\Omega\). Then \[f(\theta)=\ln n+\ln\sum_{i,j}\mu(i,j)\lambda_{i}^{\theta}\sigma_{j}^{1-\theta}\] which is convex by Lemma 1. 
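Before stating the corollaries, here is a quick numerical illustration of Theorem 1 (a sketch added for the reader, with our own helper names; it is not part of the proof): on a uniform grid of \(\theta\) values, midpoint log-convexity of \(\theta\mapsto\operatorname{tr}(A^{\theta}B^{1-\theta})\), and the monotonicity of \(D_{\theta}(A|B)\) that follows from it, can be checked for random positive definite density matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density(n):
    """Random positive definite density matrix (normalized Wishart)."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = m @ m.conj().T + 1e-3 * np.eye(n)   # bounded away from singular
    return rho / np.trace(rho).real

def mat_pow(x, p):
    """x^p for Hermitian positive definite x, via eigendecomposition."""
    w, v = np.linalg.eigh(x)
    return (v * w**p) @ v.conj().T

def theta_div(a, b, t):
    return np.trace(mat_pow(a, t) @ mat_pow(b, 1.0 - t)).real

a, b = random_density(4), random_density(4)
thetas = np.linspace(0.0, 1.0, 51)
f = np.log([theta_div(a, b, t) for t in thetas])

# midpoint log-convexity on the uniform grid: f_k <= (f_{k-1} + f_{k+1})/2
assert np.all(f[1:-1] - 0.5 * (f[:-2] + f[2:]) <= 1e-10)

# Corollary 1 below: D_theta = f(theta)/(theta - 1) increases on [0, 1)
renyi = f[:-1] / (thetas[:-1] - 1.0)
assert np.all(np.diff(renyi) >= -1e-10)
print("log-convexity and monotonicity hold on this sample")
```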
As a corollary, we get: **Corollary 1**.: _The Renyi relative entropy \(D_{\theta}(A|B)\) given by equation (3) is monotonically increasing in \([0,1)\)._ Proof.: By Theorem 1, the function \(f(\theta)=\ln\text{tr}(A^{\theta}B^{1-\theta})\) is convex. Since \(D_{\theta}(A|B)\) is the slope of the secant line joining \((\theta,f(\theta))\) and \((1,f(1))\), it is increasing in \(\theta\). **Remarks**.: 1. The idea behind the proof of Theorem 1 is due to Prof. Rajendra Bhatia (personal discussion). 2. From Corollary 1, we get the inequality \[-2\ln\operatorname{tr}(A^{1/2}B^{1/2})\leq D(A|B). \tag{7}\] Since \(\operatorname{tr}(A^{1/2}B^{1/2})\leq F(A|B)\), it follows that \[-2\ln F(A|B)\leq D(A|B).\] This has an important physical interpretation: if the divergence of \(A\) with respect to \(B\) is small, the fidelity between \(A\) and \(B\) is large. A different proof of (7) can be found in [15]. We now give an alternate proof of Theorem 1 using a more general log-convexity result: **Theorem 2**.: _Let \(\mathcal{A}\) be a unital \(C^{*}\)-algebra. Let \(x\in\mathcal{A}\) be positive and invertible. Then for every positive definite state \(\phi:\mathcal{A}\to\mathbb{C}\) the function \(\theta\to\phi(x^{\theta})\) is log-convex on \([0,1]\)._ Proof.: It suffices to show that for all \(t,s\in[0,1]\), \[\ln\phi(x^{\frac{t+s}{2}})\leq\frac{\ln\phi(x^{t})+\ln\phi(x^{s})}{2}.\] Note that the \(2\times 2\) operator matrix \[\begin{pmatrix}x^{t}&x^{\frac{t+s}{2}}\\ x^{\frac{t+s}{2}}&x^{s}\end{pmatrix}\] is positive because the Schur complement of \(x^{t}\) is \(0\). Since \(\phi\) is a state, it is completely positive and therefore, \[\begin{pmatrix}\phi(x^{t})&\phi(x^{\frac{t+s}{2}})\\ \phi(x^{\frac{t+s}{2}})&\phi(x^{s})\end{pmatrix}\geq 0.\] Taking determinant, \[\phi(x^{t})\phi(x^{s})\geq\phi(x^{\frac{t+s}{2}})^{2}.\] Taking log on both sides, \[\ln\phi(x^{\frac{t+s}{2}})\leq\frac{\ln\phi(x^{t})+\ln\phi(x^{s})}{2}.\] This completes the proof. Theorem 2 yields the following consequences: **Corollary 2**.: _Let \(\mathcal{A}\) be a unital \(C^{*}\)-algebra and let \(x\in\mathcal{A}\) be positive and invertible. Then for any positive definite state \(\phi:\mathcal{A}\to\mathbb{C}\), \(\phi(x\ln x)\geq\phi(x)\big{[}\ln\phi(x)+2(\ln\phi(x)^{1/2}-\ln\phi(x^{1/2})) \big{]}\)._ Proof.: Replacing \(x\) with \(\frac{x}{\phi(x)}\) if necessary, we may assume, without loss of generality, that \(\phi(x)=1\). The inequality then reduces to \[\phi(x\ln x)\geq-2\ln\phi(x^{1/2}).\] By Theorem 2, \(f(\theta)=\ln\phi(x^{\theta})\) is convex on \([0,1]\) and therefore, the function \(h(\theta)=\frac{\ln\phi(x^{\theta})}{\theta-1}\) is increasing on \([0,1)\) and converges to \(f^{\prime}(1)=\phi(x\ln x)\) as \(\theta\to 1-\). This implies that \[h(1/2)=-2\ln\phi(x^{1/2})\leq\phi(x\ln x).\] **Corollary 3**.: _Let \(A,B\) be positive definite density matrices. Then the function \(\theta\to\ln\text{tr}(A^{\theta}B^{1-\theta})\) is convex on \([0,1]\)._ Proof.: Consider \(M_{n}(\mathbb{C})\) as a Hilbert space with the Hilbert-Schmidt inner product \[\langle X,Y\rangle=\text{tr}(Y^{*}X).\] Let \(A,B\) be positive definite density matrices and \(B(M_{n}(\mathbb{C}))\) be the \(C^{*}\)-algebra of operators on \(M_{n}(\mathbb{C})\). 
Consider the pure state \(\phi:B(M_{n}(\mathbb{C}))\to\mathbb{C}\) given by \[\phi(T)=\langle TB^{1/2},B^{1/2}\rangle\] for all \(T\in B(M_{n}(\mathbb{C}))\) and the relative modular operator \(\Delta:M_{n}(\mathbb{C})\to M_{n}(\mathbb{C})\) given by \[\Delta(X)=AXB^{-1}\] for all \(X\in M_{n}(\mathbb{C})\). Being a composition of two commuting positive definite operators, namely the left multiplication by \(A\) and the right multiplication by \(B^{-1}\), \(\Delta\) is also positive definite. We observe that \[\text{tr}(A^{\theta}B^{1-\theta})=\langle\Delta^{\theta}B^{1/2},B^{1/2} \rangle=\phi(\Delta^{\theta})\] and conclude the proof by invoking Theorem 2. **Remarks.** 1. The inequality in corollary 2 is stronger than Jensen's inequality for the convex function \(t\to t\ln t\) since \(\phi(x)^{1/2}\geq\phi(x^{1/2})\). 2. The relative modular operator used in corollary 3 is positive and therefore, admits a functional calculus, which can be used to define a large class of quantum entropies called \(f\)-divergences. We consider a function \(f:(0,\infty)\to\mathbb{R}\), often assumed to be operator convex or concave, and define the \(f\)-divergence of two positive definite density matrices \(A\) and \(B\) as \(\langle f(\Delta)B^{1/2},B^{1/2}\rangle\). This coincides with the Kullback-Leibler and the \(\theta\)-divergences when \(f(x)=x\ln x\) and \(x^{\theta}\) respectively. See [11] and [19] for more details on \(f\)-divergences. ## 3. A parametrized version of the fidelity Given density matrices \(A\) and \(B\), we consider a modified version of the \(\theta\)-divergence given by \[F_{\theta}(A|B)=\operatorname{tr}\lvert A^{\theta}B^{1-\theta}\rvert \tag{8}\] where \(\theta\in[0,1]\). This is a parametrized version of the fidelity because at \(\theta=\frac{1}{2}\), this coincides with \(F(A|B)\). It is natural to ask whether the important properties of the \(\theta\)-divergence like log-convexity and joint concavity hold for \(F_{\theta}(A|B)\). This is what we explore in this section. **Theorem 3**.: _Let \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\) and \(\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{n}\) be the eigenvalues of density matrices \(A\) and \(B\) respectively. Then for all \(\theta\in(0,1)\), \(\operatorname{tr}\lvert A^{\theta}B^{1-\theta}\rvert\leq\sum_{j}\lambda_{j}^ {\theta}\sigma_{j}^{1-\theta}\leq 1\)._ Proof.: Let \(\mathbb{U}(n)\) be the group of \(n\times n\) unitary matrices and \(U\in\mathbb{U}(n)\). Then the singular values of \(UA^{\theta}\) are precisely \(\{\lambda_{j}^{\theta}\}\). By von Neumann's trace inequality, \[\lvert\operatorname{tr}(UA^{\theta}B^{1-\theta})\rvert\leq\sum_{j}\lambda_{j} ^{\theta}\sigma_{j}^{1-\theta}\leq\sum_{j}\theta\lambda_{j}+(1-\theta)\sigma_{ j}=1.\] Thus, \[\operatorname{tr}\lvert A^{\theta}B^{1-\theta}\rvert=\sup_{U\in\mathbb{U}(n )}\lvert\operatorname{tr}(UA^{\theta}B^{1-\theta})\rvert\leq\sum_{j}\lambda_ {j}^{\theta}\sigma_{j}^{1-\theta}\leq 1.\] The next two theorems discuss the log-convexity of \(F_{\theta}(A|B)\) and the convergence of \[\frac{\ln F_{\theta}(A|B)}{\theta-1}\] as \(\theta\to 1-\). **Theorem 4**.: _Let \(A,B\) be positive definite density matrices and \(F_{\theta}(A|B)\) be as in equation (8). Then the function \(\theta\to F_{\theta}(A|B)\) is log-convex on \([0,1]\)._ Proof.: Let \(f(\theta)=F_{\theta}(A|B)=\operatorname{tr}\lvert A^{\theta}B^{1-\theta}\rvert\). Let \(U\) be a unitary and \(S\) be the strip \(\{z\in\mathbb{C}:0\leq\operatorname{Re}(z)\leq 1\}\). 
Consider the function \(\Phi:S\to\mathbb{C}\) given by \[\Phi(z)=\operatorname{tr}(UA^{z}B^{1-z})\] for all \(z\in S\). This is bounded, continuous on \(S\), and holomorphic in its interior. Let \(\alpha,\beta\in[0,1]\). Then, \[\lvert\Phi(\alpha+iy)\rvert\] \[=|\operatorname{tr}(UA^{iy}A^{\alpha}B^{1-\alpha}B^{-iy})|\] \[=|\operatorname{tr}(B^{-iy}UA^{iy}A^{\alpha}B^{1-\alpha})|\] \[\leq\operatorname{tr}\lvert A^{\alpha}B^{1-\alpha}\rvert\] \[=f(\alpha)\] for all \(y\in\mathbb{R}\), since \(A^{iy}\) and \(B^{-iy}\) are unitaries. Similarly, \[\lvert\Phi(\beta+iy)\rvert\leq f(\beta)\] for all \(y\in\mathbb{R}\). Hence, for any \(\theta\in[0,1]\), \[\lvert\Phi((1-\theta)\alpha+\theta\beta)\rvert\] \[=|\operatorname{tr}(UA^{(1-\theta)\alpha+\theta\beta}B^{\theta \alpha+(1-\theta)\beta})|\] \[\leq f(\alpha)^{1-\theta}f(\beta)^{\theta}\] by Hadamard's three lines theorem (see [24]). Taking maximum over unitaries \(U\), \[f((1-\theta)\alpha+\theta\beta)\leq f(\alpha)^{1-\theta}f(\beta)^{\theta}.\] This proves that \(f\) is log-convex. **Theorem 5**.: _Let \(A,B\) be positive definite density matrices. Then \(\lim\limits_{\theta\to 1-}\frac{\ln F_{\theta}(A|B)}{\theta-1}=D(A|B)\)._ Proof.: Let \(\mathbb{P}(n)\) be the set of all \(n\times n\) positive definite matrices. This is an open subset of \(\mathbb{H}(n)\). Consider \(g:\mathbb{P}(n)\to\mathbb{P}(n)\) given by \(g(X)=X^{1/2}\) for all \(X\in\mathbb{P}(n)\). The Frechet derivative of \(g\) at \(A\in\mathbb{P}(n)\), evaluated at \(Y\in\mathbb{H}(n)\) is given by \[dg(A)(Y)=\int_{0}^{\infty}e^{-tA^{1/2}}Ye^{-tA^{1/2}}dt.\] See Chapter 1 of [5] and Chapter 10 of [3]. Consider the maps \(\gamma:\mathbb{R}\to\mathbb{P}(n)\) given by \(\gamma(\theta)=B^{1-\theta}A^{2\theta}B^{1-\theta}\) and \(f:\mathbb{R}\to\mathbb{R}\) given by \(f(\theta)=\ln\operatorname{tr}\lvert A^{\theta}B^{1-\theta}\rvert\) and observe that \[f(\theta)=\ln\operatorname{tr}(g(\gamma(\theta))).\] Now, \[\lim_{\theta\to 1-}\frac{f(\theta)}{\theta-1}\] \[=\lim_{\theta\to 1-}\frac{\ln\operatorname{tr}\lvert A^{\theta}B^{1- \theta}\rvert}{\theta-1}\] \[=f^{\prime}(1)\] \[=\operatorname{tr}\frac{d(g\circ\gamma)}{d\theta}\lvert_{\theta=1}\] \[=\operatorname{tr}\int_{0}^{\infty}e^{-tA}\bigl{(}\frac{d\gamma} {d\theta}\bigl{|}_{\theta=1}\bigr{)}e^{-tA}dt.\] A simple computation gives \[\frac{d\gamma}{d\theta}\bigl{|}_{\theta=1}=2A^{2}\ln A-(\ln B)A^{2}-A^{2}\ln B.\] By cyclicity of trace, \[f^{\prime}(1) =\operatorname{tr}\bigl{[}(\int_{0}^{\infty}e^{-2tA}dt)(2A^{2}\ln A- A^{2}\ln B-(\ln B)A^{2})\bigr{]}\] \[=\frac{\operatorname{tr}\bigl{[}A^{-1}(2A^{2}\ln A-A^{2}\ln B-(\ln B) A^{2})\bigr{]}}{2}\] \[=\operatorname{tr}A(\ln A-\ln B).\] It is known that for \(\theta\in(0,1)\), \(\operatorname{tr}(A^{\theta}B^{1-\theta})\) is a jointly concave function of \(A\) and \(B\). Unfortunately, this is not the case with \(\operatorname{tr}\lvert A^{\theta}B^{1-\theta}\rvert\), as we see in the next result. **Theorem 6**.: _For \(\theta\in(0,1)\), the map \((A,B)\to F_{\theta}(A\lvert B)\) is jointly concave in density matrices \(A\) and \(B\) if and only if \(\theta=\frac{1}{2}\)._ Proof.: At \(\theta=\frac{1}{2}\) the quantity coincides with the fidelity of two density matrices \(A\) and \(B\), which is known to be jointly concave ([28]). Conversely, let \(\theta{>}\frac{1}{2}\) and assume that \(F_{\theta}(A\lvert B)\) is jointly concave. Then, it has to be increasing under pinchings ([4]). 
Consider density matrices \[A=\frac{1}{2}\begin{pmatrix}1&\frac{1}{2}\\ \frac{1}{2}&1\end{pmatrix}\] and \[B=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}.\] Pinching these along the main diagonal, we obtain \[X=\begin{pmatrix}\frac{1}{2}&0\\ 0&\frac{1}{2}\end{pmatrix}\] and \[Y=B=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\] respectively. Let \(e_{1}\) and \(e_{2}\) be the standard basis vectors of \(\mathbb{C}^{2}\). By our assumption, \[\operatorname{tr}\lvert X^{\theta}Y^{1-\theta}\rvert=2^{-\theta}\geq \operatorname{tr}\lvert A^{\theta}B^{1-\theta}\rvert\] which implies \[\lvert\lvert A^{\theta}e_{1}\rvert\rvert\leq 2^{-\theta}.\] Taking \(T=2A\) and squaring, we get \[\langle T^{2\theta}e_{1},e_{1}\rangle\leq 1.\] Since \(e_{1}\) is not an eigenvector of \(T\) and the map \(t\to t^{2\theta}\) on \([0,\infty)\) is strictly convex as \(2\theta{>}1\), Jensen's inequality implies \[\langle T^{2\theta}e_{1},e_{1}\rangle{>}\langle Te_{1},e_{1}\rangle^{2\theta}=1\] leading to a contradiction. Hence we must have \(\theta\leq\frac{1}{2}\). Now let \(\theta{<}\frac{1}{2}\). Let \(t\in(0,1)\) and consider density matrices \[A=\begin{pmatrix}1-t&0\\ 0&t\end{pmatrix}\] and \[B=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}\end{pmatrix}.\] We shall choose \(t\) appropriately so that \(\operatorname{tr}\lvert A^{\theta}B^{1-\theta}\rvert\) fails to increase under the main diagonal pinching. Let \[H=A=\begin{pmatrix}1-t&0\\ 0&t\end{pmatrix}\] and \[K=\begin{pmatrix}\frac{1}{2}&0\\ 0&\frac{1}{2}\end{pmatrix}\] be the images of \(A\) and \(B\) respectively under the pinching. Then \[\operatorname{tr}\lvert H^{\theta}K^{1-\theta}\rvert=\frac{t^{\theta}+(1-t)^ {\theta}}{2^{1-\theta}}\] and \[\operatorname{tr}\lvert A^{\theta}B^{1-\theta}\rvert =\operatorname{tr}\lvert A^{\theta}\big{(}\frac{e_{1}+e_{2}}{\sqrt{2} }\big{)}\big{(}\frac{e_{1}+e_{2}}{\sqrt{2}}\big{)}^{*}\rvert\] \[=\lvert\lvert A^{\theta}\big{(}\frac{e_{1}+e_{2}}{\sqrt{2}}\big{)} \rvert\rvert.\] Thus, it suffices to find \(t\) such that \[\big{(}\frac{t^{\theta}+(1-t)^{\theta}}{2^{1-\theta}}\big{)}^{2}{<}\lvert \lvert A^{\theta}\big{(}\frac{e_{1}+e_{2}}{\sqrt{2}}\big{)}\rvert\rvert^{2}\] which is equivalent to \[\big{(}\frac{1-t}{t}\big{)}^{\theta}+\big{(}\frac{t}{1-t}\big{)}^{\theta}{>} \frac{4}{4^{1-\theta}-2}.\] Since \(\theta{<}\frac{1}{2}\), \(4^{1-\theta}-2{>}0\). Observe that as \(t\to 0+\), \[\big{(}\frac{1-t}{t}\big{)}^{\theta}+\big{(}\frac{t}{1-t}\big{)}^{\theta} \rightarrow\infty.\] Thus, \(t\) can be chosen small enough to ensure that \[\big{(}\frac{1-t}{t}\big{)}^{\theta}+\big{(}\frac{t}{1-t}\big{)}^{\theta}{>} \frac{4}{4^{1-\theta}-2}\] thereby showing that we cannot have joint concavity if \(\theta{<}\frac{1}{2}\) either. **Remarks.** 1. The results discussed here and in the previous section hold for positive semi-definite matrices \(A\) and \(B\) too, with the assumption that \(\operatorname{supp}(A)\subset\operatorname{supp}(B)\). 2. Theorem 6 also follows from Theorem 4.4 in [7]. We have given a simpler proof in this special case. ## 4. Sandwiched Renyi entropy, relative entropy difference and recoverability Muller-Lennert et. al. ([18]) and Wilde et. al. ([30]) defined a \(t\)-parametrized version of \(F(A|B)\) for density matrices \(A\) and \(B\) with \(\operatorname{supp}(A)\subset\operatorname{supp}(B)\) as \[\mathcal{F}_{t}(A|B)=\operatorname{tr}(B^{\frac{1-t}{2t}}AB^{\frac{1-t}{2t}})^ {t}\] for all \(t\in(0,\infty)\). 
This quantity is known as the sandwiched quasi-relative entropy, which coincides with the usual fidelity at \(t=\frac{1}{2}\). Observe that \[\operatorname{tr}(B^{\frac{1-t}{2t}}AB^{\frac{1-t}{2t}})^{t}{>}0\] for all \(t\in(0,\infty)\) if \(\operatorname{supp}(A)\subset\operatorname{supp}(B)\). It has been proved in [10] that \(\mathcal{F}_{t}(A|B)\) is jointly concave in \(A\) and \(B\) if \(t\in[\frac{1}{2},1)\) and jointly convex if \(t\in(1,\infty)\). Some extremal characterizations of \(\mathcal{F}_{t}(A|B)\) can be found in [6]. For \(t\in(0,1)\cup(1,\infty)\) the sandwiched Renyi relative entropy is defined as \[S_{t}(A|B)=\frac{\ln\mathcal{F}_{t}(A|B)}{t-1}. \tag{9}\] The next theorem gives the log-convexity of \(\mathcal{F}_{t}(A|B)\) in \([\frac{1}{2},1]\). For that, we need a matrix version of Riesz-Thorin interpolation. **Lemma 2**.: _Let \(S=\{z\in\mathbb{C}:0\leq\text{Re}(z)\leq 1\}\) and \(\Phi:S\to M_{n}(\mathbb{C})\) be a bounded continuous function which is holomorphic in the interior of \(S\). Let \(p_{0},p_{1}\geq 1\). For \(\theta\in[0,1]\) let \(p_{\theta}\) be given by_ \[\frac{1}{p_{\theta}}=\frac{1-\theta}{p_{0}}+\frac{\theta}{p_{1}}.\] _If_ \[\sup_{t\in\mathbb{R}}||\Phi(\theta+it)||_{p_{\theta}}{>}0\] _for all \(\theta\in[0,1]\), the map_ \[\theta\to\sup_{t\in\mathbb{R}}||\Phi(\theta+it)||_{p_{\theta}}\] _is log-convex on \([0,1]\)._ Here, for a given matrix \(X\) and \(p\geq 1\), \(||X||_{p}\) denotes the Schatten-\(p\) norm of \(X\) given by \[||X||_{p}=(\operatorname{tr}\!|X|^{p})^{1/p}.\] A proof of Lemma 2 can be found in [2] where interpolation theory is used to show the joint convexity of \(\mathcal{F}_{t}(A|B)\) when \(t{>}1\). A detailed account of complex interpolation theory can be found in [22] and [25]. **Theorem 7**.: _The function \(t\to\ln\mathcal{F}_{t}(A|B)\) is convex on \([\frac{1}{2},1]\)._ Proof.: Let \(t\in[\frac{1}{2},1]\). Then \[\mathcal{F}_{t}(A|B)\] \[=\operatorname{tr}\!(B^{\frac{1-t}{2t}}AB^{\frac{1-t}{2t}})^{t}\] \[=\operatorname{tr}\!|A^{\frac{1}{2}}B^{\frac{1-t}{2t}}|^{2t}\] \[=\big{|}\big{|}A^{\frac{1}{2}}B^{\frac{1-t}{2t}}\big{|}\big{|}_{ 2t}^{2t}.\] Let \(h:[\frac{1}{2},1]\to\mathbb{R}\) be given by \(h(t)=\ln\mathcal{F}_{t}(A|B)\). Define maps \(\xi:[\frac{1}{2},1]\to[0,1]\) and \(g:[0,1]\to\mathbb{R}\) by \[\xi(t)=\frac{1-t}{t}\] and \[g(\theta)=\ln\big{|}\big{|}A^{\frac{1}{2}}B^{\frac{\theta}{2}}\big{|}\big{|}_{ \frac{2}{1+\theta}}\] respectively, and note that \[h(t)=\ln\mathcal{F}_{t}(A|B)=2t\,g(\xi(t))=2t\,g(\frac{1-t}{t}).\] We now show that the convexity of \(h\) on \([\frac{1}{2},1]\) is equivalent to the convexity of \(g\) on \([0,1]\). Assume first that \(g\) is convex. We have to show that for \(t,s\in[\frac{1}{2},1]\), \[h\big{(}\frac{t+s}{2}\big{)}\leq\frac{h(t)+h(s)}{2}\] which is equivalent to \[g\big{(}\frac{2}{t+s}-1\big{)}\leq\frac{t}{t+s}g\big{(}\frac{1-t}{t}\big{)}+ \frac{s}{t+s}g\big{(}\frac{1-s}{s}\big{)}.\] But this is the same as \[g\big{(}\frac{t}{t+s}(\frac{1-t}{t})+\frac{s}{t+s}(\frac{1-s}{s})\big{)}\leq \frac{t}{t+s}g\big{(}\frac{1-t}{t}\big{)}+\frac{s}{t+s}g\big{(}\frac{1-s}{s} \big{)}\] which follows directly from the convexity of \(g\). Conversely, assume \(h\) is convex and observe that \[g(\theta)=\frac{(1+\theta)h\big{(}\frac{1}{1+\theta}\big{)}}{2}\] for all \(\theta\in[0,1]\). A similar argument shows that \(g\) is convex. 
Consider the strip \(S=\{z\in\mathbb{C}:0\leq\text{Re}(z)\leq 1\}\) and the map \(\Phi:S\to M_{n}(\mathbb{C})\) given by \[\Phi(z)=A^{\frac{1}{2}}B^{\frac{z}{2}}.\] It is easy to observe that \(\Phi\) is continuous and bounded on \(S\), and holomorphic in its interior. Consider the family of Schatten norms \[p_{\theta}=\frac{2}{1+\theta}\] for \(\theta\in[0,1]\). Then, for all \(y\in\mathbb{R}\), \(\theta\in[0,1]\), \[\big{|}\big{|}\Phi(\theta+iy)\big{|}\big{|}_{p_{\theta}}\] \[=\big{|}\big{|}A^{\frac{1}{2}}B^{\frac{\theta}{2}}B^{iy}\big{|}\big{|}_ {p_{\theta}}\] \[\leq\big{|}\big{|}A^{\frac{1}{2}}B^{\frac{\theta}{2}}\big{|}\big{|}_{p_ {\theta}}\] \[=\big{|}\big{|}\Phi(\theta)\big{|}\big{|}_{p_{\theta}}\] as \(B^{iy}\) is a partial isometry for all \(y\in\mathbb{R}\). Since \[g(\theta)=\ln\big{|}\big{|}\Phi(\theta)\big{|}\big{|}_{p_{\theta}}\] for all \(\theta\in[0,1]\), Lemma 2 implies the convexity of \(g\), thereby proving the convexity of \(h\). An immediate corollary follows: **Corollary 4**.: _The sandwiched Renyi relative entropy \(S_{t}(A|B)\) given by equation (9) is monotonically increasing in \(t\) on the interval \([\frac{1}{2},1)\)._ Another proof of monotonicity can be found in [18]. **Theorem 8**.: _Let \(A,B\) be density matrices with \(\text{supp}(A)\subset\text{supp}(B)\). Then_ \[\lim_{t\to 1-}S_{t}(A|B)=D(A|B)\] Proof.: By the Araki-Lieb-Thirring inequalities (see Section IX.2 of [3]), \[\text{tr}(A^{t}B^{1-t})\] \[=\text{tr}(B^{\frac{1-t}{2}}A^{t}B^{\frac{1-t}{2}})\] \[\leq\text{tr}(B^{\frac{1-t}{2t}}AB^{\frac{1-t}{2t}})^{t}\] \[\leq\text{tr}(B^{1-t}A^{2t}B^{1-t})^{1/2}\] \[=\text{tr}|A^{t}B^{1-t}|\] for all \(t\in[\frac{1}{2},1]\). Therefore, \[\frac{\ln\text{tr}|A^{t}B^{1-t}|}{t-1}\leq\frac{\ln\text{tr}(B^{\frac{1-t}{2t}}AB^{ \frac{1-t}{2t}})^{t}}{t-1}\leq\frac{\ln\text{tr}(A^{t}B^{1-t})}{t-1} \tag{10}\] for all \(t\in[\frac{1}{2},1)\). Using Theorem 5 and (10), we are done. **Remark**.: Muller-Lennert et. al. have also given a proof of Theorem 8 in [18]. We now look at a Renyi generalization of the relative entropy difference. Let \(A,B\in M_{n}(\mathbb{C})\) be density matrices with \(\text{supp}(A)\subset\text{supp}(B)\) and let \(\phi:M_{n}(\mathbb{C})\to M_{k}(\mathbb{C})\) be a CPTP map. Consider the Petz recovery map \(\mathcal{R}_{\phi,B}:M_{k}(\mathbb{C})\to M_{n}(\mathbb{C})\) given by \[\mathcal{R}_{\phi,B}(Y)=B^{1/2}\phi^{*}(\phi(B)^{-1/2}Y\phi(B)^{-1/2})B^{1/2}\] for all \(Y\in M_{k}(\mathbb{C})\). Observe that \(\mathcal{R}_{\phi,B}(\phi(B))=B\). It has been proved ([11, 21]) that \(\mathcal{R}_{\phi,B}(\phi(A))=A\) if and only if \[D(A|B)=D(\phi(A)|\phi(B)).\] By Stinespring's theorem, we have a positive integer \(m\) and an isometry \(V:\mathbb{C}^{n}\to\mathbb{C}^{m}\otimes\mathbb{C}^{k}\) such that \[\phi(X)=\operatorname{tr}_{2}VXV^{*}\] for all \(X\in M_{n}(\mathbb{C})\) where \(\operatorname{tr}_{2}\) denotes the second partial trace. Following [29] and [23], a Renyi generalization of the relative entropy difference can be defined as \[\tilde{\Delta}_{t}(A,B,\phi)=\frac{2t}{t-1}\ln\big{|}\big{|}(I_{m}\otimes\phi( A)^{\frac{1-t}{2t}}\phi(B)^{\frac{t-1}{2t}})VB^{\frac{1-t}{2t}}A^{\frac{1}{2}} \big{|}\big{|}_{2t}\] for \(t\in[\frac{1}{2},1)\cup(1,\infty)\). 
Note that \[\big{|}\big{|}(I_{m}\otimes\phi(A)^{\frac{1-t}{2t}}\phi(B)^{\frac{t-1}{2t}}) VB^{\frac{1-t}{2t}}A^{\frac{1}{2}}\big{|}\big{|}_{2t}^{2t}\] \[=\operatorname{tr}\!\left(A^{\frac{1}{2}}B^{\frac{1-t}{2t}}\phi^{*}(\phi(B)^{ \frac{t-1}{2t}}\phi(A)^{\frac{1-t}{t}}\phi(B)^{\frac{t-1}{2t}})B^{\frac{1-t}{2 t}}A^{\frac{1}{2}}\right)^{t} \tag{11}\] for all \(t\in[\frac{1}{2},1)\). At \(t=\frac{1}{2}\) this coincides with \[F(A|\mathcal{R}_{\phi,B}(\phi(A)))=\operatorname{tr}(A^{1/2}\mathcal{R}_{\phi,B}(\phi(A))A^{1/2})^{1/2}\] which is the fidelity between \(A\) and \(\mathcal{R}_{\phi,B}(\phi(A))\). Hence, the expression in \(t\) in (11) can be seen as a parametrized version of this fidelity. It was proved in [23] by Seshadreesan et. al. and in [29] by Wilde that \[\lim_{t\to 1}\tilde{\Delta}_{t}(A,B,\phi)=D(A|B)-D(\phi(A)|\phi(B)).\] In view of the previous results obtained, it is natural to wonder if the map \[t\to 2t\,\ln\big{|}\big{|}(I_{m}\otimes\phi(A)^{\frac{1-t}{2t}}\phi(B)^{ \frac{t-1}{2t}})VB^{\frac{1-t}{2t}}A^{\frac{1}{2}}\big{|}\big{|}_{2t}\] is convex on \([\frac{1}{2},1]\), which, by our argument in the proof of Theorem 7, is equivalent to log-convexity of the map \[\theta\to\big{|}\big{|}(I_{m}\otimes\phi(A)^{\frac{\theta}{2}}\phi(B)^{- \frac{\theta}{2}})VB^{\frac{\theta}{2}}A^{\frac{1}{2}}\big{|}\big{|}_{\frac{2} {1+\theta}} \tag{12}\] on \([0,1]\). Had this been true, both the monotonicity of \(\tilde{\Delta}_{t}(A,B,\phi)\) and inequality (4), as conjectured by Seshadreesan et. al. in [23], would have followed. However, as the following example demonstrates, inequality (4) does not hold in general, thus disproving the conjecture as well as log-convexity of the map in (12): **Example.** Let \[A=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}\end{pmatrix}\] and \[B=\begin{pmatrix}\frac{3}{4}&-\frac{1}{4}\\ -\frac{1}{4}&\frac{1}{4}\end{pmatrix}\] be two density matrices. \(A\) is a rank one projection and \(B\) is positive definite. Let \(\phi:M_{2}(\mathbb{C})\to M_{2}(\mathbb{C})\) be the pinching along the main diagonal, so that \[\phi(A)=\begin{pmatrix}\frac{1}{2}&0\\ 0&\frac{1}{2}\end{pmatrix}\] and \[\phi(B)=\begin{pmatrix}\frac{3}{4}&0\\ 0&\frac{1}{4}\end{pmatrix}.\] Computations using GNU Octave then reveal that \[D(A|B)-D(\phi(A)|\phi(B))\approx 1.5191\] while \[-2\ln F(A|\mathcal{R}_{\phi,B}(\phi(A)))\approx 1.5349.\] Inequality (4) fails even when \(A\) is a \(2\times 2\) pure state and \(\phi\) is a pinching. However, we note that in the particular case that \(A\) is pure and \(\phi\) is the spectral pinching along \(A\), the inequality is true. **Theorem 9**.: _Let \(A\in M_{n}(\mathbb{C})\) be a rank one projection and \(\phi\) be the spectral pinching along \(A\). Then, for any positive definite density matrix \(B\),_ \[-2\ln[F(A|\mathcal{R}_{\phi,B}(\phi(A)))]\leq D(A|B)-D(\phi(A)|\phi(B)).\] Proof.: Let \(A=x\otimes x^{*}\) for some unit vector \(x\). By (12), the inequality follows if the function \(g:[0,1]\to\mathbb{R}\) given by \[g(\theta)=||A\phi(B)^{-\theta/2}B^{\theta/2}A||_{\frac{2}{1+\theta}}\] is log-convex. Let \[B=\sum_{j}\sigma_{j}v_{j}\otimes v_{j}^{*}\] be the spectral decomposition of \(B\) and \(\lambda=\langle Bx,x\rangle\). Observe that \[g(\theta)=\lambda^{-\theta/2}\langle B^{\theta/2}x,x\rangle\] \[=\sum_{j}\left(\frac{\sigma_{j}}{\lambda}\right)^{\theta/2}| \langle x,v_{j}\rangle|^{2}\] which is log-convex by Lemma 1. 
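The numbers in the \(2\times 2\) example above can be reproduced in a few lines. The paper used GNU Octave; here is a minimal NumPy sketch of the same computation that we add for the reader, exploiting that a pinching is self-adjoint (so \(\phi^{*}=\phi\)) and that \(A\) is pure:

```python
import numpy as np

A = np.array([[0.5, 0.5], [0.5, 0.5]])        # rank one projection
B = np.array([[0.75, -0.25], [-0.25, 0.25]])  # positive definite

def pinch(x):
    """The pinching along the main diagonal."""
    return np.diag(np.diag(x))

def msqrt(x):
    """Square root of a real symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(x)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def relent(a, b):
    """D(a|b) = tr a (ln a - ln b), with the 0 ln 0 = 0 convention."""
    wa, _ = np.linalg.eigh(a)
    wb, vb = np.linalg.eigh(b)
    tr_a_ln_a = sum(w * np.log(w) for w in wa if w > 1e-12)
    ln_b = (vb * np.log(wb)) @ vb.T
    return tr_a_ln_a - np.trace(a @ ln_b)

pA, pB = pinch(A), pinch(B)

# Petz recovery of phi(A); for a pinching phi^* = phi, and everything
# inside the middle factor is already diagonal
mid = pinch(np.linalg.inv(msqrt(pB)) @ pA @ np.linalg.inv(msqrt(pB)))
rec = msqrt(B) @ mid @ msqrt(B)               # R_{phi,B}(phi(A))

# fidelity F(A|rec) = tr (A^{1/2} rec A^{1/2})^{1/2}
fid = np.trace(msqrt(msqrt(A) @ rec @ msqrt(A)))

print(relent(A, B) - relent(pA, pB))          # ~ 1.5191
print(-2.0 * np.log(fid))                     # ~ 1.5349, violating (4)
```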
**Remarks**.: Even though inequality (4) fails to be true with the Petz recovery map, interpolation techniques have been used by Wilde in [29] and Junge et. al. in [12] to refine the data processing inequality and show the existence of recovery maps for which the analogous versions of inequality (4) hold. In particular, it has been shown in [12] that there exists a recovery map \(\mathcal{R}^{\prime}_{\phi,B}\) such that inequality \[-2\ln[F(A|\mathcal{R}^{\prime}_{\phi,B}(\phi(A)))]\leq D(A|B)-D(\phi(A)|\phi(B)) \tag{13}\] holds for density matrices \(A\) and \(B\) and a quantum channel \(\phi\). The map \(\mathcal{R}^{\prime}\) is given by \[\mathcal{R}^{\prime}_{\phi,B}(Y)=\int_{\mathbb{R}}\mathcal{R}^{t}_{\phi,B}(Y)d \mu(t)\] for a Borel probability measure \(\mu\) on \(\mathbb{R}\), where \(\mathcal{R}^{t}_{\phi,B}\) is the rotated Petz recovery map defined in equation (6). Observe that \[\int_{\mathbb{R}}\mathcal{R}^{t}_{\phi,B}d\mu(t)\] is a recovery map which depends only on \(B\) and \(\phi\). The significance of (13) is that given a state \(B\) and a channel \(\phi\), we have a universal recovery map \(\mathcal{R}^{\prime}\), depending only on \(B\) and \(\phi\), such that it approximately recovers any state \(A\) for which the relative entropy difference \(D(A|B)-D(\phi(A)|\phi(B))\) is small. **Acknowledgement.** I thank my PhD supervisor Prof. Tanvi Jain for her valuable comments and suggestions during the preparation of this paper. I also thank Prof. Rajendra Bhatia for his insightful comments on improving the exposition.
2303.01526
Semantic Attention Flow Fields for Monocular Dynamic Scene Decomposition
From video, we reconstruct a neural volume that captures time-varying color, density, scene flow, semantics, and attention information. The semantics and attention let us identify salient foreground objects separately from the background across spacetime. To mitigate low resolution semantic and attention features, we compute pyramids that trade detail with whole-image context. After optimization, we perform a saliency-aware clustering to decompose the scene. To evaluate real-world scenes, we annotate object masks in the NVIDIA Dynamic Scene and DyCheck datasets. We demonstrate that this method can decompose dynamic scenes in an unsupervised way with competitive performance to a supervised method, and that it improves foreground/background segmentation over recent static/dynamic split methods. Project Webpage: https://visual.cs.brown.edu/saff
Yiqing Liang, Eliot Laidlaw, Alexander Meyerowitz, Srinath Sridhar, James Tompkin
2023-03-02T19:00:05Z
http://arxiv.org/abs/2303.01526v2
# Semantic Attention Flow Fields for Dynamic Scene Decomposition ###### Abstract We present SAFF: a dynamic neural volume reconstruction of a casual monocular video that consists of time-varying color, density, scene flow, semantics, and attention information. The semantics and attention let us identify salient foreground objects separately from the background in arbitrary spacetime views. We add two network heads to represent the semantic and attention information. For optimization, we design semantic attention pyramids from DINO-ViT outputs that trade detail with whole-image context. After optimization, we perform a saliency-aware clustering to decompose the scene. For evaluation on real-world dynamic scene decomposition across spacetime, we annotate object masks in the NVIDIA Dynamic Scene Dataset. We demonstrate that SAFF can decompose dynamic scenes without affecting RGB or depth reconstruction quality, that volume-integrated SAFF outperforms 2D baselines, and that SAFF improves foreground/background segmentation over recent static/dynamic split methods. Project webpage: [https://visual.cs.brown.edu/saff](https://visual.cs.brown.edu/saff) ## 1 Introduction Given a casually-captured monocular RGB video, decomposing its depicted real-world dynamic scene into foreground objects and background is an important task in computer vision, with downstream applications in segmentation and beyond such as in video editing. Ideally, we would also reconstruct the geometry and appearance over time, including its frame-to-frame correspondence. Previous methods have made great progress in these directions but there are many challenges. Some works assume that there is no object motion in the scene [4, 17, 37, 38, 52], or take input from multi-view cameras [17, 52], or do not explicitly reconstruct the underlying 3D structure of the scene [4, 8]. For objects, some works rely on masks or user input to aid segmentation [8, 16, 48], or use task-specific training datasets [8]. Sometimes, works assume a number of object slots ahead of time [4, 24]. Given the challenges, many works train and test on synthetic data [15, 44]. To overcome the challenges, we integrate low-level reconstruction cues with high-level pretrained information--bottom-up and top-down--into neural volumes. Specifically, we consider the added value of embedded semantic and saliency (attention) information for dynamic scene decomposition. Our semantic attention flow fields (SAFF) build upon the neural scene flow fields approach [21]. This takes a frame interpolation approach rather than an explicit canonicalization [41] or latent hyperspace [32] approach, which allows it to more robustly apply to natural scenes. For optimization, we supervise two network heads with pretrained DINO-ViT [6] semantic features and attention. To extract higher-fidelity information from low-resolution DINO-ViT output, we build a semantic attention pyramid that trades detail with whole-image context. Having optimized a SAFF representation for a dynamic scene, we perform a saliency-aware clustering on rendered feature images to describe objects and their background. Given the volume reconstruction, the clustering generalizes to novel spacetime views. To evaluate SAFF's dynamic scene decomposition capacity, we expand the NVIDIA Dynamic Scene Dataset [51] by manually annotating object masks across input and hold-out views. 
We demonstrate that SAFF outperforms 2D DINO-ViT baselines and is comparable to a state-of-the-art 2D video segmentation method ProposeReduce [23] on our data. Existing monocular video dynamic volume reconstruction methods typically separate static and dynamic parts, but these often do not represent meaningful foreground. We show improved foreground segmentation over NSFF and the current D\({}^{2}\)NeRF [46] method. We also show that our method maintains time-varying color and geometry reconstruction quality. We state our contributions: 1. A method to recover a neural field representation that encodes semantics, attention, and radiance within time-varying flowed density. 2. A demonstration that saliency as attention can be volume integrated for high-quality unsupervised segmentation. 3. For monocular video, to identify and overcome the problem of low-resolution semantic and attention information and to extract higher-fidelity semantics and saliency from low-resolution DINO-ViT features through pyramid and volume integration over spacetime. 4. Evaluation of the representation for dynamic scene decomposition via saliency-aware semantic feature clustering upon the real-world NVIDIA Dynamic Scene Dataset, which we augment with hand-labeled object masks. ## 2 Related Work Decomposing a scene into regions of interest is a long-studied task in computer vision [5], including class, instance, and panoptic segmentation in high-level or top-down vision, and use of low-level or bottom-up cues like motion. Recent progress has considered images [4, 12, 24], videos [15, 16], layer decomposition [28, 50], and in-the-wild databases using deep generative models [30]. One example, SAVi++ [8], uses slot attention [24] to define 2D objects in real-world videos. Providing first-frame segmentation masks achieves stable performance, with validation on the driving Waymo Open Dataset [39]. Our work attempts 3D scene decomposition for a casual monocular video without initial masks (Tab. 1). Scene Decomposition with NeRFsNeural Radiance Fields (NeRF) [47] have spurred new scene decomposition research through volumes. ObSuRF [38] and uORF [52] are unsupervised slot attention works that bind a latent code to each object. Unsupervised decomposition is also possible on light fields [37]. For dynamic scenes, works like NeuralDiff [43] and D\({}^{2}\)NeRF [46] focus on foreground separation, where foreground is defined to contain moving objects. Other works like N3F [42] and occlusions-4d [13] also decompose foregrounds into individual objects. N3F requires user input to specify which object to segment, and occlusions-4d takes RGB point clouds as input. Our work attempts to recover a segmented dynamic scene from a monocular RGB video without added markup or masks. Neural Fields Beyond RadianceResearch has begun to add non-color information to neural volumes to aid decomposition via additional feature heads on the MLP. iLabel [54] adds a semantic head to propagate user-provided segmentations in the volume. PNF [20] and Panoptic-NeRF [10] attempt panoptic segmentation within neural fields, and Object-NeRF integrates instance segmentation masks into the field during optimization [48]. Research also investigates how to apply generic pretrained features to neural fields, like DINO-ViT. DINO-ViT for Semantics and SaliencyDINO-ViT is a self-supervised transformer by Caron et al. [6]. After pretraining on a large dataset, DINO-ViT can extract generic semantic information from an image input. Amir et al. 
[1] use DINO-ViT features with \(k\)-means clustering to achieve co-segmentation across a video. Seitzer et al. [35] combine slot attention and DINO-ViT features for object-centric learning on real-world 2D data. TokenCut [45] performs normalized cuts on DINO-ViT features for foreground segmentation on natural images. Deep Spectral Segmentation [26] shows that graph Laplacian processing of DINO-ViT features provides unsupervised foreground segmentation, and Selfmask [36] shows that these features can provide object saliency masks. Our approach considers these clustering and saliency findings for the setting of 3D decomposition from monocular video.

**DINO-ViT Fields** Concurrent works have integrated DINO-ViT features into neural fields. DFF [17] distills features for dense multi-view static scenes with user input for segmentation. N3F [42] expands NeuralDiff to dynamic scenes, and relies on user input for segmentation. AutoLabel [3] uses DINO-ViT features to accelerate segmentation in static scenes given a ground truth segmentation mask. Other works use DINO-ViT differently. FeatureRealisticFusion [25] uses DINO-ViT in an online feature fusion task, focusing on propagating user input, and NeRF-SOS [9] uses 2D DINO-ViT to process NeRF-rendered multi-view RGB images of a static scene. In contrast to these works, we consider real-world casual monocular videos, recover and decompose a 3D scene, then explore whether saliency can avoid the need for masks or user input in segmenting objects.

## 3 Method

For a baseline dynamic scene reconstruction method, we begin with NSFF from Li et al. [21] (Sec. 3.1), which builds upon NeRF [27]. NSFF's low-level frame-to-frame scene flow approach provides better reconstructions for real-world casual monocular videos than deformation-based methods [41, 32]. We modify the architecture to integrate higher-level semantic and saliency (or attention) features (Sec. 3.2).

| | Dynamic (video) | Monocular | Real world | 3D | No seg. clue | Learning | Adaptive | Object-level |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ProposeReduce [23] | ✓ | ✓ | ✓ | ✕ | ✓ | T | ✓ | ✓ |
| SAVi++ [8] | ✓ | ✓ | ✓ | ✕ | Mask | T | ✕ | ✓ |
| ObSuRF [38] | ✕ | ✕ | ✕ | ✓ | ✓ | ✕ | ✕ | ✓ |
| uORF [52] | ✕ | ✕ | ✕ | ✓ | ✓ | ✕ | ✕ | ✓ |
| DFF [17] | ✕ | ✕ | ✓ | ✓ | User | P | ✓ | ✓ |
| N3F [42] | ✓ | ✓ | ✓ | ✓ | User | P | ✓ | ✓ |
| NSFF [21] | ✓ | ✓ | ✓ | ✓ | Mask | ✕ | N/A | N/A |
| D\({}^{2}\)NeRF [46] | ✓ | ✓ | ✓ | ✓ | ✓ | ✕ | N/A | N/A |
| SAFF (this paper) | ✓ | ✓ | ✓ | ✓ | ✓ | P | ✓ | ✓ |

Table 1: Comparing related work in segmentation and scene decomposition shows the unstudied area of dynamic 3D segmentation without explicit segmentation clues. We investigate whether saliency provides similar clues for monocular video. Supplemental material has an expanded table. Learning denotes large-scale training data; T: supervised task-specific data; P: generic pretrained features (e.g., ImageNet); ✕: no features used.

After optimizing a SAFF for each scene, we perform saliency-aware clustering to produce segmentations (Sec. 3.4). All implementation details are in our supplemental material.

**Input** Our method takes in a single RGB video over time \(i\) as an ordered set of images \(I\in\mathcal{I}\) and camera poses. We use COLMAP to recover camera poses [34].
From all poses, we define an NDC-like space that bounds the scene, and a set of rays \(\mathbf{r}\in\mathcal{R}\), one per image pixel with color \(\hat{\mathbf{c}}^{\dagger}\). Here, \(\hat{\cdot}\) denotes a 2D pixel value in contrast to a 3D field value, and \(\cdot^{\dagger}\) denotes an input value in contrast to an estimated value. From pretrained networks, we estimate single-frame monocular depth \(\hat{d}^{\dagger}\) (MiDaSv2 [33]), optical flow \(\hat{\mathbf{p}}^{\dagger}_{i}\) (RAFT [40]), and semantic features \(\hat{\mathbf{s}}^{\dagger}\) and attention \(\hat{\mathbf{a}}^{\dagger}\) (DINO-ViT [6]) after important preprocessing (Sec. 3.3).

### Initial Dynamic Neural Volume Representation

The initial representation comprises a static NeRF \(F^{\text{st}}_{\boldsymbol{\theta}}\) and a dynamic NeRF \(F^{\text{dy}}_{\boldsymbol{\theta}}\). The static network predicts a color \(\mathbf{c}\), density \(\sigma\), and blending weight \(v\) (Eq. (1)), and the dynamic network predicts time-varying color \(\mathbf{c}_{i}\), density \(\sigma_{i}\), scene flow \(\mathbf{f}_{i}\), and occlusion weights \(w_{i}\) (Eq. (2)). In both network architectures, other than position \(\mathbf{x}\), direction \(\boldsymbol{\omega}\) is added to a late separate head and only conditions the estimation of color \(\mathbf{c}\).

\[F^{\text{st}}_{\boldsymbol{\theta}}:(\mathbf{x},\boldsymbol{\omega})\rightarrow(\mathbf{c}^{\text{st}},\sigma^{\text{st}},v) \tag{1}\]

\[F^{\text{dy}}_{\boldsymbol{\theta}}:(\mathbf{x},\boldsymbol{\omega},i)\rightarrow(\mathbf{c}^{\text{dy}}_{i},\sigma^{\text{dy}}_{i},\mathbf{f}_{i},w_{i}) \tag{2}\]

To produce a pixel's color, we sample points at distances \(t\) along the ray \(\mathbf{x}_{t}=\mathbf{x}-\boldsymbol{\omega}t\) between near and far planes \(t_{n}\) to \(t_{f}\), query each network, then integrate transmittance \(T\), density \(\sigma\), and color along the ray from these samples [27]. For brevity, we omit evaluating at \(\mathbf{x}_{t},\boldsymbol{\omega}\), _e.g._, \(\sigma^{\text{st}}(\mathbf{x}_{t})\) is simply \(\sigma^{\text{st}}\); \(\mathbf{c}^{\text{st}}(\mathbf{x}_{t},\boldsymbol{\omega})\) is simply \(\mathbf{c}^{\text{st}}\). We produce a combined color from the static and dynamic colors by multiplication with their densities:

\[\sigma_{i}\mathbf{c}_{i}=v\sigma^{\text{st}}\mathbf{c}^{\text{st}}+(1-v)\sigma^{\text{dy}}_{i}\mathbf{c}^{\text{dy}}_{i} \tag{3}\]

Given that transmittance integrates density up to the current sampled point under Beer-Lambert volume attenuation, the rendered pixel color for the ray is computed as:

\[\hat{\mathbf{c}}_{i}=\int_{t_{n}}^{t_{f}}T_{i}\sigma_{i}\mathbf{c}_{i}\,dt\ \ \text{where}\ \ T_{i}=\exp\left(-\int_{t_{n}}^{t}\sigma_{i}\,dt\right) \tag{4}\]

To optimize the volume to reconstruct input images, we compute a photometric loss \(\mathcal{L}_{\hat{\mathbf{c}}}\) between rendered and input colors:

\[\mathcal{L}_{\hat{\mathbf{c}}}=\frac{1}{|\mathcal{R}|}\sum_{\mathbf{r}_{i}\in\mathcal{R}}||\hat{\mathbf{c}}_{i}(\mathbf{r}_{i})-\hat{\mathbf{c}}_{i}^{\dagger}(\mathbf{r}_{i})||_{2}^{2} \tag{5}\]

**Scene Flow** Defining correspondence over time is important for monocular input as it lets us penalize a reprojection error with neighboring frames \(j\in\mathcal{N}\), e.g., where \(j=i+1\) or \(j=i-1\). We denote \(i\to j\) for the projection of frame \(i\) onto frame \(j\) by scene flow. \(F^{\text{dy}}_{\boldsymbol{\theta}}\) estimates both forwards and backwards scene flow at every point to penalize a bi-directional loss.
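All rendered quantities in SAFF (color here, occlusion weights, and later semantics and attention) share the same quadrature. As a concrete reference, below is a minimal NumPy sketch of the discretized form of Eqs. (3) and (4); the array shapes, the uniform sample spacing, and the simple Riemann sum are illustrative assumptions rather than the exact implementation:

```python
import numpy as np

def render_ray(sigma_st, c_st, v, sigma_dy, c_dy, dt):
    """Composite one ray from S samples of the static and dynamic fields.
    sigma_st, sigma_dy, v: [S] densities and blend weights;
    c_st, c_dy: [S, 3] colors; dt: uniform spacing between samples."""
    # Eq. (3): density-weighted blend of static and time-varying color.
    sigma = v * sigma_st + (1 - v) * sigma_dy
    sigma_c = (v * sigma_st)[:, None] * c_st \
            + ((1 - v) * sigma_dy)[:, None] * c_dy
    # Eq. (4): transmittance accumulates density *before* each sample
    # (exclusive cumulative sum), i.e., Beer-Lambert attenuation.
    T = np.exp(-(np.cumsum(sigma * dt) - sigma * dt))
    # Riemann-sum approximation of the rendering integral.
    return (T[:, None] * sigma_c * dt).sum(axis=0)  # [3] pixel color
```

Swapping the color term for \(w\), \(\mathbf{s}\), or \(\mathbf{a}\) yields the integrated occlusion weights and feature images used in the following equations.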
Using the predicted flow, we approximate the color output at time step \(j\) by flowing the queried network values at time step \(i\):

\[\hat{\mathbf{c}}_{i\to j}=\int_{t_{n}}^{t_{f}}T_{i\to j}\sigma_{i\to j}\mathbf{c}_{i\to j}\,dt \tag{6}\]

Reprojection must account for occlusion and disocclusion by motion. As such, \(F^{\text{dy}}_{\boldsymbol{\theta}}\) also predicts forwards and backwards scene occlusion weights \(w_{i+1}\) and \(w_{i-1}\in[0,1]\), where a point with \(w_{i+1}=0\) means that occlusion status changes one step forwards in time. We can integrate \(w\) to a pixel:

\[\hat{w}_{i\to j}=\int_{t_{n}}^{t_{f}}T_{i\to j}\sigma_{i\to j}w_{j}\,dt \tag{7}\]

Figure 1: **Overview.** From a casual monocular video, SAFF builds a neural field of scene-flow-corresponded 3D density, radiance, semantics, and attention (b). This is guided by depth and optical flow priors, plus semantic attention pyramids that trade fine detail with whole-image context (a). We can render new spacetime views of any channel, and use saliency-aware clustering to decompose objects and background (c).

Then, this pixel weight modulates the color loss such that occluded pixels are ignored:

\[\mathcal{L}_{\hat{\mathbf{c}}_{i\to j}}=\frac{1}{|\mathcal{R}||\mathcal{N}|}\sum_{\mathbf{r}_{i}\in\mathcal{R}}\sum_{j\in\mathcal{N}}\hat{w}_{i\to j}(\mathbf{r}_{i})||\hat{\mathbf{c}}_{i\to j}(\mathbf{r}_{i})-\hat{\mathbf{c}}_{j}^{\dagger}(\mathbf{r}_{j})||_{2}^{2} \tag{8}\]

**Prior losses** We use losses against the pretrained depth and optical flow maps to help overcome the ill-posed monocular reconstruction problem. These losses decay as optimization progresses to rely more and more on the optimized self-consistent geometry and scene flow. For geometry, we estimate a depth \(\hat{d}_{i}\) for each ray \(\mathbf{r}_{i}\) by replacing \(\mathbf{c}_{i}\) in Eq. (4) with the distance \(t\) along the ray. Transform \(z\) estimates a scale and shift, as the pretrained network produces only relative depth.

\[\mathcal{L}_{\hat{d}}=\frac{1}{|\mathcal{R}|}\sum_{\mathbf{r}_{i}\in\mathcal{R}}||\hat{d}_{i}-z(\hat{d}_{i}^{\dagger})||_{1} \tag{9}\]

For motion, projecting scene flow to a camera lets us compare to the estimated optical flow. Each sample point along a ray \(\mathbf{x}_{i}\) is advected to a point in the neighboring frame \(\mathbf{x}_{i\to j}\), then projected onto the neighboring camera plane to produce a 2D point offset \(\hat{\mathbf{p}}_{i}(\mathbf{r}_{i})\). Then, we expect the difference in the start and end positions to match the prior:

\[\mathcal{L}_{\hat{\mathbf{p}}}=\frac{1}{|\mathcal{R}||\mathcal{N}|}\sum_{\mathbf{r}_{i}\in\mathcal{R}}\sum_{j\in\mathcal{N}(i)}||\hat{\mathbf{p}}_{i}(\mathbf{r}_{i})-\hat{\mathbf{p}}_{i}^{\dagger}(\mathbf{r}_{i})||_{1} \tag{10}\]

The combined objective is then:

\[\mathcal{L}_{\text{NSFF}}=\mathcal{L}_{\hat{\mathbf{c}}}+\lambda_{\hat{\mathbf{c}}_{i\to j}}\mathcal{L}_{\hat{\mathbf{c}}_{i\to j}}+\lambda_{\hat{d}}\mathcal{L}_{\hat{d}}+\lambda_{\hat{\mathbf{p}}}\mathcal{L}_{\hat{\mathbf{p}}} \tag{11}\]

Additional regularizations encourage occlusion weights to be close to one; scene flow to be small, locally constant, and cyclically consistent; and blending weight \(v\) to be sparse.

### Semantic Attention Flow Fields

Thus far, we have only integrated low-level or _bottom-up_ features into the field to represent a video. However, high-level or _top-down_ features are also useful in defining objects and helping downstream tasks like segmentation.
For example, static/dynamic blend \(v\) estimates whether the volume appears to be occupied by some moving entity, but this is not the same as objectness. As such, we extract 2D semantic features and attention (or saliency) values from a pretrained DINO-ViT network, then optimize the SAFF such that unknown 3D semantic and attention features over time can be projected to recreate their 2D complements. This helps us to ascribe semantic meaning to the volume and to identify objects. To estimate semantic features \(\mathbf{s}\) and attention \(\mathbf{a}\) at 3D points in the volume at time \(i\), we add two new heads to both the static \(F_{\mathbf{\theta}}^{\text{st}}\) and the dynamic \(F_{\mathbf{\theta}}^{\text{dy}}\) networks: \[F_{\mathbf{\theta}}^{\text{st}}:(\mathbf{x},\mathbf{\omega}) \rightarrow(\mathbf{c}^{\text{st}},\sigma^{\text{st}},v,\mathbf{s}^ {\text{st}},\mathbf{a}^{\text{st}}) \tag{12}\] \[F_{\mathbf{\theta}}^{\text{dy}}:(\mathbf{x},\mathbf{\omega},i) \rightarrow(\mathbf{c}_{i}^{\text{dy}},\sigma_{i}^{\text{dy}}, \mathbf{f}_{i},w_{i},\mathbf{s}_{i}^{\text{dy}},\mathbf{a}_{i}^{\text{dy}}) \tag{13}\] As semantic features have been demonstrated to be somewhat robust to view dependence [1], in our architectures both heads for \(\mathbf{s}\), \(\mathbf{a}\) are taken off the backbone before \(\mathbf{\omega}\) is injected. To render semantics and attention from the volume, we replace the color term \(\mathbf{c}\) in Eq. (4) with \(\mathbf{s},\mathbf{a}\): \[\sigma_{i}\mathbf{s}_{i} =v\sigma^{\text{st}}\mathbf{s}^{\text{st}}+(1-v)\sigma_{i}^{ \text{dy}}\mathbf{s}_{i}^{\text{dy}} \tag{14}\] \[\sigma_{i}\mathbf{a}_{i} =v\sigma^{\text{st}}\mathbf{a}^{\text{st}}+(1-v)\sigma_{i}^{\text {dy}}\mathbf{a}_{i}^{\text{dy}}\] (15) \[\hat{\mathbf{s}}_{i} =\int_{t_{n}}^{t_{f}}T_{i}\sigma_{i}\mathbf{s}_{i}\,dt\ \ \ \text{and}\ \ \hat{\mathbf{a}}_{i}=\int_{t_{n}}^{t_{f}}T_{i}\sigma_{i}\mathbf{a}_{i}\,dt \tag{16}\] To encourage the flow to respect semantic features and attention over time, we penalize complementary flow losses: \[\mathcal{L}_{\hat{\mathbf{s}}_{i\to j}}=\frac{1}{|\mathcal{R}|| \mathcal{N}|}\sum_{\mathbf{r}_{i}\in\mathcal{R}}\sum_{j\in\mathcal{N}}\hat{w}_{ i\to j}(\mathbf{r}_{i})||\hat{\mathbf{s}}_{i\to j}(\mathbf{r}_{i})- \hat{\mathbf{s}}_{j}^{\dagger}(\mathbf{r}_{j})||_{2}^{2}\] \[\mathcal{L}_{\hat{\mathbf{a}}_{i\to j}}=\frac{1}{|\mathcal{R}|| \mathcal{N}|}\sum_{\mathbf{r}_{i}\in\mathcal{R}}\sum_{j\in\mathcal{N}}\hat{w}_{ i\to j}(\mathbf{r}_{i})||\hat{\mathbf{a}}_{i\to j}(\mathbf{r}_{i})- \hat{\mathbf{a}}_{j}^{\dagger}(\mathbf{r}_{j})||_{2}^{2} \tag{17}\] Finally, to supervise the two extra heads, we add respective losses on the reconstruction of the 2D semantic and attention features from projected 3D volume points: \[\mathcal{L}_{\hat{\mathbf{s}}} =\frac{1}{|\mathcal{R}|}\sum_{\mathbf{r}_{i}\in\mathcal{R}}|| \hat{\mathbf{s}}_{i}(\mathbf{r}_{i})-\hat{\mathbf{s}}_{i}^{\dagger}(\mathbf{r}_{i })||_{2}^{2} \tag{18}\] \[\mathcal{L}_{\hat{\mathbf{a}}} =\frac{1}{|\mathcal{R}|}\sum_{\mathbf{r}_{i}\in\mathcal{R}}|| \hat{\mathbf{a}}_{i}(\mathbf{r}_{i})-\hat{\mathbf{a}}_{i}^{\dagger}(\mathbf{r}_{i })||_{2}^{2} \tag{19}\] Unlike depth and scene flow priors, these are not priors--there is no self-consistency for semantics to constrain their values. Thus, we _do not_ decay their contribution. While decaying avoids disagreements between semantic and attention features and color-enforced scene geometry, it also leads to a loss of useful meaning (please see supplemental). 
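For concreteness, the feature rendering and its (non-decayed) supervision in Eqs. (14)-(19) can be sketched in the same style as the color quadrature above; array shapes and names are again illustrative assumptions:

```python
import numpy as np

def render_feature(sigma_st, f_st, v, sigma_dy, f_dy, dt):
    """Eqs. (14)-(16): integrate a blended feature along a ray, exactly
    as for color. f_st, f_dy: [S, D] per-sample features, where D=64
    for semantics and D=1 for attention."""
    sigma_f = (v * sigma_st)[:, None] * f_st \
            + ((1 - v) * sigma_dy)[:, None] * f_dy
    sigma = v * sigma_st + (1 - v) * sigma_dy
    T = np.exp(-(np.cumsum(sigma * dt) - sigma * dt))
    return (T[:, None] * sigma_f * dt).sum(axis=0)  # [D] rendered feature

def feature_loss(f_hat, f_target):
    """Eqs. (18)-(19): plain L2 against the 2D DINO-ViT targets; unlike
    the depth/flow priors, this loss weight is never decayed."""
    return ((f_hat - f_target) ** 2).sum()
```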
Our final loss then becomes:

\[\mathcal{L}_{\text{SAFF}}=\mathcal{L}_{\text{NSFF}}+\lambda_{\hat{\mathbf{s}}_{i\to j}}\mathcal{L}_{\hat{\mathbf{s}}_{i\to j}}+\lambda_{\hat{\mathbf{a}}_{i\to j}}\mathcal{L}_{\hat{\mathbf{a}}_{i\to j}}+\lambda_{\hat{\mathbf{s}}}\mathcal{L}_{\hat{\mathbf{s}}}+\lambda_{\hat{\mathbf{a}}}\mathcal{L}_{\hat{\mathbf{a}}} \tag{20}\]

### Semantic Attention Pyramids

When thinking about scenes, we might argue that semantics from an ideal extractor should be scale invariant, as distant objects have the same class as close objects. We might also argue that saliency (or attention features) may not be scale invariant, as small details in a scene should only be salient when viewed close up. In practice, both extracted features vary across scale and have limited resolution, e.g., DINO-ViT [6] produces one output for each \(8\times 8\) patch. But, from this, we want semantic features and saliency for every RGB pixel that still respect scene boundaries.

Thus far, work on _static_ scenes has ignored the input/feature resolution mismatch [17] as multi-view constraints provide improved localization within the volume. For monocular video, this approach has limitations [42]. Forming many constraints on dynamic objects requires long-term motion correspondence--a tricky task--and so we want to maximize the resolution of any input features where possible without changing their meaning. One way may be through a pyramid of semantic and attention features that uses a sliding window approach at finer resolutions. Averaging features could increase detail around edges, but we must overcome the practical limit that these features are not stable across scales. This is especially important for saliency: unlike typical RGB pyramids that must preserve energy in an alias-free way [2], saliency changes significantly over scales and does not preserve energy.

Consider a feature pyramid \(\mathcal{P}\) with loss weights per level:

\[\mathcal{L}_{\mathcal{P}\mathbf{s}}=\sum_{i\in\mathcal{P}}\lambda_{\mathbf{s}}^{i}\mathcal{L}_{\mathbf{s}}^{i}\ \ \ \ \ \mathcal{L}_{\mathcal{P}\mathbf{a}}=\sum_{i\in\mathcal{P}}\lambda_{\mathbf{a}}^{i}\mathcal{L}_{\mathbf{a}}^{i} \tag{21}\]

Naively encouraging scale-consistent semantics and whole-image saliency, e.g., \(\lambda_{\mathbf{s}}\!=\!\left\{\nicefrac{{1}}{{3}},\nicefrac{{1}}{{3}},\nicefrac{{1}}{{3}}\right\}\) with \(\lambda_{\mathbf{a}}\!=\!\left\{1,0,0\right\}\), empirically leads to poor recovered object edges because the balanced semantics and coarse saliency compete over where the underlying geometry is. Instead, we weight both equally: \(\lambda_{\mathbf{s}}=\lambda_{\mathbf{a}}=\left\{\nicefrac{{1}}{{9}},\nicefrac{{4}}{{9}},\nicefrac{{4}}{{9}}\right\}\). Even though the coarse layer has a smaller weight, it is sufficient to guide the overall result. This balances high-resolution edges from fine layers and whole-object features from coarse layers while reducing geometry conflicts, and leads to improved features (Fig. 2).

Of course, any sliding window must contain an object to extract reliable features for that object. At coarse levels, an object is always in view. At fine levels, an object is only captured in _some_ windows. Objects of interest tend to be near the middle of the frame, meaning that boundary windows at finer pyramid levels contain features that less reliably capture those objects. This can cause spurious connections in clustering.
To cope with this, we relatively decrease finer-level boundary window weights: we upsample all levels to the finest level, then increase the coarsest level weight towards the frame boundary to \(\lambda_{\mathbf{s}}=\lambda_{\mathbf{a}}=\left\{\nicefrac{{1}}{{3}},\nicefrac{{1}}{{3}},\nicefrac{{1}}{{3}}\right\}\).

### Using SAFF for Saliency-aware Clustering

We now wish to isolate salient objects. Even in dynamic scenes, relevant objects may or may not move, meaning that analysis of dynamic elements is not sufficient (cf. [46]). One existing method is to predict segmentation end-to-end [23]. However, end-to-end learning requires priors about the scene provided by supervision. Even when pretrained on large-scale data, the approach could fail given unseen test distributions. To achieve scene-specific decompositions based on the semantics present in each video, we expand the 2D clustering of Amir _et al_. [1] to cope with volumes; this allows segmenting novel spatio-temporal views. Even though DINO-ViT features are trained on images, they have been shown to represent saliency over time in videos [6]. Some works optimize a representation with a fixed number of clusters, e.g., via slot attention [24] in NeRFs [38, 52]. Instead, we cluster using elbow \(k\)-means, letting us adaptively find the number of clusters after optimization. This is more flexible than baking in an anticipated number of slots (sometimes with fixed semantics), and lets us cluster and segment at novel spatio-temporal viewpoints.

Finally, we cluster on rendered volume 2D projections over time. Given a volume reconstruction, intuitively we could cluster directly in 3D over time. While promising, experimentally this was less successful than clustering on volume projections (Supplemental C.2). This is because the monocular input with narrow baseline makes it challenging to precisely localize anything in 3D, including features (we discuss this further later). However, it still allows us to render new views of objects or manipulate their spacetime (Fig. 3).

Figure 2: **Semantics and saliency improve by both volume integration and by our pyramid.** On _Balloon NBoard_, resolution is increased and unwanted saliency is softened. Semantics are visualized as the three most significant PCA dimensions; specific colors are less meaningful.

**Method** For \(N\) input poses, we render semantics (\(N\times H\times W\times 64\)) and saliency (\(N\times H\times W\times 1\)) from the SAFF, and treat each pixel as a feature point. Then, we cluster all pixels together using elbow \(k\)-means to produce an initial set of separate regions. For each cluster \(c\), for each image, we calculate the mean attention of all pixels within the cluster, \(\bar{\mathbf{a}}_{c}\). If \(\bar{\mathbf{a}}_{c}>0.07\), then this cluster is salient for this image. Finally, all images vote on saliency: if more than \(70\%\) agree, the cluster is salient. Salient objects may still be split into semantic parts: _e.g_., in Fig. 3, the person's head/body are separated. Plus, unwanted background saliency may exist, _e.g_., input \(\hat{\mathbf{a}}^{\dagger}\) is high for the teal graphic on the wall. As such, before saliency voting, we merge clusters whose centroids have a cosine similarity \(>0.5\). This reduces the first problem as heads and bodies are similar, and reduces the second problem as merging the graphic cluster into the background reduces its _average_ saliency (Fig. 3).
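A condensed sketch of this procedure is below; the elbow-selected \(k\) is assumed to be precomputed, thresholds follow the values above, and the greedy pairwise merge is a simplification of the actual merging step:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def saliency_aware_clustering(feats, sal, k, t_attn=0.07, t_vote=0.7,
                              t_merge=0.5):
    """feats: [N, H, W, 64] rendered semantics; sal: [N, H, W] rendered
    saliency. Returns per-pixel labels and the list of salient clusters."""
    N, H, W, D = feats.shape
    flat = feats.reshape(-1, D)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(flat)
    labels = labels.reshape(N, H, W)
    # Merge clusters whose centroids are similar (cosine > t_merge).
    cents = np.stack([flat[labels.ravel() == c].mean(axis=0)
                      for c in range(k)])
    sim = cosine_similarity(cents)
    for c1 in range(k):
        for c2 in range(c1 + 1, k):
            if sim[c1, c2] > t_merge:
                labels[labels == c2] = c1
    # Vote across images: a cluster is salient if more than t_vote of the
    # images give it mean attention above t_attn.
    salient = []
    for c in np.unique(labels):
        votes = [sal[i][labels[i] == c].mean() > t_attn
                 for i in range(N) if (labels[i] == c).any()]
        if votes and np.mean(votes) > t_vote:
            salient.append(int(c))
    return labels, salient
```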
For novel space-time views, we render feature images from the volume, then assign cluster labels to each pixel according to its similarity with stored centroids from the input views. All clusters not similar to the stored salient clusters are marked as background. To isolate an object from the volume, we sample 3D points along each input ray, then ascribe the label from the semantically-closest centroid. We set non-salient label points to have zero density.

## 4 Experiments

We show the impact of adding semantic and saliency features through reconstruction, scene decomposition, and foreground experiments. Please also see our supplemental videos.

**Data: Dynamic Scene Dataset (Masked)** We use NVIDIA's Dynamic Scene Dataset [51] of eight sequences. Each sequence comprises 12 cameras simultaneously capturing video at 24 time steps. We manually annotate object masks for view and time step splits; this data will be released. Please see our supplemental material for examples. Then, we define three data splits per sequence:

1. _Input_: A monocular camera that moves position for every timestep is simulated from the input sequences; we use Yoon _et al_.'s input sequences [51].
2. _Fix Cam 0_ (hold out): We fix the camera at position 0 as time plays, requiring novel view and time synthesis. \(\{(\mathrm{cam}_{0},\mathrm{time}_{i}),i\in[1,2,...,23]\}\).
3. _Fix Time 0_ (hold out): We fix time at step 0 as the camera moves, requiring novel view and time synthesis. \(\{(\mathrm{cam}_{i},\mathrm{time}_{0}),i\in[1,2,...,11]\}\).

**Metrics** To assess clustering performance, we use the Adjusted Rand Index (ARI; \([-1,1]\)). This compares the similarity of two assignments without label matching, where random assignment would score \(\approx\)0. For foreground segmentation, we compute IoU (Jaccard), and for RGB quality we use PSNR, SSIM, and LPIPS.

### Comparisons including ablations

We compare to methods that operate on monocular videos and do not require user input or initial masks. While very recent neural volume works use features, none meet these conditions, e.g., the related N3F [42] requires user input.

**SAFF (ours)** We optimize upon the input split of each scene, and perform clustering to obtain object segmentations. To produce a foreground, we merge all salient objects.

**-- w/ pyr \(\lambda_{\hat{\mathbf{a}}}=\{1,0,0\}\)** Pyramid with only coarse saliency (Sec. 3.3) and balanced semantic weight across levels.

**-- w/o pyr** No pyramid (Sec. 3.3); we optimize with features and saliency extracted from the input image only.

**-- w/o merge** With pyramid, but we remove cluster merging inside the saliency-aware clustering algorithm.

**-- w/ blend \(v\)** To compare generic dynamic segmentation to saliency segmentation, we use the static/dynamic weight instead of volume saliency to segment foreground objects. We set every pixel below the 80% \(v\) quantile in each image to be background, and otherwise foreground.

**-- w/ post process** We add a step after the saliency-aware clustering to refine edges using a conditional random field (please see supplemental material for details). This gains significantly from the depth estimated via volume reconstruction, producing sharp and detailed edges.

Figure 3: **Saliency-aware clustering improves decomposition.** On _Dynamic Face_, the head and body are semantically and saliently different, but are mutually different from the background. This allows us to segment objects cleanly and manipulate a time-varying field of the object.
**NSFF** [21] This method cannot produce semantic clusterings. While saliency and blend weight \(v\) have different meanings, if we compare our \(v\) to NSFF's, then we can see any impact of a shared backbone with attention heads upon the static/dynamic separation. We disable the supervised motion mask initialization in NSFF as our method does not use such information.

**D\({}^{2}\)NeRF** [46] This method also cannot produce semantic clusterings. Over HyperNeRF [32], it adds a shadow field network and further losses to try to isolate objects into the dynamic NeRF over the separate static one. The paper also compares to NSFF without motion mask initialization.

**DINO-ViT (2D) [1]** We ignore the volume and pass 2D semantic and attention features into the clustering algorithm. This cannot apply to novel viewpoints. Instead, we evaluate the approach upon _all_ multi-view color images--input and hold-out--whereas other methods must render hold-out views. We use pyramid processing (Sec. 3.3).

**-- w/o pyr** No pyramid; upsample to input RGB size.

**ProposeReduce (2D) [23]** As a general comparison point, we apply this state-of-the-art 2D video segmentation method. For object segmentation, we use a ProposeReduce network that was pretrained with supervision on YouTube-VIS 2019 [49] for instance segmentation. For foreground segmentation, we use the same method but with weights pretrained on UVOS [5], which is data intended specifically for unsupervised foreground segmentation. In both cases, as ProposeReduce is only a 2D method, we provide the method with hold-out images for splits with novel views, rather than our approach that must render novel views at hold-out poses.

### Findings

**View synthesis and depth** First, we evaluate whether RGB view synthesis performance is affected by adding more heads to the MLP. We find that it is not affected (Tab. 2). D\({}^{2}\)NeRF's hyper-spacetime deformation has trouble reconstructing images on this dataset, producing distorted dynamic objects or failing to freeze time. For scene geometry over time (depth), we produce similar results to NSFF (cf. our supplement).

**Dynamic scene decomposition** Second, we ask the relevant methods to separate the background and each foreground object individually (Tab. 3). The baseline 2D DINO-ViT method produces reasonable results, with our pyramid approach in 2D increasing performance. But, being only 2D, this fails to produce a consistent decomposition across novel spacetime views _even_ when given ground truth input RGB images. This shows the value of the volume integration for constraining the solution.

| | Input L ↓ | Input S ↑ | Input P ↑ | Fix Cam 0 L | Fix Cam 0 S | Fix Cam 0 P | Fix Time 0 L | Fix Time 0 S | Fix Time 0 P |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| D\({}^{2}\)NeRF | 0.115 | 0.790 | 23.91 | 0.228 | 0.565 | 18.04 | 0.344 | 0.309 | 13.85 |
| NSFF w/o masks | 0.070 | 0.805 | 23.92 | 0.100 | 0.762 | 21.68 | 0.302 | 0.386 | 14.92 |
| SAFF (ours) | 0.070 | 0.805 | 23.92 | 0.100 | 0.762 | 21.70 | 0.302 | 0.386 | 14.93 |

Table 2: **SAFF does not hurt image quality.** Adding semantics and attention on the same backbone produces the same image quality as NSFF [21]. Metrics: L is LPIPS (\([0,1]\), lower is better), S is SSIM (\([0,1]\), higher is better), P is PSNR (\([0,\infty]\), higher is better).

Figure 4: **SAFF object segmentations show balanced quality and apply to novel spacetime views (e).** Basic DINO-ViT produces low-quality segmentations and misses objects. A state-of-the-art 2D video learning method [23] sometimes has edge detail (_Umbrella_, legs) but other times misses detail and objects (_Balloon NBoard_). Our approach balances these while recovering a 3D scene representation (Tab. 3).
Next, as a supervised method, ProposeReduce can produce good results (Fig. 4), but sometimes misses salient objects or fails to join them behind occluding objects, and only sometimes produces better edges than our method without post-processing, as it tends to produce over-smooth edges. As ProposeReduce is a 2D method, it benefits from being given ground truth images in hold-out sets. Instead, our approach must render hold-out views via the volume reconstruction. This produces more consistent segmentations through spacetime manipulations--this is the added value of volume integration through learned semantic and attention heads. Ablated components show the value of our pyramid step, its coarse-saliency-only variant, and the cluster merge and image post-processing steps. Qualitatively, we see good detail (Fig. 4); post-processing additionally improves edge quality and removes small unwanted regions.

First, to contrast these different approaches, we compare to a result from slot-attention-based SAVi++ [8]. This method trains on thousands of supervised MOVi [11] sequences with per-object masks, whereas we tangentially use generic pre-trained features and gain better edges from volume integration. While expensive, combining these two approaches could give accurate instance-level scene objects.

Second, DINO-ViT saliency may attend to unwanted regions. In Figure 6, bottom, we might think that the static pillars could be isolated using scene flow information. But often our desired subjects do not move (cf. people in _Umbrella_ or _Balloon NBoard_). In applications or data where we can assume that salient objects are dynamic, we can exploit SAFF's 4D scene reconstruction to reject static-but-salient objects by also merging clusters via scene flow: first, we project \(\mathbf{f}\) over each timestep into each input camera pose--this simulates optical flow with a static camera. Clusters are marked as salient per image if the mean flow magnitude per cluster \(|\bar{\mathbf{p}}|>0.07\) _and_ mean attention \(\bar{\mathbf{a}}_{c}>0.07\). Finally, as before, a cluster is globally salient if 70% of images agree (Fig. 6f).

Third, while a SAFF is a 3D representation over time, one question is why clustering in projected 2D space gives better performance than a 3D clustering approach (Supplemental C.2). Given monocular input with narrow baselines, while volume clustering is possible, the scene reconstruction is only of surfaces and the geometry can be noisy. Consider that depth integrated along a ray can still be accurate even though geometry at specific points in 3D may be inaccurate or 'fluffy'. This is in contrast to dynamic scenes captured with multi-camera setups or static scenes captured with wide baselines. Thus, taking advantage of separation in 3D for monocular casually-captured videos is harder than it appears. We demonstrate that our approach still benefits from volume integration without explicit 3D clustering.

With respect to evaluation, it is difficult to collect ground-truth segmented 3D data for dynamic real-world scenes (none exist to our knowledge); this is an area for future work. Finally, we manage varying saliency over scales through the pyramid and volume approach.
Given the concept of saliency, ideally it could be requested from the volume at different scales. This is also an area for future consideration.

**Acknowledgements** The authors thank the computer vision community in New England for feedback, and acknowledge funding from NSF CNS-2038897 and an Amazon Research Award. Eliot was supported by a Randy F. Pausch '82 Computer Science Undergraduate Summer Research Award at Brown University.
2308.03043
3D-EX: A Unified Dataset of Definitions and Dictionary Examples
Definitions are a fundamental building block in lexicography, linguistics and computational semantics. In NLP, they have been used for retrofitting word embeddings or augmenting contextual representations in language models. However, lexical resources containing definitions exhibit a wide range of properties, which has implications in the behaviour of models trained and evaluated on them. In this paper, we introduce 3D-EX, a dataset that aims to fill this gap by combining well-known English resources into one centralized knowledge repository in the form of <term, definition, example> triples. 3D-EX is a unified evaluation framework with carefully pre-computed train/validation/test splits to prevent memorization. We report experimental results that suggest that this dataset could be effectively leveraged in downstream NLP tasks. Code and data are available at https://github.com/F-Almeman/3D-EX.
Fatemah Almeman, Hadi Sheikhi, Luis Espinosa-Anke
2023-08-06T07:59:12Z
http://arxiv.org/abs/2308.03043v2
# 3D-EX: A Unified Dataset of Definitions and Dictionary Examples

###### Abstract

Definitions are a fundamental building block in lexicography, linguistics and computational semantics. In NLP, they have been used for retrofitting word embeddings or augmenting contextual representations in language models. However, lexical resources containing definitions exhibit a wide range of properties, which has implications in the behaviour of models trained and evaluated on them. In this paper, we introduce 3D-EX, a dataset that aims to fill this gap by combining well-known English resources into one centralized knowledge repository in the form of \(<\)term, definition, example\(>\) triples. 3D-EX is a unified evaluation framework with carefully pre-computed train/validation/test splits to prevent memorization. We report experimental results that suggest that this dataset could be effectively leveraged in downstream NLP tasks. Code and data are available at [https://github.com/F-Almeman/3D-EX](https://github.com/F-Almeman/3D-EX).

## 1 Introduction

Lexicographic definitions have played an important role in NLP. For example, definitions, and more specifically, term-hypernym pairs occurring in them, constitute a core component in applications such as taxonomy learning (Navigli et al., 2011; Velardi et al., 2013; Espinosa-Anke et al., 2016), knowledge base construction (Delli Bovi et al., 2015), or for augmenting language models (LMs) (Joshi et al., 2020; Chen et al., 2022). For this reason, numerous works have proposed methods to extract definitions from corpora (definition extraction, or DE) (Navigli and Velardi, 2010; Espinosa-Anke and Schockaert, 2018; Spala et al., 2020). However, DE, traditionally framed as a sentence classification problem, plateaus quickly in terms of its applicability to real-world settings for a number of reasons, namely: (1) it is tied to a reference corpus; (2) it does not handle flexible contexts (e.g., definitional information appearing across several sentences); and (3) incorporating monolithic sentence-level definitional knowledge into LMs during pretraining is not straightforward.

A complementary task to the above is definition modeling (DM), a promising direction both from resource creation and NLP standpoints. DM is the task of automatically generating human-readable lexicographic definitions or glosses given some input. From its inception, where Noraset et al. (2017) trained a bidirectional LSTM on \(\langle t,d\rangle\) pairs, where \(t\) is an input term and \(d\) is its corresponding definition, more recent contributions in this area have leveraged contextualized representations by augmenting \(t\) with some context \(c\) (Ni and Wang, 2017; Gadetsky et al., 2018; Ishiwatari et al., 2019; Reid et al., 2020; Bevilacqua et al., 2020).

A crucial prerequisite for enabling, among others, successful DM systems is having access to datasets that combine terms, definitions, and _good dictionary examples_ (Kilgarriff et al., 2008; Kosem et al., 2019; Frankenberg-Garcia et al., 2019). In lexicographic resources, these good dictionary examples are written by professional lexicographers or domain experts, and often adhere to some style guidelines. This makes these sentences a valuable contextual resource for understanding the meaning of words, sometimes complementing knowledge gaps that may still exist even after reading a concept's definition. DM is, arguably, one of the most recent direct NLP applications of lexical resources.
We therefore argue for the need of a centralized repository that could be used to train and test DM systems, explore out-of-domain generalization and, most importantly, act as a unified test bed for lexical semantics tasks. In this paper, we fill this gap by introducing 3D-EX, a dataset that unifies a diverse set of English dictionaries and encyclopedias. Our results suggest that, indeed, 3D-EX is a valuable resource for testing generative models in lexicographic contexts due to its varied sources, which make it hard to memorize, and is also helpful for augmenting competitive baselines in downstream tasks.

## 2 Related work

Lexical resources have a long-standing tradition in lexical semantics Camacho-Collados et al. (2018). Given the breadth of the area, we will review some of the most prominent existing resources, and then focus on how these resources have been leveraged in NLP tasks.

### Lexical resources

Arguably, the best known lexical resource in NLP is WordNet (WN) Miller (1995), and as Hovy et al. (2013) described it, "the list of papers using WN seems endless". Other resources which have complemented or augmented WN in the NLP space include knowledge bases such as Yago Suchanek et al. (2008), DBPedia Auer et al. (2007), BabelNet Navigli and Ponzetto (2012) or WikiData Vrandecic and Krotzsch (2014)1. Traditional dictionaries have also played an important role in NLP; we review these in Section 3, as they constitute the backbone of 3D-EX.

Footnote 1: Note that all these resources include definitions, unlike other resources designed for different purposes such as commonsense reasoning (e.g., ConceptNet Speer et al. (2012)).

### Applications in NLP

Lexical resources in general, and dictionaries in particular, have played a critical role in recent years for improving (knowledge-rich and organic) NLP systems. For instance, Faruqui et al. (2014) retrofitted word embeddings using semantic relations; Joshi et al. (2020) and Chen et al. (2022) used definitional information to augment pretrained LMs; and Delli Bovi et al. (2015), Espinosa-Anke et al. (2016) and Xu et al. (2022) used definitions for generating knowledge bases. In parallel, a generative avenue mostly revolving around DM has garnered substantial interest, where earlier works used LSTMs Noraset et al. (2017); Gadetsky et al. (2018); Ishiwatari et al. (2019), and later contributions shifted to LMs Bevilacqua et al. (2020); Huang et al. (2021); August et al. (2022). These works used DM models for downstream tasks like word sense disambiguation (WSD) Navigli (2009), word-in-context classification Pilehvar and Camacho-Collados (2019) or specificity-controlled glossary writing. Other works have explored complementary spaces, e.g., exemplification modeling (i.e., generating suitable dictionary examples given a word-definition pair) or full-fledged dictionary writing Barba et al. (2021); de Schryver and Joffe (2023); Sierra et al. (2023).

### Datasets

Let us review the datasets we integrate into 3D-EX and how they have been applied either in lexicography or downstream NLP tasks.

**WordNet:** WN is an electronic lexical database for English that organises words in groups of synonyms called _synsets_ Miller (1995); Fellbaum (2013). Each synset is described by its definition, surface forms (lemmas), examples of usage (where available), and the relations between synsets, e.g., hypernymy (is-a), meronymy (is-part) or troponymy (manner-of). WN's primary use in NLP is as a sense inventory Agirre and Edmonds (2007); Zhang et al. (2022); Pu et al. (2023).
**CHA:** CHA Chang and Chen (2019) is an online dataset of words, definitions and dictionary examples from the Oxford Dictionary. It can be considered a corpus of "traditional" dictionary definitions, and has been leveraged for DM by Bevilacqua et al. (2020) and for benchmarking the quality of WN's examples Almeman and Espinosa-Anke (2022).

**Wikipedia:** Wikipedia is an online encyclopedia that is created by various contributors on the web (Yano and Kang, 2016). In this work, we use a dataset built by Ishiwatari et al. (2019) from Wikipedia and Wikidata, where each entry consists of a phrase, description, and example. This dataset has been used to evaluate DM approaches that combine distributional and lexical semantics using continuous latent variables Reid et al. (2020).

**Urban:** Urban Dictionary is a crowd-sourced dictionary for terms that are not typically captured by traditional dictionaries Wilson et al. (2020). In this work, we use the URBAN dataset created from Urban Dictionary by Reid et al. (2020) as a corpus of uncommon and slang words.

**Wiktionary:** Wiktionary is a freely available web-based dictionary that provides detailed information on lexical entries such as definitions, examples of usage, pronunciation, translations, etc. (Bajcetic and Declerck, 2022). It has been used as a resource for WSD (Meyer and Gurevych, 2011; Matuschek and Gurevych, 2013), especially for retrieving WSD examples which augment labeled data for rare senses (Blevins et al., 2021) and for non-English tasks (Henrich et al., 2012; Segonne et al., 2019).

**Webster's Unabridged:** Webster's Unabridged is a version of Webster's dictionary (Webster, 1900) served by the Project Gutenberg initiative (Various, 2009). It describes English words by providing definitions and notes (where needed).

**Hei++:** Hei++ is a dataset that associates human-made definitions with adjective-noun phrases. Since there is no publicly available dataset to evaluate the quality of definition generation models on free phrases, Hei++ was built by Bevilacqua et al. (2020) using the test split of the HeiPLAS dataset (Hartung, 2015).

**MultiRD:** The MultiRD dataset was created by Zhang et al. (2019) to evaluate a multi-channel reverse dictionary model that has multiple predictors to predict attributes of target words from given input queries. This dataset uses the English dictionary definition dataset created by Hill et al. (2016) as the training set and three test sets: a _seen_ definition set, an _unseen_ definition set, and a description set that includes pairs of words and human-written descriptions. For each entry, it also includes morphemes, lexical names and sememes.

**CODWOE:** The CODWOE (COmparing Dictionaries and WOrd Embeddings) SemEval 2022 shared task (Mickus et al., 2022) aimed to compare two types of semantic descriptions, namely dictionary glosses and word embedding representations. This task was applied to multiple languages, and one dataset per language was provided. Each dataset contains a list of examples and, subsequently, each example contains the following key fields: identifier (includes the word), gloss, and embedding-related information.

**Sci-definition:** Sci-definition is a dataset constructed for the task of generating definitions of scientific terms with controllable complexity (August et al., 2022). The definitions are drawn from MedQuAD (Abacha and Demner-Fushman, 2019) and Wikipedia Science Glossaries2.
For each term, 10 journal abstracts are provided from S2ORC (Lo et al., 2020) to allow models to incorporate related scientific knowledge (Fan et al., 2019; Clark et al., 2018).

Footnote 2: [https://en.wikipedia.org/wiki/Category:Glossaries_of_science](https://en.wikipedia.org/wiki/Category:Glossaries_of_science).

## 3 Building 3D-EX: Data Cleaning

A prerequisite for unifying the above resources into 3D-EX is to perform a number of preprocessing steps. This process includes: lower-casing; removing special tokens and any noisy characters such as the tab sign; removing entries whose definitions have more than 10% non-alphanumeric characters; removing entries that have null values either in words or definitions; removing entries where examples are the same as the defined terms; and removing duplicate entries within each dataset or split.

### Dataset-specific cleaning

While the above steps are applied to all datasets, each individual resource in 3D-EX undergoes a specific set of preprocessing steps:

**Urban:** since Urban Dictionary is built by end-users who are not trained lexicographers, we found that it has a number of noisy definitions (typically too short, or containing a high proportion of emoticons, exclamation marks, and so forth). To handle them, we built a binary classifier based on RoBERTa-base (Liu et al., 2019), where 4,000 positive examples are randomly sampled from Wiktionary, CHA and WN, and 2,000 negative examples are randomly sampled from Urban. This classifier, which obtains almost perfect accuracy, is then applied to the entirety of the Urban dataset, leaving 3D-EX only with Urban entries that are similar to those in more traditional resources, both in content and, more importantly, in style. Table 1 lists examples of this filtering process, where we can see Urban-specific properties such as colloquialisms (phrasal verbs, personal pronouns, lack of punctuation marks or a high proportion of slang/unknown words).
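A compact sketch of such a style filter using the Hugging Face `transformers` Trainer is shown below; the toy data, label orientation, and all hyperparameters are illustrative assumptions rather than the released training setup:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

# Hypothetical examples: label 1 = "traditional-style" definition
# (sampled from Wiktionary/CHA/WN), label 0 = noisy Urban-style one.
data = {"definition": ["a domesticated carnivorous mammal",
                       "omg literally the best word everrr!!!"],
        "label": [1, 0]}

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)

# Pad to a fixed length so the default collator can batch the examples.
ds = Dataset.from_dict(data).map(
    lambda b: tok(b["definition"], truncation=True,
                  padding="max_length", max_length=128),
    batched=True)

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="urban-style-filter"),
                  train_dataset=ds)
trainer.train()
# At inference time, Urban entries classified as noisy-style are dropped.
```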
**Wiktionary:** since some definitions in Wiktionary include the time when words were coined (e.g., "first attested in the late 16th century" or "from 16 c"), we deleted these spans using regular expressions.

**MultiRD:** we removed (again, using regular expressions) uninformative definitions such as "see synonyms at" and "often used in the plural".

**Sci-definition:** in order to construct the Sci-definition dataset as <term, definition, example> triples, we took the following steps: from each abstract, we extracted sentences that include the target term, which act as examples. From these examples, we excluded sentences only containing lists of keywords (typically found in abstracts), and also any example with more than 10% non-alphanumeric characters (similarly to our approach to cleaning definitions above).

### Unification and splitting

Tables 2 and 3 show summary statistics for each dataset. It is desirable to keep a reference to the original source (dictionary or glossary) for each entry; however, we noticed that there are <term, definition, example> duplicates across datasets. This is why the final 3D-EX resource represents the source field as an array containing all sources where that entry was found. Furthermore, in terms of splitting 3D-EX for experimentation, it is well known that an issue in word/phrase classification datasets can occur due to a phenomenon known as "lexical memorization" (Levy et al., 2015), where supervised models tend to associate prototypical features with word types. This has typically been addressed by releasing two splits, one random, and one known as "the lexical split", where all instances of a given term do not appear across splits (Vulic et al., 2017; Apidianaki and Soler, 2021; Espinosa-Anke et al., 2022). We follow this practice and release 3D-EX with a Random and a Lexical split. Tables 4 and 5 show examples of entries in 3D-EX and dataset statistics after unification in terms of unique instances across both splits, respectively.

Finally, to shed some light on how similarities are distributed across datasets, we investigate cosine similarities of their SBERT embeddings, computing similarities between terms and definitions, and between definitions and examples (see Figure 1). An immediate finding from inspecting these similarities is that Hei++, a carefully curated dataset used to evaluate multiword DM systems, shows the highest similarity between terms and definitions (Figure 1a). This is likely because entries in Hei++ are rather specific and do not include generic, frequently used terms; this, along with rather detailed definitions, makes their similarity high. On the opposite end of the spectrum we unsurprisingly find Urban Dictionary, although it remains for future work to explore whether Urban Dictionary's definitions are indeed dissimilar to their corresponding terms, or whether the terms are so rare that their embeddings are of lower quality. Interestingly, we also find that Sci-definition exhibits high similarity between terms and definitions. Concerning definitions and examples (Figure 1b), Sci-definition is again the one with the highest similarity scores, and interestingly, Wiktionary is the dictionary with the lowest aggregate similarity, which suggests that examples in Wiktionary could be purposefully written to cover different topics than their definitions. As with the case of Urban Dictionary, a careful semantic analysis of these dictionaries remains for future work.

| **Term** | **Definition** | **Example** | **F** |
| --- | --- | --- | --- |
| baby bentley | a way to describe a beat up old car you wish was a Bentley | Dave calls his beat-up Neon his baby Bentley | 1 |
| pang pang | pangers pinger pangang | Hi Marissa, it's Frank Ricard calling. I'll be in the neighborhood later on, and I was wondering if maybe you wanted to get some pang pang | 1 |
| suckafish | the correct term for one who you think is a sucker, a loser, or anything else | – | 1 |
| farblegarph | a lot of random garbage | The signal was disrupted, producing a lot of farblegarph | 0 |
| citrixify | the process of modifying or altering a computer application for the purpose of publishing the application using Citrix Presentation Server | In order to properly publish that Java-based application, I had to citrixify it so it would run in a seamless window | 0 |
| excellent | when something rocks and is excellent | Dude, that new haircut is excellent | 0 |

Table 1: Examples of Urban entries that were removed vs. retained (labels 1 vs. 0 in column **F**).

Figure 1: Histograms with SBERT-based cosine similarities of the datasets in 3D-EX.

## 4 Experiments and Results

In order to test the usefulness of 3D-EX, we perform an intrinsic set of experiments where we "stress test" the dataset for artifacts, indirect data leakage (near-synonyms), potential for memorization, etc.
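Before turning to these experiments, a minimal sketch of the lexical split described above is given below; the entry format and the split ratios are our own assumptions, not the released implementation:

```python
import random
from collections import defaultdict

def lexical_split(entries, ratios=(0.8, 0.1, 0.1), seed=0):
    """Group entries by term so that all instances of a given term land
    in exactly one split, preventing lexical memorization."""
    by_term = defaultdict(list)
    for entry in entries:   # entry = (term, definition, example, sources)
        by_term[entry[0]].append(entry)
    terms = sorted(by_term)
    random.Random(seed).shuffle(terms)
    n = len(terms)
    c1 = int(ratios[0] * n)
    c2 = int((ratios[0] + ratios[1]) * n)
    buckets = {"train": terms[:c1],
               "validation": terms[c1:c2],
               "test": terms[c2:]}
    # Expand the term buckets back into full entry lists per split.
    return {name: [e for t in ts for e in by_term[t]]
            for name, ts in buckets.items()}
```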
Such stress tests, we argue, are an important step to guarantee that 3D-EX can be used for testing lexical semantics models built on it.

### Source classification

In the task of _source classification_, the goal is, given a <term, definition> instance, to predict its original source. We posit that this is an important experiment to determine which sources are more unique (i.e., easier to classify), and which seem to conflate different lexicographic features (e.g., writing style, coverage or any other artifact). To this end, we fine-tune roberta-base Liu et al. (2019) for 3 epochs on the training set of 3D-EX. Note that this is a 9-way multilabel classification problem, since for a given <term, definition> tuple, there may be more than one associated source. We report the results of this experiment in Table 6. We can see how the lexical split is substantially harder than the random split.

### Reverse dictionary

Reverse dictionary (or concept finder) is a helpful application for copywriters, novelists, or translators seeking to find words or ideas that might be "on the tip of their tongue" Hill et al. (2016). It is also a reflection of the interactions between a speaker and the mental lexicon Zock (2004); Zock et al. (2010). More relevant to NLP, however, reverse dictionary datasets can be seen as benchmarks for evaluating representation learning methods, as there are works that have used definitions as, e.g., the sole source for learning word embeddings Bosc and Vincent (2017) or for debiasing them Kaneko and Bollegala (2021). This task is a ranking problem in which, given a definition, the goal is to retrieve a ranked list of the most relevant words, and it has a long-standing tradition in computational semantics Bilac et al. (2004); Dutoit and Nugues (2002); El-Kahlout and Oflazer (2004); Glassman et al. (1992); Thorat and Choudhari (2016). To establish a set of baseline results on this task, we report results from several embedding models on the random and lexical test sets. Note that while these baselines are unsupervised, we only report results on the test sets to accommodate future experiments by supervised systems. In terms of evaluation, we report _Mean Reciprocal Rank_ (MRR), which rewards the position of the first correct result in a ranked list of outcomes:

\[\text{MRR}=\frac{1}{|Q|}\sum_{i=1}^{|Q|}\frac{1}{rank_{i}}\]

where \(Q\) is a sample of experiment runs and \(rank_{i}\) refers to the rank position of the _first_ relevant outcome for the \(i\)th run.

| | **orig. #entries** | **cl. #terms** | **cl. #<T,D>** | **cl. #<T,D,E>** |
| --- | --- | --- | --- | --- |
| **WordNet** | 44,351 | 20,435 | 36,095 | 44,241 |
| **CHA** | 785,551 | 30,841 | 75,887 | 752,923 |
| **Wikipedia** | 988,690 | 162,809 | 167,569 | 960,097 |
| **Urban** | 507,638 | 119,016 | 145,574 | 145,896 |
| **Wiktionary** | 145,827 | 76,453 | 85,905 | 140,190 |
| **CODWOE** | 63,596 | 25,861 | 45,065 | 63,137 |
| **Sci-definition** | 8,263 | 5,281 | 6,251 | 166,660 |
| **Webster's Unabridged** | 159,123 | 89,234 | 143,782 | – |
| **MultiRD** | 901,200 | 50,460 | 671,505 | – |
| **Hei++** | 713 | 713 | 713 | – |
| **3D-EX** | – | 438,956 | 1,327,342 | 2,268,225 |

Table 2: Dataset statistics before (orig.) and after (cl.) preprocessing, in terms of unique entries involving terms (**T**), definitions (**D**), and examples (**E**). Statistics are grouped into two sets: datasets with examples (top) and without (bottom). The last row corresponds to the full 3D-EX dataset.
| | Term min. | Term max. | Term avg. | Def. min. | Def. max. | Def. avg. | Ex. min. | Ex. max. | Ex. avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| WordNet | 1 | 1 | 1 | 1 | 52 | 7.50 | 1 | 46 | 5.77 |
| CHA | 1 | 1 | 1 | 1 | 71 | 10.31 | 2 | 141 | 17.86 |
| Wikipedia | 1 | 16 | 18.4 | 1 | 33 | 60.12 | 2 | 40 | 18.70 |
| Urban | 1 | 31 | 1.47 | 1 | 32 | 10.01 | 2 | 42 | 11.45 |
| Wiktionary | 1 | 10 | 12.2 | 1 | 100 | 9.24 | 2 | 288 | 26.52 |
| CODWOE | 1 | 1 | 1 | 11 | 146 | 108 | 1 | 214 | 22.26 |
| Sci-definition | 1 | 11 | 7.0 | 2 | 94 | 18.09 | 7 | 265.72 | – |
| Webster's Unabridged | 1 | 3 | 10.0 | 1 | 90 | 93.9 | – | – | – |
| MultiRD | 1 | 1 | 1 | 144 | 11.72 | – | – | – | – |
| Hei++ | 2 | 2 | 2 | 3 | 28 | 8.12 | – | – | – |

Table 3: Length statistics per dataset after cleaning.

MRR is commonly used in Information Retrieval and Question Answering, but has also been shown to be well suited for lexical semantics tasks such as collocation discovery Wu et al. (2010); Rodriguez-Fernandez et al. (2016). We evaluate the performance of traditional sentence-encoding SBERT Reimers and Gurevych (2019) models, namely all-MiniLM-L6-v2, all-distilroberta-v1 and all-mpnet-base-v2. We also evaluate Instructor Su et al. (2022), an instruction-based encoder that can generate text embeddings tailored to any task given the appropriate prompt. Instructor works by optionally providing the type of the target text (e.g., "a Wikipedia sentence") and the task (e.g., "document retrieval"), to ultimately build a prompt such as "Represent this Wikipedia sentence for retrieving relevant documents".

| | Random prec. | Random rec. | Random F1 | Lexical prec. | Lexical rec. | Lexical F1 |
| --- | --- | --- | --- | --- | --- | --- |
| WordNet | 0.73 | 0.23 | 0.35 | 0.33 | 0.05 | 0.09 |
| CHA | 0.65 | 0.48 | 0.55 | 0.64 | 0.47 | 0.54 |
| Wiktionary | 0.80 | 0.53 | 0.64 | 0.65 | 0.33 | 0.44 |
| Wikipedia | 0.98 | 0.97 | 0.98 | 0.97 | 0.97 | 0.97 |
| Urban | 0.94 | 0.87 | 0.91 | 0.97 | 0.66 | 0.79 |
| CODWOE | 0.93 | 0.55 | 0.69 | 0.92 | 0.42 | 0.58 |
| Sci-definition | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
| Webster's Unabridged | 0.82 | 0.70 | 0.76 | 0.75 | 0.63 | 0.68 |
| MultiRD | 0.89 | 0.90 | 0.89 | 0.84 | 0.91 | 0.88 |
| Hei++ | 0 | 0 | 0 | 0 | 0 | 0 |
| Average | 0.77 | 0.62 | 0.68 | 0.71 | 0.54 | 0.60 |

Table 6: Results of the source classification experiment, reported for both the Random and Lexical splits of 3D-EX.
| **Term** | **Definition** | **Example** | **Source** |
| --- | --- | --- | --- |
| emergent | coming into existence | an emergent republic | WordNet |
| word | an order, a request or instruction; an expression of will | he sent word that we should strike camp before winter | Wiktionary |
| central london | innermost part of london, england | westminster is an area of central london within the city of westminster, part of the west end, on the north bank of the river thames | Wikipedia |
| ejac-flashback | when a picture or video is familiar to you | dude i've just had a ejac-flashback that chick was last nights wank material | Urban |
| notice | a displayed sheet or placard giving news or information | look out for the notice of the samaritans information evening in the end of september | CHA |
| worship | to participate in religious ceremonies | we worship at the church down the road | CODWOE |
| accessory navicular bone | a small bone located in the middle of the foot | the accessory navicular bone is one of the most common accessory ossicles, which sometimes become symptomatic | Sci-definition |
| able | having sufficient power, strength, force, skill, means, or resources of any kind to accomplish the object | – | Webster's Unabridged |
| abbreviation | an abbreviation is a shorter way to write a word or phrase | – | MultiRD |
| skew picture | an inaccurate or partial representation of a situation | – | Hei++ |

Table 4: Examples of entries available in 3D-EX.

| | Random train | Random validation | Random test | Lexical train | Lexical validation | Lexical test |
| --- | --- | --- | --- | --- | --- | --- |
| WordNet | 26,603 | 8,788 | 8,850 | 27,053 | 8,573 | 8,793 |
| CHA | 451,191 | 38,034 | 52,321 | – | 157,847 | 143,499 |
| Wiktionary | 84,111 | 28,127 | 27,952 | 98,607 | 29,176 | 28,323 |
| Wikipedia | 575,554 | 197,697 | 186,846 | 505,964 | 240,781 | 213,379 |
| Urban | 87,429 | 29,142 | 29,325 | 91,299 | 297,283 | 2481 |
| CODWOE | 37,774 | 12,755 | 12,608 | 93,737 | 12,609 | 13,166 |
| Sci-definition | 101,129 | 31,766 | 33,765 | 106,175 | 35,966 | 24,519 |
| Webster's Unabridged | 84,802 | 28,213 | 28,221 | 93,423 | 30,198 | 19,696 |
| MultiRD | 384,295 | 127,580 | 128,178 | 404,114 | 125,072 | 112,948 |
| Hei++ | 426 | 152 | 135 | 428 | 143 | 142 |

Table 5: Breakdown of 3D-EX unique entries per split type (random and lexical) and per split. Note that unique entries consist of <term, def., example, source> (first 7 rows) or <term, def., source> (bottom 3 rows).

For our use case, we test three variants of Instructor for encoding both words and definitions: (1) no instruction; (2) providing a generic description of the target text (i.e., "the sentence" and "the word"); and (3) providing a domain-specific description of the target texts (i.e., "the dictionary definition" and "the dictionary entry"). We show the results of the SBERT models in Table 7, and the Instructor results in Table 8.
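To make the evaluation protocol concrete, a minimal sketch of the SBERT baseline and its MRR scoring is shown below; constructing the candidate vocabulary from test terms and the brute-force ranking are simplifying assumptions:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def reverse_dictionary_mrr(definitions, gold_terms, vocab,
                           model_name="all-mpnet-base-v2"):
    """Rank every candidate term in `vocab` (a list of strings) for each
    query definition by cosine similarity, then score with MRR."""
    model = SentenceTransformer(model_name)
    term_emb = model.encode(vocab, normalize_embeddings=True)
    def_emb = model.encode(definitions, normalize_embeddings=True)
    sims = def_emb @ term_emb.T          # [|Q|, |vocab|] cosine scores
    rr = []
    for q, gold in enumerate(gold_terms):
        order = np.argsort(-sims[q])     # best-first ranking of terms
        rank = int(np.where(order == vocab.index(gold))[0][0]) + 1
        rr.append(1.0 / rank)            # reciprocal rank of the gold term
    return float(np.mean(rr))            # mean reciprocal rank
```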
We can see that, even without any instruction prepended to the embedder, the Instructor model outperforms vanilla SBERT models. Interestingly, the best results overall in both splits (random and lexical) are obtained by providing a generic description of the target words. In the random split it is better not to include instructions for the definitions, while in the lexical split the best-performing configuration involves providing detailed instructions for embedding the 3D-EX definitions. As a final piece of analysis, we perform experiments on both test sets with the best-performing model (based on the split type) to see which sources are harder to solve in the reverse dictionary task. From Table 9, it can be seen that Wikipedia and Urban are the most challenging resources for this task, which could be attributed to dataset size, to the large number of very similar definitions and terms, or to both. In contrast, Hei++ and Sci-definition are meant to capture unique terms; these are, by nature, more unique when compared to the rest of the lexicon, an insight we revealed when exploring dataset-specific similarities in Figure 1. ## 5 Conclusions and future work In this paper we have introduced 3D-EX, a dataset that unifies different encyclopedias and dictionaries into one single resource. We have conducted an in-depth analysis of the dataset across several splits (random vs lexical), as well as dictionary source classification and reverse dictionary experiments. Our results suggest that this dataset is both challenging for representation learning methods and promising as a resource for augmenting lexical semantics systems. It has also helped us unveil semantic properties in the different dictionaries and encyclopedias we have integrated into 3D-EX. For the future, we would like to further explore the potential of 3D-EX for downstream NLP tasks, incorporating more resources, and exploring multilingual variants. An additional avenue would be to explore the interaction of unorthodox dictionaries like Urban with traditional lexicographic resources in the context of controlled technical/jargon DM. Finally, leveraging 3D-EX as a resource for pretraining LMs, similarly to the DictBERT approach Chen et al. (2022), could help inform LMs with new, domain-specific and/or colloquial terms. \begin{table} \begin{tabular}{c r r} \hline \hline Model & Random & Lexical \\ \hline all-distilroberta-v1 & 8.41 & 11.38 \\ all-MiniLM-L6-v2 & 9.40 & 13.75 \\ all-mpnet-base-v2 & 10.98 & 15.34 \\ \hline \hline \end{tabular} \end{table} Table 7: MRR results of the SBERT models on the reverse dictionary task in the two 3D-EX test sets. \begin{table} \begin{tabular}{l l r r r} \hline \hline & & \multicolumn{3}{c}{word} \\ \cline{3-5} **Random** & & no & gen. & dict. \\ \hline \multirow{3}{*}{definition} & no & 14.18 & **14.71** & 14.56 \\ & gen. & 13.64 & 14.07 & 14.06 \\ & dict. & 14.19 & 14.59 & 14.57 \\ \hline & & \multicolumn{3}{c}{word} \\ \cline{3-5} **Lexical** & & no & gen. & dict. \\ \hline \multirow{3}{*}{definition} & no & 19.16 & 20.25 & 20.02 \\ & gen. & 18.70 & 20.04 & 19.86 \\ & dict. & 19.64 & **20.82** & 20.60 \\ \hline \hline \end{tabular} \end{table} Table 8: MRR results on reverse dictionary leveraging Instructor embeddings when using no instruction (no), a generic one (gen.), or one tailored to the task (dict.).
\begin{table} \begin{tabular}{l r r} \hline \hline Dataset & Random & Lexical \\ \hline WordNet & 32.97 & 42.27 \\ Wiktionary & 50.65 & 53.05 \\ Wikipedia & 9.25 & 9.19 \\ Urban & 18.47 & 17.49 \\ CODWOE & 39.74 & 46.89 \\ CHA & 30.82 & 35.86 \\ Sci-definition & 82.38 & 82.53 \\ Webster’s Unabridged & 30.53 & 34.11 \\ MultiRD & 16.69 & 27.41 \\ Hei++ & 96.79 & 94.49 \\ \hline \hline \end{tabular} \end{table} Table 9: Breakdown of the reverse dictionary results in terms of MRR for the two test sets (random and lexical) in 3D-EX. ## Ethics and Broader Impact Statement This paper is concerned with the automatic building of a dataset by combining publicly available information on the web. As a result, there could be potential for the presence of incorrect or harmful information in this derived dataset, especially if crowdsourced; however, we encourage collaborative efforts from the community to help address these risks. This concerns specifically vulgar, colloquial, or potentially harmful content in Urban Dictionary, which the authors of this paper do not endorse.
2310.16162
Brainchop: Next Generation Web-Based Neuroimaging Application
Performing volumetric image processing directly within the browser, particularly with medical data, presents unprecedented challenges compared to conventional backend tools. These challenges arise from limitations inherent in browser environments, such as constrained computational resources and the availability of frontend machine learning libraries. Consequently, there is a shortage of neuroimaging frontend tools capable of providing comprehensive end-to-end solutions for whole brain preprocessing and segmentation while preserving end-user data privacy and residency. In light of this context, we introduce Brainchop (http://www.brainchop.org) as a groundbreaking in-browser neuroimaging tool that enables volumetric analysis of structural MRI using pre-trained full-brain deep learning models, all without requiring technical expertise or intricate setup procedures. Beyond its commitment to data privacy, this frontend tool offers multiple features, including scalability, low latency, user-friendly operation, cross-platform compatibility, and enhanced accessibility. This paper outlines the processing pipeline of Brainchop and evaluates the performance of models across various software and hardware configurations. The results demonstrate the practicality of client-side processing for volumetric data, owing to the robust MeshNet architecture, even within the resource-constrained environment of web browsers.
Mohamed Masoud, Pratyush Reddy, Farfalla Hu, Sergey Plis
2023-10-24T20:17:06Z
http://arxiv.org/abs/2310.16162v1
# Brainchop: Next Generation Web-Based Neuroimaging Application ###### Abstract Performing volumetric image processing directly within the browser, particularly with medical data, presents unprecedented challenges compared to conventional backend tools. These challenges arise from limitations inherent in browser environments, such as constrained computational resources and the availability of frontend machine learning libraries. Consequently, there is a shortage of neuroimaging frontend tools capable of providing comprehensive end-to-end solutions for whole brain preprocessing and segmentation while preserving end-user data privacy and residency. In light of this context, we introduce Brainchop ([http://www.brainchop.org](http://www.brainchop.org)) as a groundbreaking in-browser neuroimaging tool that enables volumetric analysis of structural MRI using pre-trained full-brain deep learning models, all without requiring technical expertise or intricate setup procedures. Beyond its commitment to data privacy, this frontend tool offers multiple features, including scalability, low latency, user-friendly operation, cross-platform compatibility, and enhanced accessibility. This paper outlines the processing pipeline of Brainchop and evaluates the performance of models across various software and hardware configurations. The results demonstrate the practicality of client-side processing for volumetric data, owing to the robust MeshNet architecture, even within the resource-constrained environment of web browsers. Volumetric segmentation, MeshNet, MRI, 3D dilated CNN. ## I Introduction Extracting brain tissue from structural Magnetic Resonance Imaging (MRI) volumes and subsequent segmentation into gray and white matter, or more elaborate brain atlases, is essential to brain imaging analysis pipelines. Fostering the advancement of automatic medical image segmentation is vital to improving the precision and efficacy of clinical diagnoses. Clinical applications such as surgical planning, detection of brain atrophy, and visualization of anatomical structures heavily rely on MRI segmentation. However, for numerous researchers and radiologists, especially those in developing countries, establishing neuroimaging pipelines poses technological barriers. Offering these pipelines through browser-based platforms can contribute to democratizing computational approaches in these contexts. Nevertheless, leveraging the browser for neuroimaging applications entails confronting multiple challenges, including limitations in memory and computational resource management. Consequently, there exists a shortage of web-based neuroimaging tools capable of providing fast and reliable volumetric brain segmentation while maintaining strict end-user data privacy and residency. Despite the better accuracy and training convergence achieved by volumetric segmentation models compared to sub-volume and 2D segmentation models[12], the existing tools for segmentation in the browser either lack volumetric inference or need back-end support. 
While backend-based medical image applications raise privacy issues surrounding hosting or accessing raw user data, hybrid methods involving distributed deep learning processing between the client and the cloud have not yielded practical solutions regarding medical data privacy. This flags the importance of investigating in-browser tools as potential alternatives capable of resolving the data-privacy issue with low latency, since they enable the direct execution of deep learning models on the client side. By "client-side" and "browser inference," we refer to the entirety of the computational task being executed on the user side, eliminating the need to transfer data to remote servers for processing. However, despite the recent advancements in deep learning frameworks in JavaScript, such as TensorFlow.js and its model deployment and conversion techniques, tasks such as volumetric segmentation for MRI images, which typically entail substantial computational workloads, remain challenging for inference within the browser's resource-constrained environment. This work represents our innovative online pipeline Brainchop ([http://www.brainchop.org](http://www.brainchop.org)), designed to facilitate brain image processing and segmentation. Notably, Brainchop stands out as the first in-browser tool that enables scientists and clinicians to perform volumetric analysis of structural MRI utilizing pre-trained deep learning models, all without necessitating technical proficiency or the setup of AI solutions. It delivers valuable attributes, including data privacy, enhanced accessibility, scalability, low latency, user-friendly operation, elimination of installation requirements, and seamless cross-platform functionality while preserving MRI data privacy. Building upon our previous work [1], this paper delves into a meticulous analysis of Brainchop's performance characteristics across various models and resource configurations. ## II Methodology Brainchop, an open-source front-end application, is developed to enable MRI data resampling, preprocessing, segmentation, and postprocessing in the browser (Fig. 1). Notably, it can process the whole brain volume in a single pass for segmentation by using the lightweight and reliable MeshNet model [3]. MeshNet, as a variant of dilated convolutions [4], incorporates a volumetric option that enhances the accuracy of MRI inference while maintaining modest computational requirements. The MeshNet segmentation models are trained in PyTorch using the Human Connectome Project (HCP) dataset [5] and a processed FreeSurfer segmentation. Subsequently, the pre-trained models are converted to TensorFlow.js [6] to enable in-browser inference. Brainchop is designed to support T1-weighted MRI volume segmentation, with input expected in NIfTI format [7]. As a preprocessing step to obtain accurate results, the T1 image should be shaped to \(256^{3}\) and resampled to 1 mm isotropic voxels. This preprocessing task can be conveniently performed with Brainchop using micronvert.js, which employs Pyodide [8] to deploy the "conform" function from FastSurfer [9]. This function is responsible for reshaping, scaling, and resampling the raw T1 image data. Additionally, Brainchop integrates standard medical image preprocessing techniques to eliminate noisy voxels from the input and enhance MRI volume intensities, thus facilitating efficient in-browser inference with optimal results.
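Outside the browser, an approximation of this conform step can be scripted with nibabel (a sketch assuming a recent nibabel installation with scipy available; file names are placeholders):

```python
import nibabel as nib
from nibabel.processing import conform

# Load a raw T1-weighted volume and conform it to the 256^3, 1 mm isotropic
# grid that the Brainchop models expect.
t1 = nib.load("subject_T1.nii.gz")
t1_conformed = conform(t1, out_shape=(256, 256, 256), voxel_size=(1.0, 1.0, 1.0))
nib.save(t1_conformed, "subject_T1_conformed.nii.gz")
```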
To ensure the quality of the segmentation output, a 3D connected components algorithm is implemented within the pipeline postprocessing stage to filter out noisy voxels and regions resulting from the inference stage. Both the input MRI data and the resultant segmentation can be viewed using Papaya [10]. Additionally, Brainchop incorporates a 3D volume rendering functionality powered by the Three.js [11] library, enabling users to subjectively verify the accuracy of volumetric segmentation and enhance their visualization experience. All these functions are provided in a user-friendly interface that features simplicity, privacy preservation, and efficiency. ## III Model Training MeshNet is a feed-forward 3D convolutional neural network with dilated kernels. We trained a model of nine layers to segment brain tissue into Gray White Matter (GWM) labels, as illustrated in Fig. 2. Each layer incorporates 3D dilated convolutions with a specific padding setting and dilation factor carefully chosen to modify the receptive field for best capturing a broader range of contextual information from the input data without significantly increasing the number of network parameters. Fig. 1: The Brainchop high-level architecture allows converting pre-trained models in PyTorch and Keras to TensorFlow.js, enabling their importation into the Brainchop models list. The input MRI data can be handled in two ways: it can be passed as a complete volume to the inference model or divided into subvolumes to overcome memory limitations in web browsers. In the latter case, the inference output is generated by merging the subvolumes. It is important to note that the inference process may introduce 3D noisy regions, which can be attributed to biases, variances, and irreducible errors such as data noise. We have developed a 3D connected components algorithm to address this issue that effectively filters out these noisy regions. The volumetric dilated convolution can be formulated as follows: \[(k\text{*}_{l}f)_{(x,y,z)}\!=\!\sum_{\bar{x}=-a}^{a}\sum_{\bar{y}=-b}^{b}\sum_{\bar{z}=-c}^{c}k(\bar{x},\bar{y},\bar{z})f(x\!-\!l\bar{x},y\!-\!l\bar{y},z\!-\!l\bar{z}) \tag{1}\] Where \(a\), \(b\), \(c\) are the kernel \(k\) bounds on the \(x\), \(y\) and \(z\) axes, respectively, and \(l\) is the dilation factor specifying gaps between the kernel elements for configuration of the receptive field. Additionally, to enhance the performance and robustness of the network, each layer incorporates additional techniques, such as 3D batch normalization, ReLU activation, and 3D dropout regularization, as outlined in Table-I. More information about the MeshNet training tutorial is given in Section-V. The tutorial shows the inference of both Full-Volume and Sub-Volumes and an implementation of a custom DataLoader to handle large MRI volumes and streamline the training of the MeshNet networks. ### _DataLoader Implementation_ We implemented a custom DataLoader using the DataLoaderClass to facilitate data loading and preprocessing. This DataLoader effectively handles the following: #### III-A1 Data Loading Using the nibabel library [13], it loads the corresponding images and labels. This step ensures seamless integration of the dataset into our experiments. #### III-A2 Subvolumes Generation (optional) Leveraging the CubeDivider class, the DataLoader partitions the loaded images and corresponding labels into sub-cubes. This subdivision optimizes memory utilization throughout the training process.
#### III-A3 Data Preparation The DataLoader reshapes the subvolumes to match the desired input size for our neural network. Additionally, the labels are converted to one-hot encoding, simplifying multi-class classification tasks. #### III-A4 Data Batching To optimize training, the DataLoader organizes the preprocessed subvolumes into batches to accelerate the training process. ### _Metrics_ We use Dice metrics and Cross-Entropy loss for model training in our experiment to increase the model efficiency. #### III-B1 Dice Metrics During the model training, Dice scores for label maps are computed using binary masks and logical operations to quantify the intersection of pixels between the label maps. The Dice score is calculated using the formula below: \[DICE=\frac{2|X\cap Y|}{|X|+|Y|} \tag{2}\] Where \(X\) is the predicted mask and \(Y\) is the ground truth one. The Dice score quantifies the degree of overlap between the predicted and true labels, assigning a value of 1 when the segmentation results are identical. #### III-B2 Cross-Entropy Loss It is a commonly used loss function in machine learning for tasks such as classification. It measures the dissimilarity between predicted probabilities and true labels, quantifying how well a model performs. \[Cross\ Entropy\ Loss=-\sum(y\cdot\log(p)) \tag{3}\] * \(y\) is the true label or target value. * \(p\) is the predicted probability assigned by the model to the corresponding class or category. The advantage of the MeshNet architecture is its compact size and minimal number of parameters, making it suitable for in-browser inference. Meanwhile, the model can still achieve a competitive Dice score compared to the classical U-Net model, as shown in Table-II. ## IV Results Despite the high diversity of computational resources available on the user side, the overall success rate of Brainchop is around 82%, as shown in Fig. 3, and this percentage is expected to increase with the annual advances in computational resources. Fig. 3: Brainchop shows a success rate of 82% based on 1336 access instances. Fig. 2: MeshNet model architecture. Multiple volumetric segmentation tasks are available with Brainchop using our pre-trained models, converted to TensorFlow.js for in-browser inference with the WebGL backend. The tasks included brain masking, gray matter white matter (GWM) segmentation, and brain atlas models for 50 cortical regions and 104 cortical and subcortical structures. A list of the models and their performance is given in Table-IV. By conducting user research and collecting anonymized telemetry data, Brainchop demonstrated a high usability rate among the scientific community, with 1336 hits from its first release in May 2022 till the end of May 2023. A brief description of selected columns of the telemetry data and their unique values is given in Table-III. A sample of the data is available in the tool's public repository for exploration. We conducted a comprehensive analysis of the collected dataset, comprising both categorical and numerical variables, while focusing on analyzing the factors that affect the tool success rate. For exploring the tool Status column as the outcome, we established it as a binary variable indicating whether the tool succeeded or failed during the performed task. As a preprocessing step, the data is cleaned by excluding extreme outliers, and features with highly similar correlation patterns (correlation \(>0.95\)) are pruned, such as those related to the heap size.
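A minimal sketch of this correlation-based pruning step is given below; the column names and file name are hypothetical, not the actual telemetry schema:

```python
import pandas as pd

def prune_correlated(df: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Drop numeric columns whose absolute pairwise correlation exceeds threshold."""
    corr = df.corr(numeric_only=True).abs()
    cols = corr.columns
    drop = set()
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > threshold and cols[j] not in drop:
                drop.add(cols[j])  # keep the first column of each highly similar pair
    return df.drop(columns=list(drop))

# e.g., telemetry = prune_correlated(pd.read_csv("telemetry_sample.csv"))
```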
The selected dataset features (columns) and use cases (rows) are free of missing values, allowing us to proceed with the analysis without the need for imputation. The label encoder is used for categorical data encoding, while one-hot encoding is utilized with regression models to capture each categorical value's effect independently. The power analysis of the collected data is performed using the Chi-Square test for independence to determine the significant relation between the selected features and the tool status. The overall statistical power of the collected telemetry data was 0.963 for a desired significance level of 0.05, which reflects the adequate sample size of the data to correctly reject the null hypothesis if the alternative hypothesis is true. **Statistical analysis:** Statistical significance for null hypothesis testing was defined using \(95\%\) confidence intervals (\(P<0.05\)). In order to enhance Brainchop's performance, multiple interventions such as patching (sub-volumes) and cropping of the input data are applied. The inference models provided include full-volume and sub-volume (failsafe) models to meet the high diversity of existing computational resources and their possible limitations. The fail status shown in Table-V is mainly caused by limited GPU memory space, as evidenced by the higher success rate of sub-volume models versus full-volume models in Table-V. However, the main drawbacks of the patching approach (sub-volume models) are its slow inference time, lower accuracy, and the overhead cost of the merging step compared to full-volume inference, as shown in Fig. 4. Estimating the patching effect accurately in the light of causal analysis requires identifying the patching intervention as a treatment and isolating its effect from other potential or significant confounders. To determine the covariates that may confound the relationship between the patching effect and the tool success rate, we conducted a potential confounder analysis using the Chi-Square test with a significance level of 0.05. The list of potential confounders is filtered based on their p-values calculated by the Ordinary Least Squares (OLS) regression model. The results show the significance of cropping (i.e., Input Shape) in influencing the patching effect, besides the other less significant confounders. To make the patching treatment independent of the cropping confounding variable and estimate its effect, we used regression adjustment, which shows a patching effect of 10.4% on the success rate independent of the cropping effect. To demonstrate the isolated patching effect, we used the exclusion-of-samples technique shown in Table-VI to create homogeneous groups without cropping. This allows for a comparison between Sub-Volume and Full-Volume, removing the influence of the cropping effect on the tool success rate. From Table-VI, the cropping effect with full volume is more significant than the patching effect on the success rate. To validate the result, a multivariable analysis is applied to investigate the effect of the two treatments, cropping and patching, on the tool success rate. By including both treatments simultaneously, we can find their independent effects on the tool success rate while accounting for potential confounding or interactions.
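This multivariable step can be sketched with statsmodels as a linear probability model over both treatment flags (column and file names are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

# telemetry: one row per run; 'success', 'cropping' and 'patching' are 0/1 flags.
telemetry = pd.read_csv("telemetry_sample.csv")  # hypothetical file name
X = sm.add_constant(telemetry[["cropping", "patching"]])
model = sm.OLS(telemetry["success"], X).fit()
print(model.params)    # per-treatment effects on the success rate
print(model.pvalues)   # significance of each coefficient
```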
The results show the estimated effect of each treatment on the success rate, such that the cropping estimated coefficient is 0.0932, indicating that, on average, a one-unit increase in the cropping variable is associated with a 9.32% increase in the tool success rate, holding the patching effect constant. For the patching effect, it shows an estimated coefficient of 5.97%. However, the exclusion approach results in a reduced sample size that needs careful consideration to avoid biased estimation or lower statistical power. A more robust technique that avoids such a drawback and, meanwhile, considers the other less significant confounders in our analysis is the Randomized Controlled Trial (RCT). In that technique, a random assignment of the patching treatment is used while ensuring the randomization of other confounders across the treatment group to control the effect of those confounders. In that context, Inverse Probability of Treatment Weighting (IPTW) [14] can help reduce confounding bias in a dataset when estimating causal effects by attempting to mimic the characteristics of a randomized controlled trial. By reweighting the observations based on the estimated probabilities of treatment assignment, IPTW aims to balance the covariate distributions between treated and control groups, reducing the confounding bias. The Average Treatment Effect (ATE) on the entire sample can be estimated such that \(ATE=p(\mathrm{Outcome}=1\,|\,do(\mathrm{Treatment}=1))-p(\mathrm{Outcome}=1\,|\,do(\mathrm{Treatment}=0))\). In our case, it is the probability of success when we apply the treatment (e.g., patching or cropping) versus the probability when we do not. The estimations of the patching effect using IPTW show an increase in the Brainchop success rate by 6.23% due to patching the MRI into subvolumes, an increase in the inference time by 24.31 seconds, as is also evident from Fig. 4, and almost no change in the postprocessing time, with a slight decrease of 0.04 second. Fig. 4: Brainchop overall processing performance. (Left) Inference performance per model. (Right) The total samples box-plot for the preprocessing, subvolumes merging and postprocessing. The Atlas models (i.e., 50 and 104 labels) are memory-hungry. Consequently, volumetric cropping is an essential step for the MRI by using the brain masking model, which is applied to exclude the surrounding background from the MRI, resulting in a substantial reduction in volume size and a decrease in the allocated memory, thus helping make the parcellation possible in the browser. As presented in Table VII for full-volume inference, the Chi-Square test for the Status-Cropping contingency table indicates a statistically significant association between the success rate of Brainchop and the cropping effect (p-value \(2\times 10^{-9}\)). The statistical power analysis of the Table-VII sample size indicates a probability of 99.9% to correctly reject the null hypothesis. Estimating the cropping effect using IPTW shows an increase in the Brainchop success rate by 18.12% due to cropping the MRI input volume, a decrease in the inference time by 5.26 seconds, and a decrease in the postprocessing time by 6.83 seconds. Table-VIII shows that larger texture sizes can reduce memory fragmentation errors and increase the success rate. Performing the Chi-Square test shows a statistically significant association between the tool success rate and texture size (p-value 0.0024) in full-volume inference with a statistical power of 0.934 to correctly reject the null hypothesis.
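Such Chi-Square tests on a status-by-treatment contingency table can be reproduced with scipy; the counts below are placeholders, not our telemetry data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: treatment off/on; columns: failed/succeeded (placeholder counts).
table = np.array([[120, 380],
                  [ 60, 440]])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}, dof={dof}")
```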
When increasing the texture size to 32768, the texture size effect on Brainchop performance shows an increase in the tool success rate by 18.13%, a decrease in the inference time by 2.3 seconds, and a decrease in the postprocessing time by 5.70 seconds. Our results also show a marginal rise in the mean heap size and the number of logical CPU cores within the successful instances compared to failed ones, which can be explained as an increase in the browser's capability to handle concurrent tasks more efficiently by using web workers in parallel with the main browser thread. Such an approach can prevent bottlenecks, enhance asynchronous tasks, and reduce main-thread blocking due to resource-intensive computations. Fig. 5: Cohort Analysis - Success Rate by GPU Card Per Month. Fig. 6: Cohort Analysis - Success Rate by Model. Full-volume inference requires careful consideration to retain the efficiency of in-browser processing without memory leaks or loss of the WebGL context. In order to mitigate memory leakage and effectively handle the substantial memory requirements while simultaneously minimizing instances of failure, an inference strategy was adopted. This approach entails the progressive utilization of the MeshNet model on a layer-by-layer basis, coupled with the strategic disposal of the MRI tensor from the preceding layer. This tactic was implemented to alleviate memory-related challenges. **Limitations:** Despite the statistical power of the telemetry data, applying stratification analysis may lack sufficient subgroups due to the high diversity of computational resource configurations. The success rates over time by GPU in Fig. 5 and by model in Fig. 6 are mutually dependent, such that a model's success rate depends on the GPU in use and vice versa. For the brain masking model (Fast), although it has a high success rate owing to its moderate number of parameters, it only shows an average success rate when used as a pre-model for cropping input data before applying the Atlas models, which raises the need for further investigation. In general, Brainchop demonstrates a high success rate and processing speed for volumetric segmentation in the browser, with potential for further improvement. The success rate percentage is expected to increase with the continual advances in computational resources, supported by a consistent tendency of an increasing gap between the successful and failed tasks, as shown in Fig. 7. ## V Code Availability Brainchop source code is publicly available on GitHub ([https://github.com/neuroneural/brainchop](https://github.com/neuroneural/brainchop)). The PyTorch training pipeline is also provided in a Google Colab. A sample of the telemetry dataset is accessible with the Wiki step-by-step documentation. ## VI Conclusion Through our meticulous analysis, we have unveiled valuable insights into Brainchop's overall performance. Our analysis determined a statistically significant correlation between patching, cropping, texture size, and both the timing and success rate of Brainchop. Notably, Brainchop has exhibited a high success rate of 82%. This accomplishment and its potential for further enhancement underscore its promise as a browser-based neuroimaging solution. Our findings also highlight the need to refine the cropping techniques for better outcomes. Additionally, a more in-depth exploration into the current limitations of the tool holds the potential to provide further insights, which in turn can inform efforts to optimize Brainchop performance.
In summation, our analysis has not only shed light on the drivers of tool success rates but also provided metrics that can assist frontend tools in performing volumetric segmentation, thus enhancing the user experience significantly while maintaining data privacy. ## Acknowledgment The authors would like to thank Kevin Wang and Alex Fedorov for discussions and pre-trained MeshNet models. This work was funded by the NIH grant RF1MH121885. Additional support was provided by NIH R01MH123610, R01EB006841 and NSF 2112455.
2310.09042
Improving power-grid systems via topological changes, or how self-organized criticality can help stability
Cascade failures in power grids occur when the failure of one component or subsystem causes a chain reaction of failures in other components or subsystems, ultimately leading to a widespread blackout or outage. Controlling cascade failures on power grids is important for many reasons like economic impact, national security, public safety and even rippled effects like troubling transportation systems. Monitoring the networks on node level has been suggested by many, either controlling all nodes of a network or by subsets. This study identifies sensitive graph elements of the weighted European power-grids (from 2016, 2022) by two different methods. Bridges are determined between communities and "weak" nodes are selected by the lowest local synchronization of the swing equation. In the latter case we add bypasses of the same number as the bridges at weak nodes, and we compare the synchronization, cascade failure behavior by the dynamical improvement with the purely topological changes. The results are also compared if bridges are removed from networks, which results in a case similar to islanding, and with the addition of links at randomly selected places. Bypassing was found to improve synchronization the best, while the average cascade sizes are the lowest with bridge additions. However, for very large or small global couplings these network changes do not help, they seem to be useful near the synchronization transition region, where self-organization drives the power-grid. Thus, we provide a demonstration for the Braess' Paradox on continent-sized power grid simulations and uncover the limitations of this phenomenon. We also determine the cascade size distributions and justify the power-law tails near the transition point on these grids.
Géza Ódor, István Papp, Kristóf Benedek, Bálint Hartmann
2023-10-13T12:08:53Z
http://arxiv.org/abs/2310.09042v2
# Improving power-grid systems via topological changes, or how self-organized criticality can help stability ###### Abstract Cascade failures in power grids occur when the failure of one component or subsystem causes a chain reaction of failures in other components or subsystems, ultimately leading to a widespread blackout or outage. Controlling cascade failures on power grids is important for many reasons like economic impact, national security, public safety and even rippled effects like troubling transportation systems. Monitoring the networks on node level has been suggested by many, either controlling all nodes of a network or by subsets. This study identifies sensitive graph elements of the weighted European power-grids (from 2016, 2022) by two different methods. Bridges are determined between communities and "weak" nodes are selected by the lowest local synchronization of the swing equation. In the latter case we add bypasses of the same number as the bridges at weak nodes, and we compare the synchronization, cascade failure behavior by the dynamical improvement with the purely topological changes. The results are also compared if bridges are removed from networks, which results in a case similar to islanding, and with the addition of links at randomly selected places. Bypassing was found to improve synchronization the best, while the average cascade sizes are the lowest with bridge additions. However, for very large or small global couplings these network changes do not help, they seem to be useful near the synchronization transition region, where self-organization drives the power-grid. Thus, we provide a demonstration for the Braess' Paradox on continent-sized power grid simulations and uncover the limitations of this phenomenon. We also determine the cascade size distributions and justify the power-law tails near the transition point on these grids. ## I Introduction Blackouts and other failures frequently occur in stressed electrical power systems with low operational margins. Therefore, they have to adapt to changes in the use of electrical energy. Earlier power-grids were not originally designed for the deregulated markets that appeared in the 1990s; these grids transmitted large amounts of electrical power across interconnections. As both the grid and the operators were unable to handle fast-developing disturbances, the result was a global increase in major blackouts. Nowadays, the power industry has been addressing decarbonization needs with a large integration of renewable generation and electrification, which introduces strong fluctuations. Furthermore, power systems are significantly affected by the increasing number of climate-change-induced extreme weather conditions [1]. Thus, it is necessary to redesign the transmission and distribution grids to address these changes. Modelling blackouts and other failures is a great challenge, which has been attempted by various approximations. Earlier ones used a direct current (DC) approach, similar to sand-pile [2; 3; 4] or fiber-bundle-like [5; 6] models. These provided heavy-tailed, power-law (PL) statistics of blackout sizes similar to observations [2; 7] via self-organized criticality (SOC) [8]. The latter is generated by the competition of supply and demand, tuning power systems to a critical point, where the PL-s occur. However, SOC is not the only possible mechanism suggested to describe PL-s. The highly optimized tolerance (HOT) model has also been proposed [9], but it is probably more appropriate to describe certain types of failures without cascades [10].
The spectral analysis of outage duration times suggests that there exist cases for which HOT is more applicable. It has been proposed [9] that the competition of service capacity and failures can also lead to SOC in a reaction-diffusion type model, leading to PL distributed repair times. The PL distribution of city sizes, corresponding to power load, has also been hypothesized to be a possible reason for the PL outage cascades [11]. Later, alternating current (AC) models appeared, by solving the swing equation [12], equivalent to the second-order Kuramoto equation [13], set up for the phases of the voltages with the addition of some threshold criterion of line failures [14; 15; 16; 17; 18; 19]. As it is difficult to solve these nonlinear equations for large systems, linearization has also been used frequently [20]. Another major challenge is the heterogeneity of power-grid systems, which has been considered in various ways; see the discussion in [21]. Predicting, controlling and avoiding blackouts [15; 22], as well as helping to design less error-prone networks and methods, have been the subject of many other studies, see for example [23]. Most of them use the above approximations and try to isolate the most vulnerable points, by a frequency analysis of solitary nodes [24]. Here we contribute to the modelling of cascades based on the complete Kuramoto equations, without any linearization, by numerical simulations on large European high voltage (HV) power-grids. We investigate the effects of changing network topology on the synchronization and cascade failure dynamics, comparing several approaches. The first path is purely static: it is based on the network community analysis and aims to determine the effects of the addition or removal of bridges between communities. The second path is dynamic: it uses the solution of the swing equations and identifies the nodes of lowest local synchrony. Bypassing these nodes, via the addition of the same number of edges as bridge links, we can compare the methods. It seems that generally it is possible to get some gain over the purely static topological extensions. We also compare these results with the random addition of links. Power-grid extensions require large investments and are supposed to make the system operation more robust. Yet, counter-intuitively, increasing the capacity of existing lines or adding new lines may also reduce the overall system performance and even promote blackouts due to Braess' paradox [25; 26]. Braess' paradox was theoretically modeled [27; 28; 29; 30; 31; 32; 33], but has not yet been proven in realistically scaled power grids. Very recently a topological theory has been provided that reveals the key mechanism and predicts Braessian grid extensions from the network structure and a linearized power flow DC approximation [34]. Now, we extend this study by our dynamic AC analysis, suggesting that Braess' paradox does not show up near the synchronization transition, where self-organization drives the power-grid system. Recently, a similar conclusion has been drawn concerning the usefulness of islanding [19], i.e. improved stability following failure cascades in the neighborhood of the synchronization transition, and deficiency away from the transition region. It was also argued in recent years that (N-1) congestions are a direct consequence of the topological and reactance structure of the power grid and how these interact with the loadability of the lines.
As highlighted in [35], the imbalances in the reactance structure of the grid are a leading cause of congestions, and using the technique of shadow capacity analysis, strengthening of the grid can be carried out in a way that avoids poor power-flow relationships after an outage. These findings are also underpinned by the results of the present paper. ## II Methods and models ### Solving the massive Kuramoto synchronization equations The time evolution of power-grid synchronization is described by the swing equations [36], set up for mechanical elements (e.g. rotors in generators and motors) with inertia. It is formally equivalent to the second-order Kuramoto equation [12], for a network of \(N\) oscillators with phases \(\theta_{i}(t)\). Here we use a more specific form [19; 21; 24; 37], which includes dimensionless electrical parametrization and approximations for unknown ones: \[\ddot{\theta}_{i}+\alpha\ \dot{\theta}_{i}=P_{i}+\frac{P_{i}^{max}}{I_{i}\ \omega_{G}}\ \sum_{j=1}^{N}W_{ij}\ \sin\left(\theta_{j}-\theta_{i}\right)\,. \tag{1}\] In this equation \(\alpha\) is the damping parameter, which describes the power dissipation, or an instantaneous feedback [38]; we keep \(K:=P_{i}^{max}\) as a global control parameter, related to the maximum transmitted power between nodes; \(I_{i}=I\) inertia and \(\omega_{G}\) system frequency are kept constants in the lack of our knowledge; and \(W_{ij}\) is the adjacency matrix of the network, which contains admittance elements, calculated from impedances as described in [21]. The quenched external drive, denoted by \(P_{i}:=\omega_{i}^{0}\), which is proportional to the self-frequency of the \(i\)-th oscillator, carries a dimension of inverse squared time \([1/s^{2}]\), and describes the power in/out of a given node, when Eq. (1) corresponds to the swing equation (phases without amplitudes) of an AC power circuit. Here, as commonly done with the first-order Kuramoto model, the self-frequencies are drawn from a zero-centered Gaussian random variable, as the rescaling invariance of the equation allows transforming it within a rotating frame. For simplicity, one can assume that \(\omega_{i}(0)\) is drawn from the same distribution as \(\omega_{i}^{0}\) and numerically set \(\omega_{i}(0)=\omega_{i}^{0}\), amounting to taking \([s]\)=1. In our present study, the following parameter settings were used: the dissipation factor \(\alpha\) is chosen to be equal to 0.4 to meet expectations for power grids, with the \([1/s]\) inverse time physical dimension assumption, but we also tested the \(\alpha=3.0\) case, which can describe a system with stabilizing linear feedback [19; 37]. To solve the differential equations, in general we used the adaptive Bulirsch-Stoer stepper [39], which provides more precise results for large \(K\) coupling values than the fourth-order Runge-Kutta method. The nonlinearity introduces chaotic 'noise', even without stochasticity, thus a de-synchronization transition occurs by lowering \(K\). The solutions also depend on the actual quenched \(\omega_{i}^{0}\) self-frequency realization. To obtain reasonable fluctuations of the averages of measured quantities, we needed strong computing resources, using parallel codes running on GPU HPC machines. To obtain stronger synchronization solutions, the initial state was set to be phase synchronized: \(\theta_{i}(0)=0\), but due to the hysteresis, one can also investigate other uniform random distributions like: \(\theta_{i}(0)\in(0,2\pi)\).
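As an illustration of this setup, a minimal explicit integrator for Eq. (1) on a small random graph is sketched below; it uses a simple Euler-type stepper and placeholder parameters, unlike the adaptive Bulirsch-Stoer solver used for the production runs:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(50, 0.1, seed=0)
W = nx.to_numpy_array(G)          # stand-in for the admittance matrix W_ij
N = W.shape[0]
alpha, K, dt = 0.4, 10.0, 0.01    # damping, global coupling (K plays the role
                                  # of the P^max/(I*omega_G) prefactor), step
omega0 = rng.normal(size=N)       # quenched self-frequencies P_i

theta = np.zeros(N)               # phase-synchronized initial state
dtheta = omega0.copy()            # initial frequencies set to omega0

for _ in range(20000):
    # Coupling term: sum_j W_ij sin(theta_j - theta_i) for every node i.
    coupling = (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    ddtheta = -alpha * dtheta + omega0 + K * coupling
    dtheta += dt * ddtheta
    theta += dt * dtheta

R = np.abs(np.exp(1j * theta).mean())  # Kuramoto order parameter, cf. Eqs. (2)-(3)
print(f"R = {R:.3f}")
```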
The initial frequencies were set to be: \(\dot{\theta}_{i}(0)=\omega_{i}^{0}\). To characterize the phase transition properties, both the phase order parameter \(R(t)\) and the frequency spread \(\Omega(t)\), called the frequency order parameter, were studied. We measured the Kuramoto phase order parameter: \[z(t_{k})=r(t_{k})\exp\left[i\theta(t_{k})\right]=1/N\sum_{j}\exp\left[i\theta_{j}(t_{k})\right]. \tag{2}\] Sample averages for the phases \[R(t_{k})=\langle r(t_{k})\rangle \tag{3}\] and for the variance of the frequencies \[\Omega(t_{k},N)=\frac{1}{N}\langle\sum_{j=1}^{N}(\overline{\omega}(t_{k})-\omega_{j}(t_{k}))^{2}\rangle \tag{4}\] were determined, where \(\overline{\omega}(t_{k})\) denotes the mean frequency within each respective sample at time step \(t_{k}=1+1.08^{k}\), \(k=1,2,3...\). Sample averages were calculated for solutions with hundreds of independent self-frequency realizations for each control parameter, while for determining the PDF-s of the failure cascades about 20.000 samples were used to estimate the histograms. ### Cascade failure simulations We have extended the numerical solution of the Kuramoto equations with a threshold dynamics, such that in case of an overflow of power on the edges, we removed them during the simulation of a cascade failure. This method is similar to the one published in [14; 19]. Following a thermalization, which is started from a phase-ordered state and during which line-cuts are not allowed, we perturbed the system by removing a randomly selected link, in order to simulate a power failure event. Following that, if the ensuing power flow on a line between neighboring nodes was greater than a threshold, \[F_{ij}=|\sin(\theta_{j}-\theta_{i})|>T\,, \tag{5}\] so that the line is regarded as overloaded, we removed this link from the graph permanently and measured the total number of line failures \(N_{f}\) of the simulated black-out cascades of each realization, corresponding to different \(\omega_{i}(0)\) self-frequency values. Finally, we applied histogramming to determine the PDFs of \(N_{f}\). In the vicinity of criticality, one usually expects power-law distributions of the form \[p(N_{f})\sim N_{f}^{-\tau}\,, \tag{6}\] thus we plotted our results on the log-log scale. ### The power-grid networks In this study, various modifications of the European power-grids introduced in a previous work [21] were investigated. These are the EU16 (European 2016) and EU22 (European 2022) graphs, for which the backbone of the used network data is from the SciGRID project, which relies on the statistics of ENTSO-E and data obtained from OpenStreetMap (.osm) files. These contain information on the topology, the geographical coordinates of nodes, and the lengths, types and voltage levels of cables. Since acquiring data from .osm files is not always possible, the resulting data set may be incomplete. To resolve the problem, we made assumptions in [21] to substitute the missing data in order to obtain fully weighted networks. We used the giant component of the networks, giving \(N=13\,420\) nodes linked with \(L=17\,749\) edges for the EU16 grid and \(N=7411\) nodes connected by \(L=10\,298\) edges for the EU22 network. We have also performed graph theoretical analysis on them to determine graph invariants and their community structure. The resulting graphs are summarized in Table 1. As we can see, the EU22 network has a lower number of communities, nodes and links.
Other graph measures, like the degree and cable length distributions, also suggest that the EU22 is incomplete, but it still provides an excellent possibility to study the effects of the network topology on the synchronization dynamics [21]. ### Creation of bridges between communities Detecting communities in networks aims to identify groups of nodes in the network that are more densely connected to each other than to the rest of the network. While several clustering methods exist, they split into hierarchical and non-hierarchical methods. Hierarchical methods build a hierarchy of communities by recursively dividing the network into smaller and smaller subgroups, while non-hierarchical methods directly assign nodes to communities. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Community} & Size & \(\langle k\rangle\) & Size & \(\langle k\rangle\) \\ & (EU22) & (EU22) & (EU16) & (EU16) \\ \hline 1 & 924 & 2.72 & 4285 & 2.83 \\ 2 & 479 & 2.70 & 2526 & 2.66 \\ 3 & 2016 & 2.84 & 1527 & 2.67 \\ 4 & 698 & 3.06 & 1461 & 2.72 \\ 5 & 595 & 2.94 & 1455 & 2.69 \\ 6 & 1059 & 2.66 & 966 & 2.77 \\ 7 & 1237 & 2.68 & 638 & 2.57 \\ 8 & 16 & 2.81 & 289 & 2.06 \\ 9 & 332 & 2.18 & 277 & 2.99 \\ 10 & 55 & 2.74 & 26 & 3.07 \\ 11 & - & - & 22 & 3.31 \\ 12 & - & - & 6 & 2.66 \\ \hline \hline \end{tabular} \end{table} Table 1: Community sizes and average degrees for different data-sets, for the resolution \(\Gamma=10^{-4}\). We refer to sizes here as the number of nodes in the respective community and provide their average degree. For detecting the community structure, we chose the hierarchical Louvain [40] method for its speed and scalability. This algorithm runs almost in linear time on sparse graphs; therefore, it can be useful on generated test networks with increased size. It is based on modularity optimization. The modularity quotient of a network is defined by [41] \[Q=\frac{1}{N\langle k\rangle}\sum_{ij}\left(A_{ij}-\Gamma\frac{k_{i}k_{j}}{N\langle k\rangle}\right)\delta(g_{i},g_{j}), \tag{7}\] the maximum of this value characterizes how modular a network is. Here \(A_{ij}\) is the weighted adjacency matrix, containing the admittances calculated in [21]. Furthermore, \(k_{i}\), \(k_{j}\) are the weighted node degrees of \(i\) and \(j\), and \(\delta(g_{i},g_{j})\) is 1 when nodes \(i\) and \(j\) were found to be in the same community, or 0 otherwise. \(\Gamma\) is the resolution parameter, which allows a more generalised community detection, merging together smaller communities. In network analysis, a bridge (Br) refers to a link or an edge that connects nodes from different communities or components of a network. Bridges are crucial, because they establish connections between otherwise separate parts of a network, facilitating the flow of information or influence between different communities. Removing bridges can lead to a fragmentation of the network into isolated components. In the EU16 network, we selected 1250 bridges of all communities from the "true" communities detected, where we optimized the modularity at \(\Gamma=1\). With the Leiden [42] algorithm we did not find better results: the separation was worse, with 449 communities connected by 1281 bridges. In the case of the EU22 network, for \(\Gamma=1\) we found 94 communities, connected by 507 bridges. To increase stability, we applied a simple duplication of bridges. Alternatively, we also tried to remove almost all bridges between the communities, which leads to an "islanded" graph, where cascade failures are more localized.
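A toy sketch of this community and bridge selection with networkx is given below (it requires a recent networkx providing louvain_communities; the synthetic graph stands in for the real grids):

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

G = nx.connected_watts_strogatz_graph(200, 4, 0.05, seed=1)  # toy grid
communities = louvain_communities(G, resolution=1.0, seed=1)
label = {node: idx for idx, comm in enumerate(communities) for node in comm}

# Bridges in the sense used here: edges whose endpoints lie in
# different communities.
bridges = [(u, v) for u, v in G.edges() if label[u] != label[v]]
print(len(communities), "communities,", len(bridges), "bridges")

# Duplicating bridges (Br) would add parallel edges in a MultiGraph, while
# removing most of them without disconnecting the graph (Br-) yields the
# "islanded" variant discussed in the text.
```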
To avoid working with a fully disconnected network, we removed bridges randomly, starting with the biggest number of communities. We continued removing bridges until the network still remained connected (Br-). Results of these network decompositions are published in [21]. ### Creation of bypasses at weak nodes of the local Kuramoto solution We performed dynamic stability analysis of the network, by identifying weak nodes via the local order parameter, defined as \[r_{i}(t)=\frac{1}{N_{\mathrm{i.neigh}}}\left|\sum_{j}^{N_{\mathrm{i.neigh}}}A_{ij}e^{i\theta_{j}(t)}\right|. \tag{8}\] This method is a bit more precise than just finding the solitary nodes of outstanding frequencies [24], as it is based on a large ensemble average and considers interactions with the neighboring nodes. Having the weak nodes of the grid identified, we propose a way to improve the stability by creating some extra links, called _bypass_es (Bp), which interconnect the critical points of the network, hence making the graph more robust. While there are several ways to achieve this, we present one of the simplest ones: by creating so-called triangles or doubling links between weak neighbors. Both methods are used in actual power grid development to increase the redundancy of supply. In graph-theoretical language, a _triangle_ is composed of three nodes, each connected by links to the other two. In a mathematical sense, triangles are used to calculate the _global clustering coefficient_, which characterizes the robustness of the network: \[C=\frac{3\times\mathrm{number\;of\;triangles}}{\mathrm{number\;of\;all\;triplets}}. \tag{9}\] The motivation behind creating triangles is to increase the robustness of the network by enhancing the clustering coefficient. Using Eq. (8), we group the nodes into different synchronization categories, and by selecting the worst synchronized group, we can implement a so-called "bypass algorithm". The algorithm does the following: it goes through the list of the worst synchronized nodes. For each weak node, it checks the neighborhood. Figure 1: Here we show the (red) nodes that have been selected with the bypass cutoff method and happen to be on the (black) bridge edges between communities. These represent 10% of the nodes selected by this method.
This makes the method more general and comparable with the bridge and community analysis, where the topology of the network is given, and we cannot control the newly added components, except by modifying the modularity resolution \(\Gamma\) of the community detection. ## III Results ### Comparison of topological changes To see the differences between the new links, which were added by the static bridges and the dynamically determined bypasses, we have plotted their overlaps in Fig. 1 and differences in Fig.4. Red links correspond to links where bypasses are added but no bridges, while the black ones to the opposite We can see that the red links are concentrated in the middle of Europe, dividing East and West, and in the UK. The black ones are mainly on the Iberian Peninsula, France and Ukraine among smaller communities, obtained by \(\Gamma=1\). ### Results for phase, frequency and cascade sizes #### iii.2.1 EU16 results We have calculated the synchronization stability measures following a thermalization process with \(t_{Th}=2000\) and after that, by allowing cascade failures for \(t_{Cut}=1000\) iterations. We started the systems from phase synchronized states, for different global coupling \(K\) values by solving the swing equations (1), to achieve higher synchronization as in the case of the second order Kuramoto equation, an initial condition dependent, hysteretic behavior occurs. Thus if the simulation starts from a state with random phases, the solver arrives at lower \(R\) and higher \(\Omega\) steady state values, and the transition point shifts to higher \(K\)-s. We utilized the adaptive Bulrich-Stoer stepper, because the synchronization transition happens at large \(K\) values. Averaging has been done for \(100-1000\) initial random Gaussian self Figure 3: New links are added to the EU-HV 2016 network, denoted by yellow lines, using the bypass algorithm, where low local synchronization is obtained by the solving Eq.(1). \(r_{i}\) is encoded by the colors. Red dots are the weakest nodes. Figure 2: Sketch of the bypass algorithm on a small sample. We colored with red the badly synchronized nodes, obtained by the local Kuramoto order parameter \(r\), with green the two closest neighbors, connecting to the weakest node. Going through the red (weakest) nodes we perform the following "bypass algorithm": either create a triangle with the help of the two closest nodes, which are not weak (green triangle, with one blue edge), if in the neighborhood there is no other weak node. If there are two neighboring weak nodes, we double their connecting edge, (grey link doubled with a blue). The blue links mark the newly added edges to the network. frequency distributions as well as via temporal averaging in the last decades of the steady states. Fig.5 shows that the Kuramoto order parameter \(R\) increases slowly from zero to \(\simeq 1\) for the weighted, randomly supplemented, bridged, bypassed and truncated cases. The highest synchronization values are obtained around \(K_{c}\simeq 6000\), in the original, weighted case, where the fluctuations have a peak, marking the neighborhood of a SOC state. The gain in \(R\) is the best \(\simeq 60\%\) for the bypassed case, which was obtained by strengthening the weakest points in the graph, as discussed before. But the bridged network also shows a considerable increase in \(R\) near \(K_{c}\): \(\simeq 50\%\). We have not found such an improvement for the global frequency spreading order parameter \(4\)\(\Omega\), as shown on Fig. 
9 in the Supplementary Material. The differences among the modified networks and the original one are small over the whole scanned \(K\) parameter space. But this may not mean that local improvements are not possible; i.e., the zero-centered Gaussian initial \(\omega(i,0)\)-s can split into multi-centered distributions with slightly shifted peaks, as the empirical data of [24; 43] show. Possibly, this kind of topological supplementation is not as effective as islanding of certain weak domains. We have also compared the average cascade sizes using \(T=0.99\) and found \(\simeq 50\%\) smaller blackouts near \(K_{c}\simeq 6000\) for the bridged case compared to the original one, as shown in Fig. 6 in the Supplementary Materials. Here we used \(t_{max}=1000\) time steps for the maximum size of observation of cascades following the initialization and the initial random line cuts. The bridged case provides the smallest cascade sizes, but the bypassed case also improves the results with respect to the original network in the region \(20<K<20000\). However, far from the synchronization transition region, i.e. for \(K<20\) or \(K>20000\), there is no such benefit. In fact, the original networks perform better for small global couplings (total transmitted power). This is Braess's paradox, which is even more visible for \(\alpha=3\) in Fig. 10 in the Supplementary Material. But in real power-grids, due to an SOC mechanism, systems operate near the synchronization transition, where this phenomenon does not seem to occur. Probability distributions of the \(N_{f}\) are also shown in the inset of Fig. 6 slightly above \(K_{c}\), at \(K=7000\) and \(K=7500\), where the occurrence of heavy tails can be observed, similarly to an unweighted network [19]. We fitted the tails by PL-s for \(N>100\), resulting in a decay exponent \(\tau_{t}\simeq 2.6\), somewhat larger than for unweighted networks [19; 38]. These findings have been investigated further for \(\alpha=3\), as in previous publications [19; 38], corresponding to larger dissipation, or equivalently to instantaneous feedback mechanisms. Fig. 10 in the Supplementary Material shows that the effects of the network extensions are much more pronounced than for \(\alpha=0.4\). Again, the bypassed case provides the best performance for phase synchronization stability, after thermalization or after the end of the cascade. The cascade size distribution exhibits fat tails at \(K\simeq K_{c}=12000\) and at the threshold \(T=0.99\), which can be fitted by a PL for \(N>3\) with an exponent \(\tau_{t}\simeq 2.6\), similarly to \(\alpha=0.4\). In the original network we can see some improvement, similar to what is seen on the unweighted EU16 network [19], which we attributed to islanding effects. Figure 4: Here we show the difference between the set of (red) nodes and links selected with the bypass method for the EU16 network and the set of black nodes and links that are on the bridges. The edges were inserted between the nodes selected by the bypass method, increasing the network's modularity score. Figure 5: Comparison of numerical solutions of \(R\) and its variance \(\sigma(R)\) at the end of the thermalization in the steady state, for the original (EU16), randomly extended (Ran), bridged (Br) and bypassed (Bp) EU16 networks at \(\alpha=0.4\). We have also simulated an almost complete islanding by removing most of the 1250 bridges without cutting the network's full integrity. The remaining, islanded network shows very low levels of \(R\) in the steady state.
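For concreteness, the cascade measurements described above can be sketched as follows, assuming the convention of related swing-equation studies that the instantaneous flow on a line \((i,j)\) is proportional to \(W_{ij}\sin(\theta_{j}-\theta_{i})\) and that a line is cut once this flow exceeds the fraction \(T\) of its capacity \(W_{ij}\); the `step` integrator and all names are illustrative.

```python
import numpy as np

def run_cascade(W, theta, step, T=0.99, t_cut=1000, seed=0):
    """W: symmetric weighted adjacency matrix; theta: thermalized phases;
    step(W, theta) -> theta advances the swing equations by one time step.
    Returns N_f, the number of failed lines."""
    rng = np.random.default_rng(seed)
    W = W.copy()
    live = np.argwhere(np.triu(W) > 0)          # trigger: cut one random live line
    i, j = live[rng.integers(len(live))]
    W[i, j] = W[j, i] = 0.0
    n_failed = 1
    for _ in range(t_cut):
        theta = step(W, theta)
        flow = W * np.abs(np.sin(theta[None, :] - theta[:, None]))
        for i, j in np.argwhere(np.triu(flow > T * W, k=1)):
            W[i, j] = W[j, i] = 0.0             # overloaded line fails
            n_failed += 1
    return n_failed
```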
As the Kuramoto order parameter changes, the addition of bridges moves the \(\sigma(R)\) peaks towards lower couplings, meaning that the global synchronization occurs at lower couplings. We can also see improvements in the frequency spread results in Fig. 11 of the Supplementary Material, except for the bridge removal. The best global \(\Omega\)-s can be achieved by the bridge additions, followed by the bypass technique. We do not find improvements following the cascades; the diluted networks exhibit larger \(\Omega\)-s. But the average cascade sizes can be improved a lot by the addition of bypasses or bridges, as shown in Fig. 10. The bridges seem to be the most efficient for \(\langle N\rangle\), followed by the bypasses in the synchronization transition region \(30<K<20000\). Below \(K=30\) we can see crossing lines, corresponding to the change of the tendencies, as for \(\alpha=0.4\). Rather large cascades appear for the truncated network, so this kind of islanding truncation method does not increase network stability. #### iii.2.2 EU22 results To test the robustness of the results, we repeated the analysis done in the previous section for the EU22 network. Here we show the results for \(\alpha=3.0\) only; for the \(\alpha=0.4\) case they are similar, but with smaller deviations between the different network solutions. As the EU22 network is smaller than the EU16, the differences between the \(R(K)\) results are smaller. Thus, we also show \(\Delta(R)=R(K)_{mod.}-R(K)_{org.}\) in the inset of Fig. 7. Again the bypassed network is the most stable, with about a 10% increase near the transition point \(K_{c}\simeq 125\). The synchronization transition points can be read off from the peaks of \(\sigma(R)\) on the graph; they do not seem to depend too much on the network version. The bypassed network result is followed by the bridged case, with a 5% maximum increase in \(R(K_{c})_{mod.}\), while the random edge addition hardly provides any improvement. By removing bridges between communities, except a few to maintain single connectedness, the remaining network becomes very unstable, as indicated by the magenta diamond symbols. The advantage of the bypassed network over the others becomes more visible in the case of \(\Omega\) at \(\alpha=3.0\) (see Fig. 12 in the Supplementary Material). The frequency deviations remain small, and the bridge removal increases the spread. Figure 6: Comparison of dynamic simulation results of the average cascade sizes \(\langle N_{f}\rangle\) at \(T=0.99\) for the original, randomly extended, bridged and bypassed EU16 networks, using \(\alpha=0.4\). Note the crossings of lines for small and large \(K\)-s, corresponding to Braess's paradox. Inset: PDF of line-cuts for \(T=0.99\), \(K=7000\) (circles) and \(K=7500\) (boxes); dashed line: PL fit for \(N_{f}>10\). Figure 7: Comparison of numerical results of \(R\) and \(\sigma(R)\) at the end of the thermalization in the steady state for the original (EU22), two randomly extended versions (Ran1, Ran2), bridged (Br) and bypassed (Bp) EU22 networks, using \(\alpha=3.0\). The inset shows the \(R\) deviations of the different networks from the original. The improvement is the best near the synchronization transition \(K_{c}\simeq 150\), especially for the bypasses. The average cascade sizes also show trends similar to those of the EU16 grids, but now the bypassed setting proves to be the best. The worst scenario arises if bridges are removed (Fig. 8).
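The comparison metrics used throughout this section reduce to a few lines of analysis code. A minimal sketch, assuming the ensemble-averaged \(R(K)\) and \(\sigma(R)(K)\) curves are already computed on a common \(K\) grid:

```python
import numpy as np

def compare_networks(K, R_org, R_mod, sigma_R_org):
    delta_R = R_mod - R_org          # Delta(R) = R(K)_mod - R(K)_org
    i_c = np.argmax(sigma_R_org)     # the sigma(R) peak marks the transition K_c
    return delta_R, K[i_c], delta_R[i_c] / R_org[i_c]
```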
## IV Conclusions This study extended former large-scale EU power-grid and cascade simulations [19] with edge weights, as described in [21]. Different network topology optimization strategies were compared for the EU16 and EU22 HV power-grids. For the addition of the same number of new edges, the best improvement of the global phase synchronization was provided by the dynamically obtained bypasses at the weak nodes. This enhancement is followed in efficiency by the static bridge additions. For the cascade sizes, the bridge method proved to be the winner for the EU16 network. These improvements are effective in the middle range of couplings, near the synchronization point, where these systems presumably tune themselves by self-organization. This is similar to our findings for the usefulness of islanding [19]. What can be the reason behind this phenomenon? Clearly, phase chaoticity is maximal near the synchronization transition, which can help to redistribute the power away from local overloads and increase the stability against cascade failures. Investigating further the effects of such 'noise' will be a target of future research. Our present results also provide a possible range of control parameters where Braess's paradox may take place. Further network modification methods, like the introduction of DC lines [44] or the consideration of different voltage amplitudes, could also be interesting research directions. We have confirmed again, as in [19; 38], that near the synchronization point PL distributed cascade sizes occur, in agreement with the historical data. We have also cross-checked the usefulness of the network improvements against random link additions and bridge removals, which clearly provided much worse stability and cascade-size behaviors. This work provides possibilities for further generalization and may help designers in improving the next generation of power-grids. One of the most striking conclusions is that systems benefit from SOC not only by optimizing resource allocation, but an increased stability against failures and network changes also appears in the optimal control parameter ranges. Another valuable finding is that, while the addition of links is generally considered to strengthen the structure of the grid, such additions have to consider not only the topological parameters but the electric characteristics of the power lines as well. While the proper placement of new lines may help to optimize power flows through the grid, in the case of outages improperly placed lines could also be the driving force towards cascading failures. The understanding of these mechanisms requires the use of heterogeneous and weighted network representations, which will stay in the focus of our future work as well. ###### Acknowledgements. Support from the Hungarian National Research, Development and Innovation Office NKFIH (K128989) and from the ELKH grant SA-44/2021 is acknowledged. We thank KIFU for the access to the national supercomputer network, Jeffrey Kelling for developing the GPU HPC code and Shengfeng Deng for his comments and the maintenance of our local computing resources.
2304.02088
Quantum networks with neutral atom processing nodes
Quantum networks providing shared entanglement over a mesh of quantum nodes will revolutionize the field of quantum information science by offering novel applications in quantum computation, enhanced precision in networks of sensors and clocks, and efficient quantum communication over large distances. Recent experimental progress with individual neutral atoms demonstrates a high potential for implementing the crucial components of such networks. We highlight latest developments and near-term prospects on how arrays of individually controlled neutral atoms are suited for both efficient remote entanglement generation and large-scale quantum information processing, thereby providing the necessary features for sharing high-fidelity and error-corrected multi-qubit entangled states between the nodes. We describe both the functionality requirements and several examples for advanced, large-scale quantum networks composed of neutral atom processing nodes.
Jacob P. Covey, Harald Weinfurter, Hannes Bernien
2023-04-04T19:34:13Z
http://arxiv.org/abs/2304.02088v1
# Quantum networks with neutral atom processing nodes ###### Abstract Quantum networks providing shared entanglement over a mesh of quantum nodes will revolutionize the field of quantum information science by offering novel applications in quantum computation, enhanced precision in networks of sensors and clocks, and efficient quantum communication over large distances. Recent experimental progress with individual neutral atoms demonstrates a high potential for implementing the crucial components of such networks. We highlight latest developments and near-term prospects on how arrays of individually controlled neutral atoms are suited for both efficient remote entanglement generation and large-scale quantum information processing, thereby providing the necessary features for sharing high-fidelity and error-corrected multi-qubit entangled states between the nodes. We describe both the functionality requirements and several examples for advanced, large-scale quantum networks composed of neutral atom processing nodes. ## I Introduction and grand vision The development of large-scale quantum networks [1; 2; 3] will usher in an era of novel applications of quantum technology, which include cryptographically-secured communication [4], distributed or blind quantum computing [5], and sensor and clock networks with precision approaching the fundamental quantum limit [6; 7]. Such a network will consist of a mesh of quantum nodes, which we refer to as "quantum processing units" (QPUs), interconnected with quantum links capable of efficient distribution of quantum states over the whole system (see Fig. 1A). The quantum network will in many ways operate analogously to the classical internet in which classical computers or sensors constitute each node, but it will also face unique challenges due to the fragility of quantum information and the inability to clone a quantum state for signal amplification [8]. In spite of significant progress in recent years, the realization of a large-scale network poses a number of challenges. First, the quantum systems must provide an optical interface that connects their states with quantum states of light to enable remote entanglement generation (REG) over a link (see Fig. 1B). If the distance between nodes exceeds a threshold, a quantum repeater scheme must be employed [9], in which entanglement is distributed between distant nodes by first sharing entanglement over intermediate links with quantum repeater (QR) stations that are then connected together (see Fig. 1A). Second, as the REG process to share entanglement over a link is stochastic, it does not always succeed and needs to be repeated until successful, which can take a significant amount of time. Therefore, quantum memories based on long-lived states are required to maintain the quantum states at the nodes and QR stations with high fidelity. Third, for connecting the links using entanglement swapping [10; 11], deterministic quantum logic operations are required at QRs and at the nodes (see Fig. 1C). Fourth, since the remote entanglement has limited fidelity that is even further reduced when connecting many intermediate links, "entanglement purification" across the entire link is required [12; 13; 14]. Purification is also a stochastic process; if it fails, the whole process on this part of the link has to be repeated (see Fig. 1C), thus requiring even longer storage times in the quantum memories - often approaching the second-scale. 
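As a rough, back-of-the-envelope illustration of why storage times approaching the second scale arise (all numbers below are arbitrary assumptions, not taken from this Perspective or any experiment): if each of \(n\) links makes REG attempts that succeed with probability \(p\) per attempt, the memories must hold their Bell pairs until the slowest link succeeds before swapping can proceed.

```python
import numpy as np

def memory_time_estimate(n_links=4, p=1e-4, t_attempt=1e-5,
                         n_trials=10_000, seed=1):
    """Monte Carlo sketch: attempts per link are geometric(p); swapping waits
    for the slowest link, so each memory bridges the gap to that moment."""
    rng = np.random.default_rng(seed)
    attempts = rng.geometric(p, size=(n_trials, n_links))
    slowest = attempts.max(axis=1)
    t_ready = slowest * t_attempt                       # time until all links hold a pair
    t_hold = (slowest[:, None] - attempts) * t_attempt  # storage time per memory
    return t_ready.mean(), t_hold.max(axis=1).mean()

t_ready, t_hold = memory_time_estimate()
print(f"mean time to connect all links: {t_ready:.3f} s, "
      f"mean worst-case storage time: {t_hold:.3f} s")
```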
Eventually, active error correction [15; 16] will be required to enable the requisite long coherence times and to store the distributed states. Individual neutral atoms have the potential to implement many highly desirable features of quantum network nodes including efficient light-matter interfaces - potentially at telecom wavelengths [17; 18; 19] - based on optical cavities [3; 20], minute-scale coherence and memory times [21; 22; 23; 24], multi-qubit processing capabilities [25; 26; 27; 28], scalability to hundreds of qubits [29], and even high-fidelity mid-circuit readout [30; 31; 32]. Accordingly, we envision long-distance networks with QPUs and QRs as arrays of individually-controlled atoms within optical cavities (see Fig. 1B). In general, the QPUs and QRs could contain two types of qubits: communication qubits and data qubits, which are used to create remote Bell pairs and to process quantum information within the node, respectively. Here, we present a Perspective on the combination of recent advances on research with individual neutral atoms, from which near-term developments will constitute a large step towards realizing our vision for robust quantum networks. Although we focus only on neutral atoms, we note that many hardware platforms are actively being pursued for the realization of this vision. Nitrogen-vacancy centers in diamond is perhaps the most advanced plat
2306.14093
Broadband Diffractive Solar Sail
The transverse radiation pressure force and acceleration are compared for two parametrically optimized designs: prismatic and two-pillar metasurface gratings. The numerical results were cross-verified with both Maxwell stress tensor and modal analysis. Solar blackbody irradiance was assumed for wavelengths ranging from 0.33 [um] to the grating cutoff at 1.5 [um], encompassing 83% of the solar constant. This multi-objective optimizer study found that neither design comprised of Si3N4 performed as well as those corresponding to a low refractive index, low mass density material. The predicted transverse acceleration of the optimized low-index metasurface grating is compared to that of a state-of-the-art reflective solar sail.
Prateek R. Srivastava, Ryan M. Crum, Grover A. Swartzlander Jr
2023-06-25T01:56:05Z
http://arxiv.org/abs/2306.14093v1
# Broadband Diffractive Solar Sail ###### Abstract The transverse radiation pressure force and acceleration are compared for two parametrically optimized designs: prismatic and two-pillar metasurface gratings. The numerical results were cross-verified with both Maxwell stress tensor and modal analysis. Solar blackbody irradiance was assumed for wavelengths ranging from 0.33 \(\mu\)m to the grating cutoff at 1.5 \(\mu\)m, encompassing 83% of the solar constant. This multi-objective optimizer study found that neither design comprised of Si\({}_{3}\)N\({}_{4}\) performed as well as those corresponding to a low refractive index, low mass density material. The predicted transverse acceleration of the optimized low-index metasurface grating is compared to that of a state-of-the-art reflective solar sail. ## 1 Introduction The in-space propulsion of sailcraft via solar radiation pressure was originally pioneered in the 1920s by Tsander and Tsiolkovsky [1, 2]. In contrast to rockets, which both transport significant amounts of fuel mass and make discrete orbit-changing burns, solar sails can attain extraordinarily high velocities given a low mass and continuous acceleration. Space organizations such as NASA, JAXA, and the Planetary Society have improved the technology readiness level of solar sails in recent years, culminating in an assortment of proposed space science missions [3]. The advent of solar sailing has stimulated advanced concepts that consider the mission objectives as part of the sail design. For example, missions having a spiral trajectory toward or away from the sun benefit from a sail having an optimal "lift" force perpendicular to the sun line. To achieve lift, a traditional reflective sail must be tilted away from the sun; consequently the maximum lift cannot be achieved owing to the reduced projected illumination area. In contrast, optical scattering mechanisms like diffraction provide alternative means of transferring photon momentum to the sail in a preferred sun-facing orientation [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. The maximum transverse force on the sail occurs when sunlight is uniformly scattered at 90\({}^{\circ}\) with respect to the surface normal of a sun-facing sail. Figure 1: Schematic diagram of a solar sail with constituent (A) prism and (B) subwavelength pillar elements of period \(\Lambda\). The sail diffracts incident light \(\vec{k}_{i}\) by \(\theta_{m}\) into \(\vec{k}_{m}\) owing to the grating vector \(\vec{K}\), resulting in a net radiation pressure force \(\vec{F}\). ## 2 Theory To advance the understanding of diffractive sails we explore two designs: a triangular prismatic grating and a metasurface grating comprised of two pillars. Two material strategies are analyzed for each design. First we consider an arbitrary non-dispersive dielectric material having a refractive index \(n_{1}\) placed on a thin substrate of index \(n_{2}=1.5\). Finite difference time domain (FDTD) methods are used to account for internal and external reflections of both polarization components of light, and moreover, the angular scattering distribution across a broad band of optical frequencies. Likewise, we determine the angular scattering distribution when the grating and thin substrate are made with Si\({}_{3}\)N\({}_{4}\). The schematic illustration shown in Fig.
1 depicts a portion of a flat, rigid, infinitely periodic grating with period \(\Lambda\) in the \(x\), \(z\)-plane of incidence for a sun-facing configuration, comprised of either (A) prism elements or (B) pillars on a thin substrate. Structural flexing and non-normal incidence angles are beyond the scope of this baseline study. The grating period \(\Lambda=1.5\) \([\mu\)m\(]\), or equivalently the grating frequency \(\tilde{\nu}=c/\Lambda=200\) \([\)THz\(]\), was selected from a consideration of the spectral cut-off condition, the prism mass, and diffraction effects. The fraction of blackbody irradiance cut off from diffraction decreases with increasing value of \(\Lambda\), whereas the mass of a prism varies as \(\Lambda^{2}\). A large value of the transverse acceleration generally requires negligible spectral cut-off and low mass, which, combined with a diffraction analysis, provides a value of roughly \(\Lambda=1.5\) \([\mu\)m\(]\). Light is transmitted or reflected into discrete diffraction angles \(\theta_{m}\) measured with respect to the back surface normal as depicted in Fig. 1. In the reference frame of the sail, the incident and scattered wavelengths are equal, and thus the respective wave vectors may be expressed as \(\vec{k}_{i}=k\hat{z}\) and \(\vec{k}_{m}=k\left(\cos\theta_{m}\ \hat{z}+\sin\theta_{m}\ \hat{x}\right)\), where \(k=2\pi/\lambda\). The diffraction angles are governed by the grating equation: \(\sin\theta_{m}=m\lambda/\Lambda\), assuming normal incidence. We note that \(\cos\theta_{m}=\pm\sqrt{1-\sin^{2}\theta_{m}}\), where \(+(-)\) corresponds to transmitted (reflected) light. The \(m^{\rm th}\) order photon momentum transfer efficiency imparted to the sail at the optical frequency \(\nu=c/\lambda\) may be expressed as \(\vec{\eta}_{\nu,m}=(\vec{k}_{i}-\vec{k}_{m})/k=(1-\cos\theta_{m})\ \hat{z}-\sin\theta_{m}\ \hat{x}\), where \(c\) is the speed of light, and normal incidence is assumed. For a light source having a spectral irradiance distribution \(\tilde{I}(\nu)\), the net momentum transfer efficiency \(\vec{\eta}\) may be found by integrating over all frequencies and summing over all allowed diffraction orders for both polarization modes [12]. For an unpolarized source like the sun, we assume the spectral irradiance is equally divided into \(s\) and \(p\) polarization states. The net radiation pressure force on the sail may be expressed as \(\vec{F}=F_{0}\vec{\eta}\), where \(F_{0}=I_{0}A/c\), \(A\) is the sail area and \(I_{0}\) is the irradiance. For example, the solar blackbody irradiance between \(\nu_{\rm min}\) and \(\nu_{\rm max}\) of a band-limited blackbody source a distance \(r\) from the sun may be expressed as \[I_{0}=\frac{R_{S}^{2}}{r^{2}}\int_{\nu_{\rm min}}^{\nu_{\rm max}}\tilde{I}(\nu)d\nu=\frac{R_{S}^{2}}{r^{2}}\frac{2\pi h}{c^{2}}\int_{\nu_{\rm min}}^{\nu_{\rm max}}\frac{\nu^{3}\ d\nu}{\exp(h\nu/k_{B}T)-1} \tag{1}\] where \(R_{S}=6.957\times 10^{8}\) \([\)m\(]\) is the solar radius, \(h=6.626\times 10^{-34}\) \([\)J\(\cdot\)s\(]\) is the Planck constant, \(k_{B}=1.381\times 10^{-23}\) \([\)J\(/\)K\(]\) is the Boltzmann constant, and we assign \(T=5770.2\) \([\)K\(]\) as the effective absolute temperature of the sun. Below we assume \(r\) corresponds to \(1\) \([\)AU\(]\). The case \(\nu_{min,max}=0,\infty\) corresponds to the so-called solar constant, \(I_{sun}=1360\) \([\)W\(/\)m\({}^{2}]\). Values of \(I_{0}\) are plotted in Fig.
2 as a function of the grating period for \(\nu_{min}=\tilde{\nu}=c/\Lambda\) and two different values of \(\nu_{max}\): \(\infty\) (blue line) and \(900\) \([\)THz\(]\) (red line). The case used for our FDTD model, \(\lambda_{min}=0.333\) \([\mu\)m\(]\) and \(\lambda_{max}=\Lambda=1.5\) \([\mu\)m\(]\) (\(\nu_{min}=200\) \([\)THz\(]\) and \(\nu_{max}=900\) \([\)THz\(]\)), includes up to four diffraction orders and spans \(83\%\) of the solar spectrum. Although wider bandwidths are of interest, FDTD run times become prohibitively long. Following Ref. [12], the net radiation pressure force on the sail owing to a band-limited source may be expressed as \[\vec{F}^{s,p}=\frac{A}{c}\int_{\nu_{min}}^{\nu_{max}}\sum_{m=M_{\nu}^{-}}^{M_{\nu}^{+}}\tilde{I}_{m}^{s,p}(\nu)\;((1-\cos\theta_{m})\;\hat{z}-\sin\theta_{m}\;\hat{x})\;\mathrm{d}\nu \tag{2}\] where \(\tilde{I}_{m}^{s}(\nu)\) and \(\tilde{I}_{m}^{p}(\nu)\) respectively correspond to the value of the spectral irradiance scattered into the \(m^{\mathrm{th}}\) diffraction order for the \(s\) and \(p\) polarization states, and where \(\theta_{m}\) depends on frequency owing to the grating equation, which may be expressed as \(\sin\theta_{m}=mc/\nu\Lambda\). The frequency-dependent cut-off mode numbers at normal incidence are given by \(M_{\nu}^{\pm}=\pm\mathrm{INT}[\nu/\tilde{\nu}]\) (or equivalently \(\pm\mathrm{INT}[\Lambda/\lambda]\)), where INT represents the integer value of the argument rounded toward zero. Figure 2: Fraction of integrated solar black body spectral irradiance for the range \(\nu_{min}=c/\Lambda\) to \(\nu_{max}\), where \(I_{sun}=1360\) [\(\mathrm{W/m^{2}}\)]. Insert: Black body spectral irradiance with range \(\nu_{min}=200\) [\(\mathrm{THz}\)] (\(\Lambda=1.5\) [\(\mu\)m]) and \(\nu_{max}=900\) [\(\mathrm{THz}\)] (\(0.83I_{sun}\)). Arrows: Range of maximum mode number \(M\). Figure 3: FDTD Schematic: Unit cell of period \(\Lambda\) of (A) prism and (B) meta gratings with plane wave source (red line), field monitors (blue lines), and perfectly absorbing boundary layers (green areas). In a lossless system having no guided surface waves that extend to infinity, we
The force exerted across the area \(L_{y}\times\Lambda\) of an infinitely period grating may therefore be expressed \[\begin{split}\vec{F}^{s,p}&=\int_{\nu_{\min}}^{\nu_ {\max}}\left(\int_{\Lambda L_{y}}\left((\overline{\tilde{T}}_{\nu,ij}^{s,p} \cdot d\vec{S})_{z=-z_{0}}+(\overline{\tilde{T}}_{\nu,ij}^{s,p}\cdot d\vec{S })_{z=+z_{0}}\right)\right)d\nu\\ &=L_{y}\int_{0}^{\Lambda}\left((-T_{xx}-T_{zz})_{z=-z_{0}}+(T_{ xx}+T_{zz})_{z=+z_{0}}\right)dx\end{split} \tag{6}\] where \(z_{0}\) is an arbitrary distance from the grating, and the final integral includes the frequency-integrated stress tensor components \(T_{xx}\) and \(T_{zz}\). ## 3 Numerical Methods We used the open source FDTD numerical solver MEEP [17] to solve Eq.s (4) - (6), making use of fast built-in "methods" like ForceSpectra to calculate forces in a specified ForceRegion. To cross-validate the force values we randomly compared them to values obtained using Eq.(2), this time using diffraction mode options in MEEP. In both cases, Bloch periodic boundary conditions were employed. The power spectrum of a broadband source in MEEP is defined as the distribution function GaussianSource(fcen, fwidth) where fcen and fwidth are respectively the center and width of the Gaussian distribution. Force calculations are made in the frequency domain and we scaled them to correspond to the solar blackbody spectral irradiance. The red line in Fig. 3 depicts a planar light source propagating in the \(\hat{z}\) direction. The blue lines represent so-called monitors where the electromagnetic fields \(\tilde{E}_{\nu}^{s,p}\) and \(\tilde{B}_{\nu}^{s,p}\) are evaluated for the determination of the Maxwell stress tensor, and where alternatively the spectral irradiance \(\tilde{I}_{m}^{s,p}\left(\nu\right)\) may be determined to evaluate Eq. (2). The green lines in Fig. 3 represent perfectly matched layers. The square numerical grid elements were set to \(\delta x=\delta z=20\) [nm]. The simulation ran until either \(E_{z}\) or \(H_{z}\) decayed to \(10^{-6}\) of the peak value. The focus of this study was to determine optimized parameters of the two structures depicted in Fig. 3, both having the same period \(\Lambda=1.5\) [\(\mu\)m]: (A) a prismatic grating and substrate having four optimization parameters \(n_{1}\), \(n_{2}\), \(h_{1}\), and \(t\); and (B) a metasurface comprised of two pillars and a substrate having nine optimization parameters \(n_{1}\), \(n_{2}\), \(h_{1}\), \(h_{2}\), \(w_{1}\), \(w_{2}\), \(x_{1}\), \(x_{2}\), and \(t\). We employed a multi-objective optimizer NSGA-II (with 40 agents, 40 offspring, 150 generations) [18] with the range of parameter values listed in Table 1. The objectives are to achieve the largest values of transverse force for both polarizations and to minimize the mass. A representative set of 40 solutions (called Pareto-optimal) were obtained. The same procedure was followed for silicon nitride (\(n_{\mathrm{Si_{3}N_{4}}}\)) structures, but in this case \(n_{1}=n_{2}\) and \(h_{1}=h_{2}\). Silicon nitride is relatively stable in a space environment, its optical properties are well characterized, and its lithographic fabrication techniques are mature. 
A solar sail is typically used to achieve a spiral trajectory toward or away from the sun. In this case, the flight time may be minimized when the transverse (lift) component of acceleration \(F_{x}/M_{\mathrm{sc}}\) is a maximum, where \(M_{sc}=m_{\mathrm{sail}}+m_{\mathrm{pl}}\) is the total mass of the sailcraft, \(m_{\mathrm{sail}}\) is the mass of the diffractive sail material, \(m_{\mathrm{pl}}\) is the mass of the payload and structural support mechanisms, and \(F_{x}=F_{x}^{s}+F_{x}^{p}\). The transverse acceleration is optimized when both \(F_{x}^{s}\) and \(F_{x}^{p}\) are maximized and \(m_{\mathrm{sail}}\) is minimized. The sail mass of our two designs may be expressed as \[m_{\mathrm{sail}}^{\mathrm{prism}} =\left(\frac{1}{2}\rho_{1}h+\rho_{2}t\right)N_{x}^{2}\Lambda^{2}=\left(\frac{1}{2}\rho_{1}h+\rho_{2}t\right)A \tag{8a}\] \[m_{\mathrm{sail}}^{\mathrm{meta}} =\rho_{1}(N_{x}w_{1}h_{1}+N_{x}w_{2}h_{2})N_{x}\Lambda+\rho_{2}N_{x}^{2}\Lambda^{2}t\] (8b) \[=(\rho_{1}w_{1}h_{1}/\Lambda+\rho_{1}w_{2}h_{2}/\Lambda+\rho_{2}t)A\] where \(N_{x}\) is the number of grating periods across the sail, and \(A\) is the area of a square sail. Ignoring the payload mass (\(m_{pl}=0\)) and writing the transverse component of force \(F_{x}=I_{0}A\eta_{x}/c=m_{\mathrm{sail}}a_{x}\), we obtain the transverse acceleration for our unladen structures: \[a_{x}^{\mathrm{prism}}=\frac{I_{0}}{\alpha c}\ \frac{\eta_{x}}{\frac{1}{2}n_{1}h+n_{2}t} \tag{9a}\] \[a_{x}^{\mathrm{meta}}=\frac{I_{0}}{\alpha c}\ \frac{\eta_{x}}{n_{1}(w_{1}\mathfrak{f}_{1}+w_{2}\mathfrak{f}_{2})+n_{2}t} \tag{9b}\] where \(\mathfrak{f}_{1,2}=h_{1,2}/\Lambda\) is the fill factor, and for convenience we associate the refractive index and mass density with a proportionality factor \(\alpha\): \(\rho_{1,2}\equiv\alpha n_{1,2}\). Using the space-qualified polyimide material CP1 [20] as an example, with a specific gravity s.g. = 1.54 and a mean refractive index of 1.57, we obtain \(\alpha=0.98\times 10^{3}\) [kg/m\({}^{3}\)]. For our silicon nitride structures we instead combine its specific gravity, s.g. = 3.17 [21], with the mean index, 2.02, to obtain \(\alpha=1.57\times 10^{3}\) [kg/m\({}^{3}\)]. As seen in Eq. 9, the transverse acceleration is independent of the sail area and is implicitly dependent on the grating period \(\Lambda\) via the efficiency factor \(\eta_{x}\) (which is found by numerically determining the transverse force \(F_{x}\)). \begin{table} \begin{tabular}{c c} \hline \(\mathbf{x}\in\) & \([x_{1,2},w_{1,2},h_{1,2},n_{1,2},t]\) \\ \(\mathbf{max}:\) & \(F_{x}^{s}(\mathbf{x}),\ F_{x}^{p}(\mathbf{x})\) \\ \(\mathbf{min}:\) & mass(\(\mathbf{x}\)) \\ **such that** : & \(1.5\leq n_{1,2}\leq 3.5\) \\ **such that** : & \(-\Lambda/2\leq x_{1,2}\leq\Lambda/2\) \\ **such that** : & \(0\leq w_{1,2},h_{1,2}\leq\Lambda\) \\ **such that** : & \(0.1\mu m\leq t\leq 0.5\mu m\) \\ \hline \end{tabular} \end{table} Table 1: Multi-Objective Optimization Scheme: Nine variables, three objectives, and four constraints.
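As a quick numerical companion to Eqs. (8) and (9), the following sketch evaluates the unladen transverse acceleration for both designs. The band-limited irradiance \(I_{0}\approx 0.83\,I_{sun}\) and the CP1-based \(\alpha\) are taken from the text; the efficiency value \(\eta_{x}\approx 0.21\) in the example is an assumed round number, roughly consistent with the tabulated force of design B.

```python
C = 2.998e8  # speed of light [m/s]

def a_x_prism(eta_x, n1, h, n2, t, I0=0.83 * 1360.0, alpha=0.98e3):
    """Eq. (9a): unladen transverse acceleration of the prism design [m/s^2]."""
    return I0 / (alpha * C) * eta_x / (0.5 * n1 * h + n2 * t)

def a_x_meta(eta_x, n1, w1, h1, w2, h2, n2, t, Lam=1.5e-6,
             I0=0.83 * 1360.0, alpha=0.98e3):
    """Eq. (9b): unladen transverse acceleration of the metasurface design."""
    f1, f2 = h1 / Lam, h2 / Lam                  # fill factors h_{1,2}/Lambda
    return I0 / (alpha * C) * eta_x / (n1 * (w1 * f1 + w2 * f2) + n2 * t)

# illustrative call with the design-B geometry of Table 2 (eta_x assumed)
print(a_x_meta(eta_x=0.21, n1=1.55, w1=0.32e-6, h1=1.12e-6,
               w2=0.16e-6, h2=1.26e-6, n2=1.5, t=0.1e-6))
```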
## 4 Results & Analysis Forty representative Pareto-optimal solutions are plotted in Fig. 4 for the two gratings having arbitrary refractive indexes (A) and (B), and for the two gratings comprised of Si\({}_{3}\)N\({}_{4}\) (C) and (D). The net transverse radiation pressure force \(F_{x}\) is plotted against the total mass of the sail, \(m_{\rm sail}\). In all cases a trend in the data appears: higher mass sails provide higher forces. To select the optimal design for each structure we use the greatest value of the transverse acceleration \(a_{x}=F_{x}/m_{\rm sail}\) as the deciding factor (see the straight line in Fig. 4). The parameters for the Pareto-optimal solution that intersects this line are tabulated in Table 2 for the four different cases. We find that both the prismatic and metasurface structures having arbitrary refractive indexes are able to produce large values of \(F_{x}\), as is evident in Fig. 4 for Case A and Case B. However, owing to the lower mass of the metasurface structure, its optimal acceleration \(a_{x}=1080\,\left[\mu{\rm m}/s^{2}\right]\) is 48% greater than that of the prism grating. The Si\({}_{3}\)N\({}_{4}\) structures, Case C and Case D, show significantly lower values of the optimized acceleration. These values may be compared with a conventional aluminized polyimide sail [22], which is roughly 3 \(\left[\mu{\rm m}\right]\) thick and achieves a momentum transfer efficiency of roughly 90% of the ideal value of 0.77: \(a_{x}=680\,\left[\mu{\rm m}/s^{2}\right]\). This comparison suggests that an optimized metasurface sail is a competitive alternative to a conventional reflective sail. However, amongst the many unknown fabrication, packaging, unfurling, and space weathering issues is whether a large robust metasurface grating can be fabricated on a thin (\(<1\,\left[\mu{\rm m}\right]\)) substrate [14]. To better understand the spectral force characteristics of the four sails examined in this study, we plot the transverse spectral force distribution \(F_{\nu,x}=F_{\nu,x}^{s}+F_{\nu,x}^{p}\) in Fig. 5. The blue line represents the FDTD-obtained values corresponding to the Maxwell stress tensor calculations, whereas the circles represent the values corresponding to our FDTD modal analysis. The excellent agreement between these two approaches provides a level of cross-validation of the methods. Fluctuations of the value of \(F_{\nu,x}\) are indicative of pronounced diffractive variations of the transmitted and reflected light at different optical frequencies, as expected for a small period grating [4]. Also plotted in Fig. 5 are theoretical values of force for the ideal limit \(\eta_{x}=1\) (black line) and the ideal reflective sail \(\eta_{x}=0.77\) (red line): \(F_{x,\nu}=\eta_{x}\tilde{I}(\nu)A/c\). These results suggest that the diffractive sails explored in this study may equal or exceed the acceleration of a reflective sail only if there is a small-mass advantage of the former.
These results suggest that the diffractive sails explored in this study may equal or exceed the acceleration of a \begin{table} \begin{tabular}{c c c c c} \hline Parameters & A & B & C & D \\ \hline \(h_{1}\left[\mu{\rm m}\right]\) & 0.76 & 1.12 & 1.02 & 0.62 \\ \(h_{2}\left[\mu{\rm m}\right]\) & - & 1.26 & - & \(h_{1}\) \\ \(w_{1}\left[\mu{\rm m}\right]\) & - & 0.32 & - & 0.16 \\ \(w_{2}\left[\mu{\rm m}\right]\) & - & 0.16 & - & 0.24 \\ \(x_{1}\left[\mu{\rm m}\right]\) & - & 0.06 & - & 0.38 \\ \(x_{2}\left[\mu{\rm m}\right]\) & - & 0.44 & - & 0.1 \\ Prism Angle & 26.9\({}^{\circ}\) & - & 34.2\({}^{\circ}\) & - \\ \(n_{1}\) & 2.43 & 1.55 & Si\({}_{3}\)N\({}_{4}\) & Si\({}_{3}\)N\({}_{4}\) \\ \(n_{2}\) & 1.5 & 1.5 & Si\({}_{3}\)N\({}_{4}\) & Si\({}_{3}\)N\({}_{4}\) \\ \(t\left[\mu{\rm m}\right]\) & 0.1 & 0.1 & 0.1 & 0.11 \\ Force [nN] & 785 & 787 & 722 & 416 \\ mass [\(\times 10^{-3}\) kg] & 1.07 & 0.73 & 1.93 & 0.84 \\ \(a_{x}\) [\(\mu\)m/s\({}^{2}\)] & 731 & 1080 & 373 & 494 \\ \hline \end{tabular} \end{table} Table 2: Optimized parameters and cost function values for (A) prism and (B) meta gratings of arbitrary dispersionless materials, and (C) prism and (D) meta gratings for Si\({}_{3}\)N\({}_{4}\), each with period \(\Lambda=1.5[\mu{\rm m}]\), \(L_{y}=1\) [m], \(L_{x}=N\Lambda=1\) [m], \(A=L_{x}L_{y}\). reflective sail only if there is a small-mass advantage of the former. The prism and pillar designs suffer from the effects of external and internal reflections which can scatter light that opposes the desired transverse scattering direction. For example front surface reflections from the prism in Fig. 3 (A) have positive values of \(k_{x}\) which oppose the transmitted (refracted) rays. Those reflected rays carry 17% of incident beam power owing to Fresnel reflections. Less than two thirds of the incident radiation is refracted out the back surface owing to internal reflections and shadowing effects from the steep side facets. It is yet unknown whether the added mass of anti-reflection coatings would provide increased the transverse acceleration. Other unknowns that are beyond the scope of this paper include the practical limits of assumptions about the rigidity of the sail, the coherence properties of the incident sunlight, and whether the sail can be packaged and unfurled without changing its optical properties. ## 5 Conclusions We performed FDTD simulations coupled with a NSGA-II multi-objective optimizer to determine design parameters for four different grating structures, each having a period of 1.5 [\(\mu\)m] and a sail area of 1 [m\({}^{2}\)]. The small grating period was selected to satisfy a small desired mass and a marginal cutoff wavelength of the solar blackbody spectrum. Our optimization study included 3 objectives and up to 9 variables, as well as both s and p polarization. The transverse component of radiation pressure force was determined for a truncated solar black body radiator (200-900 [THz] or equivalently, 0.33 to 1.5 [\(\mu\)m]) at 1 [AU] for the purpose of two-orbit changing maneuvers in space. An optimized metasurface grating comprised of two pillars per period was found to provide 48% more transverse acceleration than an optimized prism grating owing to Figure 4: Pareto optimal solutions for (A) prismatic and (B) metasurface gratings having arbitrary refractive indexes, and for (C) prismatic and (D) metasurface gratings comprise of silicon nitride. A sun-facing square sail of area 1[m\({}^{2}\)] illuminated with a band-limited solar black body is assumed. 
The optimal transverse acceleration \(a_{x}\) for each case is determined from the slope of the straight line, and the corresponding design parameter values for the intersecting points are given in Table 2. We found that silicon nitride did not perform well for either the prism or the two-pillar metasurface design. Although none of the structures provided radiation pressure force values exceeding those of an ideal flat reflective sail, the diffractive sail may nevertheless provide an acceleration advantage if the proposed sun-facing diffractive sail spacecraft has a lower total mass than a reflective sailcraft. The design of alternatives to flat reflective sails is an emerging area of research, and we therefore believe continued exploration of diffractive designs such as hybrid reflective/transmissive structures will provide more efficient solar sails in the future. ## Funding National Aeronautics and Space Administration (NASA) (80NSSC19K0975, 80MSFC22F0165), Johns Hopkins University Applied Physics Lab (177864). ## Acknowledgment We are grateful to Charles (Les) Johnson and Andrew F. Heaton (NASA George C. Marshall Space Flight Center, Huntsville, AL), and to Amber L. Dubill (Johns Hopkins Applied Physics Laboratory) for discussions related to solar sailing. We also thank Rajesh Menon and Apratim Majumder (U. Utah, Salt Lake City, UT) for meta-material and FDTD modeling discussions. ## Disclosures The authors declare no conflicts of interest. ## Data availability Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2303.11735
Tensor networks for quantum machine learning
Once developed for quantum theory, tensor networks have been established as a successful machine learning paradigm. Now, they have been ported back to the quantum realm in the emerging field of quantum machine learning to assess problems that classical computers are unable to solve efficiently. Their nature at the interface between physics and machine learning makes tensor networks easily deployable on quantum computers. In this review article, we shed light on one of the major architectures considered to be predestined for variational quantum machine learning. In particular, we discuss how layouts like MPS, PEPS, TTNs and MERA can be mapped to a quantum computer, how they can be used for machine learning and data encoding and which implementation techniques improve their performance.
Hans-Martin Rieser, Frank Köster, Arne Peter Raulf
2023-03-21T10:46:56Z
http://arxiv.org/abs/2303.11735v1
# Tensor Networks for Quantum Machine Learning ###### Abstract Once developed for quantum theory, tensor networks have been established as a successful machine learning paradigm. Now, they have been ported back to the quantum realm in the emerging field of quantum machine learning to assess problems that classical computers are unable to solve efficiently. Their nature at the interface between physics and machine learning makes tensor networks easily deployable on quantum computers. In this review article, we shed light on one of the major architectures considered to be predestined for variational quantum machine learning. In particular, we discuss how layouts like MPS, PEPS, TTNs and MERA can be mapped to a quantum computer, how they can be used for machine learning and data encoding and which implementation techniques improve their performance. ## 1 Introduction Quantum computation is widely believed to set a new paradigm in computation. Utilizing quantum phenomena allows solving certain problems[1] far more efficiently than classical binary algorithms. This raises hope that quantum implementations of other tasks may also provide quantum advantages. One of the applications that could benefit from access to the high dimensional Hilbert spaces of quantum computers is machine learning (ML). ML is a data driven approach for solving complex problems. An ML algorithm generates a model from training data that can be used to make predictions against previously unseen data. Quantum machine learning (QML) could advance learning by improved generalization to unknown data[2], higher noise robustness and the need for less training data[3], and could provide a more natural approach to quantum data analysis circumventing intermediate measurements[4] or generally a better computational complexity scaling[5]. Promising candidates for QML architectures are tensor networks (TN). They provide a structured approach for handling large objects with tensor structure which carry high amounts of correlated information, like quantum states. Initially developed to store and process physical states of many-body quantum systems in numerical simulations[6, 7], TNs also turned out to be useful for ML applications. Their approach to realizing learning architectures is complementary to neural networks. As the TN description uses a (quantum) state and operator formulation, the transfer to a quantum computer can be done naturally. In this review, we focus on the application of TNs for QML. We will begin with a short introduction to the classical TN theory including optimization and ML approaches in Section 2. Then, we will discuss how to apply these concepts to a quantum computer in Section 3 and the encoding of data to quantum states for ML applications in Section 4. We will not cover many aspects of classical TNs in detail. For a deeper technical dive into TNs, the reader may refer to a general introduction[8] and the reviews on specific layouts[9, 10] or decomposition and optimization techniques[11, 12, 13]. Applications are many-body quantum systems[14], nonlinear system identification[15] and classical ML[16, 17]. The field of TN-QML is just developing, and notations and terminology vary throughout the community. Due to their origin in quantum theory, some authors even call ML with classical TNs "quantum machine learning"[18]. In our opinion, a more suitable term would be _quantum-inspired_ here. Furthermore, one can argue that variational quantum circuits (VQC)[19] require classical optimization and therefore are hybrid.
In this article however, we will use the following convention: methods fully evaluated without a quantum computer will be called _classical_. Methods developed for quantum computers that only require classical optimization of weights will be called _quantum_ TNs (QTN), as full quantum computation is still far out of reach. The term _hybrid_ will be used for methods that combine QML with a classical data processing structure, e.g. pre-training or data pre- and post-processing. ## 2 Classical Tensor Networks ### Introduction on Tensors and Tensor Networks TNs are a decomposition of large tensorial structures into several connected low-rank tensors (see Figure 1 (a)). Tensors are multidimensional arrays and therefore generalizations of vectors and matrices. While a matrix has two indices, a tensor may have an arbitrary number of indices. Technically, tensors describe objects from, and maps between, tangent and cotangent spaces. A tensor may have regular (lower) and dual (upper) indices, depending on whether an index refers to objects from a tangent space or from a cotangent space. Each of these spaces may have different dimensions. A single tensor may have both types of indices and therefore connect to both tangent and cotangent spaces. The rank of a tensor corresponds to its number of free indices. The electromagnetic field tensor \(F_{ab}\) from relativistic physics, for example, is a rank-two tensor with two four-dimensional space-time indices, and the Riemann curvature tensor from general relativity \(R^{a}{}_{bcd}\) is a rank-four object with one dual (\(a\)) and three regular (\(b\), \(c\), \(d\)) indices. Free regular indices can be contracted with free dual indices by summing over all dimensions of this index. The Einstein sum convention is a convenient form to express this contraction: having the same index twice automatically implies a summation \[-\frac{1}{4\mu_{0}}F_{ab}F^{ab}=-\frac{1}{4\mu_{0}}\sum_{a,b=0}^{3}F_{ab}F^{ab}. \tag{1}\] As writing these tensors with indices can be very complex for larger problems, graphical notations like the Penrose diagrams have been developed to simplify the handling of tensor equations[11]. The tensors from before correspond to diagrams with two and four legs, respectively. The graphical notation actually is one strength of the TN paradigm, as it provides accessibility to high dimensional states: each symbol is a tensor, its rank is given by the number of legs it has, and the type of index determines the direction of the associated leg. Figure 1 (a) illustrates the idea behind the TN approach: a large tensor \(A^{abcd}\), which may represent some quantum state \(\langle\Psi|\), usually is hard to handle computationally. It requires large storage space and the manipulation of a large number of entries for each operation. Figure 1: Examples for common tensor network layouts. (a) a general irregular tensor network. One may use any tensor network structure to express the large tensor \(A^{abcd}\) or the wave function \(\langle\Psi|\). However, regular tensor networks provide benefits in terms of interpretability and universality. Both (b) matrix product states (MPS) and (c) projected entangled pair states (PEPS) share the same grid structure with different dimensionality. (d) tree tensor networks (TTN) and (e) multiscale entanglement renormalization ansatz (MERA) have a hierarchical structure, where MERA entangles between individual branches, in contrast to TTNs. Breaking down \(A\) into a network of smaller connected tensors improves computability when the internal
structure of \(A\) matches the TN's layout. This requires a third kind of tensor index, called _internal_ or virtual index, that connects the constituents of the TN. We will denote this kind of index by greek letters. The dimension of internal indices is called the bond dimension \(\chi\). It determines how strongly the constituent tensors are coupled and how much information is shared between them. TNs allow to apply local operations individually on each tensor node instead of having to evaluate the whole tensor at once. Tensors can be joined by contracting over connected indices or decomposed into several connected tensors. The most common technique for decompositions along a single direction is singular value decomposition (SVD), a generalization of diagonalization for arbitrarily shaped tensors. Polar decomposition is faster than SVD, but does not allow for reducing bond dimensions easily. Tucker decomposition can be used for decomposing nodes within several directions at once [20]. The general idea behind tensor decomposition methods is to represent an arbitrary tensor with a specific set of constituent tensors. In SVD for instance, a tensor \(A^{\alpha}_{a\delta}\) is decomposed into a unitary matrix \(U^{\alpha}_{\beta}\), a diagonal singular value matrix \(\Sigma^{\beta}_{\gamma}\) and an isometric matrix \(V^{\gamma}_{a\delta}\) \[A^{\alpha}_{a\delta}=U^{\alpha}_{\beta}\Sigma^{\beta}_{\gamma}V^{\gamma}_{a\delta} \tag{2}\] where, in the graphical notation, isometric tensors with known direction are given by triangles, unitaries and isometries with unknown orientation by squares, and any other kind of tensor by a circle. Having access to the singular values in the diagonal matrix \(\Sigma\) allows for reducing bond dimensions by removing zero singular values. This can also be used for approximation by removing the lowest singular values, which have the least contribution to the bond. Since tensor decompositions can be done in any direction on each bond and contracted to each side at any time, TNs are not unique but contain a gauge degree of freedom. One can make use of this property to bring the TN to a canonical form where the bonds form orthonormal Hilbert spaces [14] and the tensors are isometric or even unitary [8]. In many cases, it makes sense to bring the TN to such a canonical form where all tensor nodes are isometric. This has several advantages. First of all, isometric tensors automatically fulfil a normalization condition \(A^{a\mu}A_{a\beta}=\delta^{\mu}_{\beta}\), which enables the application of optimization schemes (see Section 2.3). Second, it is mandatory for techniques that require directionality [21] or make use of the properties of isometries [22]. In particular, mapping a TN to a quantum circuit requires the tensors to be at least isometric (see Section 3).
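A minimal numerical sketch of the decomposition and truncation step of Eq. (2), using NumPy; the grouping of indices and the tolerance are illustrative choices:

```python
import numpy as np

def split_tensor(A, left_dims, chi_max=None, tol=1e-12):
    """Split ndarray A along one direction via SVD; left_dims is the number
    of leading indices grouped into the left node. Small singular values are
    dropped, optionally capped at bond dimension chi_max (approximation)."""
    shape = A.shape
    m = int(np.prod(shape[:left_dims]))
    U, s, Vh = np.linalg.svd(A.reshape(m, -1), full_matrices=False)
    keep = s > tol * s[0]                      # remove (near-)zero singular values
    if chi_max is not None:
        keep[chi_max:] = False                 # keep only the chi_max largest
    U, s, Vh = U[:, keep], s[keep], Vh[keep]
    left = U.reshape(*shape[:left_dims], -1)   # isometric node
    right = (np.diag(s) @ Vh).reshape(-1, *shape[left_dims:])
    return left, right

# example: a rank-3 tensor split into two connected nodes and recontracted
A = np.random.rand(4, 3, 5)
L, R = split_tensor(A, left_dims=1, chi_max=2)
A_approx = np.einsum('ak,kbc->abc', L, R)      # contraction over the new bond
```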
Applications in Quantum Computing are based on the original application of TNs: reducing the computational cost of storing and evaluating lowly entangled multi-particle quantum states. This comes in handy for quantum computer simulations, both for the execution [23, 24] and the validation [25] of circuits, as well as for the estimation of errors [26]. Especially for short NISQ era algorithms, entanglement between many qubits usually is not too high, and therefore circuit sizes well beyond the power of other simulation methods can be evaluated using TNs [27]. Additionally, TNs have been proposed to parallelize quantum simulations by cutting the system into several weakly entangled pieces and approximating the state of all but one piece by TNs [28]. Simulating a quantum computer may indeed be more resource efficient than using quantum hardware itself for many low-entanglement applications [29]. This idea has been used already to develop quantum-inspired algorithms executed on classical hardware, e.g. for optimizing stock market portfolios [30, 31] or radiotherapy plans [32] with quantum algorithms compressed to a classical TN approximation. ### Tensor Network Layouts Technically, the TN may have any shape, but using regular TNs provides many benefits like simpler optimization, simpler control and transferability to problems with different structure. Such TNs are also more interpretable than arbitrary networks. The most common layouts are either grid (Fig. 1 b and c) or hierarchical (Fig. 1 d and e) states. Promoting state layouts to operators is done either by allowing every individual grid tensor node to have regular and dual indices or by connecting a complete hierarchical network with its dual on their topmost layers. Grid Layouts are the most natural TN description of physical lattices, as the layout has a similar structure to the system. These layouts can be seen as derivatives of Projected Entangled Pair States (PEPS) [33]. In quantum applications, PEPS nodes are constructed as composite objects consisting of coupled internal spins. Each spin connects to a neighboring site via an edge, and at each node the constituent spins are entangled and truncated, thus the name PEPS. The number of spin tuples depends on the dimensionality of the network [9], which is typically hypercubic or hexagonal. The constituent spin construction is very useful when employing PEPS for the description of quantum systems, as this allows for spin constraints on the bonds. For ML applications however, ansätze for the nodes reflect computational approximations or inductive biases. Although PEPS are defined for arbitrary dimensions, usually low dimensional layouts are used. One dimensional PEPS are called Matrix Product States (MPS) or tensor trains. These are the simplest and most studied TN layouts [9]. In index notation, the MPS from Fig. 1 (b) will look like \[A^{abcde}=A_{\alpha}^{a(1)}\ A_{\beta}^{\alpha b(2)}\ A_{\gamma}^{\beta c(3)}\ A_{\delta}^{\gamma d(4)}\ A^{\delta e(5)} \tag{3}\] with constituent tensors \(A^{(k)}\). Common gauges for MPS are called left, right and site canonical forms, depending on the orientation of the isometric tensor nodes [9]. Brickwall or checkerboard TNs used in some quantum computing applications [34, 35, 36] are another variety of two dimensional grid layouts, equivalent to a hexagonal PEPS. The brickwall layout is a superposition of MPS up to a certain bond dimension [35], as it allows for the realization of MPS of different gauges overlapping at the same time.
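The decomposition of Eq. (3) can be produced by sweeping the SVD of the previous sketch from left to right, which is the standard tensor-train construction; a minimal version:

```python
import numpy as np

def to_mps(A, chi=8):
    """Sequential left-to-right SVD splitting of a dense tensor into MPS
    nodes with legs (left bond, physical, right bond); chi caps the bond."""
    dims = A.shape
    nodes, bond = [], 1
    M = A.reshape(1, -1)
    for d in dims[:-1]:
        M = M.reshape(bond * d, -1)
        U, s, Vh = np.linalg.svd(M, full_matrices=False)
        k = min(chi, len(s))
        nodes.append(U[:, :k].reshape(bond, d, k))   # isometric node A^{(i)}
        M = np.diag(s[:k]) @ Vh[:k]
        bond = k
    nodes.append(M.reshape(bond, dims[-1], 1))
    return nodes

# verify: contracting the chain reproduces the original rank-5 tensor
A = np.random.default_rng(0).normal(size=(2, 2, 2, 2, 2))
mps = to_mps(A)
out = mps[0]
for node in mps[1:]:
    out = np.tensordot(out, node, axes=([-1], [0]))  # contract internal bond
assert np.allclose(out.reshape(A.shape), A)
```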
Hierarchical Layouts have input or output tensor nodes that are not coupled directly but are pooled on several internal layers. The simplest hierarchical structure is a tree tensor network (TTN), where two or more child nodes are connected to a parent node in the next layer until only a single node is left at the top. This layout is also called hierarchical Tucker decomposition. TTNs are able to catch both local entanglement and long range entanglement between groups of nodes, but not long range entanglement between individual tensor nodes. A TTN may have variable depth on different branches when the considered system is not homogeneous [37]. The Multi Scale Entanglement Renormalization Ansatz (MERA) is an isometric TTN derivative with better entropy scaling [38]. The main idea is to enhance the hierarchy with layers of unitary nodes connecting neighboring branches. These so-called _disentanglers_ reduce the entanglement passed on to the next level (see figure 1 e). MERA has a higher computational cost than other layouts due to the loops, but it can capture symmetry and far higher entanglement [9, 39] while still being efficiently storable [8]. Varieties of MERA offer even better entropy scaling [14]. Both TTN and MERA can be generalized to higher dimensions by considering unit cells of the respective dimension at each node [40, 41]. The layout of a TN determines the maximal entanglement or internal correlation it can support. This gives a bound on the system type the TN can approximate without having a bond dimension scaling exponentially with the system size. For MPS and PEPS, entanglement fulfils an area law, which means that the amount of entanglement between a sub-network and its surroundings scales with its boundary [42]. This means the entanglement for a 1-D MPS is constant [9], while for a 2-D PEPS it scales linearly. For MERA based layouts, the entanglement scales up to a volume law, where a sub-network's entanglement with the surroundings depends on the number of nodes within the sub-network. In practice, the choice of a specific layout usually is a trade-off between the possible entanglement and the computational cost: MPS and TTN can be contracted efficiently, while MERA and PEPS usually are costly. Further refinements can be made by applying symmetries to the TN [9]. Relevant symmetric systems are homogeneous or periodic grids, or layers in hierarchical networks [8]. For ML, this reduces the complexity of the TN and makes it easier to train. ### Optimization Methods The term 'optimizing TNs' can refer to two things. The size of a TN representation can be reduced by iterative executions of tensor decompositions along the internal bonds. This allows for the local adaption of bond dimensions to relevant degrees of freedom, e.g. by defining a threshold for relevant singular values. More often however, one seeks to optimize the value of some function of the TN. In quantum physics for example, this means maximizing the overlap between some given state and a TN approximation, or minimizing the energy expectation value with respect to some Hamiltonian to find its TN ground state. This corresponds to minimizing a loss function of a TN based ML approach. The optimization can be achieved via several well established methods. In particular, general global gradient methods are available as well as TN specific techniques which make use of the network's locality and the tensorial nature of the nodes. Renormalization methods make use of the gauge ambiguity in TNs. They exploit the locality of operators to optimize the TN site by site. Density matrix renormalization group (DMRG), the first method of this kind, was developed to optimize spin chain Hamiltonians efficiently [6]. Soon, it was understood that restricting the maximum entanglement at each site reduces computational resources while describing lowly-entangled chains very well [43], and further renormalization techniques were developed [44]. These provide powerful tools for optimizing MPS. Renormalization methods for TNs have been reviewed extensively before [11, 8, 14].
Hence, we will only sketch the basic idea of DMRG for a finite MPS here. DMRG can be applied to Hamiltonians \(H\) that consist of independent blocks connecting neighboring MPS nodes. First, initialize a state randomly and consider the expectation value \(\langle\Psi_{0}|H|\Psi_{0}\rangle\). Start with a block at one end of the chain and contract all other nodes into an environment tensor, generating an effective Hamiltonian for the first site. Diagonalize the effective Hamiltonian and truncate its Hilbert space to the lowest (effective) eigenvalues. Subsequently iterating this procedure at each site will deterministically evolve the MPS towards the Hamiltonian's ground state. Renormalization methods provide a local and fast way of optimization adapted to the structure of TNs, but they also have some disadvantages. First, DMRG is hard to implement in standard ML frameworks, especially when combining TNs and neural layers [45]. The algorithm has to be handcrafted for each problem [22]. Second, generalization to higher dimensions is possible [21, 46] but not as efficient as for MPS due to entropy scaling [9]. Global gradient methods are standard optimization techniques that also apply to TNs. While using an overall global gradient usually is outperformed by renormalization methods, global methods make sense in special cases. In particular, renormalization methods have not been established yet for QTNs. Currently, stochastic gradient approximation methods [47] are employed in QML to circumvent the need for costly calculations of total gradients in high dimensional parameter spaces [48]. Global gradients have the downside that the gradient may vanish for random initial conditions in high dimensional parameter spaces. In QML, this is usually referred to as the _barren plateau_ phenomenon [49] and is similar to the vanishing gradient problem known from classical ML [50]. The performance of gradient methods can be boosted by considering the special structure of TNs, e.g. with adapted initialization schemes [45]. Introducing locality either in the optimization routine or in the loss can also mitigate barren plateaus (see Section 3). Geometric methods make use of the network's underlying tensorial geometry. Tools from differential geometry can be used for analyzing the TN on the space of entanglement patterns [51] and for optimizing on loss manifolds [52]. This kind of optimization performs well on high dimensional parameter spaces, especially in combination with stochastic gradient descent [53] and auto-differentiation on individual nodes [54, 55] or whole layers [22]. More advanced geometric methods reuse previous update steps. For this, their gradient vectors have to be transported along the optimization manifold [16]. However, they have yet to be applied in practice.

### Classical Machine Learning with Tensor Networks

We already discussed in Section 2.1 that TNs are able to approximate high dimensional states within a regular, less complex structure. In ML, such states arise as maps of data features and as weight tensors that connect the data features to the desired result, e.g. a label in classification [56]. In principle, an ML algorithm seeks to find a function \(f_{l}(x):\mathcal{D}\rightarrow\mathcal{S}\) that maps some datum \(x\) from the space of all possible inputs \(\mathcal{D}\) to a space of possible results \(\mathcal{S}\), for instance a set of labels \(l\). This function is called the model.
Usually, the model is a composition of a data embedding \(\Phi(x)\) and a trainable weight tensor \(W_{l}\) connecting the embedded data to the output, as shown in Fig. 2 (a)-(b). The weight tensor \(W_{l}\) can be approximated as a TN whose output represents the choice of labels (see Fig. 2, c). We get \[f_{l}(x)=W_{l}\circ\Phi(x)\approx\langle W_{l,TN}|\Phi(x)\rangle \tag{4}\] where \(\langle W_{l,TN}|\) is the TN approximation of the weight tensor. The dimensions of the weight tensor's index \(l\) store the probabilities \(P_{l_{i}}\) of the corresponding labels \(l_{i}\): \[W_{l_{i}}^{\chi}\circ\Phi_{\chi}(x)=P_{l_{i}}(x). \tag{5}\] Multi-class classifications are done either by training a single TN with a large outgoing bond dimension or a set of networks with a single outgoing label bond each (one versus all). The data is embedded with a feature map that can transform the data before mapping it to the network [57]. This approach is very similar to encoding maps for QML [19] and can be approximated as a TN as well. A second way of embedding data into a feature space is using a density matrix \(|\Phi(x)\rangle\langle\Phi(x)|\) and contracting it with a label dependent weight state \(|W_{l}\rangle\). In this construction, the bond dimension \(\chi\) is given directly by the non-vanishing eigenvalues of the covariance matrix [46], and the decision function is realized as the maximum overlap \[f_{l}(x)=\operatorname{argmax}_{l}\langle W_{l}|\Phi(x)\rangle\langle\Phi(x)|W_{l}\rangle. \tag{6}\] This construction has the advantage of being able to process incomplete data by contracting over missing bonds, and it can represent specific probability distributions based on the data sets [57]. Figure 2: From a classical classifier to an efficient quantum tensor network. (a) Formally, the task of classification is performed by some function \(f_{l}(x)\), which is an object that accepts input data \(x\) and outputs some label \(l\). (b) In machine learning, one realizes the classification function \(f_{l}(x)=\langle W_{l}|\Phi(x)\rangle\) with a weight tensor \(W_{l}\) with trainable parameters, into which the (possibly transformed) input \(\Phi(x)\) is fed. The classification function is constructed as an overlap between both tensors. (c) In a tensor network approach, one decomposes the large tensor \(W_{l}\) into a network of smaller tensors, e.g. by restricting the structure to a matrix product state (MPS) layout. In this case, the bond dimension is \(\chi=4\). (d) By identifying isometric tensor nodes with unitary quantum gates (grey boxes), the MPS classifier can be mapped to a quantum computer. Higher bond dimensions between the tensor nodes require multi qubit gates. In this case, the resulting circuit needs \(\log\chi=2\) internal qubits and three qubit gates. (e) The multi qubit gates can be expressed by a repetition of the MPS two qubit gate structure. Each additional internal qubit requires another layer of two qubit gates. (f) If the quantum hardware supports resetting qubits during execution, a qubit efficient approach can be implemented reusing discarded qubits. The efficient circuit is a trade-off between qubit number and circuit length. Building generative TN models is also straightforward. The goal of a generative ML model is to learn the distribution of its training data and to generate additional samples from this distribution. The simplest possibility is to use the dual of a trained classifier or a regressor, obtained by adjoining all tensor nodes within the network.
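To make Eqs. (4) and (5) concrete, the following NumPy sketch evaluates a toy MPS classifier on a single datum. The local feature map \(\Phi(x_{i})=(\cos(\pi x_{i}/2),\sin(\pi x_{i}/2))\) is a common choice in the TN ML literature; the random weight MPS, its dimensions, and the placement of the label index on the final node are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

N, d, chi, n_labels = 8, 2, 4, 3  # illustrative sizes

def feature_map(x):
    # Local embedding Phi(x_i) = (cos(pi x_i / 2), sin(pi x_i / 2)).
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=-1)

rng = np.random.default_rng(0)
# MPS weight cores with shape (left bond, physical, right bond); the
# right bond of the last core carries the label index l.
W = [rng.normal(size=(1, d, chi), scale=0.5)]
W += [rng.normal(size=(chi, d, chi), scale=0.5) for _ in range(N - 2)]
W += [rng.normal(size=(chi, d, n_labels), scale=0.5)]

def classify(x):
    phi = feature_map(x)            # shape (N, d)
    env = np.ones(1)                # left boundary vector
    for k in range(N):
        # Contract core k with its local feature vector, then with the
        # running environment: this sweeps the overlap <W_l | Phi(x)>.
        env = env @ np.einsum('lpr,p->lr', W[k], phi[k])
    return np.argmax(env), env      # scores f_l(x) for each label

x = rng.uniform(size=N)             # one datum with N features in [0, 1]
label, scores = classify(x)
print(label, scores)
```

The sweep costs on the order of \(N d \chi^{2}\) operations per datum, which is what makes MPS models tractable compared to storing and contracting the full weight tensor \(W_{l}\).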
Due to their quantum-inspired construction, TNs have the issue of not being able to copy information within their structure. This means that information cannot be distributed to different branches of a TN in the way a neural network, for example, uses information to activate its neurons. If, for instance, an operation in image analysis needs to use the value of adjacent pixels, one has to pass the same data into several input nodes by using overlapping observation windows [58]. However, this approach does not allow copying connected tensors to different locations. Often, it makes sense to combine different layouts to use the advantages of both. As an example, hierarchical layouts coarse-grain the data, and grid layers can be used to efficiently combine the information from different branches of the hierarchical TN [59]. The hierarchical part can be optimized with unsupervised ML methods, where the ideal weight tensor is derived from the data covariance matrix [60]. It is even possible to add a TN layer to a neural network architecture, e.g. for complexity reduction in the input layer with MPS [61], MERA convolutional layers [62], or approximating a fully connected layer [63]. TN architectures are closely related to neural networks. Restricted [61, 64] and deep [65] Boltzmann machines can be mapped to a two dimensional TN consisting of MPS and Matrix Product Operators (MPO), an operator valued version of MPS. Boltzmann machines may therefore be simulated using an MPS, which allows accuracy and execution time to be adjusted via the bond dimension and enables a compression of neural network representations [16]. The map between both architectures has been exploited in both directions to compare specific network layouts. On the one hand, node numbers in an MPS representation of a Boltzmann machine will scale exponentially with the number of neurons [66], and recurrent neural networks can simulate MPS with reduced computational effort in certain cases [67]. On the other hand, hierarchical TNs efficiently implement convolutional or recurrent neural networks [68]. Applications in ML can be found for a wide variety of tasks. In image analysis, TN based ML models are used for classification [69, 70, 46], compression [71] or feature extraction [72, 62]. TN based regressors have been successfully applied to nonlinear system identification [15], where the task is to generate a model of a nonlinear system from its behaviour. Generative TN structures have been employed in anomaly detection [73] and as classifiers when reversing the generative TN structure [74]. A TN can be used to learn a probability distribution from data, and a simulated "measurement" of the TN state will generate a new instance from the distribution [75, 76]. Generative TNs have also been applied to unsupervised feature identification in images [77].

## 3 Quantum Tensor Network Machine Learning

### Mapping to Quantum Circuits

The quantum-inspired construction of TNs makes it straightforward to translate the concept to quantum computations. Tensor nodes are realized by multi-qubit gates with incoming and outgoing qubits carrying the bonds of the node. Figure 3: Mapping from isometric tensors to unitary quantum gates. (a) Whether a bond \(\alpha,\beta\) is mapped to an incoming or outgoing qubit bundle depends on whether the tensor is given in right or left isometric form. The free bonds \(i\) are represented by outgoing qubits. Adjoining a tensor flips its directions.
Qubit preservation is taken into account by adding additional ancilla qubits or discarding left-over ones. (b) Mapping the normalization condition for isometric tensors illustrates that discarding qubits actually has to fulfil a condition: for an exact representation of the classical network, discarded qubits have to be post-selected to \(|0\rangle\) to be the dual of the ancilla state. The procedure for mapping the classical TN to a QTN is shown in Fig. 2 (c)-(d). Quantum gates are unitary; therefore, the corresponding TN has to be in canonical form with at least isometric nodes [78] (see Section 2.2). Fig. 3 (a) shows how isometric tensors are mapped to gates. The bond dimension \(\chi\) is determined by the number of qubits \(n\) transferred between connected gates, i.e., \(\chi=2^{n}\). These qubits are called internal or virtual qubits. The qubits carrying the free (or physical) bonds are either forward or backwards directed, depending on whether the node has a vector or dual valued index. To preserve the number of qubits, the sum of all incoming qubits (free and internal) must equal the sum of the outgoing qubits at each gate. Therefore, one prepares the necessary additional incoming qubits in a dummy state \(|0\rangle\) or discards left-over outgoing qubits. Discarded qubits usually are carried on unobserved, but a direct correspondence to classical TNs requires post-selection to a reference state \(\langle 0|\) on these qubits [79]. From the normalization condition in Fig. 3 (b) \[\delta^{\beta}_{\beta^{\prime}}=R^{\beta(k)}_{i\alpha}R^{i\alpha(k)}_{\beta^{\prime}}\qquad\leftrightarrow\qquad\langle\beta^{\prime}*|U^{\dagger(k)}U^{(k)}|0\beta\rangle=\langle*|0\rangle\langle\beta^{\prime}|\beta\rangle \tag{7}\] it follows that a post-selection measurement on \(\langle*|\) is the counterpart of the ancilla \(|0\rangle\) initialization. This is caused by the fact that an isometry is mapped to a unitary, so classical dimensional reduction or information loss has to be accounted for. Instead, one can also perform an uncomputation operation for each gate used [48]. For a network fully optimized on the quantum machine, the post-selection requirement can be relaxed, which allows for hybrid methods [79, 80] or efficient layouts where discarded qubits can be reset and reused [81]. Using this recipe, one can map the TN layouts known from Section 2.2 to quantum circuits. Fig. 4 shows a central gauge MPS and a TTN. More advanced networks like a brickwall, MERA and a square PEPS are shown in Fig. 5. Mapping these networks to a quantum computer gets more and more involved with growing bond dimension and requires larger circuits with high connectivity (or many swap gates) between the qubits. Figure 4: Simple tensor networks and their quantum counterparts. Single qubit unitary gates are omitted, as they can be absorbed into an adjacent two qubit unitary. Each bond may be realized by one or more qubits. (a) shows a matrix product state (MPS) in site canonical form and its quantum implementation. Choosing a central gauge halves the circuit depth compared to the left canonical MPS from Fig. 2. However, the central node is not isometric in general and can only be mapped to the unitary quantum gate approximately. If the quantum computer supports resetting qubits during execution, a qubit efficient approach can be implemented reusing discarded qubits with constant qubit number.
(b) A tree tensor network offers higher entanglement than a matrix product state, but its qubit efficient quantum representation will need a total of \(\log n\) qubits for \(n\) inputs. Figure 5: Higher dimensional quantum tensor network structures. (a) The brickwall architecture offers a higher amount of entanglement than matrix product states (MPS) and can be seen as a derivative of hexagonal projected entangled pair states. A brickwall allows for the representation of every MPS gauge up to a bond dimension given by the depth of the circuit. (b) The multiscale entanglement renormalization ansatz (MERA) quantum network requires gates between qubits further apart, which may be realized by introducing swap gates in between on current hardware. Both brickwall and MERA do not allow for qubit efficient implementations. (c) The quantum circuit of projected entangled pair states (PEPS) heavily depends on the order in which the PEPS nodes are evaluated. The realization will feature coupled staircase structures similar to MPS. Here, a qubit efficient approach scales linearly with the length of the diagonal.

### Efficiently implementing Quantum Tensor Networks

Large circuits, multi-qubit gates and gates not using the standard gate set are hard to implement on near-term noisy intermediate scale quantum (NISQ) computers. Several approaches exist to reduce circuit complexity. A major step in bringing TNs to quantum computers was the development of a method for breaking down multi-qubit nodes into two-qubit unitaries with high fidelity [82, 83], which can be implemented on NISQ devices efficiently. This approach is based on a classical procedure for photonic qudits [84]. The approach is shown for an MPS in Fig. 2 (d-e): this MPS has left canonical gauge, and its quantum gate equivalent therefore looks like a staircase of multi-qubit gates. The size of the gates is given by the number of internal qubits \(n=\log\chi\) that have to be passed on to the next gate (d). The three qubit gates in this example may be replaced by two layers of two qubit gates which provide the same connectivity between adjacent incoming free qubits (e). Each additional internal qubit would add another layer of two-qubit gates to the circuit. The most general ansatz for the gates within a tensor node is a full unitary gate [81, 85]. Representing these gates with the simple gates available on a NISQ device, however, results in long circuits that are prone to noise. Therefore, simplified ansaetze for the two-qubit gates are commonly used [34, 39, 86, 87, 80]. For quantum input, these simplified ansaetze can yield maximum performance comparable to a general unitary ansatz, but they seem to be harder to train. For classical data, the performance was much lower using simple nodes. This holds for both grid [36] and hierarchical layouts [48]. To reduce the total qubit count, the structure of many TNs allows for an efficient reordering of their blocks, such that discarded qubits can be reset and reused for the input of new information (see Fig. 2 f). A qubit efficient MPS only requires a constant number of qubits, determined by the dimension of the inputs and the desired bond dimension. For a TTN, the qubit number scales logarithmically with the size of the input. Using the qubit efficient approach may not have an effect on the optimized model parameters, because the circuit is trained to carry on the label information, and the 'no signalling' principle therefore forbids an influence of these discarded qubits on the result [81].
Instead of simply resetting, the information in the discarded qubits can be used for quantum error correction within the nodes [88], which improves performance on NISQ devices. In general, the influence of the qubit efficient procedure, e.g. on trainability, is still not clear. When combined with a local loss, however, no barren plateaus arise in the error correcting ansatz [89]. Combining these simplifications reduces both qubit number and gate complexity [82]. The overall circuit depth is harder to reduce. Choosing a central gauge for MPS at least halves circuit depth compared to left or right gauges [80] (see Fig. 4).

### Variational Machine Learning with Quantum Tensor Networks

Recently, a wide variety of ML architectures employing variational quantum circuits (VQC) have been developed. A VQC is a quantum circuit whose gates have tunable parameters. General unitaries can be constructed from a combination of rotation and entangling gates like the CNOT gate. A common architecture for QML is a layered VQC. Here, the circuit consists of encoding blocks that map the data to the circuit and parametrized variational blocks which entangle the qubits. To increase the expressivity of the quantum circuit, these blocks can be repeated before the measurement [19]. A QTN with tunable gates is also a variety of VQC with an internal TN layout. The structure of TNs provides several advantages for QML. First of all, insights from the available theory on classical TNs also apply to their quantum counterparts. Due to the direct correspondence, data and models from classical TNs can be translated to QTNs and vice versa. This can be used to better initialize quantum models (see Section 3.4). Furthermore, the choice of a specific TN layout allows for the introduction of inductive bias, e.g. knowledge about the type of data, and therefore the construction of a QML structure that will fit the data well. Finally, unlike for general VQC algorithms, the space of possible weights in TN based ML can be adjusted easily by varying the bond dimension. This allows for tuning the expressivity of the circuit to mitigate under- and overfitting [81]. It is not clear yet how the expressivity of a QTN scales or compares to layered VQC approaches, but both architectures can be mapped onto each other [90]. Until now, the development of QTN ML approaches has focused on supervised classifiers and generators. Supervised learning with QTNs works similarly to the classical approach shown in Section 2.4. Examples of QML circuits based on different layouts are shown in Figures 4 and 5. The data is mapped to the quantum computer using some feature map \(\Phi(x)\), which is shown as orange dots in the images. The feature map may be a tensor network itself (see Section 4). The weight tensor \(W_{l}\) is represented as the blue quantum tensor network. In the end, a measurement on the remaining qubits yields a result, e.g. a classification. If a multiclass output is needed, introducing an exit node (see Fig. 7 b) will improve the fraction of correct classifications [85]. Generative TNs can be realized by reversing the TN structure. The inputs of the generative network are given by reference computational basis states, which are entangled by the TN (see Fig. 7 a). These generative networks can be trained either by sampling the generative QTN and comparing the results to a given training set [81] or by training a classifier and adjoining every gate as noted in Section 2.4.
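As a minimal sketch of such a supervised QTN classifier, the following PennyLane circuit implements an MPS staircase with one internal qubit (\(\chi=2\)), analogous to Fig. 2 (d). The angle encoding and the simplified two-qubit blocks (single RY rotations plus a CNOT, in the spirit of the simplified ansaetze of Section 3.2) are illustrative choices, not a construction taken from the cited implementations.

```python
import pennylane as qml
import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def mps_classifier(x, weights):
    # Angle-encode one classical feature per qubit.
    for w in range(n_qubits):
        qml.RY(np.pi * x[w], wires=w)
    # MPS staircase: one simplified two-qubit block per bond (chi = 2).
    for w in range(n_qubits - 1):
        qml.RY(weights[w, 0], wires=w)
        qml.RY(weights[w, 1], wires=w + 1)
        qml.CNOT(wires=[w, w + 1])
    # Read the binary label off the last qubit of the staircase.
    return qml.expval(qml.PauliZ(n_qubits - 1))

weights = np.random.uniform(0, 2 * np.pi, size=(n_qubits - 1, 2))
x = np.array([0.1, 0.7, 0.3, 0.9])
print(mps_classifier(x, weights))  # score in [-1, 1]
```

Training would proceed by minimizing a loss of this expectation value with any of the gradient methods of Section 2.3; note that reading out a single qubit, as here, is exactly the kind of local loss for which barren plateaus have been reported to be avoidable in TN layouts.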
Some studies already include an investigation of the influence of noise on the QTN circuit. Numerical results indicate that low level noise is not a problem for classification [81, 48]. It may even be used to enhance the performance of the algorithm by adding ancilla qubits initialized with noise to the circuit. This effectively generates a probabilistic model, which is easier to train. However, if the noise is too high, this also leads to decoherence, rendering the circuit nonfunctional [91]. In the majority of the literature, optimizing the parameters of these QTNs relies on some variety of global gradient descent. Geometric [79] or genetic methods [92] are used only rarely. Renormalization methods like those used for classical TNs have not been adapted to QTNs, but they may be employed in hybrid methods [35]. Some proposals even consider employing TNs for optimizing parameters or hyperparameters of QML algorithms [93]. For specific implementations, first evidence exists that the locality of TNs can overcome barren plateaus [94, 95]. In particular, the use of local loss functions, which can be implemented using local Hamiltonians, provides a favourable loss landscape without gradients vanishing exponentially fast [89, 96]. The same approach may also reduce the amount of training data needed [39].

### Hybrid training

Hybrid QTN architectures combine quantum and classical elements to use the advantages of both worlds. Compared to NISQ devices, classical computers are able to perform computations on far larger datasets, and their use is very cheap. The quantum part of the algorithm may introduce some qualitative quantum advantage, like higher maximum performance or better generalization of the model. At the moment, two hybrid strategies make use of these characteristics. The first is the classical reduction of the input data's dimensionality with pre-processing such as PCA [80], auto-encoders, or the TN based encodings discussed in Section 4. If the classical part is trainable, it may be optimized together with the subsequent QTN. The second uses the direct maps between TNs and their quantum counterparts to classically pre-train the quantum model's initial values. Even when more powerful quantum computers are available, the execution of quantum circuits will still be expensive, and pre-training methods that reduce the number of quantum circuit executions will stay relevant. In this section, we will focus on hybrid pre-training methods. As discussed in Section 3.1, a TN in canonical form can be mapped exactly to a quantum computer. This allows one to train a coarse classical TN model which can be refined and expanded after mapping it to a quantum computer. Any standard QTN layout may be initialized with classically optimized values [57, 46], and no post-selection on the quantum computer is required to provide efficient initial values [79]. Using these initial values for the QTN's parameters makes the training of larger quantum circuits far more efficient in comparison with random or identity initialization schemes. The main benefit is that the initial training phase, where the gradients decrease exponentially with the qubit number, has already been performed classically, and therefore the training on the quantum device starts in a favourable spot of the parameter space [80]. Modifications to the basic procedure of classically pre-training a QTN have been developed to lower the requirements on the classical preparation and to make the quantum part easier to train. For training a brickwall layout (see Fig. 5 a),
it is sufficient to prepare an initial MPS state that is embedded within the brickwall, e.g. along the diagonal, while the remaining gates start as identity gates [80], as shown in the centre panel of Fig. 6 (a). To make quantum training easier, one does not have to use full unitary gates \(U\) on the whole circuit, but can restrict the off-diagonal gates to some simple ansatz \(W\), as shown in the right panel of this figure. This approach can be seen as a quantum version of the copy node initialization for classical TNs, where most of the tensor nodes are initialized with identity tensors [45]. Another modification considers preparing a prior distribution within the feature space before uploading the data. This reduces the bond dimensions and gate complexity needed [79]. Fig. 6 (b) shows the approach for an efficient MPS classifier. At the beginning of the circuit, a homogeneous MPS \(U_{G}^{N_{b}}\) of length \(N_{b}\) prepares the prior distribution. Setting a trainable boundary condition \(U_{R}\) reduces the number of nodes the MPS needs to represent an effective prior. The second part \(U_{D}^{(i)}\) is a standard efficient MPS similar to Fig. 2 (f), where the \(N_{x}\) data features are introduced into the QML circuit and an output node \(U_{C}\) prepares the classification result in the end. The circuit can technically be optimized without classical pre-training, but for higher bond dimensions this construction is far easier to train with initial values obtained from classical TN-specific methods like DMRG [79].

### Case Studies and Implementations

The application of QTN ML methods has been limited to demonstrative feasibility studies up to now. Most authors focus on classification tasks for image recognition, either with binary classes [39] or multiclass setups [85]. One implementation of binary image classification [81] has been performed on real photonic hardware [97]. Other uses are classifications on parameterized classical data [98] and on quantum simulation results [34, 36]. Besides the proof of concept, these studies demonstrate that QTN approaches can already process relatively high dimensional input data, like grayscale images of up to \(37^{2}\) pixels. They show that QTNs can achieve accuracies for the classification of both classical and quantum datasets in the range of 0.85 to 0.95 with only a small number of parameters and internal qubits. The application of QTNs for the regression of continuous properties has not been discussed widely yet. One proposition for this application is to approximate eigenvectors of unitary matrices [99], but finding the right bond dimension is crucial to find an approximate state having sufficient overlap with the real eigenvector without using huge circuits. Figure 6: Hybrid training methods for quantum tensor networks. A pre-trained classical tensor network provides suitable initial values for further optimization on a quantum computer. The direct approach may be refined to make training easier or to have access to a larger part of the multi-qubit Hilbert space. Method (a) [80] maps a classically optimized MPS to the diagonal gates \(U_{c}\) of a brickwall ansatz; all off-diagonal gates are initialized as identities. Then a second optimization step on the quantum computer is conducted. While the optimized diagonal gates \(U_{o}\) have to be full unitary gates to enable the transfer from the classical network, the off-diagonal gates \(W\) can use a simpler ansatz with fewer parameters.
Approach (b) [79] maps two classical MPS to the quantum circuit. A homogeneous MPS (\(N_{b}\)-times \(U_{G}\)) truncated to an appropriate boundary condition \(U_{R}\) prepares an initial state. In the second MPS, the nodes \(U_{D}^{(i)}\) upload and process the \(N_{x}\) elements of the datum \(x\). Finally, the classification is performed on the exit node \(U_{C}\). Method (a) is shown as a generator, method (b) as an efficient classifier. QTN generators have been implemented by various authors as feasibility studies to provide quantum state samples from learned distributions [57, 80, 81]. Most case studies that use publicly available frameworks rely on Qiskit [100], as it supports resets in the middle of an execution, which are necessary for efficient TNs. For ML, Qiskit is compatible with the PyTorch framework. Cirq [101] also provides a reset functionality and integrates with the TensorFlow ML suite. Pennylane [102], which focuses on QML applications, currently cannot implement mid-circuit measurements for efficient QTNs, but provides methods for both basic MPS and TTN based quantum classifiers. The implementation allows for varying virtual qubit bonds and connects the quantum circuits to the most common ML frameworks in Python.

## 4 Tensor Networks for Data Encoding

For the performance of data driven quantum algorithms, and QML algorithms in particular, the encoding of data plays a crucial role. Current quantum computing hardware provides neither a sufficient number of qubits nor gate depth to encode high dimensional data sets in a straightforward fashion. However, this does not necessarily mean that the problem size to be tackled with current quantum algorithms has to be small. Instead, one relies on classical and quantum pre-processing steps that reduce the data to its essential features. By adjusting the bond dimension, TNs provide a direct way of compressing data, both losslessly, by discarding dimensions with singular values equal to zero, and approximately, by setting upper bounds on the bond dimension. The input QTN state can be prepared in at least five different ways. First, by maximizing the overlap between a classical representation of a quantum state and a TN. Second, by encoding classical data in a TN and reducing its bond dimensions by tensor decompositions. In both approaches, the network will then be mapped to quantum circuits as shown in Fig. 7 a for MPS and TTN. Efficient mapping methods are also available for PEPS [103]. A third classical method is training a TN to compress the data into a latent representation with fixed bond dimension and encoding the latent vectors using some direct strategy. Fourth, on a quantum computer, one can directly maximize the overlap between an existing quantum state and a QTN. This requires preparing the reference quantum state multiple times until convergence, which may be very costly. Finally, one can train a generative network to output a state from some distribution (see Section 3.3). We therefore focus on classical pre-processing in this section. Encoding data in TN based quantum layouts promises some benefits over classical encoding. In particular, the access to a large state space even when using few qubits could boost the efficiency of information storage. For example, the number of parameters needed to represent certain time evolutions of quantum states is exponentially reduced with QTNs compared to classical ones [83].
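As an illustration of the second preparation route, compressing classical data into an MPS by tensor decompositions, the following NumPy sketch implements a basic tensor-train SVD with an adaptive bond dimension. The truncation threshold and the test signal are illustrative choices.

```python
import numpy as np

def tt_compress(vec, d=2, tol=1e-10):
    # Sketch of TT-SVD: split a length-d^n vector into MPS cores by
    # sequential SVDs, discarding singular values below tol * S_max
    # (the adaptive bond dimension mentioned in Section 2.3).
    n = int(round(np.log(vec.size) / np.log(d)))
    cores, rank = [], 1
    mat = vec.reshape(rank * d, -1)
    for _ in range(n - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(S > tol * S[0])))
        cores.append(U[:, :keep].reshape(rank, d, keep))
        mat = (S[:keep, None] * Vt[:keep]).reshape(keep * d, -1)
        rank = keep
    cores.append(mat.reshape(rank, d, 1))
    return cores

# A smooth 64-sample signal compresses to small bond dimensions.
x = np.sin(np.linspace(0, np.pi, 64))
print([c.shape for c in tt_compress(x)])
```

The `U` cores produced this way are isometric (left canonical), so they can be mapped to quantum gates as in Fig. 7 (a); lossless compression corresponds to discarding only zero singular values, while approximate compression corresponds to a finite threshold or an upper bound on the kept rank.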
Depending on the type of data, different layouts of TNs provide the most efficient storage, because the scaling of the data's mutual information has to match the scaling behaviour of entanglement in the TN. As discussed above, MPS suit 1-D data like time series, while logarithmic TTNs or 2-D TNs like MERA or PEPS are better suited for images, depending on the amount of local correlation. For text, the information scales even more steeply [104], which requires 3D-PEPS or high dimensional MERA variants that have not been implemented on a quantum computer yet. Exploiting symmetries reduces the need for complexity within the structure, e.g. by using wavelet transform techniques in images [105]. Nevertheless, MPS and TTN can be implemented and optimized easily and still provide an improvement over direct encoding methods. They are therefore widely used in encoding for QML. The performance of MPS and TTN can be improved by combining them with other methods. When encoding images, one can split the whole image into patches and encode each patch into an MPS (see Fig. 7 c), which captures local entanglement better but requires more storage. For a fixed bond dimension, the number of qubits is proportional to the number of patches encoded. The pixels in each patch are addressed by a method that is known as the flexible representation of quantum images (FRQI) [106]. The method was developed as a classical compression method [107] and has recently been transferred to quantum computers [82, 61]. Patchwise MPS encoding can be easily combined with MPS QML methods (see Fig. 7 b) [85]. Trainable TN encoding using a latent space representation from the outgoing bond dimensions is usually optimized together with the parameters of the QML circuit [108, 39]. A theoretical study on the error performance of function regression models finds upper bounds when certain continuity requirements on the loss and the network are met [95]. In particular, they find that the optimization error connected to barren plateaus will be negligible if the loss on the TN parameters is Lipschitz and satisfies a Polyak-Lojasiewicz condition. However, they do not develop a method to set up a TN that actually fulfils these conditions. Trainable encoding can be improved by a patchwise approach, too. Applying trainable MPS approximators on small regions of the image yields a linear model of the image where the spatial information is stored in the feature space [71]. Due to the independence of the various layers, this method could also be realized with a hybrid circuit, where the initial layers are classical and the final layers are quantum. TN encoding pairs well with TN based ML, but it is applicable to any other QML approach. For layered VQC approaches, first results imply that TN pre-processing trained together with the VQC classifier performs better than regular PCA on image data [108] and can be used as an estimator for the Q-value function of a reinforcement learning ansatz [92]. Figure 7: Encoding strategies for machine learning using tensor networks. (a) Encoding classical or compressing quantum data using matrix product states (MPS, left) or a tree tensor network (TTN, right). Each gate \(U_{i}^{\dagger}\) is a direct mapping from an isometric node of a classical tensor network. A generative quantum tensor network has the same structure as an input state, but with one or more \(|0\rangle\) input qubits replaced by a label or noise encoded input.
(b) Data may be encoded in several independent MPS (blue, green) and fed into the circuit to reduce information loss. A quantum machine learning algorithm can make use of the same MPS structure (red) and directly connect to the encoding MPS. An additional output gate (yellow) improves classification accuracy for multi-class tasks. (c) To encode two-dimensional data like images into MPS, one needs to choose a one dimensional path, either at the cost of losing parts of the information or by introducing high bond dimensions. Cutting the area into patches improves the encoding result, as this reduces the maximum distances on the MPS between neighbouring sites in the original data. TN encoding is not only relevant for QML, but can be used to provide states for any other quantum application that requires complex input. For example, overlaps of QTN generated basis functions can be used to approximate non-linear functions [35]. This approach may reduce the number of grid points needed in quantum simulations with nonlinear PDEs as couplings, compared to the classical approach.

## 5 Conclusion

TNs have proven themselves useful for storing and processing quantum states as well as for classical ML applications. Combining both aspects makes them a suitable tool for QML as well. We have seen in Sections 3 and 4 that TNs can be employed for various tasks within the QML pipeline, from pre-processing and encoding to the variational part and the optimizers [93]. They have a very flexible representation, as they allow for both pure quantum algorithms and classical-quantum hybrids, while a wide range of optimization methods can be applied. Bringing TNs to a quantum computer has advantages for architecture design. The representation of a quantum state with TNs on a quantum computer reduces the necessary number of qubits compared to other encoding methods [28]. Thereby, TNs provide an efficient way of mapping classical data to quantum applications. The tensors of a QTN do not have to be contracted at great cost as on classical hardware, since the contraction happens as part of the execution of the quantum circuit. When using hybrid approaches, TNs allow for a seamless connection between classical and quantum methods, enabling pre-training and a gradual tuning of the border between both systems, which will become important when the power of NISQ devices scales up significantly. Having the possibility of choosing a qubit efficient implementation is also a very important feature, although its effects on trainability are not yet fully understood and require further investigation. In comparison to classical TNs, QTNs are expected to provide several benefits for the algorithms themselves. As quantum algorithms naturally implement entanglement, QTNs have access to a Hilbert space that grows exponentially with the number of qubits. This enlarges the storage capacity and the available parameter space for QML algorithms. While classical TNs are able to represent only low-entangled, low-complexity states, QTNs also have access to low-complexity states that can be generated by Hamiltonian time evolution [83], independent of the amount of entanglement. However, this may need very large circuit depths. Additionally, QTNs provide a natural way of using complex numbers instead of real ones, which greatly reduces the number of parameters necessary in certain architectures [64]. It is yet unclear whether this is a general advantage of QTNs. Regardless of that, QTNs seem to be easier to train than other QML methods.
For example, utilizing local optimization routines that make use of the localized TN structure can help to overcome problems like barren plateaus and reduce the amount of training data needed. However, these results have been obtained using specific implementations, some combined with special features like error correction. The results are therefore not yet generalizable to all QTN layouts. Choosing a layout that fits the data structure well can also reduce the need for large general circuits that are hard to train due to their large number of parameters. The mentioned actual and possible benefits come with downsides compared to classical networks. Gates are directed, and reshaping the network cannot be performed in a straightforward way. The usual difficulties with quantum computations, like encoding classical data and the need to perform non-reversible measurements to obtain a result, still apply. Moreover, compared to more general QML methods like layered VQCs, the strict structure of a TN layout may render it an architecture which cannot be applied to general problems but has to be handcrafted each time. Therefore, it is unclear at the moment whether the benefits of TNs really can be translated into a relevant quantum advantage outside the lab. Although QTNs have the potential to be a successful framework for QML, their development has just begun, and further research is needed in many directions. Modifications to the basic layouts, like variable bond dimensions, which can be used to reduce computational costs, have not been adapted to QTNs yet. In particular, a quantum version of TN-specific local optimization methods is interesting for building algorithms that can be trained more easily. Most important, however, is a more fundamental insight into the capabilities of QTNs, especially in comparison with classical or other VQC based methods. This includes methods to assess ML performance theoretically on the layout level and not just for specific implementations. Having general measures, e.g. for expressivity or trainability, would enable us to identify the range of applications where it makes sense to use QTN architectures and to concentrate future development on these areas.
2303.06249
A declining major merger fraction with redshift in the local Universe from the largest-yet catalog of major and minor mergers in SDSS
It is difficult to accurately identify galaxy mergers and it is an even larger challenge to classify them by their mass ratio or merger stage. In previous work we used a suite of simulated mergers to create a classification technique that uses linear discriminant analysis (LDA) to identify major and minor mergers. Here, we apply this technique to 1.3 million galaxies from the SDSS DR16 photometric catalog and present the probability that each galaxy is a major or minor merger, splitting the classifications by merger stages (early, late, post-coalescence). We present publicly-available imaging predictor values and all of the above classifications for one of the largest-yet samples of galaxies. We measure the major and minor merger fraction ($f_{\mathrm{merg}}$) and build a mass-complete sample of galaxies, which we bin as a function of stellar mass and redshift. For the major mergers, we find a positive slope of $f_{\mathrm{merg}}$ with stellar mass and negative slope of $f_{\mathrm{merg}}$ with redshift between stellar masses of $10.5 < M_* (log\ M_{\odot}) < 11.6$ and redshifts of $0.03 < z < 0.19$. We are able to reproduce an artificial positive slope of the major merger fraction with redshift when we do not bin for mass or craft a complete sample, demonstrating the importance of mass completeness and mass binning. We determine that the positive trend of the major merger fraction with stellar mass is consistent with a hierarchical assembly scenario. The negative trend with redshift requires that an additional assembly mechanism, such as baryonic feedback, dominates in the local Universe.
R. Nevin, L. Blecha, J. Comerford, J. Simon, B. A. Terrazas, R. S. Barrows, J. A. Vázquez-Mata
2023-03-11T00:08:39Z
http://arxiv.org/abs/2303.06249v1
A declining major merger fraction with redshift in the local Universe from the largest-yet catalog of major and minor mergers in SDSS ###### Abstract It is difficult to accurately identify galaxy mergers, and it is an even larger challenge to classify them by their mass ratio or merger stage. In previous work we used a suite of simulated mergers to create a classification technique that uses linear discriminant analysis (LDA) to identify major and minor mergers. Here, we apply this technique to 1.3 million galaxies from the SDSS DR16 photometric catalog and present the probability that each galaxy is a major or minor merger, splitting the classifications by merger stages (early, late, post-coalescence). We present publicly-available imaging predictor values and all of the above classifications for one of the largest-yet samples of galaxies. We measure the major and minor merger fraction (\(f_{\rm merg}\)) and build a mass-complete sample of galaxies, which we bin as a function of stellar mass and redshift. For the major mergers, we find a positive slope of \(f_{\rm merg}\) with stellar mass and a negative slope of \(f_{\rm merg}\) with redshift between stellar masses of \(10.5<M_{\ast}(log~{}M_{\odot})<11.6\) and redshifts of \(0.03<z<0.19\). We are able to reproduce an artificial positive slope of the major merger fraction with redshift when we do not bin for mass or craft a complete sample, demonstrating the importance of mass completeness and mass binning. We determine that the positive trend of the major merger fraction with stellar mass is consistent with a hierarchical assembly scenario. The negative trend with redshift requires that an additional assembly mechanism, such as baryonic feedback, dominates in the local Universe. keywords: galaxies: interactions - galaxies: evolution - surveys - catalogues - methods: statistical - techniques: image processing

## 1 Introduction

The \(\Lambda\)CDM model of structure growth predicts that galaxies grow hierarchically through mergers, but uncertainty still surrounds the impact of mergers on physical processes in galaxies. For instance, while theory predicts that mergers contribute to the growth of stellar bulges and elliptical galaxies (Springel, 2000; Cox et al., 2008), trigger star formation (Di Matteo et al., 2008) and active galactic nuclei (AGN, Hopkins et al., 2006), and even quench star formation (Di Matteo et al., 2005; Hopkins et al., 2008), observational work often disagrees about the importance of mergers for driving these evolutionary processes (e.g. for whether mergers trigger AGN and/or star formation, see Cisternas et al., 2011; Knapen et al., 2015; Ellison et al., 2019; Pearson et al., 2019). This is a critical tension: the implication is that our models and/or our current methods for identifying mergers are incorrect. In order to determine the role of mergers in driving galaxy evolution, reconcile simulations with observations, and test the \(\Lambda\)CDM cosmological model, the galaxy-galaxy merger rate and merger fraction are key diagnostic tools. The merger rate, which will be the focus of future work (Simon et al., 2023, in prep), is measured using the merger fraction and the merger observability timescale (Lotz et al., 2011), both of which vary as a function of redshift, mass, mass ratio, and, critically, the technique used to identify mergers.
Characterizing the merger fraction as a function of mass, redshift, and mass ratio is critical for understanding the relative contributions of both major and minor mergers to the growth of different types of galaxies over cosmic time. For instance, we can use the mass- and redshift-dependent merger fraction to constrain the relative contribution of major and minor mergers to the growth of the most massive galaxies, which are predicted to assemble at late times by \(\Lambda\)CDM. It is therefore an important test of \(\Lambda\)CDM cosmology. We can also use the merger fraction to test the predictions of other structure formation channels (see §5.1 for a review). Many different techniques exist to measure the evolution of the major merger fraction with redshift, including close-pair (e.g. Patton et al., 1997; Lin et al., 2004; Kartaltepe et al., 2007; Bundy et al., 2009), clustering (e.g. Bell et al., 2006; Robaina et al., 2010) and morphological techniques (e.g. Lotz et al., 2008; Conselice et al., 2009). The majority of these studies find that the major merger fraction peaks at earlier times, in agreement with the above theoretical measurements. Other work focuses on the evolution of the major merger fraction with stellar mass (e.g. Xu et al., 2012; Casteels et al., 2014), finding either an increasing or decreasing merger fraction with stellar mass. For a thorough review of past results, see §5.2. Most of the literature has focused either on the mass- or redshift-dependence of the merger fraction separately. Also, most of the redshift-dependent studies only cover higher redshifts. In this work we focus on constraining the mass- and redshift-dependent merger fraction for galaxies in the Sloan Digital Sky Survey (SDSS). Our focus is on the local Universe, which will allow us to avoid the uncertainties that plague many of the above studies due to small sample sizes. We additionally use a carefully calibrated morphologically-based technique that avoids incompleteness issues due to fiber overlap. While most past work has focused on the more easily measured major merger fraction, the minor merger fraction is also an important quantity. Past work finds that the minor merger fraction is several times higher than the major merger fraction (e.g. Lotz et al., 2011; Lopez-Sanjuan et al., 2011; Bluck et al., 2012; Kaviraj, 2014, 2015), indicating that minor mergers have a critical role to play in building mass in disk galaxies, the envelopes of massive ellipticals, and the bulges of lower mass galaxies without destroying the merger remnant (Hopkins et al., 2010). In this work we set out to constrain not only the major merger fraction but also the minor merger fraction and how they both vary as a function of stellar mass and redshift. In addition to providing constraints on the importance of galaxy mergers for galaxy evolution, the galaxy-galaxy merger fraction and rate are crucial for constraining the predicted supermassive black hole (SMBH) merger rate. The SMBH merger rate will be measured by upcoming gravitational wave observatories such as the (evolved) Laser Interferometer Space Antenna (eLISA, LISA), which is anticipated to detect SMBH mergers out to \(z\sim 10\) (Amaro-Seoane et al., 2017; Mueller and Gravitational Observatory Advisory Team, 2016; Arun et al., 2022), and indirectly measured by pulsar timing arrays through the gravitational wave background (e.g.
Hobbs et al., 2010; NANOGrav Collaboration et al., 2015; Arzoumanian et al., 2020), which is dominated by the signal from binary SMBHs with masses \(M_{SMBHB}>10^{7}\) \(M_{\odot}\) out to \(z\sim 2\), which form following major galaxy mergers (e.g. Sesana, 2013; Simon and Burke-Spolaor, 2016). The galaxy-galaxy merger rate is also important for breaking degeneracies in the gravitational wave signal. For instance, Siwek et al. (2020) find that the chirp mass of SMBH binaries is degenerate with the merger rate, so separately constraining the galaxy-galaxy merger rate can complement gravitational wave background measurements, break these degeneracies, and constrain SMBH accretion models. A strength of the LDA technique used in this work to identify mergers is that it is created from detailed temporal simulations of mergers; hence, we have a solid understanding of the merger observability timescale. In future work (Simon et al., 2023, in prep), we plan to combine the observability timescales from this work with the merger fractions also measured in this work to derive the galaxy-galaxy merger rate and make predictions for the expected gravitational wave background signal from merging binary SMBHs in the local universe. In this paper, we address the above challenges using a statistical learning tool calibrated on well-understood hydrodynamical models of merging galaxies from Nevin et al. (2019) (henceforth N19). We apply this automated merger classification technique to the 1.3 million galaxies in the Sloan Digital Sky Survey (SDSS) DR16 photometric sample (§2). The strength of this approach lies in the massive statistical sample of mergers identified using a morphological-based technique that exceeds previous morphological techniques in accuracy and completeness in classifying different types of mergers (§3). The focus of this paper is twofold: 1) We present publicly-available catalogs of different types of mergers identified by both stage and mass ratio (major/minor, early, late, and post-coalescence) and 2) We estimate the galaxy merger fraction as a function of mass ratio, mass, and redshift (§4). We end by discussing our results in the context of cosmological models, past empirical studies of the merger fraction, and future directions (§5). A cosmology with \(\Omega_{m}=0.3\), \(\Omega_{\Lambda}=0.7\), and \(h=0.7\) is assumed throughout.

## 2 Data

Here we present an overview of the data set. We describe how we create image cutouts and the properties of the photometric sample in §2.1. We present our process for measuring imaging predictor values from these image cutouts in §2.2.

### Creating image cutouts of galaxies in SDSS

The Sloan Digital Sky Survey (SDSS, Gunn et al., 2006) is a wide-area spectroscopic and imaging survey. To construct our sample of galaxies, we use the \(r-\)band imaging data from data release 16 (DR16, Ahumada et al., 2020), which is the fourth data release of SDSS-IV (Blanton et al., 2017). Using CasJobs, we select all galaxies from the DR16 photometric catalog that have an \(r-\)band magnitude less than or equal to 17.77, the completeness limit of SDSS. We do not restrict the selection to objects that also have a spectroscopic object ID, maximizing the number of objects in the sample. We also do not restrict the sample by redshift. The redshift range of the mass complete sample (described in §3.6) is \(0.03<z<0.19\).
The exact SQL search is as follows:

```sql
SELECT po.objID, po.ra, po.dec,
       (po.petroMag_r - po.extinction_r) AS dered_petro_r
INTO MyDB.five_sigma_detection_saturated_mode1
FROM PhotoObj AS po
WHERE (po.petroMag_r) <= 17.77
  AND po.type = 3
  AND ((flags_r & 0x10000000) != 0)
  AND (flags_r & 0x40000) = 0
  AND mode = 1
```

This query restricts the search to galaxies (po.type = 3), eliminates galaxies that are detected at less than 5\(\sigma\) ((flags_r & 0x10000000) != 0) and galaxies for which no Petrosian radius could be determined in the \(r-\)band ((flags_r & 0x40000) = 0), and removes duplicates using mode = 1. This search returns 1393923 galaxies. We use the SkyCoord utility from astropy to create 80.″0 by 80.″0 square cutout \(r-\)band images for each galaxy from the frame images. After eliminating a small fraction (\(\sim\)0.4%) of the cutouts that are blank, corrupted, or at the edge of the frame, we have a total of 1388533 galaxy cutout images.

### Measuring predictor values from the SDSS cutout images

For each galaxy image, we measure seven imaging predictor values: \(Gini\), M\({}_{20}\), Concentration (\(C\)), Asymmetry (\(A\)), Clumpiness (\(S\)), Sersic index (\(n\)), and shape asymmetry (\(A_{s}\)). We use the same procedure as N19 to measure the imaging predictors, which incorporates SourceExtractor (Bertin and Arnouts, 1996), GALFIT (Peng et al., 2002, 2010), and statmorph (Rodriguez-Gomez et al., 2019). We also use statmorph to measure the average S/N value (<S/N>) within the segmentation maps. After extracting the imaging predictors, the sample size is 1344677 galaxies; we lose about 3% of the sample because either GALFIT or statmorph fails to converge on a good fit. We next flag galaxies with unreliable predictor values; these galaxies are included in both the predictor and the classification tables but are excluded from our analysis of the merger fraction. Excluding the galaxies with one or more flags, there are 938892 galaxies with clean photometry. We employ three separate flags:

1. The 'low S/N' flag is thrown when the average S/N value is below 2.5, which is the cutoff value quoted in N19 below which the classification is significantly different.
2. The 'outlier predictor' flag is thrown when one or more imaging predictors are outside the range of predictor values from the simulated galaxies. The range of simulated values is: \(0.44<Gini<0.72\), \(-2.70<M_{20}<-0.50\), \(1.32<C<5.57\), \(-0.24<A<0.76\), \(-0.24<S<0.16\), \(0.47<n<5.14\), and \(0.0<A_{s}<1.21\).
3. The 'segmap' flag is thrown when the segmentation map does not include the central pixel or when the segmentation map extends beyond the edge of a clipped image. This identifies images for which the predictor values are actually measuring a brighter foreground galaxy or star.

We present the predictor values for six galaxies in Table 1. We plot the distributions of predictor values for the full sample in Figure 1 alongside the six example galaxies from Table 1, identified with capital letters A-F.

## 3 Methods

With predictor values in hand for 1.344 million galaxies, we are ready to classify the galaxies using the LDA imaging classification technique (Nevin et al., 2019). We review the classification technique and discuss some relevant changes in §3.1. We describe how we further split the classification by merger stage in §3.2.
We apply the different classifications to the measured predictor values in §3.3 and describe how we account for all possible merger priors in §3.4, which is critical for the direct comparison of \(p_{\rm merg}\) values across the different classifications as well as for the calculation of the merger fraction. We present the MergerMonger suite in §3.5. Finally, we describe how we create a mass-complete sample in §3.6.

### Review of the LDA merger identification technique

The merger classification technique is built on a Linear Discriminant Analysis (LDA) framework that is trained to separate mock images of simulated nonmerging galaxies from merging galaxies using their imaging predictors. The full details of the technique are presented in Nevin et al. (2019) and Nevin et al. (2021) (henceforth, N19 and N21). N19 presents the imaging side of the approach, and N21 presents the kinematic side of the approach and some relevant changes to the N19 method. Here we briefly review the results of these earlier papers. The classification was trained using a suite of five SUNRISE/GADGET-3 simulations of merging galaxies. The galaxies in this suite are best described as initially disk-dominated intermediate mass galaxies (\(3.9-4.7\times 10^{10}\) M\({}_{\odot}\)). They span a range of stellar mass ratios (\(\mu_{*}=0.1,0.2,0.333,0.333,0.5\)), have gas fractions of 0.1 and 0.3, and have initial bulge-to-total-mass ratios of 0 and 0.2. While the simulated training set is limited in morphological parameter space, this does not significantly affect our main results (see §5.6). Each simulation spans 3-10 Gyr and contains a total of 100-200 snapshots in time, with a spacing of \(\sim\)10 Myr. For each snapshot in time, we sample the merger at seven isotropically spaced viewpoints. We show example snapshots from the \(\mu_{*}=0.5\) major merger and the \(\mu_{*}=0.1\) minor merger in Figure 2, where \(\mu_{*}\) is the stellar mass ratio of the two merging galaxies. In order to build the classification, we also required a set of simulated nonmerging galaxies, which consists of isolated galaxies that were matched in gas fraction and stellar mass to each simulated merger, as well as merging snapshots before first pericentric passage and 0.5 Gyr after final coalescence (pre- and post-merger snapshots). We created mock images from the simulated galaxies that match the specifications of SDSS \(r-\)band images and measured the seven imaging predictors from the mock images. We trained seven separate LDA classifiers to identify mergers (one for each of the five simulations and one each for a combined major and a combined minor merger simulation). Relevant details of the LDA classification include (a schematic sketch follows this list):

* The LDA relies on a prior to correct for the larger fraction of merging relative to nonmerging galaxies in the simulations. In N19, we use fiducial merger fraction priors of \(f_{\rm merg}=0.1\) and 0.3 for the major and minor merger classifications, respectively. We explore how changing the merger fraction prior affects our measured posterior merger fraction in §4.7.
* We include interaction terms to explore correlations between predictors.
* We use \(k\)-fold cross-validation to obtain \(1\sigma\) errors on the predictor coefficients and to measure the performance statistics of the classifications.
* In order to select which coefficients are necessary for the classification, we use a forward step-wise selection technique, which orders and includes only the relevant terms and interaction terms.
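The following scikit-learn sketch illustrates the ingredients listed above: standardized predictors, interaction terms, a fiducial merger prior, and \(k\)-fold cross-validation. The input arrays are random placeholders rather than our measured predictor values, and the forward step-wise term selection of N19 is omitted for brevity, so this is an illustrative pipeline rather than the exact implementation used in this work.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
# Placeholder predictor table: Gini, M20, C, A, S, n, A_s for 1000 galaxies.
X = rng.normal(size=(1000, 7))
y = rng.integers(0, 2, size=1000)  # 1 = merger, 0 = nonmerger

clf = make_pipeline(
    StandardScaler(),
    # All pairwise interaction terms between the seven predictors.
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    # Fiducial major merger prior f_merg = 0.1 (class order is [0, 1]).
    LinearDiscriminantAnalysis(priors=[0.9, 0.1]),
)
scores = cross_val_score(clf, X, y, cv=10)  # k-fold cross-validation
print(scores.mean(), scores.std())          # accuracy and its spread over folds
```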
For complete details, including the full mathematical formulation of the LDA, see N19 and N21. There are two key differences between the imaging LDA presented in N19 and the classification we use in this work, which result in slightly different merger classifications and performance metrics. First, updates to the scikit-learn software (we now use version 0.24.2; Pedregosa et al., 2011), including bug fixes and enhancements to the modeling logic, result in classifications with different coefficients, terms, and slightly different performance metrics. Second, the training sets are slightly different from those used in N19; in N21 and here, we use the predictor values from all of the simulated snapshots that have measured values of imaging and kinematic predictors.

After rerunning the analysis from N19 with all of the above updates, the major merger classification is:

\[\begin{split}\text{LD1}_{\text{major}}=&\ 13.9\,A_{s}-8.0\,C*A_{s}-5.4\,A*A_{s}+5.1\,A\\ &+4.8\,C-2.9\,Gini*A_{s}+0.6\,M_{20}*A\\ &+0.4\,M_{20}*n+0.4\,Gini-0.6\end{split}\tag{1}\]

The minor merger classification is:

\[\begin{split}\text{LD1}_{\text{minor}}=&-10.4\,C*A_{s}+8.8\,C*A-7.8\,Gini*S-7.8\,A\\ &+6.6\,A_{s}+6.5\,Gini*M_{20}-6.0\,M_{20}*S\\ &-5.7\,M_{20}*A_{s}+4.9\,S-4.4\,M_{20}+3.7\,Gini*C\\ &-2.9\,S*n-1.0\,n*A_{s}-0.2\,A*S-0.7\end{split}\tag{2}\]

Table 1: Six galaxies from the table of predictor values alongside their identification letters (A-F) that will be used throughout this paper. The SDSS ObjID is the photometric object ID from DR16; the predictor columns (\(Gini\) through \(A_{s}\)) list the pre-standardized predictor values; S/N is the average S/N for the area of the galaxy enclosed by the segmentation mask; the flag columns have a value of 1 when activated.

| SDSS ObjID | \(Gini\) | \(M_{20}\) | \(C\) | \(A\) | \(S\) | \(n\) | \(A_{s}\) | S/N | low S/N | outlier predictor | segmap |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1237665179521187863 (A) | 0.54 | -2.15 | 3.62 | -0.04 | -0.01 | 1.49 | 0.13 | 9.98 | 0 | 0 | 0 |
| 1237661852010283046 (B) | 0.69 | -0.96 | 3.59 | 0.22 | 0.01 | 1.32 | 0.78 | 12.49 | 0 | 0 | 0 |
| 1237648720718463286 (C) | 0.56 | -1.0 | 3.66 | 0.43 | -0.16 | 0.58 | 0.89 | 6.4 | 0 | 0 | 0 |
| 1237662306186428502 (D) | 0.56 | -2.16 | 3.59 | 0.14 | 0.02 | 1.38 | 0.57 | 16.35 | 0 | 0 | 0 |
| 1237653589018018166 (E) | 0.56 | -2.07 | 3.53 | 0.02 | 0.01 | 1.47 | 0.40 | 14.31 | 0 | 0 | 0 |
| 1237654383587492073 (F) | 0.58 | -0.81 | 1.61 | 0.54 | 0.06 | 0.97 | 0.12 | 54.27 | 0 | 0 | 0 |

Figure 1: Distributions of predictor values for the full SDSS DR16 sample of galaxies (top, grey distribution), the simulated galaxies (black contours), and the selected non-flagged sample of galaxies (color distribution). We show six example galaxies with predictor values and segmentation maps (bottom) and overplot the locations of these galaxies on the top panels. All galaxy image panels are 80″ × 80″.

We present the four leading coefficients for each LDA run alongside their uncertainties in Table 2. We quantify the observability timescales and performance metrics for the LDA classifications using the cross-validation set of simulated mergers.
We measure the observability timescale by applying each classification to the corresponding simulation and determining the length of time over which the average LD1 value for consecutive snapshots is greater than zero. The observability timescale of the major/minor merger classifications is 2.31/5.36 Gyr. It is important to emphasize that the observability timescale is a performance metric, measured by applying the derived LDA classifications to the simulated images. This is why the observability timescales of the early and late stage classifications do not sum to the observability timescale of the pre-coalescence classification.

Accuracy (\(A\)) is the fraction of true positive (TP) and true negative (TN) classifications relative to all classifications:

\[A=\frac{TP+TN}{TP+TN+FP+FN}\]

where FP are false positive and FN are false negative classifications. Precision (\(P\)) quantifies the fraction of true positive classifications relative to all positive classifications:

\[P=\frac{TP}{TP+FP}\]

Recall (\(R\)), also known as the completeness, quantifies the ability of the classifier to retrieve mergers:

\[R=\frac{TP}{TP+FN}\]

The F1 score is the harmonic mean of precision and recall:

\[F1=\frac{2TP}{2TP+FN+FP}\]

The major merger combined classification has an accuracy of 0.86, a precision of 0.96, and a recall of 0.83. The minor merger combined classification has an accuracy of 0.77, a precision of 0.93, and a recall of 0.63. We present these performance metrics and the observability timescales for all classifications in Table 3.
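These four statistics follow directly from the confusion-matrix counts. As a sanity check, the short sketch below reproduces the combined major-merger metrics from Table 3; the counts are illustrative values chosen to match those metrics, not the actual cross-validation counts.

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)             # completeness
    f1 = 2 * tp / (2 * tp + fn + fp)    # harmonic mean of P and R
    return accuracy, precision, recall, f1

# Illustrative counts that reproduce A=0.86, P=0.96, R=0.83, F1=0.89:
print(classification_metrics(tp=830, tn=429, fp=35, fn=170))
```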
### Classifying by Merger Stage

In N19, the classification is applied to the entire duration of the merger (from early to post-coalescence stages). In this work, we further split the classification into multiple different stages (pre-coalescence, further subdivided into early and late, and post-coalescence). Splitting the classification by merger stage will enable other work to address if and how galaxy mergers drive time-dependent evolutionary processes.

Our definitions of merger stage are based on previous theoretical and observational work that defines merger stages using both morphological and evolutionary (i.e., star formation) properties. Moreno et al. (2015) establish a sequence of merger stages for the pre-coalescence stages of the merger based on triggered star formation: a) incoming, b) first pericentric passage, c) apocenter, and d) second approach. Other theoretical work to identify mergers in cosmological simulations such as IllustrisTNG is limited in temporal sampling and tends to distinguish more coarsely between pre-coalescence and post-coalescence mergers, where the time since merger varies from study to study (Hani et al., 2020; Bickley et al., 2021). Observational work most often defines merger stage based on projected separation. Ellison et al. (2013) distinguish between pre- and post-coalescence mergers in a sample of 10,800 spectroscopic close pairs in SDSS, where pre-coalescence mergers have projected separations less than 80 kpc. Pan et al. (2019) define a merger sequence based on morphological disturbance and separation: 1) well-separated pairs without disturbance, 2) close pairs with strong interaction signs, 3) well-separated pairs with weak distortion (apocenter), and 4) strong distortion (final coalescence) and single galaxies with morphological remnants from merging (post-mergers).

We divide our classification into pre- and post-coalescence stages to match the methodology of cosmological merger identification schemes. The early and late stages roughly correspond to the stages from Moreno et al. (2015) and Pan et al. (2019) of first pericentric passage and apocenter (early) and final approach (late). We also implement a sliding timescale for the definition of the post-coalescence stage; we use the time cutoff of 0.5 Gyr after coalescence and then additionally implement a time cutoff of 1 Gyr. The 1 Gyr cutoff is motivated by the work of Bickley et al. (2021), who find that the morphology of IllustrisTNG galaxies is disturbed for up to 2.5 Gyr following a merger.

Figure 2: Snapshots from the \(\mu_{*}=0.5\) major merger (top row) and the \(\mu_{*}=0.1\) minor merger (bottom row), with merging snapshots in pink and orange, respectively, and non-merging snapshots in blue. The non-merging snapshots include the pre-merger snapshots (before first pericentric passage), the post-merger snapshots (\(>0.5\) Gyr after coalescence), and the matched isolated galaxies (right column), which are matched to the initial conditions of each merger simulation in mass and gas fraction.

To reconstruct the separate classifications, we eliminate all merger snapshots that are not from the stage in question. For example, for the major merger combined early stage classification, we eliminate all of the merger snapshots belonging to the late and post-coalescence stages, but we retain the pre- and post-merger snapshots as examples of nonmergers. In this way, we train the classification to recognize traits of a specific stage while discouraging it from learning a strict cutoff between stages. It is important to mention that since the merger stage classifications are all trained separately, there may be overlap between stages, i.e., certain galaxies will have high probabilities of belonging to multiple merger stages. We discuss how to directly compare \(p_{\rm merg}\) values from different classifications in §4.3 and quantify this overlap in §4.7. We present the accuracy, precision, recall, and F1 score for the new classifications in Table 3 and the four leading coefficients for each new classification in Table 2.

### Classifying SDSS image cutouts

The next step is to measure the LD1 values for each SDSS galaxy and to assign each galaxy a probability of merging for each merger classification. To calculate LD1 for each galaxy, we standardize the measured predictor values using the mean and standard deviation for each classification. We then determine the value of LD1 for each galaxy by summing the products of the coefficients and standardized predictor values for each classification. We present a schematic of this process in Figure 3, which demonstrates how it works for one example image with the major merger combined classification.

We assign a probability of merging to each galaxy. From N19, the probability of a galaxy belonging to the merging class is:

\[p_{\rm merg}=\frac{e^{\delta_{\rm merg}}}{e^{\delta_{\rm merg}}+e^{\delta_{\rm nonmerg}}}\tag{3}\]

where \(\delta_{\rm merg}\)/\(\delta_{\rm nonmerg}\) is the score of a galaxy for the merging/nonmerging class. Linear discriminant axis 1, or LD1, can be written in terms of \(\delta_{\rm merg}\) and \(\delta_{\rm nonmerg}\):

\[{\rm LD1}=\delta_{\rm merg}-\delta_{\rm nonmerg}\tag{4}\]

where the decision boundary is at LD1 = 0; if \(\delta_{\rm merg}>\delta_{\rm nonmerg}\), the galaxy is classified as merging.
Table 2: The four leading coefficients and terms of each classification. The LD1 value for each classification is constructed by multiplying each standardized predictor value by its coefficient and summing all terms. We distinguish between the post-coalescence classifications with 0.5 Gyr and 1.0 Gyr cutoffs after coalescence.

| Classification | Term 1 | Term 2 | Term 3 | Term 4 |
|---|---|---|---|---|
| All Major Mergers | 13.9 ± 1.0 \(A_{s}\) | -8.0 ± 0.7 \(C*A_{s}\) | -5.4 ± 0.4 \(A*A_{s}\) | 5.1 ± 0.4 \(A\) |
| Major, pre-coalescence | 10.0 ± 0.6 \(A_{s}\) | 7.5 ± 0.2 \(A\) | -6.3 ± 0.2 \(A_{s}\)* | -6.1 ± 0.5 \(A_{s}\)* |
| Major, early stage | 9.1 ± 0.4 \(A_{s}\) | -5.8 ± 0.4 \(A_{s}\)* | 5.3 ± 0.6 \(C\) | 4 ± 0.5 \(A\) |
| Major, late stage | -8.9 ± 0.8 \(A_{s}\)* | 7.9 ± 0.4 \(A_{s}\) | 7.2 ± 0.7 \(Gini\)* | 1.2 ± 0.2 \(A\)* |
| Major, post-coalescence (0.5) | -10.8 ± 0.9 \(A_{s}*M_{20}\) | 10.1 ± 1.1 \(C*Gini\) | -10.0 ± 1.1 \(A_{s}\)* | 5.0 ± 0.9 \(Gini*M_{20}\) |
| Major, post-coalescence (1.0) | -14.3 ± 0.9 \(C\)* | 11.7 ± 1.4 \(C\) | 5.9 ± 0.9 \(Gini\)* | -1.3 ± 0.2 \(A_{s}*M_{20}\) |
| All Minor Mergers | -10.4 ± 1.9 \(C*A_{s}\) | 8.8 ± 0.7 \(C*A\) | -7.8 ± 3.3 \(Gini*S\) | -7.8 ± 0.6 \(A\) |
| Minor, pre-coalescence | -31.3 ± 7.7 \(Gini\)* | -28.6 ± 6.0 \(Gini\)* | 27.4 ± 5.7 \(n\) | 21.0 ± 2.8 \(C\) |
| Minor, early stage | 20.8 ± 3.6 \(C\) | -20.5 ± 5.4 \(Gini\)* | -18.0 ± 2.2 \(n*M_{20}\) | -16.7 ± 2.2 \(n*C\) |
| Minor, late stage | 10.1 ± 1.4 \(A_{s}\)* | -5.3 ± 1.0 \(A_{s}*Gini\) | 1.9 ± 0.1 \(A_{s}\)* | - |
| Minor, post-coalescence (0.5) | 2.3 ± 0.2 \(A_{s}\) | - | - | - |
| Minor, post-coalescence (1.0) | 2.0 ± 0.1 \(Gini\) | -1.1 ± 0.1 \(A\)* | 0.6 ± 0.1 \(n\) | - |

Table 3: Accuracy, precision, recall, F1 score, and observability timescale for each classification, measured from the cross-validation sample of simulated mergers.

| Classification | Accuracy | Precision | Recall | F1 | \(t_{\rm obs}\) (Gyr) |
|---|---|---|---|---|---|
| All Major Mergers | 0.86 | 0.96 | 0.83 | 0.89 | 2.31 |
| Major, pre-coalescence | 0.87 | 0.96 | 0.83 | 0.89 | 2.16 |
| Major, early stage | 0.86 | 0.95 | 0.78 | 0.86 | 1.72 |
| Major, late stage | 0.94 | 0.97 | 0.84 | 0.90 | 0.83 |
| Major, post-coalescence (0.5) | 0.84 | 0.89 | 0.65 | 0.75 | 0.40 |
| Major, post-coalescence (1.0) | 0.90 | 0.94 | 0.85 | 0.89 | 1.26 |
| All Minor Mergers | 0.77 | 0.93 | 0.63 | 0.75 | 5.36 |
| Minor, pre-coalescence | 0.80 | 0.89 | 0.71 | 0.79 | 5.75 |
| Minor, early stage | 0.83 | 0.89 | 0.73 | 0.80 | 3.11 |
| Minor, late stage | 0.93 | 0.79 | 0.79 | 0.79 | 5.85 |
| Minor, post-coalescence (0.5) | 0.85 | 0.53 | 0.60 | 0.56 | 0.19 |
| Minor, post-coalescence (1.0) | 0.85 | 0.84 | 0.71 | 0.77 | 0.96 |

Using equation 4, equation 3 can be re-written in terms of LD1:

\[p_{\rm merg}=\frac{1}{1+e^{-{\rm LD1}}}\tag{5}\]

For the 1344677 galaxies in SDSS DR16, we calculate the value of LD1 and the merger probability for the major and minor merger classifications and for all of the stage-specific classifications (early/late/pre-coalescence/post-coalescence). We present these results in §4.1.
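Putting equations (1) and (5) together, the per-galaxy computation reduces to a weighted sum of standardized predictors followed by a logistic mapping. A minimal sketch follows; the standardized values are invented for illustration, whereas the real pipeline standardizes with each classification's training means and standard deviations.

```python
import numpy as np

# Coefficients of the major-merger LD1 (equation 1); keys name the
# standardized predictors and interaction terms.
COEFS = {
    "A_s": 13.9, "C*A_s": -8.0, "A*A_s": -5.4, "A": 5.1, "C": 4.8,
    "Gini*A_s": -2.9, "M20*A": 0.6, "M20*n": 0.4, "Gini": 0.4,
}
INTERCEPT = -0.6

def ld1_major(z):
    """LD1 from a dict of standardized predictor values."""
    total = INTERCEPT
    for term, coef in COEFS.items():
        total += coef * np.prod([z[p] for p in term.split("*")])
    return total

def p_merg(ld1):
    """Equation 5: logistic mapping from LD1 to merger probability."""
    return 1.0 / (1.0 + np.exp(-ld1))

# Hypothetical standardized predictors for one galaxy:
z = {"Gini": 0.2, "M20": -0.5, "C": 0.1, "A": 1.3, "n": -0.4, "A_s": 1.8}
print(p_merg(ld1_major(z)))
```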
### Marginalizing the calculation of the merger fraction over all merger priors

Critical to this paper is a discussion of the merger fraction priors (\(\pi\)) that are incorporated into the calculation of the \(p_{\rm merg}\) values. In N19, we adopt a fiducial merger fraction prior of \(\pi=0.1\) for the major merger classifications and \(\pi=0.3\) for the minor merger classifications, meaning that we expect 10% and 30% of galaxies in the local Universe to be experiencing major and minor mergers, respectively. These priors are based on observations and simulations (e.g., Rodriguez-Gomez et al., 2015; Lotz et al., 2011; Conselice et al., 2009; Lopez-Sanjuan et al., 2009; Shi et al., 2009; Bertone and Conselice, 2009). The fiducial priors are used to measure the LD1 and \(p_{\rm merg}\) values in the previous section.

The choice of this input prior affects the distribution of LD1 and \(p_{\rm merg}\) values for the full sample and therefore also affects the individual values. It is therefore particularly important to consider which \(\pi\) value is used when comparing \(p_{\rm merg}\) values between classifications and when calculating the merger fraction \(f_{\rm merg}\), which is the focus of this paper.

To approach the comparison of \(p_{\rm merg}\) values and the calculation of \(f_{\rm merg}\) in the cleanest and most agnostic (to input prior) way possible, we perform a Bayesian marginalization in which we re-calculate the \(p_{\rm merg}\) values for all possible input priors in the range \(0.05<\pi<0.5\) (we fully justify this range of priors in §4.7). In practice, we redo the previous \(p_{\rm merg}\) calculation for 46 different input priors, returning 46 different \(p_{\rm merg}\) values for each galaxy in SDSS. From these, we calculate the 16th, 50th (median), and 84th percentiles of the posterior distribution for each galaxy, which we present in §4.1. We present the results for the overall merger fraction calculation based on these measurements in §4.7.

### The MergerMonger Suite

We prepare a suite of tools (MergerMonger)\(^{1}\) that applies the LDA method to classify major and minor merging galaxies from optical images. MergerMonger includes four main utilities:

1. GalaxySmelter: a tool for measuring imaging predictors from simulated or observed galaxy images.
2. Classify: a tool that creates the LDA classification using the predictor values from the simulated training set.
3. MergerMonger: a tool that applies the LDA classification to observed galaxies, measuring merger probabilities.
4. Utilities that help with the interpretation of the predictor and probability values for each galaxy.

Footnote 1: https://github.com/beckynevin/MergerMonger

In this work we apply the MergerMonger suite to SDSS \(r-\)band imaging. However, the classification is designed with broader use in mind. The classification can be re-created using new sets of simulated images (i.e., simulated images created to match the specifications of LSST or DESI imaging) or new imaging filters. For example, to apply the classification to LSST images, one could design a set of mock LSST mergers and extract the training data using GalaxySmelter, then use Classify to train a new LDA classification, and finally classify LSST galaxies using MergerMonger.

### Galaxy stellar masses

To measure the stellar masses for the SDSS galaxies, we use the empirical relation from Bell et al.
(2003) that relates the SDSS \(u\), \(g\), \(r\), \(i\), and \(z\) band luminosities and colors to the stellar mass-to-light (M/L) ratio using the k-correction: \(\log_{10}(M/L)=a_{\lambda}+(b_{\lambda}\times{\rm color})\), where the color is in units of AB magnitudes and the luminosity is in solar units. We use the values for the SDSS \(g-r\) color because Du et al. (2019) find that the \(g-r\) color provides an almost unbiased \(M/L\) value for many different galaxy types and regions. We use \(a_{r}=-0.840\) and \(b_{r}=1.654\) from Zibetti et al. (2009), which incorporates a TP-AGB star correction and revised SFHs for bursty galaxies, improving upon the prescription from Bell et al. (2003).

Figure 3: Schematic showing the classification steps for an example galaxy (left). Our first step is to measure the imaging predictor values (top middle). We then standardize these values and plug them into each LD1 formula. We show this (top right) for the major merger classification. The LD1 value for this galaxy is 4.781, which places it to the right of the decision value in the histogram of LD1 values (right). Our final step is to assign each galaxy a probability value (bottom left).

To conduct this calculation, we rely on photometric redshifts, which are available for the full SDSS sample (1035607 available photometric redshifts versus 437094 spectroscopic redshifts). In Appendix A we further explore the differences between using photometric and spectroscopic redshifts to determine the stellar mass. Although there are biases inherent to using the photometric redshifts (especially at low redshift), we find that our results remain unchanged when we measure the merger fraction as a function of redshift (§4.8).

Our method for measuring stellar mass shows good agreement with the SED-based approach of Mendel et al. (2014), which uses a stellar population synthesis approach to measure the stellar mass using SDSS SEDs and Sersic models of the bulge and disk components. We present this comparison in Figure 4, where the mean stellar masses agree above a stellar mass of \(\sim 10^{9}\,M_{\odot}\).

Next, we determine the mass completeness limit as a function of redshift using the technique from Darvish et al. (2015). For each redshift bin\(^{2}\), we compute the lowest stellar mass (\(M_{\rm lim}\)) that could be detected for each galaxy given the magnitude limit of SDSS (\(r=17.77\)): \(\log(M_{\rm lim})=\log(M)+0.4\times(r-17.77)\), where \(r\) is the apparent (rest-frame) \(r\)-band magnitude of each galaxy and \(M\) is the stellar mass. The mass completeness limit at each redshift bin is the mass below which 95% of the limiting masses fall, meaning that only 5% of galaxies would be missed at the lowest mass end of the mass function.

Footnote 2: We use the redshift bins presented in §4.8.

Our final step is to eliminate all galaxies below the mass completeness limit at each redshift bin. We show this process in Figure 5. This reduces our sample by roughly a factor of three, from 958840 photometrically clean galaxies with measured masses to 362216 galaxies in a mass-complete sample. The factor of \(\sim\)3 reduction in sample size induced by the mass completeness correction is similar to the sample reduction in Cebrian & Trujillo (2014), which applies a similar mass completeness correction to the NYU-VAGC catalog of SDSS DR7 galaxies.
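The completeness cut lends itself to a compact implementation. Below is a minimal sketch for one redshift bin, assuming arrays of log stellar masses and apparent \(r\)-band magnitudes; the inputs are randomly generated placeholders.

```python
import numpy as np

def mass_completeness_limit(log_mstar, r_mag, r_limit=17.77, percentile=95.0):
    """Darvish et al. (2015)-style completeness limit for one redshift bin.

    log_mstar : log10 stellar masses of the galaxies in the bin
    r_mag     : their apparent r-band magnitudes
    Returns the log mass below which galaxies start to be missed.
    """
    # Lowest detectable mass for each galaxy given the survey magnitude limit
    log_mlim = log_mstar + 0.4 * (r_mag - r_limit)
    # The bin's completeness limit: 95% of the limiting masses lie below it
    return np.percentile(log_mlim, percentile)

# Hypothetical bin of galaxies:
rng = np.random.default_rng(1)
log_mstar = rng.normal(10.5, 0.5, size=1000)
r_mag = rng.uniform(14.0, 17.77, size=1000)
print(mass_completeness_limit(log_mstar, r_mag))
```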
## 4 Results

We present the classification results in §4.1 and provide a guide for interpreting the predictors that influence the classification in §4.2. We also provide a guide for deciding between merger stages and types in §4.3 and a guide for dealing with cases where by-eye classification and the LDA classification are in conflict in §4.4. We then analyze the properties of the merger sample in §4.5 and compare our results to previous SDSS merger selections in §4.6. We constrain the observed merger fraction using all of the different merger classifications in §4.7 and explore how the major merger fraction varies as a function of galaxy mass and redshift in §4.8. We explore whether S/N or galaxy morphology (bulge-to-total mass ratio and color) are confounding the redshift-dependent major merger fraction in §4.9 and §4.10, respectively. We explore how the minor merger fraction varies as a function of stellar mass and redshift in §4.11. We discuss the influence of contamination of the major and minor merger fraction calculations by mergers of the opposite type (minor and major, respectively) in §4.12. We run numerous sanity checks in §4.13 (more details can be found in Appendix B) to confirm the main result of how the major merger fraction trends with mass and redshift. Finally, we end with a discussion of the importance of mass binning to our result in §4.14, where we find a different result in the absence of mass binning.

Figure 4: Comparing the stellar masses derived from the Mendel et al. (2014) method (x-axis) to those derived using the empirical color method from Zibetti et al. (2009).

Figure 5: Mass completeness as a function of redshift for redshift bins with spacing \(\Delta z=0.02\). For each redshift bin, we determine the 95% completeness limit (pink line) and eliminate all galaxies below this point. For the distribution of masses at each redshift bin, see Appendix A.

Table 4: Classification results for the six galaxies presented in Figure 1. Here we provide the LD1 value and corresponding \(p_{\rm merg}\) value for the major merger classification. We also list the three leading (most influential) terms in the classification and the contribution from each term, which is the product of the standardized predictor value and the LD1 coefficient for the term. We bold the \(p_{\rm merg}\) values where a galaxy is classified as a merger (\(p_{\rm merg}>0.5\)). The online-available tables provide these values for all six merger classifications (major, major pre-coalescence, major post-coalescence, minor, minor pre-coalescence, and minor post-coalescence).

| ID | LD1 | \(p_{\rm merg}\) | CDF | Term 1 | Contrib. 1 | Term 2 | Contrib. 2 | Term 3 | Contrib. 3 | segmap |
|---|---|---|---|---|---|---|---|---|---|---|
| 1237665179521187863 (A) | -1.37 | 0.016 | 0.510 | \(A_{s}\) | -11.3 | \(A\) | -3.8 | \(C\) | -0.5 | 0 |
| 1237661852010283046 (B) | 4.781 | **0.929** | 0.919 | \(A_{s}\) | 31.8 | \(A\) | 5.2 | \(Gini\) | 0.9 | 0 |
| 1237648720718463286 (C) | 2.081 | **0.899** | 0.839 | \(A_{s}\) | 39.4 | \(A\) | 12.4 | \(n*M_{20}\) | 0.5 | 0 |
| 1237662306186428502 (D) | 4.235 | **0.986** | 0.907 | \(A_{s}\) | 17.9 | \(A\) | 2.2 | \(n*M_{20}\) | 0.2 | 0 |
| 1237653589018018166 (E) | 1.784 | **0.956** | 0.830 | \(A_{s}\) | 6.9 | \(A_{s}*A\) | 2.2 | \(n*M_{20}\) | 0.2 | 0 |
| 1237654383587492073 (F) | 6.078 | **0.663** | 0.792 | \(A\) | 16.0 | \(A_{s}*C\) | 9.1 | \(A_{s}*Gini\) | 2.4 | 0 |
### LDA classification results

Here we present three data products:

1. For each galaxy in the 1,344,677-galaxy DR16 sample, we provide all of the predictor values and the flag values. This table was previously described in §2 and illustrated in Table 1.
2. For each merger classification, we provide the fiducial LD1, \(p_{\rm merg}\), and CDF (described below) values for each galaxy in the 1,344,677-galaxy SDSS DR16 sample, accompanied by explanatory information such as the most important (leading) terms in the classification and the coefficients associated with these leading terms. Our intent is that these tables can be used to ascertain why a galaxy is classified as a merging or non-merging galaxy according to the different fiducial classifications. We describe how this explanatory analysis might work in §4.2. Table 4 presents the major merger classification results for the six galaxies from Figure 1.
3. We also provide a table (Table 5) that presents the 16th, 50th, and 84th percentiles of the posterior \(p_{\rm merg}\) distribution (and the accompanying CDF value) for all photometrically clean galaxies (958,840) from the marginalization analysis described in §3.4. This single table includes these results for all of the merger classifications. Using this table, the user can directly compare \(p_{\rm merg,50}\) values across different classifications.

In Figure 6, we present histograms of the fiducial LD1 values and the corresponding \(p_{\rm merg}\) values for the training set of simulated galaxies and for the SDSS galaxies classified by the major and minor merger classifications. Since the LDA technique is designed to find the hyperplane of maximal separation between two populations, the distributions of probability values in the bottom panels of Figure 6 peak very near 0 and 1. This makes direct interpretation of these probability values difficult. We therefore provide a complementary cumulative distribution function (CDF) analysis (which is part of data products 2 and 3) to compare individual \(p_{\rm merg}\) values to the \(p_{\rm merg}\) values of all SDSS galaxies for a given classification. For instance, if we examine the major merger classification in Table 4, galaxy A has a \(p_{\rm merg}\) value of 0.016, which corresponds to a CDF value of 0.510, meaning that 51% of galaxies in SDSS have a lower \(p_{\rm merg}\) value. In Table 6, we list the \(p_{\rm merg}\) values that correspond to a range of CDF values (including the 5%, 10%, 90%, and 95% levels) for the fiducial merger classifications.

Finally, we provide visual examples of a randomly selected sample of merging galaxies (Figure 7) and non-merging galaxies (Figure 8) according to the fiducial major merger LDA classification.
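Because the \(p_{\rm merg}\) distribution is so strongly bimodal, the CDF (the percentile rank within the SDSS sample) is often the more interpretable quantity. A minimal sketch of this look-up follows, with a synthetic, deliberately bimodal stand-in for the sample's \(p_{\rm merg}\) values.

```python
import numpy as np

def cdf_of_pmerg(p_merg_all, p_merg_galaxy):
    """Fraction of the sample with a lower p_merg than this galaxy."""
    p_merg_all = np.sort(p_merg_all)
    return np.searchsorted(p_merg_all, p_merg_galaxy) / len(p_merg_all)

# Synthetic stand-in for the sample's p_merg values, peaked near 0 and 1:
rng = np.random.default_rng(2)
sample = rng.beta(0.1, 0.1, size=100_000)
print(cdf_of_pmerg(sample, 0.016))
```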
### A guide for interpreting classification results

The LDA classification method was designed with the interpretability of individual results as one of its central goals. In this section, we discuss how to use the additive linear terms that compose LD1 to understand why a galaxy is classified as merging or non-merging. To assist users with this interpretation, we provide Table 4, which lists the \(p_{\rm merg}\) and CDF values for the major merger classification for individual galaxies alongside the most influential predictors and coefficients.

We include a utility within MergerMonger that calculates CDF values for \(p_{\rm merg}\) values and vice versa. This is useful if the user wants to create a 'superclean' merger sample that has minimal non-merger contamination. They could do this either by defining a CDF threshold or by deciding on a \(p_{\rm merg}\) threshold (i.e., \(p_{\rm merg}>0.95\)) and using Table 6 or the MergerMonger utility to determine the corresponding CDF or \(p_{\rm merg}\) value. It is then possible to re-run the LDA classifications using MergerMonger with a different \(p_{\rm merg}\) value as the threshold to identify mergers.

We also provide a diagnostic tool within MergerMonger (find_galaxy.py) that accepts one or more SDSS Object ID(s) as input. This utility then presents the predictor values, the most influential predictors in the classification, and the classification results in a diagnostic diagram that includes the individual galaxy image and segmentation map. We show an example of two diagnostic diagrams in Figure 9 for the major (top) and minor (bottom) merger classifications for galaxy E from Figure 1. This galaxy is classified as a merger by both the major and minor merger fiducial classifications, with high LD1 and corresponding \(p_{\rm merg}\) values in the upper left informational panel. The lower panel on the left-hand image lists the three leading terms and their corresponding contributions to the value of LD1; here, shape asymmetry and asymmetry are important predictors for both classifications. The inset informational panel for the right-hand segmentation maps lists all of the pre-standardized predictor values.

These diagnostic diagrams can help the user interpret why the classifications have determined that this galaxy is likely to be a merger. Looking first at the major merger panels, shape asymmetry followed by the \(A_{s}*A\) cross term are the most influential terms. In the right panel, the asymmetry for this galaxy is low while the shape asymmetry is high. This is due to the low surface brightness of the shell feature. Since the coefficient of the \(A_{s}\) term is positive in Equation 1, this boosts the LD1 score. The coefficient of the \(A_{s}*A\) term is negative in Equation 1. This coefficient is multiplied by the standardized \(A_{s}\) and \(A\) values, which are positive and negative, respectively (recall, the \(A\) value is relatively low). The net result is a positive contribution to LD1, meaning that this galaxy is even more likely to be detected as a merger. In this case, the \(A_{s}*A\) term allows the LDA to better distinguish between asymmetric bright features, such as spiral arms, and low surface brightness asymmetric features that are more likely to be caused by a merger.

For the minor merger classification, the \(M_{20}*A_{s}\) cross term is influential; this term has a negative coefficient in Equation 2, so for this term to have a large positive influence, either the standardized value of \(M_{20}\) or \(A_{s}\) must be very negative (i.e., relatively low for SDSS galaxies). Here, this is because \(M_{20}\) is quite negative, meaning that the light is concentrated. By eye, this galaxy looks like a post-coalescence merger with a shell from the merger event; the minor merger technique relies both upon the high concentration (also measured by \(M_{20}\)) and the shell feature to identify it as a merger. This galaxy and others like it demonstrate that the LDA classification succeeds in the case of concentrated early-type galaxies.
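The interpretive logic itself is simple to reproduce: each term's contribution is its LD1 coefficient times the (product of the) standardized predictor values, and ranking the absolute contributions recovers the leading terms. A minimal sketch with hypothetical standardized values follows; this is an illustration, not the find_galaxy.py implementation.

```python
# Equation (1) coefficients and hypothetical standardized predictor values
# (same conventions as the earlier sketch):
COEFS = {"A_s": 13.9, "C*A_s": -8.0, "A*A_s": -5.4, "A": 5.1, "C": 4.8,
         "Gini*A_s": -2.9, "M20*A": 0.6, "M20*n": 0.4, "Gini": 0.4}
z = {"Gini": 0.2, "M20": -0.5, "C": 0.1, "A": 1.3, "n": -0.4, "A_s": 1.8}

def leading_terms(coefs, zvals, n=3):
    """Rank terms by |coefficient * standardized value|, i.e. their
    contribution to LD1, and return the n most influential."""
    contribs = {}
    for term, coef in coefs.items():
        value = 1.0
        for predictor in term.split("*"):
            value *= zvals[predictor]
        contribs[term] = coef * value
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)[:n]

print(leading_terms(COEFS, z))
```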
### A guide for distinguishing between merger types and stages

Here we discuss the overlap between different merger stages and types and how to directly compare \(p_{\rm merg}\) values across different classifications.

Directly comparing \(p_{\rm merg}\) values between the fiducial runs is not encouraged, especially between the minor and major classifications. These different classifications were prepared assuming different priors, meaning that the distribution of \(p_{\rm merg}\) values will be affected by this choice. We also do not recommend directly comparing the \(p_{\rm merg}\) values from Table 4 between different stages of the same merger type (i.e., early versus late stage major mergers) because these tables assume the same fiducial merger prior. As we will show in §4.7, this is not a safe assumption. _Best practice is therefore to use the marginalized \(p_{\rm merg}\) values from Table 5 to decide which stage or which merger type is most likely for a given galaxy._ This table includes the \(p_{\rm merg}\) values that correspond to the 16th, 50th, and 84th percentiles of the posterior distribution of \(p_{\rm merg}\) for each galaxy for the major, minor, and pre- and post-coalescence (1.0 Gyr) stages. The online-available table also includes the early, late, and post-coalescence (0.5) stage results.

Table 6: \(p_{\rm merg}\) values that correspond to different thresholds of the CDF for the fiducial merger classifications. This table is provided to enable user interpretation of individual \(p_{\rm merg}\) values, which vary over many orders of magnitude; their interpretation can be assisted by careful consideration of the CDF values. For instance, if a galaxy has a \(p_{\rm merg}\) value of 0.01 for the major merger classification, this corresponds to a CDF value of 0.5, meaning that about half of our SDSS sample has a lower \(p_{\rm merg}\) value.

| Classification | CDF 0.01 | 0.05 | 0.1 | 0.25 | 0.5 | 0.75 | 0.9 | 0.95 | 0.99 |
|---|---|---|---|---|---|---|---|---|---|
| Major merger | 3.2e-8 | 1.6e-7 | 3.2e-7 | 7.4e-7 | 0.01 | 0.39 | 0.9891260 | 0.99999353 | 0.999998720 |
| Minor merger | 5.6e-8 | 2.8e-7 | 5.7e-7 | 0.02 | 0.24 | 0.79 | 0.996 | 0.9999950 | 0.99999989 |

Figure 6: Distribution of LD1 values for the simulated suite (top panels) and the SDSS sample (middle panels), and the corresponding distribution of \(p_{\rm merg}\) values for the SDSS sample (bottom panels), for the major (left) and minor (right) merger classifications. In all cases, the y-axis is the number of galaxies. In the bottom panels, we zoom in on the distributions, and the inset numbers give the number of galaxies in the largest bin.

Here we walk the user through the process of distinguishing between merger types and stages using Table 5 and the compare_classifications.py utility within MergerMonger, which plots an image of a galaxy and compares the \(p_{\rm merg}\) values between different classifications. Using galaxy E from Figure 1 and Table 5, we show a diagnostic diagram in Figure 10, created using compare_classifications.py, as an informative example of how to decide between merger type and stage for an individual galaxy. The compare_classifications.py utility decides the most likely classifications in a hierarchical manner; first, it determines if the galaxy is more likely to be a major or
minor merger by directly comparing the \(p_{\rm merg,50}\) values from each classification. The utility then decides whether the galaxy is more likely to be a pre-coalescence merger or a post-coalescence (1.0 Gyr) merger. It does this for both the major and minor classifications. All of these rankings occur regardless of whether the \(p_{\rm merg,50}\) values are greater than 0.5. For galaxy E, using the major and minor merger \(p_{\rm merg,50}\) values, we are able to conclude that it is more likely a minor merger. We can then further distinguish between the minor merger stages, finding that it is more likely a pre-coalescence minor merger.

In general, we recommend following the hierarchical framework of compare_classifications.py; first decide between the all-inclusive major and minor merger classifications and then decide between the sub-stages of each (a sketch of this hierarchical logic follows below). We also recommend using the post-coalescence (1.0) classification as opposed to the post-coalescence (0.5) classification, which has lower performance statistics due to its short observability timescale. If the use case is to identify all early-stage major and minor mergers, then we recommend creating a new process using the code framework of compare_classifications.py that requires that \(p_{\rm merg,50}\) from the early stage classifications be greater than the \(p_{\rm merg,50}\) values corresponding to the late and post-coalescence (1.0) stage classifications.

Table 5: Marginalized \(p_{\rm merg}\) values and accompanying CDF values for the six galaxies from Figure 1. We list the \(p_{\rm merg}\) values corresponding to the 16th, 50th (median), and 84th percentiles of the marginalized posterior \(p_{\rm merg}\) distribution for each galaxy using each classification, in the format \(p_{\rm merg,16}/p_{\rm merg,50}/p_{\rm merg,84}\) (CDF), where the CDF value in parentheses corresponds to the 50th percentile. For each galaxy, we list only the combined minor/major merger classifications and the pre- and post-coalescence (1.0) results. In the online-available table, we also include the early, late, and post-coalescence (0.5) results, and each of the 16/50/84 percentile values is its own column.

| ID | Type | All | Pre-coalescence | Post-coalescence (1.0) |
|---|---|---|---|---|
| 1237648720718463286 | Major | 0.84/0.99/1.0 (0.96) | 0.67/0.88/0.99 (0.85) | 0.01/0.1/1.0 (0.99) |
| | Minor | 0.00/0.12/1.0 (0.51) | 0.00/0.04/0.84 (0.44) | 0.46/1.0/1.0 (0.98) |
| 1237653589018018166 | Major | 0.79/0.88/0.92 (0.89) | 0.81/0.89/0.94 (0.85) | 0.88/0.97/0.99 (0.83) |
| | Minor | 0.88/0.95/0.98 (0.88) | 0.88/0.96/0.99 (0.87) | 0.74/0.93/1.0 (0.74) |
| 1237654383587492073 | Major | 0.52/1.01/0.98 (0.96) | 0.96/1.0/1.0 (0.97) | 0.00/0.00/0.00 (0.00) |
| | Minor | 0.00/0.09/1.0 (0.18) | 0.00/0.1/0.1 (0.17) | 0.1/0.89/1.0 (0.7) |
| 1237661852010283046 | Major | 0.93/0.98/0.99 (0.94) | 0.99/1.0/1.0 (0.94) | 0.02/1.01/0.1 (0.92) |
| | Minor | 0.04/0.89/1.0 (0.85) | 0.33/0.98/1.0 (0.89) | 0.19/1.0/1.0 (1.0) |
| 1237662306186428502 | Major | 0.99/1.0/1.0 (0.98) | 0.99/1.0/1.0 (0.95) | 0.98/1.0/1.0 (0.93) |
| | Minor | 0.78/0.98/1.0 (0.91) | 0.71/0.99/1.0 (0.91) | 0.63/0.98/1.0 (0.84) |
| 1237665179521187863 | Major | 0.03/0.09/0.17 (0.56) | 0.01/0.02/0.07 (0.51) | 0.29/0.63/0.76 (0.68) |
| | Minor | 0.13/0.36/0.56 (0.67) | 0.18/0.37/0.57 (0.68) | 0.17/0.46/0.62 (0.55) |
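For reference, the hierarchical decision just described can be distilled into a few lines. This is a simplified sketch of the logic; the dictionary keys are our own shorthand, not the column names used by compare_classifications.py. The input values are galaxy E's median probabilities from Table 5.

```python
def most_likely_merger(p50):
    """Hierarchical decision mirroring compare_classifications.py:
    first major vs. minor (by p_merg,50), then pre- vs. post-coalescence."""
    merger_type = "major" if p50["major"] > p50["minor"] else "minor"
    is_merger = max(p50["major"], p50["minor"]) > 0.5
    stage = ("pre-coalescence"
             if p50[f"{merger_type}_pre"] > p50[f"{merger_type}_post_1.0"]
             else "post-coalescence (1.0)")
    return is_merger, merger_type, stage

# Galaxy E's p_merg,50 values from Table 5:
p50_E = {"major": 0.88, "minor": 0.95,
         "major_pre": 0.89, "major_post_1.0": 0.97,
         "minor_pre": 0.96, "minor_post_1.0": 0.93}
print(most_likely_merger(p50_E))  # (True, 'minor', 'pre-coalescence')
```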
In this case, we recommend comparing the stages of the major/minor merger classification directly to one another (i.e., major merger early is compared to major merger late and post-coalescence). We also provide the 16th and 84th percentile values if the user wants to develop a more conservative sample; i.e., requiring that \(p_{\rm merg,major,16}>p_{\rm merg,minor,84}\) would be a more conservative way to compare the classifications. However, there is significant overlap between different classification samples when using the full range (16th to 84th percentiles), so we recommend using the 50th percentile (median) values for simplicity. For instance, in Figure 10, if we were to use the more conservative technique, all of the classifications and stage-specific classifications would overlap.

Note that there is overlap between stages and/or merger types; i.e., there will be many galaxies that have \(p_{\rm merg,50}\) values greater than 0.5 for multiple different classifications. We discuss this overlap in more detail in §4.7, where we measure the merger fraction.

### Interpreting cases where the LDA classification disagrees with by-eye classification

We acknowledge that, as with any merger identification approach that relies on imaging predictors, the individual classifications may disagree with by-eye decisions. We therefore recommend that users working with a relatively small sample also examine the classifications by eye to identify potential misclassifications. A failure mode of the LDA major merger combined classification, for instance, is classifying equal-mass major mergers that happen to be in a symmetric configuration as non-merging. This happens when the merging galaxies also have a low overall concentration.

Figure 7: Merging galaxies (\(p_{\rm merg}>0.5\)) according to the major merger LDA technique. The inset panels list the LD1 value and its accompanying \(p_{\rm merg}\) value and CDF value. All panels are 80″ × 80″.

Figure 8: Non-merging galaxies (\(p_{\rm merg}<0.5\)) according to the major merger LDA technique. The inset panels are described in Figure 7.
We therefore take the approach of checking for large systematic issues by investigating the global properties of the merger samples. We carry out this analysis in two parts: first, here we compare the properties of the (mass-complete) parent sample to those of the merger samples. Second, in SS4.6, we will compare the properties of the merger samples to those of other merger selection techniques. In Figure 11 we compare the probability density functions (pdfs) for the major (pink) and minor (yellow) merger samples to that of the parent SDSS sample (white) using average S/N, \(r-\)band magnitude, color (\(g-r\)), stellar mass, and redshift. The pdfs are normalized so that all bins from a given distribution sum to a value of one. The mergers have properties that span the full range of properties of the parent distribution. This is a major success when we consider that our training sample of galaxies was limited in these spaces. For instance, the training set of galaxies spanned \(3.9-4.7\times 10^{10}M_{\odot}\) in stellar mass and all galaxies in the training set had their surface brightnesses and apparent sizes adjusted to a redshift of \(z=0.03\). The fact that the LDA techniques identify mergers over a large range in surface brightness, stellar mass, and redshift indicates that the LDA method is successfully able to adjust to a wider span of galaxy properties. Furthermore, we run two-sample Kolmogorov-Smirnov (KS) tests to compare the cumulative distribution functions (constructed from the pdfs) for each property and find that the distributions are statistically indistinguishable. Specifically, we are unable to reject the KS null hypothesis (that the distributions are identical) when we compare the parent distribution to the major and minor merger selection and when we compare the major and minor merger distributions. The implication is that while the major and minor merger classifications are using different imaging properties to identify mergers, they are not significantly biased in any of these properties. This is a massive success of the method; previous studies have uncovered significant biases, especially related to S/N. For instance, Bickley et al. (2021) train a Convolutional Neural Network (CNN) to identify post-merger galaxies in Illustris TNG100. When they test the performance on galaxies in the Canada-France Imaging Survey, they find a deficit of very faint galaxies in the post-merger sample. Their merger technique is slightly biased towards identify more massive, brighter, and higher redshift galaxies (due to the volume-limited nature of the survey, more massive galaxies are more likely to appear at higher redshift). Despite the KS test revealing that the distributions are statistically indistinguishable, we do notice some slight by-eye differences. The major and minor classifications have slight excesses at low (brighter) \(r-\)band magnitudes compared to the parent distribution. To quantify this, we measure the offset in the median value of each major/minor distribution compared to the parent distribution and find values of \(\Delta r=0.11/0.12\), where the major and minor merger distributions are slightly brighter than the parent distribution. The distributions also differ at low redshift, where the major and minor merger distributions tend towards lower redshift values (\(\Delta z=0.002/0.005\)). In terms of mass, the major mergers tend to have higher masses (\(\Delta logM_{*}(M_{\odot})=0.06\)). 
Despite the KS test revealing that the distributions are statistically indistinguishable, we do notice some slight by-eye differences. The major and minor classifications have slight excesses at low (brighter) \(r-\)band magnitudes compared to the parent distribution. To quantify this, we measure the offset in the median value of each major/minor distribution compared to the parent distribution and find values of \(\Delta r=0.11/0.12\), where the major and minor merger distributions are slightly brighter than the parent distribution. The distributions also differ at low redshift, where the major and minor merger distributions tend towards lower redshift values (\(\Delta z=0.002/0.005\)). In terms of mass, the major mergers tend to have higher masses (\(\Delta\log M_{*}\,(M_{\odot})=0.06\)).

The brighter major mergers constitute two populations: one is more massive and at higher redshift, while the other is less massive and at lower redshift. Both of these populations have slightly lower S/N ratios than the parent sample. These properties could reflect a slight bias for the merger classifications to identify galaxies with lower S/N ratios as mergers, which is the opposite of the bias identified in work such as Bickley et al. (2021). We investigate this potential bias in more depth in §4.9, where we show that the merger fraction does increase with decreasing S/N when we control for mass and redshift. However, we also show in that section that this trend does not change our finding of a decreasing merger fraction with increasing redshift.

### Properties of LDA mergers compared to previous merger samples in SDSS

In order to better understand the biases of our technique, we compare the mergers selected using the LDA major merger classification with those from two large SDSS merger samples: GalaxyZoo and the Ackermann et al. (2018) technique (from here on, A18).

First, we compare our SDSS merger catalog to the GalaxyZoo selection of mergers in SDSS imaging, which is one of the largest publicly-available catalogs of mergers (Lintott et al., 2008, 2011). We cross-match the GalaxyZoo catalog from DR8 (893,163 galaxies) with our clean DR16 sample and find 570,455 matches. The GalaxyZoo catalog provides \(p\), or probability, values for four morphological categories (mergers, ellipticals, combined spirals, and 'don't know'), corresponding to the percentage of users that selected each morphological category. We identify the morphological category with the highest \(p\) value for each galaxy. We then identify the number of galaxies in each category that have a fiducial major merger probability greater than 0.5 from our classification. We use the major merger classifications from our technique for this comparison because the GalaxyZoo classifications are based on visual inspection, which is more likely to identify the more obvious major mergers. The results are as follows: for the GalaxyZoo merger category, 6626/10433 (64%) are LDA major mergers; for the combined spirals, 25467/176213 (14%); for the ellipticals, 54431/378993 (14%); and for the ambiguous category, 1413/4816 (29%). We also build a 'clean' sample of GalaxyZoo mergers, where the fraction of users that classify a galaxy as a merger is greater than 95%. Of these, 30/34 (88%) are classified as mergers by our classification.

These results are reassuring in two ways: first, the LDA classification recovers \(\sim\)2/3 of the mergers identified in GalaxyZoo, and second, the fractions of spirals and ellipticals that are identified as mergers by the LDA method are not significantly different from one another. This tells us that the LDA method is not significantly biased as a function of galaxy morphology. We visually inspect mergers according to GalaxyZoo that we classify as nonmergers and find that many of them can be described as double nuclei galaxies without noticeable tidal tails.
Some of these galaxies may be nonmergers that are superimposed along the line of sight, and some of them may be very early stage mergers (approaching for a first encounter) or gas-poor mergers. For these galaxies, the most important major merger predictors, such as shape asymmetry, have low values, resulting in a non-merger identification from the LDA technique. We discuss this particular failure mode of the LDA classification in §4.4.

Figure 10: Diagnostic diagram for determining the most likely merger type (major or minor) and the most likely merger stages for galaxy E from Figure 1. This diagram is produced by the compare_classifications.py utility. The top left panel shows the galaxy, segmentation map (yellow), and imaging predictor values. The top right panel runs through a diagnosis of merger type, beginning with diagnosing whether \(p_{\rm merg,maj,50}\) or \(p_{\rm merg,min,50}\) is greater than 0.5. If so, then the galaxy is identified as a merger. The next step is to identify which is more likely (a major or minor merger), which is determined using the \(p_{\rm merg,50}\) values from each classification. We provide the \(p_{\rm merg,50}\) values for all classifications along with the \(p_{\rm merg,16}\) and \(p_{\rm merg,84}\) values in the following format: \(p_{\rm merg,50}\) (\(p_{\rm merg,16}\)-\(p_{\rm merg,84}\)). Finally, this diagnostic diagram decides which stage is more likely for the major and then the minor merger classifications. Here, the post-coalescence stage is more likely for the major merger classification and the pre-coalescence stage is more likely for the minor merger classification. In the bottom panel, the y-axis is used to order the classification results, where different colors correspond to the median \(p_{\rm merg,50}\) values for each classification and the error bars give the range between the 16th and 84th percentiles of the \(p_{\rm merg}\) value for each classification.

We next compare the properties of mergers from our classification technique to those identified in the SDSS sample using the A18 technique, which uses transfer learning to retrain a convolutional neural network (CNN) on the Darg et al. (2010) sample of merging galaxies (from GalaxyZoo). A18 use the 3003 merger objects from Darg et al. (2010) as merger examples (\(0.005<z<0.1\)) and 10,000 GalaxyZoo galaxies with \(f_{m}<0.2\) as examples of nonmergers, where \(f_{m}\) is the fraction of users who identify a galaxy as a merger. We cross-match the results from the A18 catalog, which is mass-complete down to \(10^{10}M_{\odot}\), with those of our mass-complete LDA classifier (we calculate completeness as a function of redshift, Figure 5), and find an overlap of 98,645 galaxies. From these, we use the same method as A18 to identify galaxies with an average \(p_{m}\) value above 0.95 as merging, where \(p_{m}\) is the output of the CNN classifier.

We first compare the overlap of the merger samples. When we measure the performance statistics of our merger sample relative to the A18 classifications (assuming the A18 classifications to be correct), we find an accuracy of 0.85, a precision of 0.11, a recall of 0.78, and an F1 score of 0.20. The precision is low because there are a large number of galaxies that we identify as mergers that A18 does not. We present a few examples of galaxies that we classify as major mergers that A18 does not in Figure 12. Using visual inspection, one of the galaxies in this example (top right) looks like a faint major merger, three appear to be minor mergers (top left\(^{3}\), top middle, and bottom left), and two appear to be post-merger remnants (bottom middle and bottom right).

Footnote 3: This merger and others like it could be chance projections along the line of sight. We discuss this caveat of the method in more detail in §5.8.

_Figure 12 demonstrates something fundamental about the differences between techniques that are trained using visually-identified samples and the LDA technique presented here: techniques trained using mergers identified by eye are biased towards identifying major mergers in the early or late stages. The LDA technique, on the other hand, will identify a greater variety of merger stages (including the post-coalescence stage; see the result of a longer merger observability timescale from N19)._

We next use the color-mass diagram (Figure 13) to compare the properties of galaxies selected as mergers by the LDA technique (\(p_{\rm merg}>0.5\)) to those of the galaxies selected as mergers by the A18 technique (\(p_{m}>0.95\)). The cross-matched sample is incomplete at low galaxy stellar mass (\(M_{*}<10^{10}M_{\odot}\)) due to the A18 sample, meaning that the parent sample is almost entirely composed of red sequence galaxies. However, over the extent of the cross-matched sample, it is clear that the mergers identified using the LDA method span the same regions of color-mass space as those identified using the A18 method, further verifying that the LDA technique does not introduce significant morphological biases relative to the A18 method.

Figure 11: Probability density functions (pdfs) of the properties of the parent sample of SDSS galaxies (white) compared to the properties of the \(p_{\rm merg}>0.5\) major merger sample (pink) and the minor merger sample (yellow). All histograms are normalized so that all bins sum to one. Left to right: the distributions of average S/N ratio, \(r-\)band magnitude, color (\(g-r\)), log stellar mass, and redshift. Using the two-sample KS test, we confirm that all distributions are statistically indistinguishable.

We next bin the color-mass diagram by both stellar mass and color to compare the colors and stellar masses, respectively, of our sample of mergers to the A18 mergers. Using the KS test to compare the merger distributions, we find that mergers identified using the LDA technique have similar stellar masses (for a fixed color) and are
Using visual inspection, one of the galaxies in this example (top right) looks like a faint major merger, three appear to be minor mergers (top left3, top middle, and bottom left), and two appear to be post-merger remnants (bottom middle and bottom right). Footnote 3: This merger and others like it could be chance projections along the line of sight. We discuss this caveat of the method in more detail in §5.8. _Figure 12 demonstrates something fundamental about the differences between techniques that are trained using visually-identified samples and the LDA technique presented here; techniques trained using mergers identified by eye are biased towards identifying major mergers in the early or late stages. The LDA technique on the other hand will identify a greater variety of merger stages (including the post-coalescence stage, see the result of longer merger observability timescale from N19)._ We next use the color-mass diagram (Figure 13) to compare the properties of galaxies selected as mergers by the LDA technique (\(p_{\rm merge}>0.5\)) to those of the galaxies selected as mergers by the A18 technique (\(p_{\rm merge}>0.95\)). The cross-matched sample is incomplete at low galaxy stellar mass (\(M_{\ast}<10^{10}M_{\odot}\)) due to the A18 sample, meaning that the parent sample is almost entirely composed of red sequence galaxies. However, over the extent of the cross-matched sample, it is clear that the mergers identified using the LDA method span the same regions of color-mass space as those identified using the A18 method, further verifying that the LDA technique does not introduce significant morphological biases relative to the A18 method. We next bin the color-mass diagram by both stellar mass and color to compare the colors and stellar masses, respectively, of our sample of mergers to the A18 mergers. Using the KS test to compare the merger distributions, we find mergers identified using the LDA technique have similar stellar masses (for a fixed color) and are Figure 11: Probability density functions (pdfs) of the properties of the parent sample of SDSS gal axis (white) compared to properties of the \(p_{\rm merge}>0.5\) major merger sample (pink) and the minor merger sample (yellow). All histograms are normalized so that all bins sum to a value of one. Left to right: the distributions for average S/N ratio, \(r-\)band magnitude, color (\(g-r\)), log stellar mass, and redshift. Using the two-sample KS test, we confirm that the all distributions are statistically indistinguishable. slightly bluer (for a fixed stellar mass) relative to mergers identified using the A18 method. Ackermann et al. (2018) compare their sample of mergers to those of their training set (Darg et al., 2010) and find that their sample tends towards redder colors relative to the GalaxyZoo-identified mergers. We also find that the A18 sample is redder relative to our galaxies. ### Merger fraction We measure the merger fraction (\(f_{\rm merge}\)), which is the fraction of galaxies that have a \(p_{\rm merge}\) value greater than 0.5. We do this for both the major and minor merger classifications, focusing mostly on the major merger fraction in our analysis. For the remainder of the paper, \(f_{\rm merge}\) or'merger fraction' refers to the major merger fraction. We will specify if we are referring to the minor merger fraction. 
More specifically, a given output merger fraction, \(f_{\rm merg}\), is computed from an individual LDA classification that is calibrated using an input prior \(\pi\) and then applied to all of the galaxies in SDSS. Our fiducial values of \(\pi\) for the major/minor merger classifications are 0.1/0.3, respectively. Therefore, the measured (output) merger fraction for the fiducial major merger classification is:

\[f_{\rm merg,\,\pi={\rm fiducial}}=\frac{N_{p_{\rm merg}>0.5}}{N_{\rm all}}\]

where \(p_{\rm merg}\) is the merger probability for each SDSS galaxy calculated using the major merger classification created with the fiducial prior of \(\pi=0.1\), \(N_{p_{\rm merg}>0.5}\) is the number of SDSS galaxies with probability values greater than the threshold of 0.5, and \(N_{\rm all}\) is the number of SDSS galaxies in the sample. We perform this calculation for the 363,644-galaxy subset that is photometrically clean and mass-complete.

As we discuss in §3.4, adjusting the input prior affects the LDA classification and the distribution of LD1 and \(p_{\rm merg}\) values. We demonstrate this in Figure 14, where adjusting the prior (\(\pi\), x-axis) affects our measurement of the posterior (merger fraction, \(f_{\rm merg}\)). In order to calculate the overall posterior probability, we employ the Bayesian approach described in §3.4, marginalizing over the prior probability. The marginalized output merger fraction is the median of the individual merger fractions from each of the 46 priors shown in Figure 14. The error on \(f_{\rm merg}\) is calculated from the standard deviation of the \(f_{\rm merg}\) values over the individual input priors.

Figure 12: Galaxies classified by the LDA classification as major mergers that are classified by the A18 major merger classification as nonmerging. The yellow line marks the edge of the segmentation mask and the inset panels provide the LD1, \(p_{\rm merg}\), and CDF values for each galaxy. The LDA technique identifies a large fraction of SDSS galaxies as mergers that the A18 technique does not; this is the case even when the A18 threshold is adjusted to a lower value.

Figure 13: Color-mass diagram. We cross-match the merger catalog from A18 with the LDA catalog; the parent distribution is shown in black contours. We compare mergers selected using the LDA technique (green) to those selected using the A18 technique (orange). The mergers identified using the LDA technique span the same color and mass ranges as those identified using the transfer learning technique, indicating that the LDA technique does not introduce significant morphological biases in its merger identification.

Figure 14 demonstrates a flattening of the relationship between the input prior and the output posterior over the range \(0.05<\pi<0.25\) for both the major and minor merger fractions. On the upper end, we rerun this calculation for major merger priors \(\pi>0.5\) and find a similar flattening in the relationship between the prior and posterior. Furthermore, we find that the median major merger fraction is unchanged when we widen the prior range to \(0.05<\pi<0.85\). This further justifies the 0.5 cutoff of the uniform prior that we introduced in §3.4 and assures us that we have used the appropriate prior range to recover the true merger fraction.
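The marginalization over priors can be sketched compactly. One simplifying assumption here (ours, not necessarily the exact MergerMonger recipe): since the LDA intercept contains the log prior odds, changing the prior from the fiducial value shifts every LD1 by the difference in log prior odds before applying equation (5).

```python
import numpy as np

def f_merg_marginalized(ld1_fid, pi_fid=0.1,
                        priors=np.arange(0.05, 0.505, 0.01)):
    """Merger fraction marginalized over 46 input priors (0.05 to 0.50).

    Assumes the prior enters LD1 only through an additive log-odds offset,
    as it does for the LDA intercept; this is a simplifying assumption.
    """
    fractions = []
    for pi in priors:
        shift = np.log(pi / (1 - pi)) - np.log(pi_fid / (1 - pi_fid))
        p = 1.0 / (1.0 + np.exp(-(ld1_fid + shift)))  # equation (5)
        fractions.append(np.mean(p > 0.5))
    fractions = np.asarray(fractions)
    # Median and standard deviation over all priors give f_merg and its error
    return np.median(fractions), np.std(fractions)

# Hypothetical fiducial LD1 values for the mass-complete sample:
rng = np.random.default_rng(4)
ld1 = rng.normal(-2.0, 3.0, size=360_000)
print(f_merg_marginalized(ld1))
```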
For each merger classification, we calculate the fiducial values of \(f_{\rm merge}\) (which do not have associated errors) and the marginalized value of \(f_{\rm merge}\) (which uses the full posterior distribution of \(p_{\rm merge}\)) for both the clean and the clean and mass-complete samples of SDSS galaxies. We present these results in Table 7 for the major and minor combined classification and the pre- and post-coalescence (1.0 Gyr) classifications for each.

Finally, it is important to note that some galaxies will be counted multiple times in this approach to calculating the merger fraction. For instance, many galaxies that are classified as major mergers are also classified as minor mergers. The opposite is slightly less common, which may be due to the larger overall minor merger fraction. Quantifying the overall major merger fraction (requiring that \(p_{\rm merge,50,maj}>0.5\)), we find a major merger fraction of 0.21. When we remove all galaxies that are more likely to be minor mergers (\(p_{\rm merge,50,min}>p_{\rm merge,50,maj}\)), we find a clean major merger fraction of 0.12. We repeat this calculation for the minor merger fraction and clean minor merger fraction and find values of 0.28 and 0.24, respectively. We find that these contamination fractions remain the same when we adjust the \(p_{\rm merge}\) threshold value we use to define mergers. We also investigate this overlap as it pertains to the calculation of the merger fraction trends with stellar mass and redshift in more depth in §4.12. We ultimately find that the contamination of minor mergers in the major merger sample does not affect our results about the merger fraction trends.

We also find that many galaxies are classified as multiple different stages of mergers. For instance, users should be aware that if they select for major mergers in the early stage (\(p_{\rm merge,maj,early,50}>0.5\)), many of these galaxies will also be included when they select for major mergers in the late stages (\(p_{\rm merge,maj,late,50}>0.5\)). Quantitatively, we find that 0.18 of galaxies are major mergers in the early stage and that this fraction drops to 0.03 after eliminating galaxies that are more likely to be late and post-coalescence stage major mergers. Similarly, 0.19 of galaxies are major mergers in the late stage; when we consider the clean late stage major merger fraction, this fraction drops to 0.13. The post-coalescence stage has a major merger fraction of 0.35, which drops to 0.32 when only considering clean post-coalescence mergers. The implication is that a significant fraction of early stage mergers are likely to be identified as mergers in other stages. This result also holds for the merger stages of the minor mergers. The unclean/clean early stage minor merger fraction is 0.28/0.14. This figure is 0.24/0.16 for the late stage and 0.44/0.32 for the post-coalescence stage.

### Dependence of the major merger fraction on stellar mass and redshift

In this section, we explore how the measured major merger fraction changes with galaxy stellar mass and redshift. In §4.13 and Appendix B, we further explore if these dependencies reflect biases of the classification or of the galaxy mass selection. First, in Figure 15, we separate the mass-complete sample into 15 evenly-sized bins in stellar mass (meaning there are the same number of galaxies in each one-dimensional bin) and bins of \(\Delta z=0.02\) in redshift.
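A minimal sketch of this 2D binning (equal-count stellar mass bins and fixed-width redshift bins) is below, with hypothetical column names; the real analysis additionally applies the per-bin completeness cuts described next:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the mass-complete catalog.
rng = np.random.default_rng(2)
cat = pd.DataFrame({
    "logmass": rng.normal(10.8, 0.3, size=310_012),
    "z": rng.uniform(0.03, 0.19, size=310_012),
    "p_merge": rng.uniform(0.0, 1.0, size=310_012),
})

# 15 equal-count stellar mass bins (quantile edges) and dz = 0.02 z bins.
mass_edges = np.quantile(cat["logmass"], np.linspace(0, 1, 16))
z_edges = np.arange(0.03, 0.19 + 1e-9, 0.02)
cat["mass_bin"] = pd.cut(cat["logmass"], mass_edges, include_lowest=True)
cat["z_bin"] = pd.cut(cat["z"], z_edges, include_lowest=True)

# Merger fraction per 2D bin: fraction of galaxies with p_merge > 0.5.
f_merge = (cat.groupby(["z_bin", "mass_bin"], observed=True)["p_merge"]
              .apply(lambda p: (p > 0.5).mean()))
print(f_merge.head())
```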
After binning the distribution, we eliminate bins where the median values of redshift and mass for the galaxies in that bin are significantly different from the bin centers, which we define as \(>1\sigma\) above or below the bin center, where \(\sigma\) is the standard deviation of the values for the galaxies in the bin. This eliminates bins where incompleteness in redshift and/or mass could bias our results. We show the final binning scheme with the number of galaxies in each complete bin in Figure 15. All bins (red) have at least 1000 galaxies. This conservative approach restricts the final sample to 310,012 galaxies.

We determine the median and standard deviation of the \(f_{\rm merge}\) value in each bin by marginalizing across all priors. Next, for each redshift bin, we fit a line to the data points at each stellar mass by running a Markov Chain Monte Carlo (MCMC) analysis; we add the standard deviation (error bar) multiplied by a value drawn from a random normal distribution to each \(f_{\rm merge}\) value and use statsmodels to fit a linear regression. We show the key results for the major merger classification in Figures 16 and 17, where we find a positive slope of \(f_{\rm merge}\) with stellar mass and a negative slope with redshift, respectively.

Figure 14: Measured merger fraction as a function of the input prior for the major (pink) and minor (yellow) merger classifications for the mass-complete sample. We marginalize over the posterior probability (y-axis) to account for the effects of multiple possible input priors (prior probability, x-axis). The horizontal lines and shaded regions show the median and standard deviation of the merger fraction when marginalized over all input priors. The slope of this relationship is flat between a prior range of \(0.05<\pi<0.25\) and flattens out again beyond \(\pi>0.5\). This justifies the chosen prior range of \(0.05<\pi<0.5\) and assures us that this range most likely covers the true merger fraction.

The slope of the major merger fraction with mass (Figure 16) is mostly positive; for 6/8 bins this is a significantly positive slope to \(1\sigma\), where \(\sigma\) is the variation in the slope value found via the MCMC iterative analysis. In 2/8 cases, the slope is significantly positive to \(2\sigma\), and in 3/8 cases, it is significantly positive to \(3\sigma\). The slope of the major merger fraction with redshift (Figure 17) is significantly negative in 13/13 bins to \(1\sigma\) confidence. Considering only the bins that have statistically significant slopes, we find that the value of the slope ranges between \(0.31<\alpha<0.53\) with mass (for the \(z\) bins) and between \(-3.35<\alpha<-1.08\) with redshift (for the mass bins). Generally, the trend is more steeply positive towards higher redshift and more steeply negative towards low and intermediate masses.

### Is S/N confounding the redshift-dependent major merger fraction?

A statistical confound is a variable that distorts the apparent causal relationship between the independent and dependent variables because it is independently associated with both. To investigate if S/N is a confound that is causing the apparent negative slope in the major merger fraction with redshift, we stratify, or bin, by S/N. We first restrict the S/N range to \(0<{\rm S/N}<50\) because galaxies with \({\rm S/N}>50\) have a sparse distribution in the 3D parameter space. This restricts the sample from 363,644 to 305,321 galaxies.
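A minimal sketch of the perturbed ("MCMC iterative") line fits used for the binned \(f_{\rm merge}\) values above, and reused within each S/N stratum below, with illustrative numbers in place of the measured bins:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative binned measurements: median f_merge and its standard
# deviation at the redshift bin centers for one fixed stellar mass bin.
z_centers = np.array([0.04, 0.06, 0.08, 0.10, 0.12, 0.14])
f_merge = np.array([0.30, 0.27, 0.25, 0.22, 0.20, 0.18])
f_err = np.full_like(f_merge, 0.03)

rng = np.random.default_rng(3)
slopes = []
for _ in range(1000):
    # Perturb each point by its error bar times a standard normal draw,
    # then fit an ordinary least-squares line with statsmodels.
    y = f_merge + f_err * rng.standard_normal(f_merge.size)
    fit = sm.OLS(y, sm.add_constant(z_centers)).fit()
    slopes.append(fit.params[1])  # params[0] is the intercept

# The quoted slope and its error follow from the fit distribution.
print(f"slope = {np.mean(slopes):.2f} +/- {np.std(slopes):.2f}")
```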
We present our results in Figure 18, where redshift is the target independent variable and S/N and mass bins are the y- and x-axis of the figure, respectively. We demonstrate that for almost all 2D bins (in mass and S/N), the slope of \(f_{\mathrm{merge}}\) is significantly negative with increasing redshift. In many cases, the slope is slightly more negative than in the 2D binning analysis with mass and redshift. We can conclude that a projection of the S/N-dependence of \(f_{\mathrm{merge}}\) does not explain the negative slope with redshift; when the sample is stratified by S/N, the slope of the major merger fraction is even more negative with redshift.

We also run this analysis with S/N as the independent variable of interest and find that when we stratify by mass and redshift, the major merger fraction has a mostly negative trend with S/N, meaning that we find higher merger fractions for lower S/N galaxies. This trend is not well fit by a linear relationship; the slope is either flat or negative but very close to flat. This result is distinct from many studies that find a positive trend of merger fraction with S/N, where they are biased to detect brighter galaxies due to the merger identification technique's reliance on faint tidal features (e.g. Bickley et al., 2021).

### Are morphology (bulge-to-total mass ratio) or color confounding the redshift-dependent merger fraction?

We investigate if the negative slope of the major merger fraction with redshift could be attributed to a sensitivity to galaxy type. For instance, some studies find a different evolution of the merger fraction with redshift for early-type galaxies (ETGs) and late-type galaxies (e.g. Lin et al., 2008; Lopez-Sanjuan et al., 2012). In some cases, the ETGs have a negative slope with increasing redshift (Lin et al., 2008; Groenewald et al., 2017).

To conduct this analysis, we repeat the analysis of the previous section, this time treating bulge-to-total mass ratio (B/T) and color (\(g-r\)) as the suspect confounding variables. This 3D binning analysis is identical to the S/N investigation we describe in §4.9; here we replace S/N with B/T and \(g-r\) color and re-calculate the major merger fraction. By stratifying by these nuisance parameters, we remove their influence from the other parameters of interest (stellar mass and redshift). We find that the slope of the major merger fraction with mass and redshift does not significantly change as a function of galaxy color or B/T mass ratio. This is strong evidence that neither color nor B/T is responsible for the mass and redshift trends. The exception is our reddest bin, where the slope of the major merger fraction with redshift is flat or positive.

It is important to make the distinction that while B/T and color are not confounding variables that are responsible for the negative redshift dependence, they can still have independent influence on the merger fraction. For instance, when we stratify by mass, redshift, and B/T, we find that the major merger fraction is mostly flat as a function of B/T but increases with B/T for some bins, peaking around a B/T mass ratio of 0.7. When we stratify by mass, redshift, and color, the major merger fraction is positive with \(g-r\), meaning the major merger fraction increases for redder galaxies at high masses and redshifts. At low masses and redshifts, the slope is instead negative or flat.
This is consistent with a picture where the major merger fraction increases with B/T and \(g-r\) mostly for higher mass galaxies.

| Priors | Major All | Major Pre-coalescence | Major Post-coalescence (1.0) | Minor All | Minor Pre-coalescence | Minor Post-coalescence (1.0) |
| --- | --- | --- | --- | --- | --- | --- |
| Fiducial\({}^{a}\), all clean SDSS | 0.18 | 0.18 | 0.30 | 0.37 | 0.27 | 0.39 |
| Flat [0.05, 0.5], all clean SDSS | 0.22 ± 0.04 | 0.22 ± 0.03 | 0.36 ± 0.05 | 0.31 ± 0.08 | 0.29 ± 0.09 | 0.46 ± 0.04 |
| Fiducial, mass complete | 0.20 | 0.20 | 0.47 | 0.41 | 0.29 | 0.53 |
| Flat [0.05, 0.5], mass complete | 0.28 ± 0.07 | 0.24 ± 0.03 | 0.53 ± 0.06 | 0.35 ± 0.09 | 0.32 ± 0.10 | 0.60 ± 0.04 |

Table 7: Merger fraction for the full sample of SDSS galaxies using different classification thresholds. \({}^{a}\)The fiducial model is when \(p_{\rm merge}>0.5\) and the priors are \(\pi=0.1\) and 0.3 for the major and minor mergers, respectively.

Figure 15: Redshift and mass distribution of all galaxies in the mass-complete sample. For the analysis in this section, we select mass and redshift bins that have \(>1000\) galaxies and where the mass and redshift distributions are complete (the medians are aligned with the bin center). We outline the final selected bins used for the analysis and annotate the number of galaxies in each bin.

### Dependence of the minor merger fraction on stellar mass and redshift

Here we repeat the analysis, instead using the minor merger classification to identify merging galaxies. We show the results for the binned analysis in Figures 19 and 20 for the slope of the merger fraction with mass and with redshift, respectively. We find that the slope of the merger fraction is mostly flat with stellar mass except for two redshift bins where it is negative. The slope of the merger fraction with redshift is flat for all mass bins. In other words, the minor merger fraction shows little dependence on mass or redshift. We discuss the implications of this in §5.5.

### Accounting for contamination in the major/minor merger samples by minor/major mergers

In §4.8 and §4.11, we empirically measure the merger fraction as a function of stellar mass and redshift for the major and minor merger classifications, respectively. These results include overlap between classifications, since we consider all galaxies with median \(p_{\rm merg}\) values greater than 0.5 as mergers. Here we investigate if these results change when we calculate the merger fraction for the sample of major and minor mergers without overlap between classifications. To calculate the clean major and minor merger fraction, we require that \(p_{\rm merg,med}>0.5\) and \(p_{\rm merg,med,maj}>p_{\rm merg,med,min}\) for the major mergers and \(p_{\rm merg,med}>0.5\) and \(p_{\rm merg,med,min}>p_{\rm merg,med,maj}\) for the minor mergers. The second requirement significantly reduces the sample size of major mergers from 86,843 galaxies to 53,573 galaxies and reduces the sample size of minor mergers from 103,907 to 86,837 galaxies. The major merger sample therefore has a greater contamination contribution from minor mergers, which is to be expected given the larger overall merger fraction for minor mergers. When we re-calculate the mass- and redshift-dependent merger fraction for the clean samples, we find similar results.
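A minimal sketch of this overlap-free ("clean") selection is below, with illustrative column names for the median major and minor merger probabilities:

```python
import pandas as pd

# Illustrative catalog with median merger probabilities from the major
# and minor classifications (column names are hypothetical).
cat = pd.DataFrame({
    "p_merg_med_maj": [0.8, 0.6, 0.4, 0.9],
    "p_merg_med_min": [0.3, 0.7, 0.6, 0.2],
})

# Clean major mergers: above the 0.5 threshold AND more likely to be
# major than minor; clean minor mergers use the reverse requirement.
clean_major = ((cat["p_merg_med_maj"] > 0.5)
               & (cat["p_merg_med_maj"] > cat["p_merg_med_min"]))
clean_minor = ((cat["p_merg_med_min"] > 0.5)
               & (cat["p_merg_med_min"] > cat["p_merg_med_maj"]))

print(f"clean major fraction: {clean_major.mean():.2f}")
print(f"clean minor fraction: {clean_minor.mean():.2f}")
```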
The clean major merger fraction has a positive slope with mass and a negative slope with redshift for all bins. Most slopes are slightly flatter than in the unclean case; however, this difference is not statistically significant (to \(1\sigma\) errors). This slight flattening could be due to contamination from the minor mergers, where the trend with mass and redshift is flatter. The clean minor merger fraction slopes are consistent to \(1\sigma\) with the unclean minor merger fraction slopes. In conclusion, while double counting in the major and minor merger samples has a significant effect on the overall number of mergers, double counting does not affect our conclusions about the slope of the major and minor merger fraction with redshift and mass. _The implication is that the slope of the merger fraction is robust to these levels of contamination (38% and 16% of the major and minor merger samples, respectively, are contaminated)._

Figure 16: Linear fits to the binned \(f_{\rm merge}\) values for the major merger classification as a function of stellar mass for bins at fixed redshift (bin spacing is \(\Delta z=0.02\)). We show the average line fit in color and the MCMC iterative fits in grey. We conclude that \(f_{\rm merge}\) has a positive relationship with increasing mass for the majority of the redshift bins. All panels have the same y range.

### Sanity checks

As we will address in the discussion section, the result of increasing merger fraction with stellar mass has precedent in the literature. However, the result of decreasing merger fraction with redshift is unprecedented. Given this surprising result, we explore in Appendix B whether the result of decreasing merger fraction with increasing redshift is physical (real) or whether we can attribute it to sample systematics (i.e. mass incompleteness at higher redshift or errors in the mass calculation or determination of the photometric redshift).

After running our merger sample through multiple sanity checks in Appendix B, we can conclude that the trend of the major merger fraction increasing with stellar mass (for constant redshift) and decreasing with \(z\) for constant stellar mass is robust to changes in how we measure redshift and stellar mass. It is also robust to changes in how we bin the data for this analysis and how we compute the mass completeness. These steps were all taken to rule out the leading culprits of systematic bias in the sample that could lead to our surprising result of the negative evolution of the major merger fraction with redshift. Finally, we compare our major merger sample to a different merger sample (A18). We find a mostly flat result with redshift for the A18 merger sample. Since we use the same cross-matched sample to rerun the LDA classification and still find a negative trend with redshift for the cross-matched sample, we are able to conclude that this result is not due to peculiarities of the galaxy sample but instead can be attributed to differences due to the merger selection itself.

### In the absence of mass binning, the major merger fraction has an artificial positive trend with redshift

We have taken one final step towards understanding the negative trend of the major merger fraction with redshift in the context of other work. Here we run our analysis without mass binning, as other work has done in the past in the absence of enough data to bin and still retrieve a statistically significant result.
We re-run the analysis without mass bins to determine the confounding role of the positive mass trend in the redshift slope when we do not control for mass. We additionally experiment with eliminating the completeness correction (of Figure 15) and with using spectroscopic redshifts. We present our results in Figure 21. We find a significant positive slope of the major merger fraction with redshift in all cases where we do not bin for mass. This includes the sample that is mass complete with photometric redshifts (top), the sample that is mass incomplete with photometric redshifts (middle), and the sample that is mass complete with spectroscopic redshifts (bottom). All of the plots in this figure use color-derived masses, but we find similar results with SPS-derived stellar masses.

Figure 17: Same as Figure 16 but analyzing the slope of the major merger fraction with redshift for bins of fixed stellar mass. Here the slope is significantly negative with redshift for all mass bins.

_Figure 21 is therefore an important reminder that what looks like a positive slope with redshift is actually the projection of a positive slope in mass onto redshift._ This figure additionally highlights that while the overall trend is positive, there are different features in each plot produced by the slightly different sample selections. For instance, the peak at low redshift can most likely be attributed to the bias produced by photometric redshifts, which artificially increases the stellar masses of low mass galaxies. Additionally, the peak at higher redshift in the middle plot (mass incomplete sample) can most likely be attributed to the mass incompleteness of the sample.

The conclusions from this section can be directly connected to our overall conclusions from this work. While we cannot completely rule out that our negative trend with redshift is not the result of some other systematic bias or a combination of biases (i.e., confounding factors like mass incompleteness and redshift bias), we can at least clearly show the most simple and likely scenario: that mass binning versus no mass binning produce dramatically different trends of the evolution of the major merger fraction with redshift. This demonstrates the importance of running this type of analysis on large samples of galaxies and with a merger classification technique such as the LDA that demonstrates broad reliability across a range of galaxy types. Both of these elements of this paper were essential to be able to bin the sample in both redshift and mass and do a careful completeness correction.

## 5 Discussion

Our mass-complete binning analysis of a large sample of galaxies using a carefully calibrated set of classification techniques allows us to make clear conclusions about the evolution of the merger fraction locally (\(0.03<z<0.19\)). The additional novelty of this study is that the large sample size allows us to do this over a finely spaced grid of redshift and mass bins. Here we discuss our measurements of the mass- and redshift-dependent merger fraction in the broader context of previous work. We focus all discussion on predictions and measurements of the merger fraction and reserve all discussion of the merger rate for future work (Simon et al., 2023, in prep). We begin with a brief review of predictions for the local merger fraction from cosmological models (§5.1) and a review of recent empirical estimates of the merger fraction (§5.2).
We then discuss the implications of our findings of the mass and redshift dependence of the major merger fraction for galaxy evolution in the local Universe (§5.3 and §5.4, respectively). We also discuss the implications of the distinct results we find for the minor merger fraction evolution (§5.5). We summarize our precautions throughout this paper to prevent morphological biases in the results in §5.6. We end with a discussion of the relative strengths of the methodology presented here (§5.7) as well as the caveats and future work motivated by this study (§5.8).

Figure 18: The slope of the major merger fraction with redshift (inset subplot x-axis) for almost all S/N (figure y-axis) and mass (figure x-axis) bins is significantly negative. This indicates that the negative redshift trend for \(f_{\rm merg}\) cannot be attributed to increasing S/N with increasing redshift.

### Predictions of the redshift and mass-dependence of the merger fraction from cosmological models

The \(\Lambda\)CDM model of structural assembly (e.g. White & Rees 1978) predicts hierarchical, or bottom-up, assembly, meaning that mergers assemble smaller halos first followed by more massive halos at later times (e.g. Blumenthal et al. 1984). This predicts a merger rate that evolves with the density of galaxies in the Universe. The measured fraction of merging galaxies should therefore increase with redshift back to \(z\sim 2-3\). Additionally, the merger fraction should have a steep dependence on mass in the local Universe, since the most massive galaxies are predicted to assemble at later times.

An alternate assembly scenario is cosmic downsizing, where the largest galaxies form early and then stall (e.g. Cowie et al. 1996; Juneau et al. 2005; Treu et al. 2005; Cowie & Barger 2008). Mergers have been invoked as a mechanism to drive this compact star formation followed by rapid quenching. While Bridge et al. (2010) invoke downsizing as a mechanism to drive a negative mass dependence in the merger fraction between redshifts \(0.2<z<1.2\), Estrada-Carpenter et al. (2020) show that the phenomenon of downsizing has a minimum redshift that ranges between \(1.5<z<8\). The implication is that this assembly scenario does not apply to the local Universe.

Baryonic evolutionary processes (i.e. feedback) play an important role in galaxy assembly. Simulation work finds that when baryonic feedback is combined with the bottom-up formation model (hierarchical assembly), this can manifest as top-down assembly, i.e. downsizing (Stringer et al. 2009 and references therein). Baryonic feedback suppresses the growth of stellar mass in galaxies above and below \(\sim 10^{11}M_{\odot}\). This results in a higher number of intermediate mass galaxies. If this effect is strong, it could result in more major mergers between equal-mass galaxies locally.

In reality, the picture is probably far more complicated than any one of the above formation scenarios. Different processes likely dominate for different mass scales and at various epochs over the age of the Universe. Directly observing the galaxy-galaxy merger fraction as a function of redshift and mass and separating major from minor mergers is therefore critical for constraining the relative contributions of these competing processes.

Figure 19: Same as Figure 16 but for the minor merger fraction.
### Reviewing past empirical measurements of the mass- and redshift-dependent merger fraction

Characterizing the mass-dependence of the major merger fraction can help us understand how elliptical galaxies and the bulges of galaxies are being built up over different mass ranges. Accurately measuring the mass-dependent merger fraction locally is especially important for anchoring the redshift-dependent merger fraction and directly testing the hierarchical assembly prediction that the most massive galaxies are assembled locally. As with the redshift-dependent merger fraction, previous work in this area relies on either close pair methods (e.g. Xu et al., 2004; Patton & Attfield, 2008; Domingue et al., 2009; Xu et al., 2012) or morphological studies (e.g. Bridge et al., 2010; Casteels et al., 2014) to measure the mass-dependent merger fraction.

Most studies find a constant or slightly increasing merger fraction with mass (e.g. Xu et al., 2004; Patton & Attfield, 2008; Domingue et al., 2009; Xu et al., 2012; Robotham et al., 2014). Of particular note is the work of Robotham et al. (2014), which focuses on galaxies in the GAMA survey (\(0.05<z<0.2\)). They find that the merger fraction increases with mass by a factor of \(\sim\)3 between stellar masses of \(9<M_{*}(log~{}M_{\odot})<11\). Other work finds a decreasing fraction with mass (e.g. the morphological studies of Bridge et al., 2010 and Casteels et al., 2014). Bridge et al. (2010) (\(0.2<z<1.2\)) claim that the decreasing fraction with mass is due to cosmic downsizing, while Casteels et al. (2014) (\(0.001<z<0.2\)) argue that the decrease is due to an increasing observability timescale for lower mass galaxies.

A number of studies have measured the major merger fraction and how it trends with redshift. It is important to note that most of the past work that has examined the redshift-dependence of the merger fraction has done so for redshift intervals that do not overlap with our study. The consensus among these studies is mostly for a higher merger fraction at higher redshift (Lin et al., 2008; Conselice et al., 2009; Lopez-Sanjuan et al., 2012; Robotham et al., 2014; Mundy et al., 2017; Mantha et al., 2018; Snyder et al., 2019; Kim et al., 2021), although some studies find a relatively flat merger fraction (Bundy et al., 2009; Jogee et al., 2009; Keenan et al., 2014). Only the GAMA-focused studies of Robotham et al. (2014) and Keenan et al. (2014) have more than one redshift bin below \(z=0.2\). Additionally, due to small sample sizes, the above work is often unable to bin finely in both stellar mass and redshift, and therefore unable to explore both simultaneously. _Our study is unique in that we have a large enough sample size to create bins in both stellar mass and redshift and this is the first study to do so for fine redshift bins locally (\(0.03<z<0.19\))._

### Implications of a positive mass dependence of the major merger fraction

We find a positive relationship between the major merger fraction and stellar mass between a range \(10.5<M_{*}(log~{}M_{\odot})<11.5\). This is consistent with a hierarchical assembly picture, where more massive halos (hence galaxies) are assembling locally. In this case, since we observe this trend for all redshift bins, we can conclude that this trend holds for the last \(\sim\)2 Gyrs of galaxy evolution.

Figure 20: Same as Figure 17 but for the minor merger fraction.
As mentioned in §5.1, if baryonic processes such as feedback are coupling with hierarchical assembly, this could manifest as top-down assembly (cosmic downsizing). While we cannot rule this scenario out entirely, we can conclude that if this is happening, it is not strong enough to invert or flatten our observed positive trend for the major merger fraction.

### Implications of a negative redshift dependence for the major merger fraction

Our key result is a decreasing major merger fraction with redshift over the range \(0.03<z<0.19\) (Figure 17). The implication is that major mergers become more important in the nearby Universe. This result cannot be explained by either hierarchical assembly or cosmic downsizing. Hierarchical assembly predicts an increase of the merger fraction out to high redshifts, while cosmic downsizing likely does not operate in the local Universe and does not make explicit predictions for the merger fraction. We find it most likely that baryonic feedback is dominating locally, overriding the positive slope predicted by hierarchical assembly. Here we focus mostly on the implications of our finding in the context of past studies and how these results merit a revision of the current techniques used to measure the evolution of the major merger fraction.

_Most other close-pair studies find a positive trend of major merger fraction with redshift, yet it is important to note that the majority of these studies do not cover the same redshift range as this work (\(z<0.2\)) and that none of these studies control for both mass and redshift simultaneously._ As we have shown, the major merger fraction varies as a function of both mass and redshift, and the mass dependence can project onto the redshift axis, resulting in an artificial positive relationship with redshift. Our recommendation is for the community to: 1) Revisit past analyses of the redshift-dependence of the major merger fraction using a careful mass and redshift binning analysis, and 2) Design future studies that cohesively span the local Universe and the higher redshift Universe. Currently, it is unclear if our findings represent a local inversion in the higher redshift (\(z>0.2\)) trend of a positive evolution of the major merger fraction with redshift or if higher redshift studies will be inverted when mass binning and mass completeness are accounted for.

Additionally, many of the past close-pair studies that find a positive slope with redshift for the major merger fraction are sampling from a severely restricted volume. For a more in-depth analysis of the volume probed by various merger fraction studies, see the discussion of the role of cosmic variance in the calculation of the merger fraction in Lopez-Sanjuan et al. (2014). Furthermore, Patton & Attfield (2008) find that the cosmic variance from the SDSS survey is negligible. While cosmic variance is one important consideration, surveys that are limited in volume due to survey size, depth, or additional mass selections will suffer from the inability to achieve statistically meaningful results from their decreased number statistics because they are unable to finely bin in mass and/or redshift.

### Implications of distinct trends for the minor merger fraction

It is noteworthy that the minor merger fraction shows remarkably different mass- and redshift-evolution relative to the major merger fraction.

Figure 21: Slope of the major merger fraction with redshift when we do not bin for mass. We show the results for the mass complete sample with photometric redshifts (top), for the mass incomplete sample with photometric redshifts (middle), and for the mass complete sample with spectroscopic redshifts (bottom). The black data points and accompanying error bars give the average and standard deviation of the merger fraction for each redshift bin, the red lines show the linear fits under the MCMC realizations (as outlined in §4.8), and the red line is the average line fit with slope and error given at the bottom of the plot. The average slope is positive for all samples, yet experiences a significant downturn at redshifts \(z<0.1\) for the two top plots, which measure redshift using the photometric redshifts. These plots demonstrate the importance of binning for mass; if this is not considered, the positive trend of the major merger fraction with mass is projected onto the redshift axis, resulting in an artificial positive trend of the major merger fraction with redshift.
The minor merger fraction has a flat dependence on both of these properties. While the slope is flatter, the error bars on the minor merger fraction tend to be larger than those on the major merger fraction for each bin. This could reflect the decreased accuracy of the minor merger classification. We explore a few explanations for the flat trends: 1) the minor mergers are subject to different structural assembly processes in the local Universe (relative to major mergers), 2) the error bars on the minor merger fraction are obfuscating trends that are positive with stellar mass and negative with redshift, or 3) there are some systematic biases at play in the minor merger classification.

First, we consider option 1. If this result is not due to a bias but is a physical finding, this demonstrates that minor mergers are about equally important for the assembly of all galaxy masses locally as well as all redshifts within our range. This could further motivate the explanation above that a baryonic process such as feedback is driving the negative redshift trend for the major merger fraction. A process like feedback that increases the fraction of intermediate mass galaxies (hence, increasing the major merger fraction locally) could also lead to a relatively smaller fraction of minor mergers.

We next consider options 2 and 3. We observe underlying structure in the mass-dependent minor merger fraction (a peak at intermediate masses). In our analysis of the properties of the different merger classifications (§4.8), we find that minor mergers have a tendency towards intermediate masses. This could reflect a bias against identifying low mass and high mass galaxies, which could result in a flatter trend for the minor merger fraction since we should expect to miss both low and high mass galaxies. This makes sense given that the minor merger classification relies upon shape asymmetry to identify faint tidal tails or faint companions. In the case of a bright primary galaxy, this task becomes much more difficult for the classification. This bias is also related to a lower accuracy for the minor merger classification (explanation 2). Our hypothesis is that while this slight bias could exist, we find it unlikely that this slight bias alone is driving the flat evolution. In future work, it would be worth exploring the biases related to the minor merger classification; here we choose instead to focus on the biases related to the major merger classification.
Regardless of whether it is a physical trend or related to classification biases, the _flatness of both of these relations for the minor merger sample further motivates the importance of separating minor mergers from major mergers; if there is significant minor merger contamination in a major merger sample, this would act to flatten out both the mass- and redshift-dependence, resulting in a flat relationship for both_. While we find that this does not have a significant influence on our results (§4.12), we recommend that future studies of the redshift dependence of the major merger fraction take this result into consideration.

### Do the limitations of the simulated training set affect the robustness of these results?

The LDA training set consists of intermediate mass, initially disk-dominated simulated galaxies with initial stellar masses \(3.9\times 10^{10}<M_{*}(M_{\odot})<4.7\times 10^{10}\) and initial B/T mass ratios \(0.0<B/T<0.2\).\({}^{4}\) Since this training set is limited, we have taken measures to minimize potential biases and find no significant impact on our main result of the mass- and redshift-dependence of the major merger fraction.

Footnote 4: The stellar masses and B/T mass ratios evolve throughout the time duration of the simulations. For instance, the simulated mergers increase in mass and the major merger remnants are bulge-dominated (N19).

To minimize potential morphological biases, the SDSS galaxies used in our merger fraction analysis are restricted to regions familiar to the LDA classifier using the 'outlier predictor' flag (Figure 1). _While the classifier may be morphologically biased for galaxy morphologies outside of the training set, this does not concern the results presented here, which are limited to morphologies that are familiar to the LDA classifier._

To assess if the galaxies we classify are morphologically biased despite the above precaution, we explore the properties of the SDSS merger sample in §4.5 and find no distinction in S/N, \(r-\)band magnitude, \(g-r\) color, stellar mass, and redshift between the merger and parent samples. This reflects a major success of our method: the merger classifications are not biased by galaxy property. In §4.6, when we compare the fractions of GalaxyZoo-classified ellipticals and mergers that are classified as LDA major mergers, we find the same fraction (14%), which is further evidence that the technique does not retain a morphological bias. In §4.10, we confirm that the major merger fraction trends with redshift and stellar mass persist when we control for galaxy morphology (\(g-r\) color or B/T ratio). This means that our results hold for all galaxy morphologies in the photometrically clean sample.

Fully investigating the mass and morphological biases of the classification, especially for galaxies with the 'outlier predictor' flag, is beyond the scope of this work. Future work could investigate the performance of the classifier across different galaxy morphologies.

### Strengths of this approach

For many past studies that focus on measuring the major merger fraction, small number statistics are a concern. Cosmic (or sample) variance due to small fields (i.e. see the discussion of Xu et al. 2012) can result in large error bars, leading to a conclusion of flat redshift or mass evolution of the merger fraction.
Of additional concern, many of the close pair studies (which constitute the bulk of this literature) suffer from spectroscopic incompleteness at small angular separation, while morphological methods suffer from surface brightness limitations, and as a result are biased towards identifying high mass gas-rich major mergers only. Many morphological methods also suffer from small sample sizes and from a variety of systematics related to different methodologies.

In this work, we begin from a merger identification technique that is based on a set of well-understood simulations of mergers. This technique has four distinct advantages over past merger identification techniques:

1. We are able to calibrate our methodology, which will become critically important in future work (Simon et al. 2023, in prep), where we plan to constrain the merger rate. In order to determine the merger rate, the merger observability timescale is important, which we are able to measure from the set of simulated mergers.
2. Since the technique does not rely on spectroscopic detections, we apply the method to the full SDSS photometric dataset and return the largest-yet sample of merging galaxies. With this large sample, we are able to control for both mass and redshift when we measure the merger fraction as a function of both of these quantities, which we have shown is essential.
3. Our technique spans a variety of merger stages, including pre- and post-coalescence stages. It therefore overlaps in stages with both close-pair and morphological studies, which will be crucial for comparing different types of studies when we measure the merger rate.
4. Our technique shows significant gains in accuracy and completeness relative to past work, allowing us to build a more complete (and larger) sample of merging galaxies.

### Caveats and future work

There are three types of double counting of mergers that can occur under this classification technique: 1) The overlap between major and minor mergers, 2) The overlap between merger stages, and 3) Sometimes in the early stage of the merger, the technique identifies both galaxies as mergers, which is double counting compared to a close-pair technique.

We have already discussed the overlap between merger stages and types in previous sections (§4.3 and §4.12, respectively). In §4.3 we find that the early and late stages have significant overlap in classifications but the post-coalescence stage tends to have less overlap; we discuss the implications of this in terms of classification interpretation. In §4.12 we conclude that the slope of the merger fraction with mass and redshift is unchanged when we account for the double counting of major and minor mergers.

While splitting mergers by stage was not a primary focus of the merger fraction analysis in this work, in future work (Simon et al. 2023, in prep), we plan to compare our galaxy sample with the close-pair sample from Simon et al. 2022, in prep, in order to constrain the absolute merger fraction. This will be especially important for the early stage mergers, which are most similar to close pair studies. In addition, in this work we conduct a brief analysis of the overlap of merger type classifications, in other words, the contamination of the major merger fraction by minor mergers. Our focus is primarily on whether this affects our findings related to the merger fraction slope with stellar mass and redshift. In future work we plan to characterize the overlapping merger populations.
In future work it will also be necessary to further address the third type of double counting. For early stage mergers, we find that the LDA method sometimes (but not always) identifies both galaxies in a pair as merging galaxies. This represents a double count relative to close-pair studies, where the duo of merging galaxies would be considered to be one 'pair'. On the other hand, the LDA method also identifies mergers in the late and post-coalescence stages, which boosts our derived merger fraction relative to that of close-pair methods. Both of these considerations mean that directly comparing our method to close-pair studies is difficult. For this reason, in this work, we have not attempted to directly compare the absolute number of mergers and have instead compared the slope of \(f_{\rm merg}\) with stellar mass and redshift. In Simon et al. 2023, in prep, we plan to compare our galaxy sample with the close-pair sample from Simon et al. 2022, in prep, in order to constrain the absolute merger fraction. In this future work (Simon et al. 2023, in prep), we will also be able to determine the calibration factor, \(C_{\rm merg}\), to convert between the close pair fraction and the fraction of close pairs that will ultimately merge.

It is also important to mention a fundamental difference between morphologically-reliant merger identification techniques like the LDA technique and spectroscopic-based techniques like the close-pair method. Galaxies like those shown in the top left panel of Figure 12 that are identified as mergers by the LDA technique may in fact be chance projections of unrelated galaxies along the line of sight. Fully characterizing the expected frequency of these chance projections is beyond the scope of this work, although we plan to discuss this in more depth in Simon et al. 2022, in prep, when we compare our merger sample to that of the close pair method.

## 6 Conclusions

In this work we apply the merger classification method from Nevin et al. (2019) to the 1.3 million galaxies in the Sloan Digital Sky Survey DR16 photometric catalog. We additionally expand the merger classifications from N19 to include the different stages of the merger in addition to major versus minor classifications. This results in twelve different merger classifications: major and minor, each further split by stage: early, late, pre-coalescence (includes early and late), and two different post-coalescence classifications (one extends to 0.5 Gyr post-merger, one extends to 1.0 Gyr). We apply all of these classifications to image cutouts from SDSS, calculate the \(p_{\rm merge}\) values, and repeat this process for a range of different input priors, marginalizing over these priors to retrieve the posterior distribution of \(p_{\rm merge}\) values for all galaxies for all classifications. We provide these classifications to the reader in the form of online-available tables in addition to an interpretable classification repo known as MergerMonger. In the text we provide examples for how to interpret the results and distinguish between different merger types.

We next analyze the properties of the merger samples and compare these properties to other merger samples in the literature. We conclude that the properties of the different types of mergers span the full range of properties of the parent SDSS distribution (in S/N, \(r-\)band magnitude, color, stellar mass, and redshift), which is a major success of the method.
We also find that the LDA technique retrieves the majority of the GalaxyZoo and Ackermann et al. (2018) mergers and further identifies a large sample of galaxies as mergers that were missed by these techniques, demonstrating its success in finding less-obvious mergers than visually identified samples.

The main goal of this paper is to retrieve the merger fraction (\(f_{\rm merg}\)) as a function of galaxy properties, which we do by measuring stellar masses, carefully building a mass-complete sample (our final sample is 310,012 galaxies), and binning by both stellar mass and redshift. For the major merger sample we find a significantly positive trend (1-3\(\sigma\) confidence) between \(f_{\rm merg}\) and stellar mass and a significantly negative (to 1\(\sigma\) confidence) trend with redshift. We show these key results in Figures 16 and 17, respectively. This trend is robust between stellar masses of \(10.5<M_{\star}(log~{}M_{\odot})<11.6\) and redshifts of \(0.03<z<0.19\). We show that when we do not correct for completeness or bin for mass, the strong dependence of the major merger fraction on mass results in a positive redshift slope, underscoring the importance of a careful binning analysis with a large sample size to recover this result.

Examining these results in the context of past theoretical and observational work, we find that the positive trend of the major merger rate with stellar mass agrees with past results and is consistent with a hierarchical assembly scenario for the Universe. On the other hand, this is the first time a study has focused on measuring the merger fraction locally (\(z<0.2\)) for finely spaced mass and redshift bins, which underscores the uniqueness of the finding of a negative trend for the major merger fraction with redshift. In future work (Simon et al. 2023, in prep) we plan to use these results in combination with the SDSS-derived close pair fraction from Simon et al. 2022, in prep to calculate a merger rate. From this, we can constrain the gravitational wave background from SMBH mergers.

## Acknowledgements

We thank Dr. Aaron Stemo and Dr. Plamen G. Krastev for some phenomenal supercomputing support. This research is supported by NSF AST-1714503 and NSF AST-1847938. JS is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2202388. The computations in this paper were run on the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.

## Data Availability

All data products detailed in §4.1 are available on Zenodo\({}^{5}\). For the MergerMonger code, see the GitHub repo\({}^{6}\). This includes all of the analysis utilities used to generate the results of this paper.

Footnote 5: DOI: 10.5281/zenodo.7438610

## References

* Ackermann et al. (2018) Ackermann S., Schawinski K., Zhang C., Weigel A. K., Turp M. D., 2018, MNRAS, 479, 415
* Ahumada et al. (2020) Ahumada R., et al., 2020, ApJS, 249, 3
* Amaro-Seoane et al. (2017) Amaro-Seoane P., et al., 2017, arXiv e-prints, p. arXiv:1702.00786
* Arun et al. (2022) Arun K. G., et al., 2022, Living Reviews in Relativity, 25, 4
* Arzoumanian et al. (2020) Arzoumanian Z., et al., 2020, ApJ, 905, L34
* Astropy Collaboration et al. (2013) Astropy Collaboration et al., 2013, A&A, 558, A33
* Bell et al. (2003) Bell E. F., McIntosh D. H., Katz N., Weinberg M. D., 2003, ApJS, 149, 289
* Bell et al. (2006) Bell E. F., Phleps S., Somerville R. S., Wolf C., Borch A., Meisenheimer K., 2006, ApJ, 652, 270
* Bertin & Arnouts (1996) Bertin E., Arnouts S., 1996, A&AS, 117, 393
* Bertone & Conselice (2009) Bertone S., Conselice C. J., 2009, MNRAS, 396, 2345
* Bickley et al. (2021) Bickley R. W., et al., 2021, MNRAS, 504, 372
* Blanton et al. (2017) Blanton M. R., et al., 2017, AJ, 154, 28
* Bluck et al. (2012) Bluck A. F. L., Conselice C. J., Buitrago F., Grützbauch R., Hoyos C., Mortlock A., Bauer A. E., 2012, ApJ, 747, 34
* Blumenthal et al. (1984) Blumenthal G. R., Faber S. M., Primack J. R., Rees M. J., 1984, Nature, 311, 517
* Bridge et al. (2010) Bridge C. R., Carlberg R. G., Sullivan M., 2010, ApJ, 709, 1067
* Bundy et al. (2009) Bundy K., Fukugita M., Ellis R. S., Targett T. A., Belli S., Kodama T., 2009, ApJ, 697, 1369
* Casteels et al. (2014) Casteels K. R. V., et al., 2014, MNRAS, 445, 1157
* Cebrián & Trujillo (2014) Cebrián M., Trujillo I., 2014, MNRAS, 444, 682
* Cisternas et al. (2011) Cisternas M., et al., 2011, ApJ, 726, 57
* Conselice et al. (2009) Conselice C. J., Yang C., Bluck A. F. L., 2009, MNRAS, 394, 1956
* Cowie & Barger (2008) Cowie L. L., Barger A. J., 2008, ApJ, 686, 72
* Cowie et al. (1996) Cowie L. L., Songaila A., Hu E. M., Cohen J. G., 1996, AJ, 112, 839
* Cox et al. (2008) Cox T. J., Jonsson P., Somerville R. S., Primack J. R., Dekel A., 2008, MNRAS, 384, 386
* Darg et al. (2010) Darg D. W., et al., 2010, MNRAS, 401, 1043
* Darvish et al. (2015) Darvish B., Mobasher B., Sobral D., Scoville N., Aragon-Calvo M., 2015, ApJ, 805, 121
* Di Matteo et al. (2005) Di Matteo T., Springel V., Hernquist L., 2005, Nature, 433, 604
* Di Matteo et al. (2008) Di Matteo T., Bournaud F., Martig M., Combes F., Melchior A. L., Semelin B., 2008, A&A, 492, 31
* Domingue et al. (2009) Domingue D. L., Xu C. K., Jarrett T. H., Cheng Y., 2009, ApJ, 695, 1559
* Du et al. (2019) Du C., Li N., Li C., 2019, Research in Astronomy and Astrophysics, 19, 171
* Ellison et al. (2013) Ellison S. L., Mendel J. T., Patton D. R., Scudder J. M., 2013, MNRAS, 435, 3627
* Ellison et al. (2019) Ellison S. L., Viswanathan A., Patton D. R., Bottrell C., McConnachie A. W., Gwyn S., Cuillandre J.-C., 2019, MNRAS, 487, 2491
* Estrada-Carpenter et al. (2020) Estrada-Carpenter V., et al., 2020, ApJ, 898, 171
* Groenewald et al. (2017) Groenewald D. N., Skelton R. E., Gilbank D. G., Loubser S. I., 2017, MNRAS, 467, 4101
* Gunn et al. (2006) Gunn J. E., et al., 2006, AJ, 131, 2332
* Hani et al. (2020) Hani M. H., Gosain H., Ellison S. L., Patton D. R., Torrey P., 2020, MNRAS, 493, 3716
* Hobbs et al. (2010) Hobbs G., et al., 2010, Classical and Quantum Gravity, 27, 084013
* Hopkins et al. (2006) Hopkins P. F., Somerville R. S., Hernquist L., Cox T. J., Robertson B., Li Y., 2006, ApJ, 652, 864
* Hopkins et al. (2008) Hopkins P. F., Cox T. J., Keres D., Hernquist L., 2008, ApJS, 175, 390
* Hopkins et al. (2010) Hopkins P. F., et al., 2010, ApJ, 724, 915
* Jogee et al. (2009) Jogee S., et al., 2009, ApJ, 697, 1971
* Juneau et al. (2005) Juneau S., et al., 2005, ApJ, 619, L135
* Kartaltepe et al. (2007) Kartaltepe J. S., et al., 2007, ApJS, 172, 320
* Kaviraj (2014a) Kaviraj S., 2014a, MNRAS, 437, L41
* Kaviraj (2014b) Kaviraj S., 2014b, MNRAS, 440, 2944
* Keenan et al. (2014) Keenan R. C., et al., 2014, ApJ, 795, 157
* Kim et al. (2021) Kim E., et al., 2021, MNRAS, 507, 3113
* Knapen et al. (2015) Knapen J. H., Cisternas M., Querejeta M., 2015, MNRAS, 454, 1742
* Lin et al. (2004) Lin L., et al., 2004, ApJ, 617, L9
* Lin et al. (2008) Lin L., et al., 2008, ApJ, 681, 232
* Lintott et al. (2008) Lintott C. J., et al., 2008, MNRAS, 389, 1179
* Lintott et al. (2011) Lintott C., et al., 2011, MNRAS, 410, 166
* Lopez-Sanjuan et al. (2009) Lopez-Sanjuan C., Balcells M., Perez-Gonzalez P. G., Barro G., Garcia-Dabo C. E., Gallego J., Zamorano J., 2009, A&A, 501, 505
* Lopez-Sanjuan et al. (2011) Lopez-Sanjuan C., et al., 2011, A&A, 530, A20
* Lopez-Sanjuan et al. (2012) Lopez-Sanjuan C., et al., 2012, A&A, 548, A7
* Lopez-Sanjuan et al. (2014) Lopez-Sanjuan C., et al., 2014, A&A, 564, A127
* Lotz et al. (2008) Lotz J. M., et al., 2008, MNRAS, 391, 3
* Lotz et al. (2011) Lotz J. M., Jonsson P., Cox T. J., Croton D., Primack J. R., Somerville R. S., Stewart K., 2011, ApJ, 742, 103
* Mantha et al. (2018) Mantha K. B., et al., 2018, MNRAS, 475, 1549
* Mendel et al. (2014) Mendel J. T., Simard L., Palmer M., Ellison S. L., Patton D. R., 2014, ApJS, 210, 3
* Moreno et al. (2015) Moreno J., Torrey P., Ellison S. L., Patton D. R., Bluck A. F. L., Bansal G., Hernquist L., 2015, MNRAS, 448, 1107
* Mueller & Gravitational Observatory Advisory Team (2016) Mueller G., Gravitational Observatory Advisory Team, 2016, in APS April Meeting Abstracts. p. J12.002
* Mundy et al. (2017) Mundy C. J., Conselice C. J., Duncan K. J., Almaini O., Haussler B., Hartley W. G., 2017, MNRAS, 470, 3507
* NANOGrav Collaboration et al. (2015) NANOGrav Collaboration et al., 2015, ApJ, 813, 65
* Nevin et al. (2019) Nevin R., Blecha L., Comerford J., Greene J., 2019, ApJ, 872, 76
* Nevin et al. (2021) Nevin R., et al., 2021, arXiv e-prints, p. arXiv:2102.02208
* Pan et al. (2019) Pan H.-A., et al., 2019, ApJ, 881, 119
* Patton & Attfield (2008) Patton D. R., Attfield J. E., 2008, ApJ, 685, 235
* Patton et al. (1997) Patton D. R., Pritchet C. J., Yee H. K. C., Ellingson E., Carlberg R. G., 1997, ApJ, 475, 29
* Pearson et al. (2019) Pearson W. J., et al., 2019, A&A, 631, A51
* Pedregosa et al. (2011) Pedregosa F., et al., 2011, Journal of Machine Learning Research, 12, 2825
* Peng et al. (2002) Peng C. Y., Ho L. C., Impey C. D., Rix H.-W., 2002, AJ, 124, 266
* Peng et al. (2010) Peng Y.-j., et al., 2010, ApJ, 721, 193
* Robaina et al. (2010) Robaina A. R., Bell E. F., van der Wel A., Somerville R. S., Skelton R. E., McIntosh D. H., Meisenheimer K., Wolf C., 2010, ApJ, 719, 844
* Robotham et al. (2014) Robotham A. S. G., et al., 2014, MNRAS, 444, 3986
* Rodriguez-Gomez et al. (2015) Rodriguez-Gomez V., et al., 2015, MNRAS, 449, 49
* Rodriguez-Gomez et al. (2019) Rodriguez-Gomez V., et al., 2019, MNRAS, 483, 4140
* Sesana (2013) Sesana A., 2013, Classical and Quantum Gravity, 30, 244009
* Shi et al. (2009) Shi Y., Rieke G., Lotz J., Perez-Gonzalez P. G., 2009, ApJ, 697, 1764
* Simon & Burke-Spolaor (2016) Simon J., Burke-Spolaor S., 2016, ApJ, 826, 11
* Siwek et al. (2020) Siwek M. S., Kelley L. Z., Hernquist L., 2020, MNRAS, 498, 537
* Snyder et al. (2019) Snyder G. F., Rodriguez-Gomez V., Lotz J. M., Torrey P., Quirk A. C. N., Hernquist L., Vogelsberger M., Freeman P. E., 2019, MNRAS, 486, 3702
* Springel (2000) Springel V., 2000, MNRAS, 312, 859
* Stringer et al. (2009) Stringer M. J., Benson A. J., Bundy K., Ellis R. S., Quetin E. L., 2009, MNRAS, 393, 1127
* Tange (2018) Tange O., 2018, GNU Parallel 2018. Ole Tange, doi:10.5281/zenodo.1146014, [https://doi.org/10.5281/zenodo.1146014](https://doi.org/10.5281/zenodo.1146014)
* Treu et al. (2005) Treu T., Ellis R. S., Liao T. X., van Dokkum P. G., 2005, ApJ, 622, L5
* Waskom (2021) Waskom M. L., 2021, Journal of Open Source Software, 6, 3021
* White & Rees (1978) White S. D. M., Rees M. J., 1978, MNRAS, 183, 341
* Xu et al. (2004) Xu C. K., Sun Y. C., He X. T., 2004, ApJ, 603, L73
* Xu et al. (2012) Xu C. K., Zhao Y., Scoville N., Capak P., Drory N., Gao Y., 2012, ApJ, 747, 85
* Zibetti et al. (2009) Zibetti S., Charlot S., Rix H.-W., 2009, MNRAS, 400, 1181
* pandas development team (2020) pandas development team T., 2020, pandas-dev/pandas: Pandas, doi:10.5281/zenodo.3509134, [https://doi.org/10.5281/zenodo.3509134](https://doi.org/10.5281/zenodo.3509134)

## Appendix A Photometric versus spectroscopic redshifts

Here we explore the effect of using photometric redshifts on the mass calculation and subsequent mass-completeness cut. While we ultimately find that the merger fraction evolution as a function of redshift is unchanged by using spectroscopic masses, we nevertheless find that the mass distributions as a function of redshift bin (Figure A1) are different. Figure A1 shows the mass distribution and 95% completeness cut for five redshift bins with spacings \(\Delta z=0.02\) for color-based stellar masses measured using photometric (left) and spectroscopic redshifts (right).
The distributions on the left are distinctly double-peaked, which leads us to conclude that photometric redshifts are biasing a population of low-redshift galaxies towards higher masses. When we directly compare photometric-based redshift measurements to spectroscopic-based redshift measurements, we find a bias towards higher redshifts among the photometric measurements at low redshift, which could be producing a population of boosted masses and hence the artificial double-peaked profile. ## Appendix B Sanity checks for the result of a negative slope of the major merger fraction with redshift As we will address in the discussion section, the result of increasing merger fraction with stellar mass has precedent in the literature. However, the result of decreasing merger fraction with redshift over the range \(0.03<z<0.19\) is relatively unprecedented. Given this surprising result, we use this section to explore whether the decreasing merger fraction with increasing redshift is physical (real) or whether we can attribute it to sample systematics (i.e. mass incompleteness at higher redshift, or errors in the mass calculation or determination of the photometric redshift). To test the result of a negative trend with redshift for the major merger fraction, we re-run the major merger fraction measurement in several ways: 1) Using the full, mass-incomplete sample (§B1), 2) Adjusting the redshift binning scheme (§B2), 3) Using spectroscopic redshifts (§B3), 4) Using SPS-derived stellar masses (§B4), 5) Running the analysis with the A18 mergers (§B5), and 6) Re-running the analysis with different merger classifications (§B6). We conclude in §4.13 by arguing that the decreasing merger fraction with redshift is not a result of sample systematics and is instead a physical result. ### Mass incompleteness We run the major merger fraction calculation for the full (i.e. mass-incomplete) sample and find that the trends persist. In this case, the slopes of the major merger fraction with mass and with redshift are both less steep. Additionally, the trend with redshift is visually very steep at low redshift, followed by a flattening at redshifts \(z>0.1\). We hypothesize that this flattening could be due to significant mass incompleteness at high redshift. We discuss this trend in §4.14. ### Changing bins We adjust the redshift bin sizes and use these new binning schemes to re-create the mass-complete sample and re-calculate the mass- and redshift-dependence of the merger fraction. We use five different linear spacing schemes for the redshift bins (\(\Delta z=0.01,0.02,0.03,0.04,0.05\)). We also use an adaptive binning scheme where the redshift bin spacing is determined using a k-means approach; this constructs redshift bins that have the same number of galaxies in each bin. We also rerun all of these calculations for the mass-incomplete sample. For all binning schemes we still find a positive slope with mass for the majority of the redshift bins and a negative slope with redshift for the majority of the mass bins. This confirms that the redshift bin spacing and/or the associated number of galaxies in each bin are not responsible for the negative trend of the major merger fraction with redshift. ### Spectroscopic redshifts As described in §3.6, we cross-match our clean (no photometric flags) merger sample with the Mendel et al. (2014) catalog, which is mass-incomplete.
We then use the spectroscopic redshifts from this sample to re-run the color-based mass measurement and redo the mass completeness calculation. Finally, we re-run the major merger fraction analysis. We find that the photometric redshifts, which are available for our full sample, exhibit a bias towards higher redshifts (as described in Appendix A), which shifts some galaxies at low redshift out of their redshift bins and results in higher stellar mass estimates. Despite these biases, we still find a significant positive trend for \(f_{\rm merg}\) with mass and a negative trend for \(f_{\rm merg}\) with redshift for the majority of the mass and redshift bins. ### Mendel masses To test the robustness of this result with respect to the mass calculation, we re-run the analysis from the previous section, but instead of color-based stellar masses, we use the stellar masses from Mendel et al. (2014), which are derived using a Sersic decomposition coupled with an SPS-based approach. We also use the spectroscopic redshifts for this analysis. We find that the negative slope of the major merger fraction with respect to redshift is maintained even for this different method of mass measurement, meaning that the color-based approach to mass calculation is not responsible for the negative trend. ### A18 mergers To explore whether the negative trend with redshift is a peculiarity of the sample or due to the merger classification, we re-run the analysis with the merger classification of A18. Cross-matching the A18 sample with the SDSS sample reduces the sample size to 97k galaxies. Since the sample size is significantly reduced, we adjust the mass binning to include fewer bins. Due to this reduced sample size, our investigation spans a smaller range in redshift (\(0.02<z<0.08\)) and stellar mass (\(10.5<\log(M_{*}/{\rm M}_{\odot})<11\)). We first verify that the positive slope with stellar mass and the negative slope with redshift persist for the LDA merger classifications. When we use the A18 mergers instead (using a threshold of 0.95), we find that the slope is positive with mass for the three redshift bins but that the slope is no longer universally negative with redshift for the mass bins. Instead, it is positive for two of the mass bins, flat for two of the bins, and negative for the two highest mass bins. By using the same cross-matched galaxy subsample with both the LDA classification and the A18 classification, we can confirm that the negative trend we observe for the LDA sample of mergers is due to the merger classification and not a peculiarity of the sample selection. Given that we have not conducted a full analysis focusing on A18 mergers and are simply comparing them to our merger sample, an explanation of why the merger fraction trends differ for the A18 sample is outside the scope of this paper. However, we can speculate on some differences in merger selection that could lead to these different conclusions. Examining the properties of the two different merger samples, we find some notable differences: the A18 mergers have lower concentrations, higher \(A\) values, lower \(A_{S}\) values, tend to be at higher redshifts, and are redder than the LDA sample. As discussed in §4.6, in most other properties (i.e. S/N and stellar mass), the merger samples are similar. The A18 sample may be slightly biased towards identifying redder galaxies with higher redshifts, which may result in a higher merger fraction at higher redshifts for some of the mass bins.
### Different classifications Here we calculate the dependence of the merger fraction on mass and redshift for the different merger classifications, including the major merger pre- and post-coalescence classifications and the minor merger combined and pre- and post-coalescence classifications. All variants of the major merger classification give similar results; both the pre- and post-coalescence merger fractions have positive slopes with mass. The pre-coalescence major merger fraction has a negative evolution with redshift. In the case of the post-coalescence major merger classification, the slope of \(f_{\rm merg}\) with redshift is mostly negative and flat for some mass bins. The minor merger classifications give very different results; as presented in §4.11, the slope of the combined minor merger fraction with mass is flat (often with a peak at intermediate masses), and it is also flat with redshift. The same goes for the pre-coalescence and early-stage minor mergers. The late-stage minor mergers are positive with mass and negative with redshift for most bins, just like the major mergers. The post-coalescence minor mergers are positive with mass and negative or flat with redshift, so very similar to the post-coalescence major mergers. There are two important lessons here. First, minor mergers do not show mass-dependent or redshift-dependent evolution. We will discuss the physical implications of this in §5.5. Second, the dependence of \(f_{\rm merg}\) on stellar mass and redshift is affected by merger stage as well as mass ratio. The implication is that studies that identify mergers should pay careful attention to the biases of the merger sample. However, it is important to note that the different stage classifications of the major merger fraction give similar results. Our finding of a negative trend of the major merger fraction with redshift therefore cannot be attributed to a difference in merger observability timescale of our method relative to close pair techniques.
2306.05820
Initial 56Ni Masses in Type Ia Supernovae
We infer initial masses of the synthesized radioactive nickel-56 in a sample of recent Type Ia supernovae applying a new formalism introduced recently by Khatami & Kasen (2019). It is shown that the nickel masses we derive do not differ significantly from previous estimates based on the traditional Arnett-model. We derive the $\beta$ parameter for our sample SNe and show that these are consistent with the fiducial value of $\sim 1.6$ given by Khatami & Kasen (2019) from SN Ia hydrodynamical simulations.
Zsófia Bora, József Vinkó, Réka Könyves-Tóth
2023-06-09T11:42:04Z
http://arxiv.org/abs/2306.05820v1
# Initial \({}^{56}\)Ni Masses in Type Ia Supernovae ###### Abstract We infer initial masses of the synthesized radioactive nickel-56 in a sample of recent Type Ia supernovae applying a new formalism introduced recently by Khatami & Kasen (2019). It is shown that the nickel masses we derive do not differ significantly from previous estimates based on the traditional Arnett-model. We derive the \(\beta\) parameter for our sample SNe and show that these are consistent with the fiducial value of \(\sim 1.6\) given by Khatami & Kasen (2019) from SN Ia hydrodynamical simulations. Type Ia supernovae -- explosive nucleosynthesis -- nickel masses ## 1 Introduction Type Ia supernovae (SNe Ia) are among the most suitable objects for extragalactic distance measurements (see e.g. Jha et al., 2019, for a recent review). Although the well-known correlation between their peak absolute brightness (e.g. \(M_{B}\) in the \(B\)-band) and the light curve decline rate (e.g. \(\Delta m_{15}\)) is still based on empirical calibrations, it was realized decades ago that the peak brightness is connected to the amount of radioactive nickel (\({}^{56}\)Ni) synthesized during the thermonuclear explosion of the carbon-oxygen white dwarf (C/O WD; Arnett, 1982; Goldstein & Kasen, 2018). Understanding the physics behind the peak brightness - decline rate - nickel mass (\(M_{\rm Ni}\)) connection may have a crucial importance in improving the distance measurement methods. The nickel yield of SNe Ia may also be the smoking gun for the explosion mechanism triggering the thermonuclear runaway in the C/O white dwarf (see e.g. Seitenzahl & Townsley, 2017, and references therein). For example, delayed-detonation models that assume the explosion of near-Chandrasekhar mass (\(M_{Ch}\)) WDs predict a higher \({}^{57}\)Ni/\({}^{56}\)Ni abundance ratio than double-detonation or violent merger models in which the exploding WD has sub-\(M_{Ch}\) mass. Graur et al. (2016) found about twice the solar value for the \({}^{57}\)Ni/\({}^{56}\)Ni mass ratio from the late-time light curve decline rate of SN 2012cg, which led to the conclusion that the exploding star in SN 2012cg was a near-\(M_{Ch}\) WD. On the other hand, Scalzo et al. (2014) pointed out that the observed diversity of both the \({}^{56}\)Ni and ejecta masses of SNe Ia (\(0.3<M_{\rm Ni}<0.8\) M\({}_{\odot}\) and \(0.8<M_{\rm ej}<1.5\) M\({}_{\odot}\), respectively) suggests that sub-\(M_{Ch}\) explosion mechanisms, such as the double detonation or the violent merger scenario, are responsible for the bulk of the observed SNe Ia (see e.g. Maoz et al., 2014, for a review of explosion models). Similar conclusions have been presented by numerous other studies, e.g. Childress et al. (2015), Dhawan et al. (2018), Scalzo et al. (2019), Wygoda et al. (2019) and Konyves-Toth et al. (2020). The estimation of the initial amount of \(M_{\rm Ni}\) synthesized during the explosion is traditionally based on Arnett's rule (Arnett, 1982), which states that at the moment of maximum luminosity the instantaneous energy input from the radioactive \({}^{56}\)Ni \(\rightarrow\)\({}^{56}\)Co \(\rightarrow\)\({}^{56}\)Fe decay chain is equal to the peak bolometric luminosity.
Since the energy generation rate from radioactive Ni-decay depends linearly on the initial amount of \({}^{56}\)Ni, it provides a relatively easy way of inferring \(M_{\rm Ni}\) (e.g. Stritzinger et al., 2006). Modeling the unblended Fe and Co features in the nebular spectra of SNe Ia is an alternative method of deriving the Ni-mass (e.g. Mazzali et al., 1997; Childress et al., 2015). Stritzinger et al. (2006) showed that these two methods, i.e. Arnett's rule and the nebular spectral modeling, provide consistent Ni-mass estimates. Recently, Khatami and Kasen (2019) found that Arnett's rule has limited accuracy. The main reason for this is that in the classical Arnett-model the energy density profile of the ejecta is assumed to be self-similar immediately after explosion. In reality, however, if the heating source (\({}^{56}\)Ni) is central, as in core-collapse SNe, then some time is needed to reach self-similarity. For more evenly mixed sources (like SNe Ia), self-similarity is reached earlier, so Arnett's rule applies to these objects quite well. Khatami and Kasen (2019) found that for a central heating source, Arnett's rule underestimates the peak luminosity, while for more evenly mixed heating it may give an overestimate. They also presented a new relation between the peak luminosity and peak time of SN light curves that does not assume self-similarity. In this paper we compare the nickel mass estimates from the formulae of Khatami and Kasen (2019) to ones derived from recent observations of SNe Ia based on Arnett's rule (Konyves-Toth et al., 2020). In Section 2 we review the methodology for obtaining nickel mass estimates from the observations, then the results from the two methods are compared to each other in **Section 3**. In Section 4 we calibrate the value of \(\beta\) using two other methods for determining the nickel mass of Type Ia SNe, which are independent of Arnett's rule. Finally, Section 5 summarizes our conclusions. ## 2 Methods In the original, self-similar model of a SN Ia, assuming homologous expansion of a constant density ejecta, Arnett's rule takes the following form: \[L_{\rm peak}=\alpha L_{\rm heat}(t_{\rm peak}), \tag{1}\] where \(L_{\rm heat}(t_{\rm peak})\) is the heating function at the moment of the luminosity peak, and \(\alpha\sim 1\) is a correction factor that accounts for small deviations from the assumptions of the Arnett-model. For SNe Ia \(L_{\rm heat}\) is assumed to be the usual exponential form of the decay chain of \({}^{56}\)Ni \(\rightarrow\)\({}^{56}\)Co \(\rightarrow\)\({}^{56}\)Fe. **If \(\varepsilon_{\rm Ni}=7.9\times 10^{43}\) erg s\({}^{-1}\) and \(\varepsilon_{\rm Co}=1.45\times 10^{43}\) erg s\({}^{-1}\) are the heating rates and \(t_{\rm Ni}=8.8\) days and \(t_{\rm Co}=111.3\) days are the e-folding timescales of \({}^{56}\)Ni and \({}^{56}\)Co (e.g. Branch and Wheeler, 2017),** then the heating function takes the form of \[L_{\rm heat}(t)=\frac{M_{\rm Ni}}{M_{\odot}}\cdot\left[(\varepsilon_{\rm Ni}-\varepsilon_{\rm Co})e^{-t/t_{\rm Ni}}+\varepsilon_{\rm Co}e^{-t/t_{\rm Co}}\right]. \tag{2}\] Since the heating function depends linearly on \(M_{\rm Ni}\), it can be expressed simply as \[M_{\rm Ni}\ =\ \frac{L_{\rm peak}}{\alpha Q(t_{\rm peak})}, \tag{3}\] where \(Q(t)\) is the time-dependent part of the \(L_{\rm heat}(t)\) function in Eq. 2. The caveat of this simple approach is that the inferred \(M_{\rm Ni}\) depends critically on \(t_{\rm peak}\) and \(L_{\rm peak}\), which must be known accurately to get a reasonable result.
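As an illustration, the following minimal Python sketch evaluates Arnett's rule (Eq. 3) with \(\alpha=1\) and the constants quoted above; the function names are ours, not from the paper:

```python
import numpy as np

EPS_NI, EPS_CO = 7.9e43, 1.45e43   # heating rates [erg/s per Msun of 56Ni]
T_NI, T_CO = 8.8, 111.3            # e-folding timescales [days]

def q(t):
    """Time-dependent part of the heating function, Eq. (2)."""
    return (EPS_NI - EPS_CO) * np.exp(-t / T_NI) + EPS_CO * np.exp(-t / T_CO)

def m_ni_arnett(l_peak, t_peak, alpha=1.0):
    """Arnett's rule, Eq. (3): initial 56Ni mass in solar masses."""
    return l_peak / (alpha * q(t_peak))

# SN 2011fe values from Table 1: t_peak = 16.59 d, L_peak = 1.22e43 erg/s.
print(m_ni_arnett(1.22e43, 16.59))   # ~0.55 Msun
```

For the SN 2011fe inputs of Table 1 this simple peak-point estimate gives \(\approx 0.55\,M_{\odot}\), close to the \(0.567\,M_{\odot}\) obtained by Konyves-Toth et al. (2020) from fitting the full light curve.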
If the photometric sampling is sparse, it may be difficult to get a precise \(M_{\rm Ni}\). To overcome this difficulty, one possibility is to fit the entire bolometric light curve with the prediction of the Arnett-model (e.g. Valenti et al., 2008; Chatzopoulos et al., 2012). In this case the theoretical luminosity can be expressed as \[L(t)=\frac{2}{t_{\rm m}^{2}}(1-e^{-(t_{\gamma}/t)^{2}})\int_{0}^{t}t^{\prime}L_{\rm heat}(t^{\prime})e^{(t^{\prime 2}-t^{2})/t_{\rm m}^{2}}dt^{\prime}, \tag{4}\] where \(t_{\rm m}\) is the mean light curve timescale (close to, but not equal to, \(t_{\rm peak}\)) and \(t_{\gamma}\) is the timescale for the gamma-ray leakage (Arnett, 1982; Valenti et al., 2008; Chatzopoulos et al., 2012). Inserting Equation 2 into Equation 4 and fitting it to the bolometric light curve around the peak, one can get the best-fit estimate for \(M_{\rm Ni}\). Khatami and Kasen (2019) inferred a new relation between the peak luminosity of a SN and the spatial and temporal distribution of its radioactive heating source: \[L_{\rm peak}=\frac{2}{\beta^{2}t_{\rm peak}^{2}}\int_{0}^{\beta t_{\rm peak}}t^{\prime}L_{\rm heat}(t^{\prime})dt^{\prime}, \tag{5}\] where \(\beta\) is a numerical parameter that is related to the spatial distribution of heating, while \(L_{\rm heat}(t)\), again, describes the type of heating that powers the SN ejecta. It can have many forms depending on the SN type (see Khatami and Kasen, 2019, for a list of different sources). Inserting \(L_{\rm heat}(t)\) from Equation 2 into Equation 5, \(L_{\rm peak}\) can be expressed as1 Footnote 1: In the original publication of Khatami and Kasen (2019) this formula was written with \(1-\frac{\beta t_{\rm peak}}{t_{\rm Ni}}\) instead of \(1-\left(\frac{\beta t_{\rm peak}}{t_{\rm Ni}}+1\right)\) in the first part of the sum within the square brackets. Later it has been corrected in a new version uploaded to arxiv.org. We present and use the correct version here. \[L_{\rm peak}=\frac{2M_{\rm Ni}\varepsilon_{\rm Ni}t_{\rm Ni}^{2}}{\beta^{2}t_{\rm peak}^{2}}\cdot\left[\left(1-\frac{\varepsilon_{\rm Co}}{\varepsilon_{\rm Ni}}\cdot\frac{t_{\rm Co}}{t_{\rm Co}-t_{\rm Ni}}\right)\cdot\left(1-\left(\frac{\beta t_{\rm peak}}{t_{\rm Ni}}+1\right)e^{-\frac{\beta t_{\rm peak}}{t_{\rm Ni}}}\right)+\frac{t_{\rm Co}^{2}\,\varepsilon_{\rm Co}}{t_{\rm Ni}^{2}\,\varepsilon_{\rm Ni}}\cdot\frac{t_{\rm Co}}{t_{\rm Co}-t_{\rm Ni}}\cdot\left(1-\left(\frac{\beta t_{\rm peak}}{t_{\rm Co}}+1\right)e^{-\frac{\beta t_{\rm peak}}{t_{\rm Co}}}\right)\right] \tag{6}\] Equation 6 describes the dependence of the peak luminosity on \(t_{\rm peak}\), i.e. the rise time to the bolometric maximum light, if the SN is powered by the radioactive decay of \({}^{56}\)Ni and \({}^{56}\)Co. Again, the peak luminosity is linearly proportional to \(M_{\rm Ni}\), but the connection between \(L_{\rm peak}\) and \(t_{\rm peak}\) is not as simple as in Equation 1. Figure 1 shows the \(L_{\rm peak}\) vs. \(t_{\rm peak}\) relation **for \(\beta=1.666\pm 0.188\) (solid and dotted lines) assuming different values of \(M_{\rm Ni}\), as well as the observed data given in Table 1.** ## 3 Comparison with Observations In this paper we use the peak time - luminosity relation expressed in Equation 6 to infer the initial \({}^{56}\)Ni mass of 16 SNe Ia studied recently by Konyves-Toth et al. (2020).
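To make the use of Equation 6 concrete, the sketch below implements its closed form and inverts it for \(M_{\rm Ni}\); since \(L_{\rm peak}\) is linear in \(M_{\rm Ni}\), the inversion is a single division. The helper names are ours; the constants are those of Section 2:

```python
import numpy as np

EPS_NI, EPS_CO = 7.9e43, 1.45e43   # erg/s per Msun of 56Ni (Section 2)
T_NI, T_CO = 8.8, 111.3            # e-folding timescales [days]

def l_peak_kk(m_ni, t_peak, beta=1.6):
    """Closed form of Eq. (6) for Ni/Co-powered ejecta."""
    f = lambda tau: 1.0 - (beta * t_peak / tau + 1.0) * np.exp(-beta * t_peak / tau)
    c = (EPS_CO / EPS_NI) * T_CO / (T_CO - T_NI)   # the t_Co/(t_Co - t_Ni) factor
    bracket = (1.0 - c) * f(T_NI) + (T_CO / T_NI) ** 2 * c * f(T_CO)
    return 2.0 * m_ni * EPS_NI * T_NI ** 2 / (beta * t_peak) ** 2 * bracket

def m_ni_kk(l_peak, t_peak, beta=1.6):
    """Invert Eq. (6): L_peak scales linearly with M_Ni."""
    return l_peak / l_peak_kk(1.0, t_peak, beta)

# SN 2011fe (Table 1): t_peak = 16.59 d, L_peak = 1.22e43 erg/s.
print(m_ni_kk(1.22e43, 16.59))     # ~0.496 Msun
```

With \(\beta=1.6\) and the Table 1 observables of SN 2011fe, this reproduces the \(M_{\rm Ni}^{\rm KK}=0.496\,M_{\odot}\) listed in Table 2.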
Konyves-Toth et al. (2020) estimated ejecta masses, initial nickel masses and other parameters by fitting bolometric light curves of 16 SNe Ia with the Arnett-model, as described in Section 2. Table 1 summarizes the parameters for the sample SNe collected from Konyves-Toth et al. (2020): here \(t_{\rm peak}\) is the rise time from explosion to maximum luminosity in days, and \(L_{\rm peak,obs}\) is the peak luminosity determined from the observations. Applying Equation 6 to these data, we derived new \(M_{\rm Ni}\) values using the \(t_{\rm peak}\) and \(L_{\rm peak,obs}\) entries of Table 1. We adopted \(\beta=1.6\) as recommended by Khatami & Kasen (2019) for SNe Ia based on radiation hydrodynamical simulations. The results are shown in Table 2 as \(M_{\rm Ni}^{\rm KK}\). Uncertainties are derived via propagating the errors listed in Table 1. Figure 2 shows the comparison between the masses derived from Arnett's rule and the ones estimated using Equation 6. Figure 1: \(L_{\rm peak}\) as a function of \(t_{\rm peak}\) (Eq. 6) for \(\beta=1.666\pm 0.188\) (as derived in Section 4) for different values of \(M_{\rm Ni}\). \(L_{\rm peak}\) is marked with solid lines; the dotted lines show the uncertainty caused by the standard deviation of \(\beta\). The \(L_{\rm peak,obs}\) values from Table 1 are shown as well. \begin{table} \begin{tabular}{c c c c} \hline \hline Object & \(t_{\rm peak}\) & \(L_{\rm peak,obs}\) & \(t_{\gamma}\) \\ & [days] & [\(10^{43}\) erg/s] & [days] \\ \hline SN 2011fe & 16.59 \(\pm\)0.06 & 1.22 \(\pm\)0.12 & 37.603 \(\pm\)0.670 \\ Gaia16alq & 19.92 \(\pm\)0.42 & 1.71 \(\pm\)0.17 & 46.669 \(\pm\)0.717 \\ SN 2016asf & 15.08 \(\pm\)2.58 & 1.59 \(\pm\)0.16 & 39.192 \(\pm\)1.329 \\ SN 2016bln & 17.42 \(\pm\)0.34 & 1.83 \(\pm\)0.18 & 44.508 \(\pm\)1.125 \\ SN 2016coj & 14.17 \(\pm\)0.26 & 1.15 \(\pm\)0.11 & 32.967 \(\pm\)0.863 \\ SN 2016eoa & 14.07 \(\pm\)3.05 & 1.38 \(\pm\)0.14 & 39.038 \(\pm\)0.935 \\ SN 2016ffh & 14.03 \(\pm\)1.16 & 1.86 \(\pm\)0.19 & 40.521 \(\pm\)0.926 \\ SN 2016gcl & 17.79 \(\pm\)2.18 & 1.50 \(\pm\)0.15 & 43.623 \(\pm\)1.182 \\ SN 2016ixb & 15.64 \(\pm\)2.09 & 1.14 \(\pm\)0.11 & 30.520 \(\pm\)1.345 \\ SN 2017cts & 14.23 \(\pm\)1.10 & 1.58 \(\pm\)0.16 & 42.485 \(\pm\)0.959 \\ SN 2017erp & 17.96 \(\pm\)0.09 & 1.94 \(\pm\)0.19 & 37.603 \(\pm\)1.084 \\ SN 2017fgc & 16.21 \(\pm\)0.24 & 1.74 \(\pm\)0.17 & 45.398 \(\pm\)0.941 \\ SN 2017fms & 14.04 \(\pm\)0.50 & 1.02 \(\pm\)0.10 & 34.612 \(\pm\)0.731 \\ SN 2017hjy & 16.29 \(\pm\)0.49 & 1.65 \(\pm\)0.16 & 39.484 \(\pm\)0.840 \\ SN 2017igf & 19.58 \(\pm\)0.93 & 0.91 \(\pm\)0.09 & 34.554 \(\pm\)1.193 \\ SN 2018oh & 14.86 \(\pm\)0.86 & 1.57 \(\pm\)0.16 & 44.654 \(\pm\)0.928 \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters of the sample SNe Ia, adopted from Könyves-Tóth et al. (2020). Figure 2: Comparison of the new nickel masses derived from Equation 6 to the original ones from Arnett’s rule in Table 1. **The dotted line shows the 1:1 relation, while the solid line is the best-fit linear relation between the two datasets.** These new nickel masses scatter somewhat around the ones derived by Konyves-Toth et al. (2020) from Arnett's rule, but the differences are within the uncertainties, so the two datasets are generally consistent. The SN that deviates the most from the previous estimates is SN 2017erp, which may have an overestimated \(M_{\rm Ni}\) due to its peculiar reddening (see also Konyves-Toth et al., 2020). Its early red color was also reported by Li et al.
(2021), and even though it was classified as a normal-velocity (NV) supernova, they also noted that it had the highest early-phase velocity within the group. Thus, if the nickel mass of SN 2017erp was previously overestimated, the new \(M_{\rm Ni}\) based on Equation 6 might be a better estimate, since it is lower than the previous one from Arnett's rule. The best-fit linear relation between \(M_{\rm Ni}^{\rm Arnett}\) and \(M_{\rm Ni}^{\rm KK}\), plotted with a solid blue line in Figure 2, has a slope of \(0.880\pm 0.068\), but if we omit SN 2017erp from the sample the slope becomes \(0.967\pm 0.075\), which is very close to the 1:1 relation (shown by the dotted line). In Figure 3 we show the linear relationship between \(L_{\rm peak,obs}\) and the new nickel masses **(\(M_{\rm Ni}^{\rm KK}\))**. It also demonstrates the diversity of the peak luminosities and the corresponding nickel masses, similar to Scalzo et al. (2014, 2019) and Konyves-Toth et al. (2020). A linear fit to these data resulted in a relation of \[M_{\rm Ni}^{\rm KK}\ =\ 0.425(\pm 0.038)\cdot L_{\rm peak,obs}\ -\ 0.041(\pm 0.053), \tag{7}\] which can be used to estimate the initial nickel masses directly from the measured peak luminosities. The validity of the new nickel masses estimated above is probed by using the published data on SN 2011fe. SN 2011fe was one of the most thoroughly studied SNe Ia in the last decade, because it was a nearby, very bright event discovered only a few hours after explosion. There are numerous nickel mass estimates for SN 2011fe in the literature based on different methods. For example, Pereira et al. (2013) estimated \(M_{\rm Ni}\) from spectrophotometric observations, similar to Mazzali et al. (2015), who used optical and NIR spectra to determine the initial nickel and iron masses of the ejecta of SN 2011fe. Scalzo et al. (2014) determined \(M_{\rm Ni}\) from the peak bolometric luminosity, similar to our approach, while Childress et al. (2015) used the flux of the [CoIII] \(\lambda 5893\) nebular emission feature. More recently, Dhawan et al. (2016) related the phase of the secondary maximum of the near-infrared (NIR) light curves to the bolometric peak luminosity, from which they applied Arnett's rule and delayed-detonation models to determine the initial \(M_{\rm Ni}\). As noted earlier, Konyves-Toth et al. (2020) also gave an estimate for \(M_{\rm Ni}\) by fitting the Arnett-model to the whole bolometric light curve. All of these values are collected in Table 3, and their mean is \(M_{\rm Ni}=0.50\pm 0.08M_{\odot}\). This is within a \(1\sigma\) agreement with the new value of \(M_{\rm Ni}^{\rm KK}=0.496\pm 0.051M_{\odot}\) given above in Table 2. ## 4 A closer look at the beta parameter \begin{table} \begin{tabular}{c c c c c c} \hline Object & \(M_{\rm Ni}^{\rm Arnett}\) & \(M_{\rm Ni}^{\rm tail}\) & \(M_{\rm Ni}^{\rm 15}\) & \(M_{\rm Ni}^{\rm avg}\) & \(M_{\rm Ni}^{\rm KK}\) \\ & [\(M_{\odot}\)] & [\(M_{\odot}\)] & [\(M_{\odot}\)] & [\(M_{\odot}\)] & [\(M_{\odot}\)] \\ \hline SN 2011fe & 0.567 \(\pm\)0.042 & 0.581 \(\pm\)0.049 & 0.59 \(\pm\)0.051 & 0.579 \(\pm\)0.047 & 0.496 \(\pm\)0.051 \\ Gaia16alq & 0.744 \(\pm\)0.055 & 0.768 \(\pm\)0.098 & 0.781 \(\pm\)0.07 & 0.764 \(\pm\)0.074 & 0.796 \(\pm\)0.08 \\ SN 2016asf & 0.597 \(\pm\)0.149 &... & 0.64 \(\pm\)0.055 & 0.618 \(\pm\)0.102 & 0.602 \(\pm\)0.064 \\ SN 2016bln & 0.789 \(\pm\)0.097 & 0.802 \(\pm\)0.094 & 0.822 \(\pm\)0.071 & 0.804 \(\pm\)0.087 & 0.771 \(\pm\)0.088 \\ SN 2016coj & 0.401 \(\pm\)0.053 &...
& 0.398 \(\pm\)0.035 & 0.4 \(\pm\)0.044 & 0.416 \(\pm\)0.047 \\ SN 2016eoa & 0.482 \(\pm\)0.103 &... & 0.507 \(\pm\)0.05 & 0.494 \(\pm\)0.076 & 0.497 \(\pm\)0.127 \\ SN 2016ffh & 0.573 \(\pm\)0.078 &... & 0.527 \(\pm\)0.125 & 0.55 \(\pm\)0.102 & 0.668 \(\pm\)0.107 \\ SN 2016gcl & 0.689 \(\pm\)0.164 & 0.692 \(\pm\)0.102 & 0.719 \(\pm\)0.098 & 0.7 \(\pm\)0.121 & 0.642 \(\pm\)0.123 \\ SN 2016ixb & 0.483 \(\pm\)0.064 &... & 0.525 \(\pm\)0.098 & 0.504 \(\pm\)0.081 & 0.443 \(\pm\)0.088 \\ SN 2017cts & 0.539 \(\pm\)0.063 & 0.558 \(\pm\)0.081 & 0.553 \(\pm\)0.099 & 0.55 \(\pm\)0.081 & 0.573 \(\pm\)0.089 \\ SN 2017erp & 0.975 \(\pm\)0.083 &... & 1.074 \(\pm\)0.12 & 1.024 \(\pm\)0.102 & 0.836 \(\pm\)0.087 \\ SN 2017fgc & 0.692 \(\pm\)0.047 &... & 0.701 \(\pm\)0.061 & 0.696 \(\pm\)0.054 & 0.695 \(\pm\)0.077 \\ SN 2017fms & 0.36 \(\pm\)0.029 &... & 0.385 \(\pm\)0.034 & 0.372 \(\pm\)0.032 & 0.367 \(\pm\)0.046 \\ SN 2017hjy & 0.688 \(\pm\)0.057 & 0.69 \(\pm\)0.065 & 0.7 \(\pm\)0.069 & 0.693 \(\pm\)0.064 & 0.661 \(\pm\)0.081 \\ SN 2017igf & 0.42 \(\pm\)0.051 & 0.409 \(\pm\)0.043 & 0.447 \(\pm\)0.04 & 0.425 \(\pm\)0.045 & 0.418 \(\pm\)0.057 \\ SN 2018oh & 0.598 \(\pm\)0.059 & 0.614 \(\pm\)0.081 & 0.566 \(\pm\)0.073 & 0.593 \(\pm\)0.071 & 0.588 \(\pm\)0.084 \\ \hline \end{tabular} \end{table} Table 2: Initial \({}^{56}\)Ni masses from the different methods used in this paper. \(M_{\rm Ni}^{\rm Arnett}\) is based on Eq. 3, collected from Könyves-Tóth et al. (2020). \(M_{\rm Ni}^{\rm tail}\) and \(M_{\rm Ni}^{\rm 15}\) refer to the tail-mass and \(t_{15}\)-mass, respectively, as shown in Section 4. \(M_{\rm Ni}^{\rm avg}\) is the average of the previous three columns, while \(M_{\rm Ni}^{\rm KK}\) refers to the masses inferred from Equation 6. The new formula by Khatami and Kasen (2019) **(Equation 6)** also introduces the \(\beta\) parameter, which is connected with the spatial distribution of heating, recombination effects, and opacity. They found that the different distributions of heating only change the value of \(\beta\); thus, **Equation 6** still holds true. This means that the same formula, with different \(\beta\) values and heating functions, can be used to describe the peak luminosity of a wide variety of objects. **If we have independent measurements for \(M_{\rm Ni}\) and \(L_{\rm peak}\), then we can apply Equation 6 to infer \(\beta\) for each object.** The original value of \(\beta\), given by Khatami and Kasen (2019) for Type Ia SNe as \(\beta\sim 1.6\), is based on numerical simulations of SN Ia explosions. In this section, we use two independent methods to determine the nickel masses, and consequently, the values of \(\beta\) for our sample SNe.
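As a minimal numerical sketch of this procedure (re-using the closed form of Eq. 6 from the earlier sketch; the root-finding bracket \([0.5,4]\) is our assumption, chosen wide enough for this sample):

```python
import numpy as np
from scipy.optimize import brentq

EPS_NI, EPS_CO, T_NI, T_CO = 7.9e43, 1.45e43, 8.8, 111.3

def l_peak_kk(m_ni, t_peak, beta):
    """Closed form of Eq. (6), as in the earlier sketch."""
    f = lambda tau: 1.0 - (beta * t_peak / tau + 1.0) * np.exp(-beta * t_peak / tau)
    c = (EPS_CO / EPS_NI) * T_CO / (T_CO - T_NI)
    bracket = (1.0 - c) * f(T_NI) + (T_CO / T_NI) ** 2 * c * f(T_CO)
    return 2.0 * m_ni * EPS_NI * T_NI ** 2 / (beta * t_peak) ** 2 * bracket

def infer_beta(m_ni_avg, t_peak, l_peak_obs):
    """Root-find the beta for which Eq. (6) matches the observed peak."""
    return brentq(lambda b: l_peak_kk(m_ni_avg, t_peak, b) - l_peak_obs, 0.5, 4.0)

# SN 2011fe with M_Ni^avg from Table 2 and the observables of Table 1:
print(infer_beta(0.579, 16.59, 1.22e43))   # ~2.0, inside the 1.2-2.1 range
```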
### Tail luminosity method It is well-known (Valenti et al., 2008; Scalzo et al., 2014; Afsariardchi et al., 2021) that the late-phase light curve (at \(t>60\) days after explosion) for a SN ejecta powered by the Ni-Co-Fe radioactive decay can be expressed as: \[L=L_{\gamma}(1-e^{-t_{\gamma}^{2}/t^{2}})+L_{\rm pos,KE}, \tag{8}\] where \(t_{\gamma}\) is, again, the timescale for gamma-ray leakage, while \(L_{\gamma}\) is the luminosity released in the form of gamma-rays: \[L_{\gamma}=\frac{M_{Ni}}{\rm M_{\odot}}\left(C_{Ni}e^{-\frac{t}{t_{Ni}}}+0.968\cdot C_{Co}e^{-\frac{t}{t_{Co}}}\right), \tag{9}\] and \(L_{\rm pos,KE}\) gives the luminosity due to the thermalization of the kinetic energy of positrons released during the Co-decay: \[L_{\rm pos,KE}=\frac{M_{Ni}}{\rm M_{\odot}}\left[0.032\cdot C_{Co}\left(e^{-\frac{t}{t_{Co}}}-e^{-\frac{t}{t_{Ni}}}\right)\right]. \tag{10}\] Comparing these formulae with Equation 2, we set \(t_{Ni}=8.8\) days, \(t_{Co}=111.3\) days, \(C_{Ni}=\varepsilon_{\rm Ni}-\varepsilon_{\rm Co}=6.45\cdot 10^{43}\) erg s\({}^{-1}\) and \(C_{Co}=\varepsilon_{\rm Co}=1.45\cdot 10^{43}\) erg s\({}^{-1}\) (see also Branch and Wheeler, 2017). \(t_{\gamma}\) was given in Table 3 of Konyves-Toth et al. (2020), so the only free parameter in fitting the light curve tail is \(M_{Ni}\). A caveat of this method is that it provides appropriate nickel masses only for data beyond \(t\sim 60\) days; thus, we could obtain nickel masses only for those objects that had the late part of their light curve covered by observations. We list the results of fitting Equation 8 to our data in Table 2 as \(M_{\rm Ni}^{\rm tail}\). ### The \(t_{15}\) method Sukhbold (2019) found that the bolometric luminosity is equal to \(L_{\gamma}\) at \(t_{15}=t_{peak}+15\) days, similar to Arnett's rule. Using this, it is possible to determine the nickel mass by measuring the bolometric luminosity at \(t_{15}\): \[L_{bol}(t_{15})\approx\frac{M_{Ni}}{\rm M_{\odot}}\left(C_{Ni}e^{-\frac{t_{15}}{t_{Ni}}}+C_{Co}e^{-\frac{t_{15}}{t_{Co}}}\right)(1-e^{-t_{\gamma}^{2}/t_{15}^{2}}) \tag{11}\] \(L_{bol}(t_{15})\) for each SN was determined by interpolating the bolometric light curves to \(t_{15}\). The nickel masses found this way can be seen in Table 2 as \(M_{\rm Ni}^{15}\). It is seen from Table 2 that the nickel masses inferred from both the tail luminosity method and the \(t_{15}\) method are very similar to those obtained from Arnett's rule. Their consistency is further illustrated in Figure 4, where \(M_{\rm Ni}^{\rm tail}\) and \(M_{\rm Ni}^{15}\) are plotted against \(M_{\rm Ni}^{\rm Arnett}\). All data points lie close to the 1:1 relation (shown as a dotted line) within their uncertainties. The slope of the best-fit linear relationship, \(1.047\pm 0.038\), is consistent with the identity relation (see Fig. 4). Figure 3: The new \({}^{56}\)Ni masses against the observed peak luminosities. \begin{table} \begin{tabular}{c c} \hline \(M_{\rm Ni}\) & Source \\ \([M_{\odot}]\) & \\ \hline 0.53 \(\pm\) 0.11 & Pereira et al. (2013) \\ 0.42 \(\pm\) 0.08 & Scalzo et al. (2014a) \\ 0.47 \(\pm\) 0.05 & Mazzali et al. (2015) \\ 0.500 \(\pm\) 0.026 & Childress et al. (2015) \\ 0.52 \(\pm\) 0.15 & Dhawan et al. (2016) \\ 0.567 \(\pm\) 0.054 & Könyves-Tóth et al. (2020) \\ \hline 0.50 \(\pm\) 0.08 & mean \\ 0.496 \(\pm\) 0.05 & this work \\ \hline \end{tabular} \end{table} Table 3: Estimates for the initial \({}^{56}\)Ni mass of SN 2011fe from different methods.
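Both estimators are linear in the single free parameter \(M_{Ni}\), so they reduce to simple algebra, as the sketch below illustrates (function names ours; the commented usage line contains placeholder values, not measurements from the paper):

```python
import numpy as np

C_NI, C_CO = 6.45e43, 1.45e43      # erg/s per Msun, as defined above
T_NI, T_CO = 8.8, 111.3            # days

def tail_per_msun(t, t_gamma):
    """Eqs. (8)-(10) per solar mass of 56Ni: gamma deposition + positron KE."""
    l_gamma = C_NI * np.exp(-t / T_NI) + 0.968 * C_CO * np.exp(-t / T_CO)
    l_pos = 0.032 * C_CO * (np.exp(-t / T_CO) - np.exp(-t / T_NI))
    return l_gamma * (1.0 - np.exp(-(t_gamma / t) ** 2)) + l_pos

def m_ni_tail(t, l_obs, t_gamma):
    """Least-squares M_Ni from tail data (t > ~60 d); the model is linear."""
    m = tail_per_msun(np.asarray(t, float), t_gamma)
    return float(np.dot(m, l_obs) / np.dot(m, m))

def m_ni_t15(l_bol_t15, t_peak, t_gamma):
    """The t15 method, Eq. (11)."""
    t15 = t_peak + 15.0
    q15 = C_NI * np.exp(-t15 / T_NI) + C_CO * np.exp(-t15 / T_CO)
    return l_bol_t15 / (q15 * (1.0 - np.exp(-(t_gamma / t15) ** 2)))

# Hypothetical usage (placeholder data, not from the paper):
# m_ni_tail([70.0, 85.0, 100.0], [3e41, 2.2e41, 1.6e41], t_gamma=40.0)
```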
Since all nickel masses found from different methods agree within their uncertainties, we use their mean value (shown as \(M_{\rm Ni}^{\rm avg}\) in Table 2) for measuring the \(\beta\) parameter. We find that \(\beta\) varies between 1.2 and 2.1 (Table 4), while the mean value is \(1.666\pm 0.188\), which is reasonably close (within \(1\sigma\)) to the value of 1.6 proposed by Khatami and Kasen (2019). The inferred \(\beta\) values are plotted in Figure 5 for each SN, while their distribution is shown in Figure 6. According to Khatami and Kasen (2019), \(\beta\sim 1.6\) corresponds to a moderate (not too central, not too shallow) \({}^{56}\)Ni distribution. Our empirical results presented here confirm this statement. ## 5 Conclusions We use previously published data (Konyves-Toth et al., 2020) to give estimates for the initial masses of radioactive nickel in 16 Type Ia supernovae using a new formula published by Khatami and Kasen (2019), which relies on the relationship between the peak luminosity and peak time without assuming self-similar energy distribution within the ejecta. We compare our results with previous nickel mass estimates for SN 2011fe from the literature (see Table 3), and find very good agreement. Our new nickel masses are in a \(1\sigma\) agreement with those derived by others. Previous estimates for the initial nickel mass in SNe Ia were mostly carried out by using Arnett's rule. Our results (Figure 3) show that the new formula by Khatami and Kasen (2019) gives nickel masses consistent with those estimated from a radiation-diffusion Arnett-model, while also taking into account the spatial distribution of heating that can be different in each case. Similar to Scalzo et al. (2019) and Konyves-Toth et al. (2020), we find that the \({}^{56}\)Ni masses show diversity, suggesting that the ejecta masses are also inhomogeneous. Finally, we give an approximate estimate for the \(\beta\) parameter of each studied SN (Figures 5 and 6), and find good agreement with the mean value (1.6) given by Khatami and Kasen (2019) from SN Ia simulations. ## Acknowledgments This work is part of the project "Transient Astrophysical Objects" GINOP 2.3.2-15-2016-00033 of the National Research, Development and Innovation Office (NKFIH), Hungary, funded by the European Union, **and it was also supported by the NKFIH/OTKA FK-134432 grant.** We thank the anonymous referee for the useful comments and suggestions that helped improve our paper. Software: Numpy (Harris et al., 2020), Matplotlib (Hunter, 2007), Astropy (Astropy Collaboration et al., 2013)
2308.13536
Implicit ZCA Whitening Effects of Linear Autoencoders for Recommendation
Recently, in the field of recommendation systems, linear regression (autoencoder) models have been investigated as a way to learn item similarity. In this paper, we show a connection between a linear autoencoder model and ZCA whitening for recommendation data. In particular, we show that the dual form solution of a linear autoencoder model actually has ZCA whitening effects on feature vectors of items, while items are considered as input features in the primal problem of the autoencoder/regression model. We also show the correctness of applying a linear autoencoder to low-dimensional item vectors obtained using embedding methods such as Item2vec to estimate item-item similarities. Our experiments provide preliminary results indicating the effectiveness of whitening low-dimensional item embeddings.
Katsuhiko Hayashi, Kazuma Onishi
2023-08-15T07:58:22Z
http://arxiv.org/abs/2308.13536v1
# Implicit ZCA Whitening Effects of Linear Autoencoders for Recommendation ###### Abstract Recently, in the field of recommendation systems, linear regression (autoencoder) models have been investigated as a way to learn item similarity. In this paper, we show a connection between a linear autoencoder model and ZCA whitening for recommendation data. In particular, we show that the dual form solution of a linear autoencoder model actually has ZCA whitening effects on feature vectors of items, while items are considered as input features in the primal problem of the autoencoder/regression model. We also show the correctness of applying a linear autoencoder to low-dimensional item vectors obtained using embedding methods such as Item2vec to estimate item-item similarities. Our experiments provide preliminary results indicating the effectiveness of whitening low-dimensional item embeddings. ## 1 Introduction E-commerce has become an indispensable part of our daily lives. Recommender systems help users find products they want to buy on e-commerce sites and have a wide range of applications, such as in movie recommendations and product recommendations. Collaborative filtering (CF) is one of the most widely used approaches in recommender systems. Nearest neighbor CF approaches are divided into user-based and item-based ones. In item-based collaborative filtering (ICF), recommendations to users are made by finding items that are similar to other items that a given user has already had an interaction with. Therefore, the similarity measure between two items plays an important role in the ICF approach. While early ICF models used such statistical measures as Pearson correlation and cosine similarity [21, 11, 5], model-based methods [15, 9, 18] have recently been investigated as a way to learn item similarity. In particular, ICF models based on linear regressions (autoencoders) [15, 18, 8, 14] have achieved current state-of-the-art performances on several benchmark datasets for implicit recommendation. As to the reason for the empirical success of linear autoencoders, in this study, we reveal that they actually have ZCA whitening [10] effects on recommendation data. The whitening transformation removes correlations between the feature dimensions of the item vectors, and the item-item similarity estimated from the transformed vectors improves the quality of the recommendations. Our finding also has the following empirical contribution: * **Whitening Item Embeddings**: In the field of natural language processing, it is known that whitening word embeddings improves the performance of various retrieval and similarity tasks [19, 7]. Analogously, embedding methods such as Item2vec [1], which learns a latent semantic feature vector representation of items, are useful in the field of information retrieval. Thus, whitening item embeddings is expected to improve recommendation performance. Our experiments provide preliminary results that show the effectiveness of whitening low-dimensional item embeddings. **Notation and Preliminaries** Vectors are represented by boldface lowercase letters, e.g., \(\mathbf{a}\). \(\mathbf{0}_{D}\) and \(\mathbf{1}_{D}\) are \(D\)-dimensional vectors of zeros and ones, respectively. Matrices are represented by boldface capital letters, e.g., \(\mathbf{A}\). The \(i\)-th row of a matrix \(\mathbf{A}\) is represented by \(\mathbf{a}_{i:}\), and the \(j\)-th column of \(\mathbf{A}\) is represented by \(\mathbf{a}_{:j}\).
The element \((i,j)\) of a matrix \(\mathbf{A}\) is denoted by \(a_{ij}\). \(\mathbf{A}^{\mathrm{T}}\) and \(\mathbf{A}^{-1}\) denote the transpose and inverse of a matrix \(\mathbf{A}\), respectively. \(\mathbf{I}_{D}\) denotes the \(D\)-dimensional identity matrix. \(\mathrm{diag}(\mathbf{A})\) is the diagonal of a square matrix \(\mathbf{A}\). \(\mathrm{diagMat}(\mathbf{a})\) denotes the diagonal matrix whose diagonal is the vector \(\mathbf{a}\). ## 2 Item-based Neighborhood Model Let \(U\) and \(I\) be sets of users and items, respectively. Like in many papers on recommender systems [15, 18, 5], we consider implicit feedback data. Here, the user-item interaction matrix \(\mathbf{X}\) can be considered to be a binary one: \[\mathbf{X}=\left(\begin{array}{cccc}x_{11}&x_{12}&\cdots&x_{1|I|}\\ x_{21}&x_{22}&\cdots&x_{2|I|}\\ \vdots&\vdots&\ddots&\vdots\\ x_{|U|1}&x_{|U|2}&\cdots&x_{|U||I|}\end{array}\right)\in\{0,1\}^{|U|\times|I|},\] where \(x_{ui}=1\) represents that there is an interaction between user \(u\) and item \(i\). If there is no interaction between \(u\) and \(i\), then \(x_{ui}=0\). To make recommendations for a user \(u\), item-based neighborhood collaborative filtering (ICF) models [11] require pre-computed similarities associated with each item-item pair from \(\mathbf{X}\). We denote an item-item similarity matrix as \(\mathbf{B}\in\mathbb{R}^{|I|\times|I|}\), where \(b_{ij}\) is the similarity between two items \(i\) and \(j\). The item set that the user \(u\) has interacted with is represented by \(\mathbf{y}_{(u)}\in\{0,1\}^{|I|}\), where the \(j\)-th element is 1 if there is an interaction between \(u\) and \(j\), otherwise 0. Accordingly, ICF models simply compute user \(u\)'s preference score \(s_{ui}\) on item \(i\) as the following dot-product: \(s_{ui}=\mathbf{y}_{(u)}^{\mathrm{T}}\mathbf{b}_{:i}\), where \(\mathbf{b}_{:i}\) is the \(i\)-th column of \(\mathbf{B}\). ## 3 Shallow Linear Autoencoders As the previous section shows, a key step in ICF methods is to estimate the item-item similarity matrix \(\mathbf{B}\in\mathbb{R}^{|I|\times|I|}\) from \(\mathbf{X}\in\{0,1\}^{|U|\times|I|}\). While early ICF approaches used such statistical measures as Pearson correlation and cosine similarity [21, 11, 5], model-based methods [15, 9, 18] have recently been investigated as a way to learn item similarity. In this section, we introduce shallow linear autoencoder models [15, 18] that learn the item-item similarity matrix as a regression problem. ### Linear Autoencoder with L2 Regularization Linear autoencoders can be trained with the multivariate least squares fitting approach. The linear algebraic formulation can be represented as a ridge linear regression problem: \[\widehat{\mathbf{B}}=\underset{\mathbf{B}}{\arg\min}\Big{\{}||\mathbf{X}-\mathbf{X}\mathbf{B}||_{F}^{2}+\lambda||\mathbf{B}||_{F}^{2}\Big{\}} \tag{1}\] where \(\lambda>0\) is a regularization parameter. The closed form solution of the above equation is given as \[\widehat{\mathbf{B}}=(\mathbf{X}^{\mathrm{T}}\mathbf{X}+\lambda\mathbf{I}_{|I|})^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{X} \tag{2}\] or, equivalently [17], \[\widehat{\mathbf{B}}=\mathbf{X}^{\mathrm{T}}(\mathbf{X}\mathbf{X}^{\mathrm{T}}+\lambda\mathbf{I}_{|U|})^{-1}\mathbf{X}. \tag{3}\] For a proof of the equivalence between Eqs. (2) and (3), we refer the reader to Appendix A of the paper [3] (in the case of \(|I|>|U|\)) and to our Appendix A (in the case of \(|U|>|I|\)).
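The equivalence of the two closed forms is also easy to verify numerically. Below is a minimal sketch on random toy data (all variable names ours), which also shows the ICF scoring of Section 2:

```python
import numpy as np

rng = np.random.default_rng(0)
X = (rng.random((1000, 50)) < 0.05).astype(float)   # toy user-item matrix
lam = 200.0
n_users, n_items = X.shape

# Primal form, Eq. (2): requires an |I| x |I| inverse.
B_primal = np.linalg.solve(X.T @ X + lam * np.eye(n_items), X.T @ X)

# Dual form, Eq. (3): requires a |U| x |U| inverse instead.
B_dual = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n_users), X)

assert np.allclose(B_primal, B_dual)   # the two closed forms agree

# ICF scoring (Section 2): user u's score on item i is y_(u)^T b_:i.
scores_u0 = X[0] @ B_primal            # preference scores for user 0
```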
Note that though the minimization of Eq.(1) is achieved in an obvious way (\(\mathbf{B}=\mathbf{I}_{|I|}\)), \(\mathrm{diag}\left(\mathbf{B}\right)=\mathbf{0}_{|I|}\) is imposed as a constraint condition in practical linear autoencoder models like SLIM [15] and EASE [18]. ### EASE: Linear Autoencoder with Diagonal Constraints [18] Steck [18] introduced a linear autoencoder model, called EASE. The objective function of EASE for learning the item-item similarity matrix \(\mathbf{B}\) is: \[\widehat{\mathbf{B}}_{\mathrm{EASE}}=\underset{\mathbf{B}}{\arg \min}\Big{\{}||\mathbf{X}-\mathbf{X}\mathbf{B}||_{F}^{2}+\lambda||\mathbf{B}|| _{F}^{2}\Big{\}}\] \[\mathrm{s.t.}\quad\mathrm{diag}\left(\mathbf{B}\right)=\mathbf{0} _{|I|}.\] The optimization problem can be solved with the following Lagrangian: \[L=||\mathbf{X}-\mathbf{X}\mathbf{B}||_{F}^{2}+\lambda||\mathbf{B}||_{F}^{2}+2 \boldsymbol{\alpha}^{\mathrm{T}}\,\mathrm{diag}\left(\mathbf{B}\right)\] where \(\boldsymbol{\alpha}=[\alpha_{1},\ldots,\alpha_{|I|}]^{\mathrm{T}}\) is the vector of Lagrange multipliers. By setting the derivative to zero, we derive the estimate of the similarity matrix \(\mathbf{B}\): \[\widehat{\mathbf{B}}_{\mathrm{EASE}}=(\mathbf{X}^{\mathrm{T}}\mathbf{X}+ \lambda\mathbf{I}_{|I|})^{-1}(\mathbf{X}^{\mathrm{T}}\mathbf{X}-\mathrm{diag }\mathrm{Mat}\left(\boldsymbol{\alpha}\right)).\] By imposing the constraint \(\mathrm{diag}(\widehat{\mathbf{B}}_{\mathrm{EASE}})=0\) and defining \(\widehat{\mathbf{P}}=(\mathbf{X}^{\mathrm{T}}\mathbf{X}+\lambda\mathbf{I}_{|I|})^ {-1}\), \(\boldsymbol{\alpha}\) is determined as \[\boldsymbol{\alpha}=\mathbf{1}_{|I|}\oslash\mathrm{diag}(\widehat{\mathbf{P}}) -\lambda\mathbf{1}_{|I|}.\] Steck [18] showed that the solution can be derived in the following closed form: \[\widehat{\mathbf{B}}_{\mathrm{EASE}}=\mathbf{I}_{|I|}-\widehat{\mathbf{P}} \,\mathrm{diagMat}\left(\mathbf{1}_{|I|}\oslash\mathrm{diag}(\widehat{\mathbf{P}})\right)\] where \(\oslash\) denotes elementwise division. ## 4 Relationship between Linear Autoencoder and ZCA Whitening ### ZCA Whitening (Zero-phase Component Analysis) Whitening is an operation that eliminates correlations between features in a data sample. That is, correlation between any two components \(x_{i}\) and \(x_{j}\) of a sample vector \(\mathbf{x}=[x_{1},\ldots,x_{D}]\) in the \(D\)-dimensional feature space is reduced to zero. In the following, we assume that a feature vector of each sample is centered; i.e., \(\frac{1}{D}\sum_{i=1}^{D}x_{i}=0\). Given \(N\) data samples \(\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{N}]\in\mathbb{R}^{D\times N}\), the correlation coefficients between features are represented by the following covariance matrix: \[\Phi_{\mathbf{X}}=\frac{1}{N}\mathbf{X}\mathbf{X}^{\mathrm{T}}.\] We now consider a linear transformation \(\mathbf{P}\in\mathbb{R}^{D\times D}\) of feature vectors: \[\mathbf{w}_{n}=\mathbf{P}\mathbf{x}_{n}\quad(n=1,\ldots,N).\] Accordingly, the covariance matrix of the transformed vectors \(\mathbf{W}=[\mathbf{w}_{1},\ldots,\mathbf{w}_{N}]\) is defined as: \[\Phi_{\mathbf{W}}=\frac{1}{N}\mathbf{W}\mathbf{W}^{\mathrm{T}}.\] The purpose of whitening is to find a projection matrix \(\mathbf{P}\) that makes the covariance matrix \(\Phi_{\mathbf{W}}\) be the identity matrix \(\mathbf{I}_{D}\). If \(\Phi_{\mathbf{W}}=\mathbf{I}_{D}\), \(\mathbf{P}\) must satisfy the following equation: \[\mathbf{P}^{\mathrm{T}}\mathbf{P}=\Phi_{\mathbf{X}}^{-1}. 
\tag{4}\] \(\mathbf{P}\) that satisfies Eq.(4) can be represented by using the eigenvectors \(\mathbf{U}\) of \(\mathbf{X}\mathbf{X}^{\mathrm{T}}\): \[\mathbf{X}\mathbf{X}^{\mathrm{T}}=\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{\mathrm{T}}\] where \(\mathbf{U}\) is a square \(D\times D\) matrix whose \(i\)-th column is the \(i\)-th eigenvector of \(\mathbf{X}\mathbf{X}^{\mathrm{T}}\), and \(\mathbf{\Sigma}\) is a diagonal matrix whose diagonal elements are the corresponding eigenvalues \(\lambda_{i}\). Since \(\mathbf{U}\) is an orthogonal matrix \((\mathbf{U}^{\mathrm{T}}\mathbf{U}=\mathbf{U}\mathbf{U}^{\mathrm{T}}=\mathbf{I}_{D})\), \(\Phi_{\mathbf{X}}^{-1}\) can be denoted by \(\mathbf{U}\mathbf{\Sigma}^{-1}\mathbf{U}^{\mathrm{T}}\). In zero-phase component analysis (ZCA), or ZCA whitening [10], we use \[\mathbf{P}_{\mathrm{ZCA}}=\mathbf{U}\mathbf{\Sigma}^{-1/2}\mathbf{U}^{\mathrm{T}}\] as a transformation matrix \(\mathbf{P}\). In practice, noise \(\epsilon\) is often added to the diagonal of \(\mathbf{\Sigma}\): \[\mathbf{P}_{\mathrm{ZCA}}=\mathbf{U}(\mathbf{\Sigma}+\epsilon\mathbf{I}_{D})^{-1/2}\mathbf{U}^{\mathrm{T}}.\] ### Whitening Effects of Linear Autoencoders **Linear Autoencoder with L2 Regularization** First, we consider the ZCA whitening transformation of the user-item interaction matrix \(\mathbf{X}\in\{0,1\}^{|U|\times|I|}\) (here, we assume \(|U|>|I|\)): \[\mathbf{W}=\mathbf{P}_{\mathrm{ZCA}}\mathbf{X}\] where \(\mathbf{P}_{\mathrm{ZCA}}=\mathbf{U}(\mathbf{\Sigma}+\epsilon\mathbf{I}_{|U|})^{-1/2}\mathbf{U}^{\mathrm{T}}\). Note that \(\mathbf{U}\) and \(\mathbf{\Sigma}\) are the eigenvector and eigenvalue matrices of \(\mathbf{X}\mathbf{X}^{\mathrm{T}}\). After this whitening transformation, we simply consider \[\widehat{\mathbf{B}}_{\mathrm{ZCA}}=\mathbf{W}^{\mathrm{T}}\mathbf{W}\] as an item-item similarity matrix for ICF-based recommender systems. Now let us rewrite \(\widehat{\mathbf{B}}_{\mathrm{ZCA}}\): \[\widehat{\mathbf{B}}_{\mathrm{ZCA}} = \mathbf{W}^{\mathrm{T}}\mathbf{W}=(\mathbf{P}_{\mathrm{ZCA}}\mathbf{X})^{\mathrm{T}}(\mathbf{P}_{\mathrm{ZCA}}\mathbf{X})\] \[= \mathbf{X}^{\mathrm{T}}\big{(}\mathbf{U}(\mathbf{\Sigma}+\epsilon\mathbf{I}_{|U|})^{-1/2}\mathbf{U}^{\mathrm{T}}\big{)}\big{(}\mathbf{U}(\mathbf{\Sigma}+\epsilon\mathbf{I}_{|U|})^{-1/2}\mathbf{U}^{\mathrm{T}}\big{)}\mathbf{X}\] \[= \mathbf{X}^{\mathrm{T}}\big{(}\mathbf{U}(\mathbf{\Sigma}+\epsilon\mathbf{I}_{|U|})^{-1}\mathbf{U}^{\mathrm{T}}\big{)}\mathbf{X}\] \[= \mathbf{X}^{\mathrm{T}}\big{(}\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{\mathrm{T}}+\epsilon\mathbf{U}\mathbf{U}^{\mathrm{T}}\mathbf{U}\mathbf{U}^{\mathrm{T}}\big{)}^{-1}\mathbf{X}\] \[= \mathbf{X}^{\mathrm{T}}\big{(}\mathbf{X}\mathbf{X}^{\mathrm{T}}+\epsilon\mathbf{I}_{|U|}\big{)}^{-1}\mathbf{X}\qquad(=\mathrm{Eq.}(3))\] \[= \big{(}\mathbf{X}^{\mathrm{T}}\mathbf{X}+\epsilon\mathbf{I}_{|I|}\big{)}^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{X}\qquad(=\mathrm{Eq.}(2)).\] Refer to our Appendix A for details on the final transformation. This result clearly shows a connection between the shallow linear autoencoder and ZCA whitening. We can see that linear autoencoders actually have implicit ZCA whitening-like effects on feature vectors of items1, while items are considered as input features in the primal problem of the autoencoder models. Footnote 1: In general, linear autoencoder models for recommendation data do not assume that input data are centered.
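The chain of equalities above can be checked numerically. A minimal sketch, assuming toy binary data with \(|U|>|I|\) (variable names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
X = (rng.random((200, 30)) < 0.1).astype(float)   # toy interactions, |U| > |I|
eps = 200.0                                       # plays the role of lambda

# P_ZCA = U (Sigma + eps I)^(-1/2) U^T from the eigendecomposition of X X^T.
evals, U = np.linalg.eigh(X @ X.T)
P_zca = U @ np.diag(1.0 / np.sqrt(evals + eps)) @ U.T
W = P_zca @ X                                     # whitened data

# The Gram matrix of the whitened item vectors ...
B_zca = W.T @ W
# ... coincides with the L2-regularized autoencoder solution, Eq. (2).
B_ridge = np.linalg.solve(X.T @ X + eps * np.eye(X.shape[1]), X.T @ X)
assert np.allclose(B_zca, B_ridge)
```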
**Linear Autoencoder with Diagonal Constraints** As pointed out in the paper [14], the solution of EASE can be divided into two terms: regularization and diagonal constraints. The former regularization part is equivalent to the solution of the linear autoencoder with L2 regularization: \[\widehat{\mathbf{B}}_{\text{EASE}}=\underbrace{\left(\mathbf{X}^{\text{T}}\mathbf{X}+\lambda\mathbf{I}_{|I|}\right)^{-1}\mathbf{X}^{\text{T}}\mathbf{X}}_{\widehat{\mathbf{B}}_{\text{ZCA}}}-\left(\mathbf{X}^{\text{T}}\mathbf{X}+\lambda\mathbf{I}_{|I|}\right)^{-1}\text{diagMat}(\boldsymbol{\alpha}).\] This result shows that EASE also has implicit whitening effects on item vectors. From [14], it is known that the latter diagonal constraint part plays a role in penalizing the impact of unpopular items. In future work, we will further investigate the role of the diagonal constraint part from the viewpoint of the data preprocessing stage. **Whitening Item Embeddings** Several studies [1, 6, 8] have shown that a latent semantic representation of items is useful for estimating item similarity. We consider item embeddings \(\mathbf{E}\in\mathbb{R}^{D\times|I|}\), where \(D<|I|\) is the dimension size of the item embedding. Our finding on a relationship between the linear autoencoder and ZCA whitening shows the correctness of using a linear autoencoder on item embeddings: \[\widehat{\mathbf{B}}=\underset{\mathbf{B}}{\arg\min}\Big{\{}||\mathbf{E}-\mathbf{E}\mathbf{B}||_{F}^{2}+\lambda||\mathbf{B}||_{F}^{2}\Big{\}}.\] The final solution is \[\widehat{\mathbf{B}}=(\mathbf{E}^{\text{T}}\mathbf{E}+\lambda\mathbf{I}_{|I|})^{-1}\mathbf{E}^{\text{T}}\mathbf{E}.\] In the case of \(D<|I|\), by using Lemma 9 of the paper [3], we can transform the above equation into \[\widehat{\mathbf{B}}=\mathbf{E}^{\text{T}}(\mathbf{E}\mathbf{E}^{\text{T}}+\lambda\mathbf{I}_{D})^{-1}\mathbf{E}.\] Moreover, we can further transform this equation as follows: \[\widehat{\mathbf{B}} = \mathbf{E}^{\mathrm{T}}(\mathbf{E}\mathbf{E}^{\mathrm{T}}+\lambda\mathbf{I}_{D})^{-1}\mathbf{E}\] \[= \mathbf{E}^{\mathrm{T}}(\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{\mathrm{T}}+\lambda\mathbf{U}\mathbf{I}_{D}\mathbf{U}^{\mathrm{T}})^{-1}\mathbf{E}\] \[= \mathbf{E}^{\mathrm{T}}\mathbf{U}(\mathbf{\Sigma}+\lambda\mathbf{I}_{D})^{-1}\mathbf{U}^{\mathrm{T}}\mathbf{E}\] \[= \mathbf{E}^{\mathrm{T}}\mathbf{U}(\mathbf{\Sigma}+\lambda\mathbf{I}_{D})^{-1/2}\mathbf{U}^{\mathrm{T}}\mathbf{U}(\mathbf{\Sigma}+\lambda\mathbf{I}_{D})^{-1/2}\mathbf{U}^{\mathrm{T}}\mathbf{E}\] \[= (\mathbf{P}_{\mathrm{ZCA}}\mathbf{E})^{\mathrm{T}}(\mathbf{P}_{\mathrm{ZCA}}\mathbf{E})\] where \(\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{\mathrm{T}}\) is the eigenvalue decomposition of \(\mathbf{E}\mathbf{E}^{\mathrm{T}}\) and \(\mathbf{P}_{\mathrm{ZCA}}\) denotes \(\mathbf{U}(\mathbf{\Sigma}+\lambda\mathbf{I}_{D})^{-1/2}\mathbf{U}^{\mathrm{T}}\). The result shows that the linear autoencoder implicitly decorrelates latent features of item embeddings through the ZCA whitening process. ## 5 Experiments ### Datasets and Evaluation Metrics We conducted experiments on two publicly available datasets: * MovieLens 20 Million (ML-20M) [4]: 136,677 users and 20,108 movies with about 10.0 million interactions. * Netflix Prize (Netflix) [2]: 463,435 users and 17,769 movies with about 56.9 million interactions. For a fair comparison, we followed the experimental settings used in [12] and kept the same pre-processing steps2.
Footnote 2: The program code is provided by the authors of [13]: [https://github.com/samlobel/RaCT_CF/blob/master/setup_data.py](https://github.com/samlobel/RaCT_CF/blob/master/setup_data.py). The experiments considered two ranking metrics, Recall@\(R\) and the truncated NDCG (NDCG@\(R\)), where \(R\) is a cut-off hyper-parameter [12]. Here, \(\omega(r)\) is defined as the item at rank \(r\), \(\mathbb{I}[\cdot]\) as the indicator function, and \(\mathcal{I}_{u}\) as the held-out unobserved items that a user \(u\) will interact with. Recall@\(R\) for user \(u\) is \[\text{Recall@}R:=\sum_{r=1}^{R}\frac{\mathbb{I}[\omega(r)\in\mathcal{I}_{u}]}{\min(R,|\mathcal{I}_{u}|)}.\] The truncated discounted cumulative gain (DCG@\(R\)) is \[\text{DCG@}R:=\sum_{r=1}^{R}\frac{2^{\mathbb{I}[\omega(r)\in\mathcal{I}_{u}]}-1}{\log{(r+1)}}.\] NDCG@\(R\) is obtained by dividing DCG@\(R\) by its best possible value where all the held-out items are ranked at the top. ### Results In Section 4.2, we showed that linear autoencoders have ZCA whitening-like effects on item feature vectors. In our experiments, we investigated how linear autoencoders improve the quality of item embeddings. Using a singular value decomposition \(\mathbf{X}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\text{T}}\), we computed item embeddings \(\mathbf{E}=\mathbf{\Sigma}^{1/2}\mathbf{V}^{\text{T}}\in\mathbb{R}^{D\times|I|}\). To estimate item similarity, we applied a linear autoencoder with L2 regularization (**AE**) and **EASE** to item embeddings, respectively. We also tried a simple inner product \(\mathbf{E}^{\text{T}}\mathbf{E}\) as a baseline item-item similarity matrix. The number of dimensions \(D\) was set to 800 and 4,000 for the ML-20M and Netflix datasets, respectively. The hyperparameter \(\lambda\) of the linear autoencoders was fixed to 200 in all settings. Note that an advantage of using low-dimensional vectors is the computational savings regarding the calculation of the inverse matrix; i.e., \((\mathbf{E}\mathbf{E}^{\mathrm{T}}+\lambda\mathbf{I}_{D})^{-1}\in\mathbb{R}^{D\times D}\) is much more efficient than \((\mathbf{X}^{\mathrm{T}}\mathbf{X}+\lambda\mathbf{I}_{|I|})^{-1}\in\mathbb{R}^{|I|\times|I|}\). \begin{table} \begin{tabular}{r r r r r} \hline & \multicolumn{2}{c}{**ML-20M**} & \multicolumn{2}{c}{**Netflix**} \\ Model & Recall@20 & NDCG@100 & Recall@20 & NDCG@100 \\ \hline **Embed** (Inner Product) & 0.279 & 0.317 & 0.246 & 0.279 \\ **Embed+AE** & 0.372 & 0.402 & 0.353 & 0.385 \\ **Embed+EASE** & 0.361 & 0.394 & 0.341 & 0.375 \\ \hline *SLIM [15] & 0.370 & 0.401 & 0.347 & 0.379 \\ **EASE [18] & 0.391 & 0.420 & **0.362** & **0.393** \\ *WMF [6] & 0.360 & 0.386 & 0.316 & 0.351 \\ *CDAE [20] & 0.391 & 0.418 & 0.343 & 0.376 \\ * & **0.434* * & 0.357 & 0.392 \\ \hline \end{tabular} \end{table} Table 1: Comparison of recommendation performances: * and ** denote results transcribed from [12] and [18], respectively. Table 1 compares recommendation performances. Our results clearly show that linear autoencoders improve the quality of the item embeddings. In particular, **Embed+AE** outperformed **Embed+EASE**. We think this is because a low-rank approximation (item embeddings) eliminates the effects of unpopular items, while the diagonal constraint part in the solution of EASE also plays the same role, as noted in Section 4.2. Therefore, we think that the normal **AE** already has sufficient ability to improve the quality of the item embeddings.
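For reference, below is a sketch of the **Embed+AE** pipeline evaluated above, on toy data (variable names ours; the held-out Recall@20/NDCG@100 evaluation itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
X = (rng.random((500, 100)) < 0.05).astype(float)  # toy user-item matrix
D, lam = 20, 200.0                                  # embedding dim, ridge weight

# Item embeddings from a truncated SVD: E = Sigma^(1/2) V^T, shape D x |I|.
_, s, Vt = np.linalg.svd(X, full_matrices=False)
E = np.sqrt(s[:D])[:, None] * Vt[:D]

# Embed+AE similarity via the dual form: only a D x D inverse is required.
B = E.T @ np.linalg.solve(E @ E.T + lam * np.eye(D), E)

# Rank unseen items for one user by the ICF score, masking seen items.
scores = X[0] @ B
scores[X[0] > 0] = -np.inf
top20 = np.argsort(-scores)[:20]   # candidate list for Recall@20 / NDCG@100
```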
## 6 Conclusion In this paper, we have shown that linear autoencoder models have ZCA whitening-like effects on recommendation data. This finding ensures the correctness of applying a linear autoencoder model to low-dimensional item embedding vectors. Our initial experiments also reveal the effectiveness of whitening low-dimensional item embeddings. In future work, we will try other methods of constructing item embeddings, such as Item2vec [1] and GloVe [16].